| playlist | file_name | content |
|---|---|---|
MIT_16842_Fundamentals_of_Systems_Engineering_Fall_2015 | Assessing_Student_Work_Completed_as_Teams.txt | OLIVIER DE WECK: Assessing student work as a team has different challenges. Of course, the first thing you do is assess the output of the team's work, whether that's a requirements set, whether it's a set of initial concepts, whether it's a presentation that's given at a preliminary design review, whether it's a physical prototype that they present as a team-- so the first thing that should always be assessed is, what is the net output that the team has produced? Within that, it is not always easy to discern what the individual contribution has been, since the whole purpose of the team is that the whole that comes out is greater than the sum of the individual parts. So in the best-functioning teams, there is a magic that happens. There's a synergy that happens. There's an output that's only produced by these synergistic interactions of all the team members. In a sense, trying to dissect what's the individual piece that everybody did is countering this idea of team synergy. That being said, we also need to assess individually the contributions and the learnings of students. And we do this through a couple of mechanisms, for example, a peer review. So you can do a peer-review process. It's trickier in smaller teams than in larger teams. But if it's done properly, peer review can be very effective in helping one understand not only who the contributors are, but also who the free-riders are. This is always a big issue in teams, the free-rider syndrome, where some team members contribute significantly less than others. And this often can be a source of conflict. And so understanding this is important. But we also have other mechanisms in the class, such as an individual final exam-- a written exam, which is administered online. And the other thing we started doing is oral exams, which seems old-fashioned. But I have to tell you, in 15 to 20 minutes of a one-on-one conversation with a student at the end of the class, you really learn a lot. You learn a lot about what they've learned, where they may still have confusion or misconceptions, and also the feedback that they give you. So I highly recommend doing oral exams if it's possible. |
MIT_16842_Fundamentals_of_Systems_Engineering_Fall_2015 | Teaching_the_Class_as_a_Small_Private_Online_Course_SPOC.txt | PROFESSOR: Many of you have heard of MOOCs. They're very famous-- Massive Open Online Courses. MOOCs are very open and large classes, often with thousands or tens of thousands of participants. And MOOCs are essentially a big movement to really democratize education. SPOCs are a little different. So 16.842 was run as a SPOC, which stands for Small Private Online Course. So like a MOOC, a SPOC is also offered online. You can take it in person in a classroom at one of the hosting universities, or you can take it entirely online. It's really up to you. It's small because we have a few dozen students instead of thousands, and it's private because taking the class is by invitation. In this particular instance, we had two universities that co-sponsored this class. One is MIT and the other one is called EPFL, Ecole Polytechnique Federale de Lausanne in Switzerland, which is also quite active in the world of online education. The way, fundamentally, that a SPOC works is that you have the course management system at each of the hosting universities and institutions. And you essentially post the same materials-- the lectures, the readings, the exercises-- in parallel at each of the hosting universities. The lectures themselves, however, are run online using an open collaboration platform, like WebEx for example. You can do it with Google Hangouts. There are a lot of different platforms. And this is important because it is important to have live connections between the faculty and the students, but also so the students amongst themselves can discuss. The trickiest part of running a SPOC, of course, is the time zones and the time differences. In this case, the two universities were six time zones apart. And so it was offered in the morning here at MIT and then in the afternoon in Europe. If you're running this as a global class, it will be more challenging to find a time slot that works for everybody. The feedback from the students has been very positive, because you can think of a SPOC in this particular class as a blended form of education. Part of the class is live. The lectures are given live, and every lecture is recorded. So even if you miss a lecture as a student, you can watch it later. And so there's a record of it. The assignments are asynchronous, and there are some assignments that are individual. And there are team assignments as well. And so it's a blended form of education, which is an important trend in education, where we take the best of the online experience and we try to combine it with the best of the in-person experience. Teaching a SPOC is fun and challenging at the same time. It's fun because you get to converse and interact with students, not just in a physical classroom at one university, but across multiple institutions. And that enhances the class, because students will speak up with experiences from different cultural backgrounds. But it's also challenging, because you have to keep track of what's happening at multiple places. And you have a mix of students from different backgrounds and different institutional constraints as well. And so it's both challenging and fun, but it's a very focused experience. And the feedback from the students has been great. |
MIT_16842_Fundamentals_of_Systems_Engineering_Fall_2015 | Team_Charters.txt | OLIVIER DE WECK: One of the first assignments that I ask students to do in the class is to write a team charter. Now what is a team charter? You can think of a team charter like a manifesto. It's essentially defining the identity and the goals that the team wants to pursue. And in this kind of class, there could be many different goals. For example, as a team you want to learn as much as you can about the process of systems engineering and design and so forth. Another charter could be that you are very ambitious and you want to win the competition. So you want to score as high as you can. And the ultimate goal is to be as highly ranked as possible in the competition. Another goal could be just to have fun, you know-- to just have a good time, to be together in this learning environment, to learn and have fun at the same time. And so the flavor of each team is a little bit different. And rather than just letting it happen without much thought or discussion, the purpose of the team charter or project charter is to encourage these discussions early, such that the team is on the same page as it proceeds into the project. |
MIT_16842_Fundamentals_of_Systems_Engineering_Fall_2015 | 11_Lifecycle_Management.txt | The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR: Let's talk about the V-Model, here. We are-- interesting, we have a little bit of a lag here. So lifecycle management is our last topic for today, and that sort of completes the V. And then next [AUDIO OUT] manufacturing. So what I'd like to cover today is essentially a definition of [INAUDIBLE] mean by that, and then focus on the lifecycle [INAUDIBLE] also known as the ilities. Discuss a little bit where they come from, how much we know about them. The bulk of the discussion is going to be a case study called Reconfiguration of Communication Satellite Constellations. And this sort of explains the idea that, once you've brought a system into operation, that's not the end of the story. The system is going to continue to live and change and be modified over its life. I'll try to summarize some of the key concepts over the whole semester. And then I have one slide on career and study recommendations, if you want to go further into systems engineering. So, lifecycle management. So this is my definition of what lifecycle management is. It's essentially the active engagement of all the stakeholders with the system, between the time when it starts to operate-- so you've done the commissioning, you've begun operations-- until you decide to decommission and retire the system. So it's during the whole operational life. And what you want to do is maximize the value that you gain from the system's existence. And so lifecycle management starts from the very, very start. And so I listed here some activities that are included in lifecycle management. So obviously daily operations, you know, monitoring: is the system working as expected? Training and certification of the operators. Sometimes, you know, in long-lived systems, the people who were the early operators, generation one, are not necessarily the people who are going to operate it for its whole life. So you have to have new people operate it. Maybe one of the most extreme examples of that is an airplane known as the B-52. Have you heard of this? This airplane might be the first plane that actually has a 100-year operational life. And there are apparently Navy-- Air Force guys at MIT, you can confirm this, that there's actually like-- the grandfather was the first generation, and there are like third-generation pilots now flying on the B-52. That's what I've been told. Did you hear this as well? AUDIENCE: Yeah, that's correct. PROFESSOR: Do you know some of these people? AUDIENCE: No, I've never actually worked on that airframe. PROFESSOR: OK. But anyway, so that's what I mean by [INAUDIBLE] not just the initial operators, but over multiple generations. Then the third thing here is servicing. And servicing comes in two flavors: preventive maintenance, you know, regular maintenance, and then corrective maintenance, which we also call repair. Dealing with small and large failures, recalls, anomalies. This is a big deal in the automotive industry, as we know. Increasingly, protecting the system from either random or targeted attacks.
Cyber, physical, right? Physical security but also cybersecurity is becoming a huge topic. Sharing and archiving the data that's being produced by the system. More and more systems produce larger amounts of data. Well, what do you do with it? How do you store that data? Upgrading and retrofitting the system as needed. So retrofitting means we're physically changing the configuration of the system-- right?-- over its life to add new capabilities. And upgrading; upgrading can be done sometimes without having to change the hardware. So you're doing a software upgrade, or software update, software patching. All of this would go under upgrades. Cross-strapping the system with other systems in a federation of systems. So what I mean by this is, a system may have been designed just to operate on its own, but now you're connecting it with another system. This happened, for example, with the electrical grid. Right? The early electrical grids were local, regional power grids. And now we've connected them; in the US we have three major grids, the eastern, the western, and then Texas has its own grid. Texas does a lot of its own stuff. It's called ERCOT. And then here in Europe, you know, the European electrical grid is now continent-wide. It didn't use to be. And of course, that has big implications for operations. Reducing the resource consumption and environmental burden of the system over time, and then finally, decommissioning the system when it's time to do so. And in many systems, this is a big challenge: finding the right time to decommission the system. And that's often the case when, you know, the operating costs exceed the value that the system delivers. So this is a pretty long list here. And a lot of tasks, a lot of decisions to be made. Any questions about this list? I don't think it's complete, but this is a lot of things you need to take care of during operations. Here's a more graphical view of this. So this is my sketch of the systems engineering lifecycle. So this is part one, right? Conceptual design-- conception, design, and implementation. You start the lifecycle, you do something like a system requirements review, understand the mission, the requirements, the constraints. You do conceptual design; that's where you do creativity, architecting, trade studies. At the PDR, you choose-- so in this case, you know, we're going to go through this triangle concept and then you design all the details within it. That's where we do modeling, simulation, experiments, MDO. And we iterate-- sometimes you have to iterate; this dashed iteration means abandoning the concept you chose and going for a different concept. That's usually undesirable. After the CDR, we implement the system, we turn information into matter, and this all happens in a technological, economic, social context. So then the second part of it, which in many systems-- if this is 10 years, this could be 30 years or more-- is the actual operational phase. So the system has now been tested, validated, verified, and deployed. And now, you know, we operate the system. Things break, we need to service it, and then I have this-- you see, there are two versions of the system. One is solid, and the other one is kind of faint. That's the virtual version of the system. So one of the big concepts that's being pushed now, certainly in the US by the DOD, but also other places, is the idea of a digital twin.
The idea of a digital twin is that there's a physical system-- right?-- that exists, and then there's a digital version, a digital twin of that system, somewhere on a computer that mirrors exactly what the real system is doing. So if something breaks on the physical system, well, that same component fails in the digital twin. And then, before you actually do a repair or any actions, you execute those actions on the digital twin to see whether the system will operate properly. And so this is kind of the latest thinking, that for any system we should have a digital twin during operations. Upgrading the system. So in this case, we're adding things to it, we're connecting it. You know, and then at some point, the system does degrade. Because, for example, you know, materials age, they get brittle. Technology becomes obsolete. You know, it gets harder and harder to obtain spare parts, or the suppliers of the original spare parts went out of business. So these really old legacy systems can be very expensive to operate. And then at some point you liquidate, and it's the end of the lifecycle. So I've already told you about one example of this, which is the space shuttle. And so I'm not going to belabor this again, but I think it's a great opportunity to learn. The space shuttle had a 10-year design life, roughly from 1971 until '81, first flight. And then we operated the shuttle for 30 years and we had a total of 135 launches during that time. We spent $192 billion on the shuttle program. And if you average that, it's $1.5 billion per flight, which is a lot more than was planned. But just yesterday here at the EPFL, we had the annual meeting of the Swiss Space Center. And one of the astronauts here, Claude Nicollier, and his colleague, Professor [? Mehron ?], gave a great talk about the Hubble and the servicing of the Hubble and the amazing things we've learned through it. And you guys at MIT-- Jeff Hoffman was part of at least the first servicing mission as well. So there are incredible things that were done by the shuttle. And so I think we need to acknowledge that, not just the fact that it was very expensive. It accomplished great things during its life. But you know, if we had a chance to do it again, would we come up with the exact same system, would we make all the same design decisions? You know, we lost two of them. Probably not; we could probably do something different. So here's what we wanted, and then finally, we got a much more complex system in the end. And particularly, my sense is that a lot of the things that made the shuttle expensive to operate during its life were things related to maintainability. Right? Reliability and so forth. So, the lifecycle properties, which I want to talk about next. So I'm just going to repeat the questions, so you guys can hear as well. So the question was about reusability. So the shuttle was-- the orbiter was fully reusable. The external fuel tank was not reusable. Right? And then the solid rocket boosters were partially reusable, because they had to be stripped down, you know, and rebuilt, essentially, for every launch. The idea of Blue Origin and then SpaceX Stage One, flying it back to the launch site, is exactly this: if you can make it reusable, then you can amortize the capex in that element over multiple flights. And it should drop, you know, the cost by a factor of five or 10 or more. Now, the devil's in the details, right?
So if you start and restart an engine multiple times, you can have a big transient every time. So was this system, in fact, designed to withstand these transients? Can it handle multiple starts? What does it do to the materials? And so forth. And so to really model this ahead of time, and then test it over the cycles, is the key. So in the shuttle, the big problems-- the two major subsystems, in the orbiter in particular, that made it expensive, and the difference between the top picture and the bottom picture, were, a, the TPS, the thermal protection system. You know, every tile has a different curvature and geometry, so really inspecting every tile, replacing tiles, making sure that the TPS-- because just one weakness in the TPS could basically be fatal on re-entry. And so the TPS was very difficult-- you know, as opposed to an ablative heat shield, where you just sacrifice the heat shield completely. And then the other was the main engine-- the shuttle main engine. Originally, the idea was to only do very deep inspections and disassembly of the shuttle main engine after the first few test flights, and then fly multiple times without having to really re-inspect or disassemble the engine. So they did the disassembly of the engines after the test flights, and then they just kept doing it for every flight, where the original intent was to only do it for the test flights. And so the shuttle main engines were the second big cost driver in operations. So reusability is a great idea, but the degree to which reusability actually happens in reality, that's really about the detailed design decisions you make. OK. So let me move on then. When it comes to really understanding the lifecycle, I do want to point you to the 15288 standard, which is the ISO 15288 standard, which has a fairly complete list of system lifecycle processes. So this is, essentially, the Systems and Software Engineering-- System Life Cycle Processes standard, that has a whole range of processes that are described in quite some detail. And they're not just the technical processes shown here on the right side-- we focused a lot on those technical processes in this class: the stakeholders, the requirements, the architectural design, which is essentially the conceptual design-- but there are also the project processes, right? Executing the project. The agreement processes-- so this would be negotiating contracts, you know, supply contracts, acquisition contracts-- and then all the organizational things you need to do to create the right organization to execute these projects. So what's nice about the 15288 standard is that it's a pretty broad standard. OK. So let me talk about the ilities, or the lifecycle properties, in particular. What they are, and how we can describe them, and especially how they relate to each other. And so there are papers that underlie this lecture, where you can get some more detail. And this is one of them. So this paper is called Investigating the Relationships and Semantic Sets Among Lifecycle Properties. And we published this about three years ago in Delft at the CESUN Conference. So the background here is the following: as you've seen, complex engineering systems live for decades-- some of them even for centuries-- and the ilities-- by ilities, we mean properties of systems that are not the primary functional properties. In software engineering they're often called nonfunctional properties.
And the thing that's tricky about the ilities is that you often can only observe them during operations. So you know, you can test systems in the short term, you can see, does it work? Does it fulfill its function? But whether certain ilities are present often only shows itself over time. And so most of the research that's been done on sort of quantifying the ilities has been looking at these properties one at a time. So the questions we wanted to answer here are: which of these lifecycle properties are more prevalent than others? You know, the top 20. And then especially, what's the relationship among lifecycle properties? Do they form what we call semantic sets? And then how could you use this information? And so here, we're going to do this using two different methods. The first method is what I call prevalence analysis. We're going to look in the literature and on the internet at how frequently these lifecycle properties show up, are mentioned, how much we know about them. And then the second method is a cognitive method, where we ask people to give their understanding of the lifecycle properties and put them into a hierarchy, and then we'll compare the results. So here are essentially the results from the prevalence analysis. And this is ranked according to the number of journal articles. This is scientific papers written where this particular keyword, like quality or reliability, shows up in the title or in the abstract of the paper. And you can see this is in units of thousands. OK? So quality is number one. Right? Quality is number one. The most scientific papers are written about quality. And you can see, it's almost a million journal articles. This is-- I'll show this to you over time, but 1884 is the first year. And the databases that we used for this are called Compendex and Inspec. These are actually combined-- if you go to a website called engineeringvillage.com, this is sort of a master database for scientific papers in engineering. That was the basis for this. Number two, reliability. Number three, safety. Then flexibility, robustness, and so forth. And the other bar, the gray bar, is if you Google essentially for this particular keyword-- and, you know, keep in mind, this was done about five years ago-- this is the number-- the millions of hits that you will get. And so, you know, there's a factor of 1,000 difference here between journal articles and hits on the internet. But still, you can sort of compare the two. And so, in some sense, the black bar is the totality of scientific knowledge about this lifecycle property. And then the gray bar would be the amount of information or usage of that knowledge, at least as far as it's represented on the internet. And what's interesting is, in some cases the black bar is smaller than the gray bar, which means that common usage is leading. So sustainability would be an example. Everybody talks about sustainability; companies say, our system is sustainable because, you know, it uses less resources, it produces less emissions. But the actual amount of scientific knowledge, the actual amount of research as to what really is sustainability and how do you design for it, is actually smaller than the usage. And then there are areas where it's the opposite. Like, for example, modularity. Right? You see that in modularity. So in the relative sense, academic interest is leading. We see there's quite a lot of literature on modularity.
In mathematics, you know, modularity in software, modularity in design-- but most people-- you know, modularity per se is not so interesting and so exciting to the general public. So keep in mind that for some ilities there's an imbalance, essentially, between the scientific knowledge, our understanding of how to design for it, and, you know, how frequently at least the keyword is used. Now here, this is a little bit more-- this is a little bit harder to see. This is essentially looking at these lifecycle properties over time. So what this graph shows you is, starting here in 1884, cumulatively the number of journal articles published about each lifecycle property. And roughly the way you can think about this is that there are some lifecycle properties, the top four or five, that we've been actually working on for a long time. Even safety, you know, there are some interesting articles in the 1890s about safety, for example, in mines. You know, coal mines. There's an article from 1890 about the impact of lighting-- better lighting-- on safety and productivity. And so they show that just providing better lighting actually has a dual benefit: fewer accidents, fewer fatalities, and better production output. And this was a scientific study that was done by comparing data in different mines and actually performing some experiments. Then you have a group of lifecycle properties that we only started really thinking about and publishing about, I would say, around World War Two. Such as usability and maintainability. And my interpretation of this is that especially during World War Two, logistics made a big difference. How easy was it to use different weapons? How easy was it to maintain equipment? It became a huge determining factor in the outcome of the war. You know, for example, in the North African theater, you know, tanks, trucks being exposed to the sand and so forth. So people-- the military-- really started thinking heavily about the importance of maintainability in the design of these systems. And after the war, then, a lot of these concepts like usability and maintainability started spreading into general civilian life and products and so forth. The third type of ilities are the newer ones that we've done research on just since the 70s, in the last 30 years. So here in this category I would put things like sustainability, recyclability, evolvability, even interoperability-- which means the ability of systems to connect together and work across system boundaries. Those are pretty recent lifecycle properties, and we're still actively researching them. So another way to show this is by essentially making a network of these lifecycle properties. And this is still all using this prevalence data. Essentially what you see here is a network diagram that has these lifecycle properties in a network relationship. Let me just explain: so the size of the nodes relates to how much knowledge we have-- essentially the height of the bars that we saw earlier. And then the thickness of the line relates to the strength of-- really, how closely these two ilities are related. And the way this is calculated is using the so-called 2-tuple correlation. So you take, essentially-- you look at articles that have, for example, reliability and maintainability in the same article. Right? And then you divide that by the total number of articles on reliability and/or maintainability.
And that's a ratio between 0 and 1. And then this graph was produced with a cutoff strength of 0.1, right? So if more than 10% of articles list these two properties together, then there will be a line here. And the stronger, the thicker the line is, the more often these concepts are mentioned in the same article or the same piece of work. So that reflects the strength of relationship. Now, what's interesting is, when you first look at this, you don't see much. But after you look at this for a while, you start to realize a few things. First of all, in the center of the graph we have the classic ilities of engineering: quality, safety, reliability, and I would argue flexibility as well. Those are the top four that we saw before. And then around the periphery of the graph, we have lifecycle properties that are more recent. We haven't really thought about them too much; we don't fully know how to design for them yet. And if you look at them in groups, I will argue that there are three major groups here. So the first group, in the upper left, is things like maintainability, durability, reliability, quality, and so forth. So this is all about: is the system made well, with high quality, particularly early in its life. Right? Then we have a group here on the right, and this is about: is the system easy to change. Is it easy to change the system configuration-- flexibility, extensibility, modularity and scalability, interoperability. Those are all different sub-flavors, if you want, of being able to modify the system over its life. And so that's a group of ilities that goes together. And then the third one is resilience, robustness, quality, safety. And so this is really related to performance of the system under different types of uncertainty. You know, either environmental variability or failures in the system, and the ability of the system to withstand, or at least perform-- you know, have good residual performance-- even in the face of failures. So that's, I think, an important way to think about it. And when you're writing requirements for systems-- like, the system should be resilient-- then this helps you understand, well, what are the other properties that are linked to it and that maybe support that? OK, any questions here or at MIT about-- this is sort of method one: get data about the lifecycle properties and put them in relation to each other. Any questions? Johanna, any question there? AUDIENCE: I have a question. So is this chart that you have now actually used in the initial design and, like, conception, con-ops, to, like, try to tease out the interdependencies, or is it too abstract and just more of an educational tool? PROFESSOR: It's really at this point-- you know, this is fairly recent. And this was just done in the last few years. So I don't think this has fully penetrated systems engineering practice yet. But the point that I want to make here is that there's a huge gap right now between when people in briefings, you know, either at the Pentagon or at a corporate headquarters, say, you know, we want a sustainable product, or-- we're a software company, we do optical networking and we want to have the most resilient design in the industry. You know, they'll put that as a goal for the project. The question then is, well, OK, that's fine. But what does that really mean? What does it really mean, resilient?
You have to operationalize that definition such that you can derive from it lower level requirements that you can actually go and design to, that you can test for. So what this helps you to do is understand what the supporting elements are that will be related to resilience. You know, what are supporting concepts, supporting lifecycle properties, that this property that you're looking for is linked to? Right? So it's an evolution-- you know, we know how to design now for speed and energy efficiency, and, you know, the sort of things that were really hard to do 20, 30, 50 years ago. It's pretty standard practice now. You know, how do you design for optimal interoperability? We don't quite know yet, but we're finding our way. And so what I believe is that especially the lifecycle properties on the outer periphery of this chart, those are the ones that need more work, and those are the ones that we're really learning how to operationalize. Does that make sense? AUDIENCE: Yeah, thanks. That makes sense. PROFESSOR: OK, good. Let me talk about method two. So this was basically trying to get at-- how do people, how do humans-- there's semantics; semantics means the meaning of words, right? Semantics is the science of the meaning of words. How do they interpret these lifecycle properties? And so humans have a deep and possibly varied understanding of the semantics of these lifecycle properties. So what was done here was that a list of 15 lifecycle properties, many of them overlapping with the ones you saw earlier, was presented, and then the question that these four groups-- 12 participants, four groups-- had to answer was: is there a hierarchy here? Are some of them higher level lifecycle properties, and are some of them lower level properties that support these? So there was a round one: find the parent-child relationships, describe these. Interviews, and then a second round. So here are essentially the results of the first round. So, four different groups, and each of them basically came up with its own version of a hierarchy. They didn't talk to each other; they were firewalled from each other. And in all cases this notion of value robustness came out on top. So value robustness means the system should deliver value, right, to the stakeholders, despite failures, you know, environmental changes. So value delivery of the system is the top, the most important thing. Then what's different here is how you achieve that. So for example, you see group one had robustness and changeability. So robustness typically means even if you don't make any changes to the system, it should continue, you know, it should be survivable. It should be versatile. Changeable means you actually modify the system over its life. And then we have lower level properties like modularity or reconfigurability. And you know, there were differences between the groups, but not as big as you might think. So as a result of the second round, this so-called means-to-ends hierarchy was constructed. So "means" means, you know, these are the enabling lifecycle properties, and "ends" means this is the final-- sort of, this is really what you want to achieve. And so this was the result of that. So again, at the top we have this notion of value robustness. And this is achieved by a combination of survivability, robustness, and changeability. And then these, in turn, are achieved by lower level lifecycle properties.
And then at the lowest level we have things like interoperability, modularity, reconfigurability, and so forth. And the difference between the solid lines and the dashed lines here is that if it's a solid line-- and these are directed arrows-- that means that three or four out of four groups, right? So the majority of groups had this as a particular parent-child relationship. And if it's a dashed line, it means only two out of four. And if it's only one out of four, it's not shown here. OK? So this is kind of a combined result across the four groups. So you know, what can we take from this? So first of all, lifecycle properties are absolutely critical. You'll find them in mission statements; you know, you'll find them even in advertising. Right? A lot of companies say, we have a robust solution, we have a sustainable system, we have resilient networks. So it really is a huge, huge selling point. It's very critical. What I encourage you to do as systems engineers is take a critical look at this and say, well, how resilient is resilient? You know, how many subsequent failures of the system can you tolerate? Sustainability: what does that mean in terms of kilowatt hours per hour of usage? You have to get down to the details to start quantifying what these lifecycle properties mean and compare them among systems. The two methods that we just looked at-- so one is the prevalence analysis and the other one is the human semantic exercise-- despite their differences, came to similar high level conclusions. So some ilities are closely related to each other and form semantic sets, meaning they're tied together by both synonymy and polysemy relationships. Synonymy means they're synonymous-- they essentially mean the same thing. Polysemy means it is one word, but it can have multiple possible sub-meanings. Right? So, the idea of groups of semantic sets. And those groups essentially are: robustness-- so this is the ability of the system to perform its job despite either internal or exogenous disturbances. Right? That's robustness. Flexible or changeable, which means that you can modify the system easily. You know, if you operate the system for a while and then you realize, ah, I need the system to do something else. Or I need to adapt it, or make it bigger, or make it smaller, or add some function. So you would modify the system, and that's flexibility or changeability. And then resilient and survivable is very specifically the ability of the system to continue performing despite failures or attacks. Right? That's what resilience really means. And so those seem to be the big three semantic clusters that we see in lifecycle properties. And then the third point here is that there appears to be a hierarchy of lifecycle properties with two or three levels, where we have the lower level properties of systems like modularity, for example. Really, your customer probably does not care about modularity, right? If you advertise the modularity of the product, some of the more educated, some of the more technically savvy customers may understand what that means. But most of your customers really won't appreciate modularity, because it's kind of a low level, form-oriented, technical property of the system. But what they will appreciate is the ability of the system to be reconfigured or adapted for different purposes. So interoperability, modularity, et cetera, are low level lifecycle properties that act as enablers of higher level lifecycle properties.
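To make the 2-tuple correlation behind the network diagram concrete, here is a minimal sketch in Python. The article sets are invented placeholders, not the actual Compendex/Inspec data; the point is just the co-occurrence ratio and the 0.1 cutoff described above.

```python
from itertools import combinations

# Hypothetical data: for each ility, the set of article IDs that mention it
# in the title or abstract. (The real study worked from Compendex/Inspec counts.)
ility_articles = {
    "quality": {1, 2, 3, 4, 5, 6},
    "reliability": {2, 3, 5, 7, 8},
    "maintainability": {3, 5, 8, 9},
    "modularity": {10, 11},
}

CUTOFF = 0.1  # draw an edge only if >10% of the articles mention both terms

edges = {}
for a, b in combinations(ility_articles, 2):
    both = len(ility_articles[a] & ility_articles[b])    # articles with a AND b
    either = len(ility_articles[a] | ility_articles[b])  # articles with a and/or b
    strength = both / either if either else 0.0          # 2-tuple correlation, in [0, 1]
    if strength > CUTOFF:
        edges[(a, b)] = strength  # this becomes the line thickness in the diagram

for (a, b), s in sorted(edges.items(), key=lambda kv: -kv[1]):
    print(f"{a} -- {b}: {s:.2f}")
```

The node sizes in the diagram would then simply come from the per-ility article counts, len(ility_articles[x]), mirroring the bar heights from the prevalence chart.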
And so future work in this area is both to just apply these methods to a broader set of ilities, a larger group of test subjects, and more data, but also, from a practical standpoint, to operationalize better. How do we write requirements? How do we actually design for lifecycle properties like resilience, flexibility, changeability? How do you really design for them? So, go from the keyword to real engineering by operationalizing the sub-attributes or factors in the system. OK? So that's a quick summary of life-- you know, I could go on for hours about this. I'm very passionate about this topic, and I will say that for people who have dealt with large complex systems-- you know, whether you're operating the transportation system of a city, or in airline operations, or you're running the IP infrastructure of a major corporation-- these words are real. These words are, you know, real dollars, real challenges. This is really where the action is in a lot of these large complex systems. OK. Any comments or questions about lifecycle properties? What they are, how they relate to each other, why they're important. Voelker, did you want to maybe say something? OK. All right. So let me talk about-- try to make this a little more real. Let me talk about a case study, a very specific case study about communications satellite constellations. And this is the second paper that underlies today's lecture. And so let me give you a little context for this first. So this work here was originally published about a decade ago, in March 2004. And that was a few years after Iridium and Globalstar-- these are the two well-known communication satellite constellations-- had been launched. And I have to give you a little context: when I first came to MIT in the mid-1990s, there was huge interest in this area of satellite constellations. You know, commercially, scientifically-- in fact, the impression was there were so many applications for satellite constellations filed that we were going to have so many satellites up there you wouldn't even see the sun anymore. It was just like thousands and thousands of satellites. And Iridium and Globalstar were really the first two constellations that were fully developed and launched, and both of them failed commercially within a very short time, a couple of years. And so after this happened, the whole market and interest in satellite constellations collapsed for a long time, until about two or three years ago. Now people talk about these constellations again. You know, constellations of [INAUDIBLE], Iridium Next. You know, there's sort of a new wave of enthusiasm for constellations. So this paper, and this case that I want to tell you about, is really about the first wave. And in terms of the ilities, the ones that I'd like to explain to you here are flexibility and scalability. Rather than thinking about a system as something that you design and build all at once, how do you design a system such that you can gradually deploy it? We call that a staged deployment approach. And what's the benefit of that? So here are some pictures-- you see on the right side, this is what the original Iridium satellites looked like. They actually used phased array antennas-- and actually, both used phased array antennas. So you have individual elements here. And by differentially phasing the signal, you can actually steer the beam. This was a very new technology at the time. And then the Globalstar satellite is shown in the lower picture.
Here is a little bit of that data. So both of these were launched in the late 1990s. Iridium has 66 satellites, Globalstar 48. Iridium is a polar constellation, so the satellites go almost directly over the poles. Globalstar is a Walker constellation, so inclined. It doesn't quite give you full global coverage-- it's about plus or minus [? 78 ?] degrees, so you can't use Globalstar at the poles. The altitudes are a bit different, too. Iridium is at 780 kilometers, while Globalstar is at 1,400. Which-- I know at least one or two of you are working on the Van Allen-- the Van Allen belt mission. Right? The CubeSats to measure the Van Allen belts. Somebody mentioned that today, who was that? No? Did I hear that wrong? It was mentioned, right? So who was that? Arnold, and he's not here. Ah, see. So at 1,400 kilometers, you're actually starting to push the lower edges of the Van Allen belt. So one of the big-- you know, in some sense, it's easier to be higher, because you need fewer satellites to cover the whole earth. But the higher you go, the more exposed you are to the radiation environment. You get closer to the Van Allen belt, so that's the big trade-off there. You know, the mass of the satellites: they're between 450 and 700 kilograms. Transmitter power, around 400 watts. You know, which is not that much, if you think about it, right? That's four very strong light bulbs, the old-style light bulbs. And then what's very different, again, is the multi-access schemes. So Iridium used time division multiplexing, and Globalstar, which was supported by Qualcomm, used essentially CDMA. So you don't chop your frequency band into separate channels; you use the whole frequency band, and then you use a pseudo-random access code that essentially de-convolves the signal for each channel. The number of channels: about 72,000 for Iridium, about 120,000 duplex channels for Globalstar-- duplex means you can carry on a two-way conversation, as opposed to just being asynchronous. And then you can see the data rates, quite low per channel: 4.8 kilobits per second for Iridium; 2.4, 4.8, or 9.6 for Globalstar. It's enough for having a conversation. And total system cost: Iridium was about $5.7 billion and Globalstar about $3.3 billion, not including the cost of the ground stations. Both went bankrupt relatively quickly after they launched. However, they've been operating really ever since then, right? Since this time. Iridium Next is currently under development and is scheduled to launch in 2017. Globalstar is publicly traded and actually valued as a company at $1.9 billion. So you could almost argue that they were both a decade ahead of their time. And I want to tell you a little bit about the story of, especially, Iridium. So here are a couple of press releases, things that have been written in the press. So look at this one, 26th of June, 1990: Motorola unveiled a new concept for global personal communication, based on a constellation of low earth orbit cellular satellites. August 18th, 1999, nine years later: last week Iridium LLC filed for bankruptcy court protection; lost investments are estimated at $5 billion. So the question is, why did it happen? The technology actually worked quite well; it was not a technological failure, it was a business failure-- but I would argue a systems failure, a failure to think about the problem differently. So the fundamental challenge is to properly size the capacity of a large system.
So if you're designing, you know, a car factory or a new power system for a future uncertain demand, it's very difficult to do this, because demand is uncertain. So is it better to oversize the system, or are you conservative and you make it smaller? Market assumptions can change, right? In seven to eight years. So essentially the V, right? Getting back to the V: for each of those two systems, Iridium and Globalstar, it took them essentially a decade, almost 10 years-- right?-- to go across the whole V. And when you make your requirements, your stakeholders, all the stuff in the upper left portion of the V-- there are so many years that elapse between when you make those assumptions, you write those requirements, and when you actually go to market, that a lot of things can change. And that's fundamentally the challenge here. Just to illustrate this for you, showing some data-- this is cellular, this is not space, this is on the ground. Cellular subscribers for mobile phones. I know this is hard for your generation to sort of understand, but we actually didn't have mobile phones, or there were these clunky bricks in your car. You know, it's really remarkable what happened. So look at this data: in 1991, which is when the system was just being developed, there were less than 10 million mobile phone users in the US. It just wasn't-- it was very expensive, it was just not widely-- that technology hadn't been there, the networks weren't there. You know, so the green bars are the forecast that was done in 1991 as to how quickly the mobile user market would evolve in the US. So their prediction was that by 2000, a decade later, there'd be just shy of 40 million users. OK? Now the dark blue bar is the actual evolution. So actually by 2000-- you know, the US now has 310 or so million inhabitants-- so by 2000 there were 120 million users. So the forecast that was done 10 years earlier was off by a factor of three. The ground-based, the terrestrial mobile networks, developed three times faster than had been predicted. Now that's great for the terrestrial people, right? AT&T and Comcast-- not Comcast, Qualcomm, et cetera. But the problem is, of course, that because ground-based communications was so much easier, a lot of the market that had been anticipated for the satellite-based communications was essentially eaten away by this competitor, by the ground-based competitor. VOELKER: [INAUDIBLE] PROFESSOR: Yeah, OK. Can you still hear it, MIT, are you still with us? AUDIENCE: Yes. PROFESSOR: Yes, you are. OK. Voelker wants to say something. VOELKER: Actually, the figures that you showed [INAUDIBLE] and for once, Americans did not look over the pond. Because at that time in Europe, the GSM systems came on really strong and very quickly. In the early 90s, you could already have your SIM card in your cell phone [INAUDIBLE] and in fact in America then, you could only buy the telephone with the SIM card in it-- you had to buy the whole system. And so as the system was controlled by the large companies and not by the users, the large companies-- Motorola-- thought that they could impose their satellite system on the markets. And in this case, contrary to-- video recorders, hi-fi systems earlier-- Europe went much faster than America. And actually the European GSM market-- cell phones, as you know them-- went much quicker. And then Nokia, which is actually European, overtook, and it took probably the next decade for Americans to catch up.
So finally, here this was a combination: people had the product and they just couldn't get it in the US. So they were limited to their own markets, and couldn't sell their products to the rest of the world as they had done in the past. PROFESSOR: Yeah, good point. And in fact, you know, one of the things that I think both Globalstar and Iridium had to do-- and this was a very late decision, just in the last couple of years before launch, '97, '98-- is to make their handsets dual use. Such that if you were in an area where there was a cellular network, the phone would switch to that, because that would be cheaper, and if you didn't have cellular network access, it would automatically try to communicate with the satellite. So there were a lot of issues that came up by essentially the markets-- I think in Europe, to your point about GSM, and in the US-- just developing quite differently than had been anticipated. OK, so I want to give you a little bit about, sort of, the economics. In the end a lot of this is driven by money, by economics. And so this is satellite economics 101. The key question here is how expensive is it-- what's the cost, and then what is the price that you can charge, for one minute of service, right? One minute, one unit of service. And so look at this equation here. This is CPF, which stands for cost per function. And so in the numerator we have the lifecycle cost, which is your initial investment. And then we will essentially capitalize that with some interest rate k. Plus then, for each year of operation, you have the operational costs for that year, which you have to add. So that's your initial-- that's your development costs, your manufacturing costs for the satellite, including the launch costs. And then the ops costs would be operating your ground stations, your networks; any replenishment costs would also be in your ops costs. And then you divide this by what's called here the number of billable minutes. That doesn't mean you're actually going to bill people; it just means they are potentially billable minutes. So that's the capacity of the system, C_s, times, you know, 365 days, times 24 hours, times 60 minutes, times the load factor. So the load factor is essentially the capacity utilization of the system that you anticipate. OK. So that's the basic equation for calculating CPF, cost per function. Plug in some numbers here. These are the numbers that had been assumed: you know, a $3 billion investment, 5% interest rate, $300 million per year of ops cost, 15-year lifecycle-- this is the capital T, over 15 years. 100,000 channels-- that's your capacity. Number of users-- in this case, the load factor is simply the number of users times the average activity per user. In this case, it seems to be 1,200 minutes per year, about 100 minutes per month. OK? So that gets you a CPF of $0.20 per minute. And based on that, the business case was made based on these kinds of numbers. And so 3 million users-- just 3 million subscribers, right? Not 3 million users at the same time; that number would be much smaller. That number can't be bigger than 100,000, because that's your capacity. So for example, if you run a fitness club, right? At any given time in your fitness club you can have 50 or 100 people actually working out, but your number of customers, your subscribers, should be 1,000 or 2,000. And if they all show up at the same time, you're in big trouble, right?
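For readers who want to check the arithmetic, here is a minimal sketch of the CPF calculation in Python. The exact capitalization convention is an assumption on my part-- compounding the initial investment at rate k over the full life T, then dividing by total billable minutes-- but with the numbers quoted in this lecture it reproduces both per-minute figures discussed here.

```python
def cost_per_function(c0, k, c_ops, T, n_users, minutes_per_user_per_year):
    """Cost per billable minute of service over the system's life.

    c0    -- initial investment: development, manufacturing, launch ($)
    k     -- annual interest rate used to capitalize the investment
    c_ops -- operations cost per year ($)
    T     -- operational lifetime (years)
    """
    total_cost = c0 * (1 + k) ** T + T * c_ops  # capitalized capex + ops costs
    total_billable_minutes = T * n_users * minutes_per_user_per_year
    return total_cost / total_billable_minutes

# Business-case numbers from the lecture: $3B investment, 5% interest,
# $300M/year ops, 15-year life, 3 million subscribers at ~1,200 min/year each.
print(cost_per_function(3e9, 0.05, 300e6, 15, 3_000_000, 1200))  # ~0.20 $/min

# With only 50,000 subscribers, roughly what materialized after a year:
print(cost_per_function(3e9, 0.05, 300e6, 15, 50_000, 1200))     # ~12 $/min
```

Note that almost the entire numerator is fixed before the first subscriber signs up, which is why the result is so violently sensitive to the subscriber count in the denominator.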
So that's the difference between the number of users, or the number of subscribers, and the number of active users at any given time. That's a big part of managing these kinds of systems. However, what actually happened is that the number of actual users grew much slower. So if you plug in some different numbers-- let's keep all the numbers the same, except for the subscriber base, the number of users. In this case we're going to assume 50,000 users instead of the three million. This is closer to what they had after about a year of operation. Now your CPF goes to $12 per minute, which is noncompetitive. Right? Except for some extreme, like, military applications, or making some emergency phone calls on an oil rig, you know, on the ocean, most people-- now and at that time-- would never pay that for one minute of service. And so that was the fundamental problem: the user base did not materialize as fast as planned. Therefore this cost per function was way higher, and they did charge, you know, $3 to $5 per minute of usage of the system, and as you try to squeeze your existing users more, you're not going to get the ramp-up and scale-up in the system that you need. That was the fundamental problem with the economics of the system. So let me talk a little bit about the design decisions, the conceptual design, of what this design space looks like. So, fundamental-- oh, the other thing I should mention to you: what was interesting is, after the bankruptcies of both Iridium and Globalstar, the chief engineers for both of these systems took refuge at MIT. Essentially, they came to, like-- Joel Schindall, who was the chief engineer for Globalstar, great guy-- both very competent people-- came to MIT-- still there, still a professor there. And then Ray Leopold, who was one of the three architects for Iridium, also came to MIT during that time. So I had extensive discussions with them, and what you see here on slide 25 is one result of those discussions. Which is, the key design decisions that they had to make. Fundamentally-- you won't be surprised to see this, this is the magic number seven, right?-- there are seven key design decisions when you design a satellite constellation for communication purposes: constellation type-- polar or Walker; orbital altitude; minimum elevation angle above the horizon that you can communicate at; satellite transmitting power; the size of your primary antenna; your multi-access scheme-- so this is time division or code division multiplexing. And then the last one is about the network architecture-- do you have inter-satellite links, yes or no? Inter-satellite links means satellites can talk to each other. And Iridium chose that, Globalstar did not. So the Globalstar satellites cannot talk to each other directly in space. They can only talk-- it's a bent-pipe system-- to a ground station. OK? So this is like the morphological matrix that you learned about: you just pick design decisions in this morphological matrix and you come up with an architecture. In fact, a full factorial search of the space would reveal 1,440 different satellite architectures. So that's on the input side. What's the output vector? What do you care about in terms of output of the system? Well, first of all, performance. So the performance-- this is for voice communications; in a sense it applies to data communications as well. It is your data rate per channel, right? 4.8 Kbps. Your bit error rate-- like, what is the average number of bits that are wrong? 10 to the minus 3.
That's actually not a very stringent requirement. And the reason is, this is for voice communication; this is not for sending, you know, commands to a spacecraft going to Mars. If you send commands, it would have to be 10 to the minus 10, or 10 to the minus 9. Much, much better bit error rate. But this is OK for voice. And then the link fading margin, 16 dB. This is the strength of the signal, which will dictate whether or not you can use the phone under trees or in buildings. So you want this number to be higher, but the higher it is, you know, given the power you have on the satellite, the fewer channels you have. So there are trade-offs. So what was done here is to keep the performance fixed, so that you can compare architectures, compare apples to apples. Then we have capacity, which is the number of simultaneous duplex channels. And then finally, the lifecycle cost of the system, which includes research, development, test, and evaluation; satellite construction and test; launch and orbital insertion; and then the operations and replenishment. So if a satellite fails in orbit, either you have to already have prepositioned a spare, or you're going to launch a replacement. And this actually happened in both cases, both constellations. So in order to then connect the input to the output, you need to build a simulator or a model of this system. And this is a high-level view of what that model looks like. So we take our input, our design decisions, and certain constants that we assume-- a constant vector-- and cascade this information: the constellation module takes the altitude and the minimum elevation angle, and produces-- T is the number of satellites and P is the number of orbital planes-- so this is sort of orbital dynamics. The spacecraft module calculates the satellite mass; the satellite network module builds essentially the communication pathways. The link budget will calculate the capacity of the system, you know, given those performance constraints. The launch module will determine how many launches you need, from where. And then the cost module basically calculates the total cost of the system, the lifecycle cost, and finally you get essentially a trade-off of lifecycle cost versus capacity of the system. Let me just show you some-- you know, you say, well, that's tying together a lot of information into a multi-disciplinary model. So what kind of information do you have? You have a mix, really, of physics-based models-- so this is a very well known equation, the Eb/N0; this is the energy per bit over noise ratio. This is a closed-form, physics-based equation that tells you how much energy is there per bit-- what's the signal to noise ratio on a per-bit basis. And this is a function of transmitter power, receiver and transmission gains, various losses in the system. And then some of the equations-- some of the information-- is empirical. For example, the relationship between spacecraft wet mass and payload power. You know, if you wanted to have a closed form, or you wanted to have more detail, you would have to almost build a CAD model, right? A separate model. You'd have to-- for each of the 1,440 architectures, you'd have to manually construct an individual detailed design. And that's not feasible, so what you do instead is you use some prior data-- and you can see here a scaling relationship. It's not perfect, but we have error bars, we know how good it is.
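For reference, a standard closed form of that Eb/N0 link equation-- the exact bookkeeping of the loss terms varies from text to text, so treat this as a sketch rather than the paper's exact formulation-- is:

$$
\frac{E_b}{N_0} \;=\; \frac{P_t\, G_t\, G_r\, \lambda^2}{(4\pi d)^2\, k\, T_s\, R_b\, L}
$$

where \(P_t\) is the transmitter power, \(G_t\) and \(G_r\) are the transmit and receive antenna gains, \(\lambda\) is the carrier wavelength, \(d\) is the slant range, \(k\) is Boltzmann's constant, \(T_s\) is the system noise temperature, \(R_b\) is the data rate per channel, and \(L\) lumps the remaining losses, including the fading margin. Holding the required \(E_b/N_0\), data rate, and margin fixed is what lets the model trade transmitter power and antenna size against the number of supportable channels.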
So satellite wet mass is a function of transmitter power and, in this case, propellant mass. So, two kinds of equations. Benchmarking. Once you have this model, you need to ask the question: how can I trust this model? Does it give me reasonable answers? And in this case, benchmarking is the process of validating a simulation by comparing the predicted response against reality. So just quickly here, showing you four kinds of data. One is the simultaneous channels of the constellation-- this is the prediction of capacity. You can see it's pretty good. In this case, the model is actually a little conservative. So the blue bars are the actual planned capacities. The red or magenta ones are simulated, and you can see that the simulation under-predicts, slightly, the true capacity of the system. Lifecycle costs: you saw that Iridium was just a bit more than $5 billion, and Globalstar was between $3 billion and $4 billion. And so here we're in the right ballpark. And then in terms of satellite mass and the number of satellites in the constellation required, we matched that very closely. And the reason for this is, fundamentally, this is just geometry. Right? If I can tell you the altitude and the minimum elevation angle, and I'm telling you I need global coverage and I want dual redundancy-- so you can always see at least two satellites-- it's just geometry to be able to figure out how many satellites and how many orbital planes you need. And that's why the model, or the simulation, and reality match very, very closely. So this gives you some confidence that this model is reasonable. So what you can do with it now is what we call trade space exploration. Which is, in a sense, what you did for the [INAUDIBLE] competition. So this graph shows you the lifecycle cost over those 15 years versus the global capacity of the system. Each of these blue dots represents one of those 1,440 architectures. And you can even see on this where Iridium actual versus simulated falls, and where Globalstar actual versus simulated falls. So both of them were actually off the Pareto frontier, which was interesting and led to quite some discussion. And the Pareto frontier itself was, of course, very useful. One of the reasons that Iridium is not on the Pareto frontier is that, fundamentally, polar constellations are inefficient. If you think about it, when the satellites are crossing over the poles, they're very close to each other. And they're actually not crossing exactly over the poles, or they could actually collide, right? So you offset them slightly to avoid collisions. But some of the satellites actually get turned off when they cross the poles. So you're not utilizing your assets super efficiently in a polar constellation, which is one of the reasons why they are not on the Pareto frontier. Now, the way you would use this in a traditional systems engineering approach-- and the traditional systems engineering approach means: give me the requirement, write down the requirements, and then find the minimum-cost design for that requirement. So a requirement could be, we need a capacity of 50,000, for example, and then you find its intersection with the Pareto front, and that's the minimum-lifecycle-cost design that gives you that capacity. Right? And that's the system you pick, and that's what you go and build. And that's essentially what they did. The problem with this, if there is high uncertainty, is that the true requirement is kind of unknown.
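Here is a minimal sketch of that kind of trade space filtering-- finding the Pareto frontier of (lifecycle cost, capacity) points and then the cheapest design meeting a capacity requirement. The architecture list is made-up placeholder data standing in for the 1,440 evaluated architectures.

```python
# Each architecture as evaluated by the simulator: (lifecycle_cost_$B, capacity_channels).
archs = [(5.2, 110_000), (3.5, 95_000), (1.8, 40_000),
         (2.4, 70_000), (4.0, 70_000), (1.2, 15_000)]

def pareto_frontier(points):
    """Keep points not dominated by any other point (lower cost AND higher capacity)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] >= p[1] and q != p for q in points)]

def min_cost_for_capacity(points, required_capacity):
    """Traditional approach: fix the requirement, take the cheapest design meeting it."""
    feasible = [p for p in points if p[1] >= required_capacity]
    return min(feasible, key=lambda p: p[0]) if feasible else None

front = sorted(pareto_frontier(archs))
print("Pareto frontier:", front)
print("Cheapest design for 50,000 channels:", min_cost_for_capacity(front, 50_000))
```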
The market will tell us in the future what the true capacity should be. So how do you deal with this? Well, you could say, we're going to be on the safe side, right? We're going to oversize. Because if the true demand, the true capacity that you would need, is higher than what you guessed, your system is going to be under capacity, right? Your system is going to be undersized for what it should be. And so what happens in practice? What do you think happens in an under-capacity situation? What would you say? The system is too small relative to the market that you're in. You're not capturing as much as you can, meaning a competitor will probably capture it. I mean, I guess you can change your pricing policy, but fundamentally, you're missing out on market opportunity. The other situation, which is actually what happened, is that the demand is much less than we anticipated. Your system is oversized, and remember, once you launch a satellite constellation, it's not like a fleet of taxis, right? Where you just park them in a parking lot. The fixed cost is very high. So if demand is below capacity, all this investment here, this lifecycle cost, has been wasted because you oversized the system. And the challenge is that the true requirement is kind of like this-- there's this probability density function. Right? It's a probabilistic requirement, essentially. And there's no set of experiments-- because of all the [INAUDIBLE] uncertainty, it's not a systemic uncertainty-- there's no set of experiments you could do today that will help you refine the requirements. Now, they did do market studies, but again, those market studies were years ahead of when the actual system was launched. So they were unreliable, fundamentally. So what I'm arguing here is that in this kind of system, where you have large uncertainties as to the true requirements, you shouldn't just guess and then put billions of dollars on a guess. This is like playing in the casino, right? Essentially. What you should do is think about the problem differently. And in this case, the answer is staged deployment-- or one of the answers is staged deployment. So you build the system such that it can adapt itself to the uncertain demand. You build, initially, a smaller, more affordable system, but the system already has built into it the flexibility or scalability to expand if needed-- but only if needed. And there are two major economic advantages to that. One is that you don't need to spend all the capital up front, and so you're deferring capital investment. And you retain the decision freedom to reconfigure or not, and that's typically what we call a real option. You've created a real option for the future. So the question, then, is, well, how valuable is that? Is it worthwhile doing this staged deployment approach? There are probably different ways of doing this, and I just want to share with you-- and this is from the paper-- how this was done. So step one is you partition your design vector. You basically decide which part of the system is flexible-- because you can't make everything flexible, typically-- what part is flexible, and what part is fixed, the base that you can't change. So in this case the idea is that the satellites themselves-- the design of the satellite, the transmitter power, the protocol, and the network architecture-- should be fixed, right? Those things are difficult to change. We're going to allow the astrodynamics, the actual shape of the constellation, to be flexible.
So keep the satellites the same, only change the arrangement in space. So what you do is you partition the design vector into the flexible and inflexible parts. And when you do this, in step two, what's nice is you can then actually find families of designs, right? You can find families of designs-- we have a little lag here-- families of designs that share the constant parts, that have them in common. So what's shown in this graph here-- this is, again, our design space, kind of zoomed in a little bit more-- every one of these points that are connected with these lines uses the same type of satellite [INAUDIBLE]. Same transmitter power, same antenna diameter, and all of them have inter-satellite links. So effectively, you're partitioning the design space into subsets that share common features, and we call this a family of designs. And the idea is to start small, on the left side, and then grow the system over time, right? But only if needed. So when you do this, your 1,440-strong design space turns out to be decomposable into 40 different paths. Now, which path would you choose here? Well, you want to be as close to the Pareto front as possible. So this is an example of one of these paths. In this path, we would start with 24 satellites, and then we would actually move them to a lower altitude and add 30 more in step two, and then you would gradually grow the constellation over time. You see that? This is actually not too different from how GPS was done, right? The GPS constellation was deployed in phases, but it was not market driven; it was just sort of phasing the development for risk reduction and spreading out the capital. The other thing that's interesting here is that as you grow the constellation, it moves further away. It moves further away from the front. So when it's small, this particular [INAUDIBLE] is close to being optimal, and as you scale it up it becomes more sub-optimal. And there are other paths that have the opposite behavior, where when small it's kind of suboptimal, and as you make it bigger, it gets closer to optimality. Pretty interesting. So that's step two: find the paths. Step three, you now have to model the actual uncertainty. And there are different ways to model uncertainty. In this case, what we used was a GBM model-- geometric Brownian motion. This is well known in physics and statistics, and it has been applied to the stock market, right? And the idea here is that the demand behaves like a particle in a fluid, right? It's kind of moving in an unpredictable fashion. So this is the basic equation-- this is a discrete version of geometric Brownian motion. The change in demand divided by demand-- so this is the normalized change in demand-- is nu times delta-t, your trend times delta-t, plus sigma times epsilon times the square root of delta-t. Epsilon is a standard normally distributed random variable, with mean 0 and variance 1, and sigma is your volatility, essentially. So here are some examples. If you start with an initial demand of 50,000-- which is what they actually saw as the initial demand early on-- you have a growth of about 8% per year. This is your trend. Plus, in this case, a volatility of 40% per year, which seems high, but for this kind of very new technology, new system, is actually not that far-fetched.
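Here is a minimal sketch of generating demand scenarios with that discrete GBM update. The initial demand (50,000), trend (8% per year), and volatility (40% per year) are the numbers just quoted; the quarterly time step and the random seed are arbitrary choices for the sketch.

```python
import random

def gbm_path(d0, trend, vol, dt, n_steps, rng):
    """Discrete geometric Brownian motion:
    Delta-D / D = nu*dt + sigma*epsilon*sqrt(dt), with epsilon ~ N(0, 1)."""
    d, path = d0, [d0]
    for _ in range(n_steps):
        eps = rng.gauss(0.0, 1.0)
        d *= 1.0 + trend * dt + vol * eps * dt ** 0.5
        path.append(d)
    return path

rng = random.Random(0)
for i in range(3):  # three sample scenarios over 15 years, with quarterly steps
    path = gbm_path(d0=50_000, trend=0.08, vol=0.40, dt=0.25, n_steps=60, rng=rng)
    print(f"scenario {i}: year-15 demand ~ {path[-1]:,.0f} users")
```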
You get three different examples of how demand might evolve-- these are just three scenarios. So demand can go up, it can go down. GBM is very nice, but one of the downsides of GBM is that there are infinitely many scenarios you can generate, because fundamentally-- even though it's discretized in time, it's not discretized in demand. So a simpler version of this, a more discrete version, is a so-called binomial lattice model. The way this works is, again, you start with some initial demand here on the left, and then you discretize this such that, moving through time, you can look at different scenarios. Right? The best scenario is things just keep growing. Right? Grow, grow, grow. Here, time-- the 15 years-- has been subdivided into five three-year periods. So here's a sample scenario: demand goes up, goes up, and then it goes down twice, and then it goes up again in the last period. And so this is a discretized random walk. And you can choose the numbers-- so sigma, the volatility, the amount of the up and down movement, and the probability p of going up and the probability 1 minus p of going down-- such that they're consistent with the GBM model. So this model is the equivalent of the GBM model; it's just discretized. The beauty of this is that you can now simplify this to 32 different scenarios instead of infinitely many. And each of those scenarios is not equally probable. These are actually probability-weighted scenarios, depending on what the underlying nu and sigma are. So now we have 40 paths, right? We have 40 evolution paths of the system, and we have 32 different future scenarios. So what we do now, in the next step, step four, is put the two together and calculate the cost of each path. Calculate the cost of each path with respect to each of the demand scenarios, and look at the weighted average over all the possible paths. The one tricky thing about this is that you need to build in a decision rule, such that if demand exceeds the capacity of the system, you're going to expand and move to the next stage-- you're going to deploy the next stage. And there are many different ways of doing this decision rule. So the simplest one was chosen here, and of course costs are discounted. So let me explain to you how this works. We start the simulation with the first initial deployment. This is our initial stage one-- right-- for the constellation. And then we start operating for two years, and demand in this period goes up. You see the ops cost in this case goes slightly down; this is due to the discounting. This is a discounting effect. We arrive at the end of our first three-year period, and then the question is-- here is our capacity of the system-- what would you do in this situation? Let me ask somebody at MIT, make sure you guys are still with me. So what would you do in this situation? You're now at the end of year three. Anybody? AUDIENCE: We lost you for a second halfway through your statement, could you repeat? PROFESSOR: Yes. So you've done your initial three years, you've deployed your initial constellation. And you now have the situation shown on this chart. What's the right decision, according to the decision rule? What's the right decision? AUDIENCE: OK, I'd keep it the same. Because-- PROFESSOR: Yes, exactly. AUDIENCE: --it has not exceeded demand yet. But prepare to change-- PROFESSOR: Exactly, you wait, right? AUDIENCE: Because you have to lower your capacity.
PROFESSOR: You keep it the same, you don't do anything. You just keep operating. Exactly, right. So you have another three years, right? During these three years, demand keeps growing. And you now, at year six, arrive at this point here. So what's the right decision now? AUDIENCE: Can I ask a question about the decision to stay the same? PROFESSOR: Would you stay the same? What's happening to the demand line? Veronica, what's happening to the demand line? AUDIENCE: Can I ask a quick question first, about the decision to stay the same? PROFESSOR: Yes. AUDIENCE: I feel like by waiting until we've exceeded demand to choose to expand the constellation, we're introducing a lag between demand and capability that creates an inefficiency that may actually drive users away from the system. And I see this kind of oscillatory effect, where you would expand, and then people would meet, and then the system would be insufficient, and then people would move to a different system. And you kind of have a yo-yo around the maximum carrying capacity. And that doesn't seem efficient to me, so I'm wondering if you could speak to that. PROFESSOR: Yeah. So what you're essentially looking for is what I would [INAUDIBLE] decision rule. So the decision here is to deploy, right? You've got to deploy your second stage because you've saturated the system. What you're arguing for is, before this occurs-- you know, like at 80% saturation-- that's when you trigger the next stage, right? In order to anticipate the saturation. And so, absolutely, you could do this. There are many different decision rules that you can try when you design a flexible, deployable, scalable system. And that's part of the decision space. So in this particular example, we just implemented the simplest possible decision rule, which is that when saturation has occurred you deploy the next stage, but not before. AUDIENCE: OK, thank you. PROFESSOR: Is that clear? AUDIENCE: Yes, thank you. PROFESSOR: OK, so we deploy the second stage now, and you can see there's another spike here-- right-- of capital that's needed. It's not as high as the first initial stage, but it is substantial. So as we deploy the next stage, our capacity went to the higher limit. So we're now at capacity level two. We're now operating the system. Demand keeps growing, that's good. Now we're at year nine, you can see. But we haven't yet saturated our new capacity. So again, the optimal decision is-- or I shouldn't say the optimal decision, but the decision according to the rule is-- you just wait, right? You keep operating. Ah, now from year nine to year 12, demand starts to go down. Right? And this happens in some systems, right? They are growing and they peak, and then things go down. So in this case, we wait. And then in this scenario it goes down again. You see how that works? So what's nice about this is you can now run all 32 scenarios of possible futures against the 40 different families of designs, or evolution paths. And out of that you can find the best path, right? This is the path that on average-- I should point out, on average-- will satisfy your demand at minimum lifecycle cost, given the uncertainty models that you have composed. Yes, [INAUDIBLE] AUDIENCE: [INAUDIBLE] PROFESSOR: Why did it go down? AUDIENCE: Like, [INAUDIBLE] PROFESSOR: This here? AUDIENCE: [INAUDIBLE] PROFESSOR: No, the capacity stays the same. You don't go down in capacity.
The reason is-- and this is a good question-- that this particular system, a satellite constellation, is dominated by fixed costs, right? So you wouldn't retire half your satellites, because then you would potentially also lose coverage, because you've moved them to orbits that give you the right coverage; you've just increased your capacity. This is different from a system that is dominated by variable costs. For example, if you operate a fleet of taxis and, for whatever reason, demand goes way down, you can go and park half your taxis and just not operate them. You may still have to pay a lease on them, but essentially you can adjust the capacity of the system downward. You can't do that here; it doesn't make sense. Right? So that's the big difference between systems that are fundamentally fixed-cost dominated versus variable-cost dominated. So in this case, we just don't use the system as efficiently in the last six years. The answer here is that there's an optimal path, and that's shown here. For a given targeted capacity, you can compare essentially this blue path against the traditional fixed design. And in this case, the traditional architecture-- the fixed architecture-- would cost about $2 billion to build, and on average the red point, the average lifecycle cost of the evolvable system, is $1.36 billion. Right? The lifecycle cost of the rigid design versus the expected lifecycle cost of the best staged deployment strategy-- which, essentially, is calculated as the probability-weighted lifecycle cost of each of the scenarios, right? Against this path. And the difference between those two numbers is about a third, $650 million. And that is the value of the real option. That's the value of designing the system with scalability, with flexibility. OK? Yeah? AUDIENCE: [INAUDIBLE] PROFESSOR: Yes. That's true. So the lifetime-- the question was, is the life expectancy similar? Of course-- so the 15-year life of the whole system is the same for both. Of course, in the staged deployment strategy, some of the satellites are going to be younger at the end of the 15 years. And so they may have longer residual life left, which is actually not included. That's not even included in this real option value. AUDIENCE: [INAUDIBLE] PROFESSOR: Right. AUDIENCE: [INAUDIBLE] PROFESSOR: That's right. So you could refine this model to include a staged decommissioning, right? Or a staged transition to a next-generation system. So in this case, in this model, after 15 years-- boom-- you know, everything just finishes. No more revenues, you just decommission. A hard end, a hard stop after 15 years. But you could actually build transition and decommissioning models as well. Good point. So that's essentially the case study that I wanted to show you. And, you know, you say this is kind of like what in the US we call Monday morning quarterbacking, you know? You come in after all the football games have happened and point out the mistakes that the coaches made, and say, I would have done this or I would have done that. So yes, this is sort of like looking at this problem in hindsight, but the reality is that this really has had a big impact, and a big dampening effect, and the new generation of systems-- I think-- are built much more intelligently, with this kind of evolution in mind. I will also mention to you that Iridium is actually the system that went bankrupt first.
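To make the mechanics of that option value concrete, here is a minimal sketch pulling the pieces together: binomial lattice scenarios consistent with the GBM numbers, the simple "expand only when saturated" decision rule, and the probability-weighted comparison against a fixed design. All the stage costs and capacities are made-up placeholders-- in the real study they came out of the constellation simulator-- and discounting is omitted to keep the sketch short.

```python
import math
from itertools import product

# Placeholder staged-deployment path: (deployment_cost_$B, capacity_users) per stage.
STAGES = [(0.8, 60_000), (0.4, 120_000), (0.4, 250_000)]
FIXED_COST = 2.0   # rigid design sized up front for the target capacity

# Binomial lattice parameters chosen to be consistent with the GBM trend/volatility.
NU, SIGMA, DT, PERIODS = 0.08, 0.40, 3.0, 5
UP = math.exp(SIGMA * math.sqrt(DT))
DOWN = 1.0 / UP
P_UP = (math.exp(NU * DT) - DOWN) / (UP - DOWN)   # probability of an up move

def path_cost(moves, d0=50_000):
    """One demand scenario; expand a stage only once demand saturates capacity."""
    demand, stage, cost = d0, 0, STAGES[0][0]     # pay for the initial deployment
    for up in moves:
        demand *= UP if up else DOWN
        if demand > STAGES[stage][1] and stage + 1 < len(STAGES):
            stage += 1
            cost += STAGES[stage][0]              # deploy the next stage
    return cost

# Probability-weighted expected cost over all 2^5 = 32 lattice scenarios.
expected = sum(path_cost(m) * P_UP ** sum(m) * (1 - P_UP) ** (PERIODS - sum(m))
               for m in product((0, 1), repeat=PERIODS))
print(f"E[staged cost] ~ ${expected:.2f}B, option value ~ ${FIXED_COST - expected:.2f}B")
```

The asymmetry shows up directly: in the low-demand scenarios the later stage costs are never incurred, which is where the expected savings come from.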
One of the immediate reasons for the Iridium bankruptcy is the way that the project was financed. So about one third of the funds-- of the $5 billion-- came from equity from Motorola. So that was their own money that they lost. About one third was an IPO, an initial public offering-- you know, shares sold to the public. And about one third was bank loans. And these bank loans were relatively expensive, and the banks expected to get paid back at a certain speed, depending on the market evolution. And it was essentially the inability to pay for and service their loans that caused the Iridium bankruptcy. And it turns out that the loans were about one third of the total capitalization of the project. So I would argue that if they had done this more flexible, staged approach, they could have saved one third-- which is about the value of this option-- in capitalization, and just built the system only with equity and with the money from the IPO. And that would have given them a lot more time-- you know, they wouldn't have had to service those loans, and it would have given them a lot more time to wait for the market to develop, which it eventually did. So the whole financial architecture was poorly done as well. But that's a whole other question. AUDIENCE: Professor, quick question? PROFESSOR: Yeah. AUDIENCE: Are there situations where [INAUDIBLE] PROFESSOR: Yes. So the question-- I'm just going to repeat it for you guys at MIT-- the question was, are there cases where the flexible approach is not the best? And the answer is yes. So for example-- I know that you're studying the energy market-- if you're in a situation where you have guarantees, a guaranteed customer-- so the government or one of the big utilities is entering into a long-term agreement with you that they will buy the power that you produce at a certain price for the next 20 years-- then you're not subject to that market volatility. You have a long-term agreement, including clauses that say, well, if the government doesn't do this, then they'll compensate you for any losses; this is a financial construction. If you basically have eliminated demand uncertainty because of the particular contractual arrangements, then there's no reason you shouldn't build the system for a fixed capacity. And then you can build something right on the Pareto front, because that particular uncertainty has been eliminated for policy or contractual reasons. So, good question. OK, so we're almost there. Let me just try to summarize some-- AUDIENCE: Can I ask you a question on that last slide? PROFESSOR: Please, go ahead. AUDIENCE: On the slide before, 37. It seemed like the probability weighting was for a general capacity-- like something that could be adapted over the lifecycle-- but here we're pinning down a specific capacity. So why is it the sum of probabilities, then? Haven't, kind of, the probabilities been realized? Don't you know exactly what the capacity is that you're operating at? PROFESSOR: Yes, so just to be clear on this. The probabilities, those p-sub-i's, come from the lattice model. They are determined by the lattice model. So what you do is, for those 32 scenarios of the future-- this is the p-i, right, where i is the index of the particular future scenario-- for each future scenario, you always start with a1, which is your initial configuration of the constellation. You always start with a1.
And for each of those future scenarios, you then simulate that future. And for those futures where the demand doesn't ever really take off and materialize, you never actually trigger that next expansion stage. And it's actually that asymmetry that gives you the advantage-- it's essentially not deploying capacity when you really don't need it. And so you do this for all of these n scenarios for the future, and the p-i, the probability weighting of each future scenario, comes directly out of the lattice model. That's not something that you have to manually determine. AUDIENCE: Right, no, I understand that. But here, since we've kind of pinned down a capacity already, we know what endpoint we're at. I'm just wondering how the probabilities come into that, or is it kind of working backwards now, of all the paths that go there on this specific plot? PROFESSOR: I see what you're saying. OK. So the a1 point is driven by the assumptions you need to make about initial capacity. And then the endpoint, the a4, is driven by how large you made the trade space. So you could make the trade space-- you could make the design space-- even larger, right? This is a discretized full factorial design space; you could make it even larger, and then a4 would not be the endpoint. This is just sort of a finite-size effect, due to the size of the trade space. AUDIENCE: All right, thanks. PROFESSOR: All right, so the way I want to do the summary here is to just go back to the learning objectives. You know, I always do this in every class-- let's close the loop. What did I promise you in lecture one that you would learn in this class? And, you know, each of you is going to have to decide for yourself: did I actually learn this? Did we actually do this? So this is sort of due diligence on the learning objectives. And I have one slide on each of them. So the first objective was for you to really understand the systems engineering standards and best practices, as well as more modern approaches. Number two was to understand the key steps in the systems engineering process. Number three was to analyze and understand the role of humans in the system as beneficiaries, designers, operators, and so forth. Number four was being honest in characterizing the limitations of systems engineering as we practice it today. And then objective five was applying these methods and tools to a real-- even if not so complex-- cyber-electromechanical system. So SE1, essentially-- hopefully-- no, we didn't check the readings, but I'm hoping that you did your readings, that you really feel like you understand the NASA Systems Engineering Handbook, which is our quasi-textbook for this class. But there are other standards as well, such as the [INAUDIBLE] handbook, which is actually the basis for certification. So several of you-- your [? EPS ?] fellow mentioned to me that you're interested in certification, so I encourage that. Most of that is based on number two. I did mention the ISO standard 15288. That's probably the best-known standard. ISO is a very important organization-- they're located right here in Geneva, you know? I know ISO standards are not the most exciting things to read, but they have huge impact, and so 15288 is something that is known worldwide and across all industries, really. And then I want to mention also the European systems engineering standard. Foster Voelker and I had a discussion about this earlier today.
This is issued by the European Space Agency. And there are some subtle differences between the ISO approach and the NASA approach. And so for those of you on this side of the Atlantic, I do encourage you to take a look at that [? ESES ?] standard as well. And then we augmented this with different papers, and readings, and so forth. So I want to do a very quick-- this is our last concept question for the class. So here's the link, [INAUDIBLE] and essentially I want to get some feedback on what you think about the standards. All outdated and surpassed by the digital revolution: one of you thinks that's true. 90% of you think they're a useful codification of best practice. 14% think they can be dangerous if you adhere to them too closely. About 60% of you think they're essential reading for any serious systems engineer. And 10% said that you would never use them as a daily reference book. OK, so that's good. I think that's-- what do you think, Voelker, pretty reasonable outcomes? VOELKER: OK. But what is true is that they're going to be referenced in contracts. You know, it depends, again, on the industry that you're in. But you're going to adhere to some of these things in contracts, and if things don't go well, people will actually check: did you do this step, did you do that step? So it does have real consequences. PROFESSOR: OK, great. Thank you very much. That was good. SE2, the key steps-- the V model. I think I'm not going to explain it again. You know, hopefully this is something that you will not forget. And just one point about the V model: it does not imply that systems engineering is always sequential. You know, it's possible to iterate between steps and across the whole V as well. SE3, stakeholders and value networks. So this is the role of humans. We talked about this very early on; several of you talked about it during the oral exam here today. You know, the hub-and-spoke versus the stakeholder value network. The method underlying this is critical, but I do want to mention very quickly-- and this is something we didn't spend a lot of time on-- human factors. Right? How do you design systems so that humans can effectively and safely use them? You know, interfaces, procedures. And so this is a couple of slides from one of my colleagues, Missy Cummings. This is actually from a nuclear plant-- Katya, you're going to like this-- lots of these dials, and this is actually a Russian nuclear plant. So it has a very specific layout. And so the important questions in human factors are: how do you best display status information? What tasks do humans do? What level of automation? What are the training requirements? And these are also very important questions. And so you can apply the human systems engineering process very much analogously to what we covered in class so far. So you do mission and scenario analysis, function allocation. And then in human factors, you talk more about task analysis. But essentially it eventually leads to system design-- like, what are the buttons that you push, what does the layout look like, or the user interface, and so forth. So, you know, I'm not presenting human factors as a separate topic, but the human factors requirements should be built in right from the start. And essentially the functional allocation is sort of the key question here. How much is done by automation, how much is done by humans, and how do you split between the two?
And one of the most important things to consider here is this human performance curve. And we know this pretty well now: when humans are extremely highly loaded-- you know, when you have a huge workload-- your performance goes down. But interestingly, if you're under-challenged, your performance goes down as well. So, for example, people who are in power stations or mission control centers where nothing is happening for days and weeks-- their attention goes down, and they don't perform at a very high level. Humans perform best at a moderate workload, and this is well known. And it should influence how you define the human interface and the split between automation and humans. SE4, system complexity. For the last time I will mention the magic number seven, plus or minus two. The real limitations come in when we operate at levels three, four, five, and six. And the main reason for this is that a single human cannot understand, cannot remember, cannot deal with all the detailed components. And you need to split the work among teams, among organizations, and that creates extra interfaces and complexity. OK? And just to show you-- Iridium, we talked about Iridium. This is a very recent news story; this is from October 29th. The new Iridium Next has again been pushed back by four months. This gentleman here is the CEO of Iridium, the chief executive for Iridium Next, Mr. Desch. And he's talking here about a particular [INAUDIBLE], and Viasat is one of the contractors. There's a particular component that's been giving trouble-- and that's sort of at level three or four in the hierarchy-- and that's essentially delaying the whole system. So there's an example of how the large complexity of the system is causing issues. AUDIENCE: [INAUDIBLE] PROFESSOR: But they're actually a supplier to [INAUDIBLE] through the whole chain, right? Oh, you're saying there's more going on than meets the eye. Yeah, who knows? Who knows? OK, and then finally here I want to mention the application to a case study. You know, we used the CanSat 2016 competition as a, quote unquote, safe case study. And I have to say I'm really pleased with how this worked out. You had the 47 requirements as a starter; you approved them, you grouped them. And my comments about the PDR that we had a week ago are that all teams passed the PDR successfully. You know, if this were a real PDR, we would have issued a couple of [? RIDs, ?] I think. There were some-- a couple of teams were over budget, or needed to work out their aerodynamics in more detail. But by and large I thought this was an excellent application of systems engineering concepts. It went beyond my expectations. Several of you mentioned the importance of concept generation-- you know, the hybrid use of structured and unstructured creativity techniques. I know at least one team applied to actually go to the competition. And this shows you a couple of examples. So we have one team here at EPFL using a bio-inspired design. This is an actual seed airfoil that occurs in nature. And then here's an example from MIT, the Rogallo wing, Team 7. So, I'm sorry I didn't mention all the teams here, but it was really great to see the application of this in the CanSat competition case. So the last thing I want to do-- and I only have one minute left; actually, I'm already slightly over time-- is just to give you some career and study recommendations.
So first of all, you know-- and I said this in the first lecture-- there's a lot more to systems engineering and systems research than we were able to cover in this class. So this class is really what I call a door opener to the world of systems engineering. If you want to go deeper, there are deeper subjects. You know, model-based systems engineering is the big trend. System safety. And then in the spring I'm teaching a class called Multidisciplinary System Optimization. And there is actually WebEx access to it as well. So it's not going to be officially offered as a [INAUDIBLE] class, but if individual students here are interested, please contact me. Self-study: you know, the Systems Engineering journal, there are IEEE journals. There are also-- you know, for some of you this is maybe a little too soon-- professional degrees in systems engineering. For example, at MIT we have the SDM program, System Design and Management. The average age of the students in that program is 33; they're sort of coming back to get their master's in system design and management. And there's quite a bit of Sloan content as well-- so finance, you know, understanding the financial side, how to build a business case around systems as well. Professional experience: you know, getting experience in actual projects. At MIT, for example, REXUS. You know, you've got the CleanSpace One project, we heard about [? Optaneous ?] One, Solar Impulse-- there are a lot of opportunities. Or-- we also had a couple of people here mention that you're starting your own company, your own venture. And when you're doing that, you have to be the systems engineer. You have to understand all of these things-- interfaces, suppliers, requirements, markets-- all of that has to be integrated. Finally, you did hear about INCOSE. We had a quick dial-in at an earlier session. So if you're interested, you can join either as a student or a professional member. And also certification. This class was not [INAUDIBLE] for that, but if you're interested in this site, [INAUDIBLE] number of years of experience at the [? CCF ?] level. And then finally, please keep in touch. And last but not least, I want to thank all of you-- the students at MIT, at EPFL, our TAs, [INAUDIBLE] here at EPFL, Johana at MIT, the technical staff that has been helping run the technology-- Voelker, thank you. And that's it. So I want to thank all of you. Next week we do have a voluntary seminar on future trends in manufacturing, but again, it's not mandatory. It's going to be in the same place, but like I said, it's not part of the official class. So with that, sorry for running a little bit over time, but it's been a joy to teach this class in this kind of new format. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181113_Machine_Learning_Neural_Networks_and_Decision_Trees.txt | PROFESSOR: Hi, everyone. Welcome to the 23rd lecture of CS 188. Today we're going to wrap up our coverage of neural nets, and we're going to cover decision trees and also a little bit of hints at learning theory. Before we dive in, a couple quick announcements. Midterm 2 is happening on Thursday. It's coming up very soon. There's a practice midterm which is due tonight. And there is also homework 10, which is due tonight and which has three parts as usual. Any questions about logistics? Let's wrap up neural nets. So here was our cartoon neural network. It has some inputs x1 through xl, which get passed on to a first layer of neurons. Each neuron processes a weighted combination of its inputs, applies a non-linear activation to that weighted combination-- which is this equation over here-- and then passes it on to the next layer. And this repeats, repeats, repeats until at some point here we effectively have generated the features for the problem that we're trying to solve for the current input. Those features are fed into this thing here, which is a multi-class logistic regression, and which then hopefully gets trained to provide the right classification. The way you find the strength of the connections between the neurons-- the weights-- is by optimizing the log probability of your label y-i given the inputs x-i, for each example i. And this w here corresponds to the weights of all the neurons in the entire network. This is a lot like what we saw before for multi-class logistic regression. It's a continuous optimization problem which you can solve by taking a gradient, taking a step in the direction of the gradient-- which, if you take a correctly sized step, not too big, will increase your objective-- and then you compute the gradient at the new spot, and you repeat, gradually climbing up to, hopefully, some good local maximum. While you're doing this, you keep track of the accuracy you achieve on some hold-out data. And whenever you see the hold-out data accuracy start going down, that means you are starting to over-fit your training data. That is, you're starting to memorize it rather than recognize the pattern in the training data. And you should stop training and call it done, and that's your trained neural network. We also had a theorem that said that neural networks are universal function approximators. A little more precisely, if your neural network is big enough, and it has non-linear activations in its units, then it can approximate any continuous mapping from input to output up to arbitrary precision. And that's also why we need to be careful about over-fitting: because if we keep going, we'll be able to fit any pattern, including over-fitting to whatever happens to be in the training data. So how well does this all work? Let's look at computer vision, where some of the big breakthroughs have happened thanks to training deep neural networks. So a standard problem in computer vision is to try to find objects in an image. So maybe you're trying to find the human in the lineup of robots, or maybe you're trying to find a car when you're a self-driving car, or a pedestrian or a stop sign and so forth. So the problem here would be, at the time you get evaluated, an image gets fed into the network, and hopefully it says, there is a person in this region, or something.
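As a reminder of what that cartoon network computes, here is a minimal sketch of a forward pass through one hidden layer into a multi-class logistic regression output; the layer sizes, tanh activation, and random weights are arbitrary placeholders, not the actual network from the slides.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, W, b):
    """One neuron layer: non-linear activation of a weighted combination of inputs."""
    return np.tanh(W @ x + b)

def softmax(z):
    z = z - z.max()            # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

x = rng.normal(size=4)                                # inputs x1..x4
h = layer(x, rng.normal(size=(8, 4)), np.zeros(8))    # learned "features"
probs = softmax(rng.normal(size=(3, 8)) @ h)          # multi-class logistic regression
print(probs, probs.sum())                             # class probabilities summing to 1
```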
The way people used to solve this problem is by doing very careful manual feature design. And solve is not the right way to phrase it-- the way people used to try to solve the problem was by doing manual feature design. And there are some interesting intuitions that went into this. So you'd say, well, a picture is red, green, blue pixel values, and these values are between 0 and 255. But now, if I turn up the light or turn down the light, all these values will drastically change. So I'm not so sure about processing those pixels. What can we do to avoid being sensitive to the lighting conditions? Actually, if we just extract edges and only pass on where the edges are, then lighting going up or down will not matter-- short of, of course, making it pitch black. But as long as you have some light, the edges will stay in the same place, and so that might be more stable. And then you might say, well, but what now if I move my camera around? I move it a little bit, then all the pixel values change, and even all the edge values change-- they've all shifted a little bit. Can I account for that? And, well, after a lot of thinking, many, many papers, people came up with something called HOG, Histogram of Oriented Gradients, which turns the image first into an edge image, and from there into this thing, which is a histogram of edges. For every region in the image: how much is there a diagonal this way? How much diagonal that way? How much vertical? How much horizontal? And now, all of a sudden, when you move your camera a little bit, this HOG thing barely changes. So you're invariant to moving your camera and changing your lighting conditions. And that's nice, because that way you can maybe learn something more robust than if you were sensitive to those things-- assuming you're trying to classify, let's say, a person. Now of course, if you try to classify how bright it is in the room, it's a different question. But most of the challenges are around classifying semantic things like person or dog or car or cat or stop sign-- things that don't have anything to do with lighting or precise positioning. And then you'd say, well, now I've got this HOG thing. Maybe now I can feed this thing into a logistic regression or a support vector machine-- which we didn't cover, but which is very similar to logistic regression, though it doesn't as directly generalize to neural nets. So it would feed into a logistic regression or a support vector machine, and you'd hope it would get some reasonable performance. And typically, by designing some new version of a histogram of edges, you would hope that you could write a paper saying your accuracy improved by 0.5% or something. And that happened. And in fact, there is something you can see in those things. For example-- it's not a bad idea-- if you look at this, what do you think is in the original image? STUDENT: Bike. PROFESSOR: A lot of people saying bike. That seems pretty natural to think. But it doesn't look exactly like a bike. And in fact, you can imagine that if the lighting conditions changed in that room where that picture was taken, this would still look the same. If the bike was moved a little bit, this would essentially still look the same. And so maybe you have a good feature representation here that you could train a logistic regression on to output a classification. It was indeed a bike, and this is the original image. And the official name for the feature extractor used here is HOG, for Histogram of Oriented Gradients.
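Here is a heavily simplified sketch of that histogram-of-gradients idea-- per-cell histograms of edge orientations, weighted by edge strength. Real HOG implementations add block normalization and interpolation details this leaves out, and the cell size and bin count below are arbitrary choices.

```python
import numpy as np

def hog_like(image, cell=8, n_bins=9):
    """Per-cell histogram of gradient orientations (simplified HOG-style features)."""
    gy, gx = np.gradient(image.astype(float))           # edge strength in y and x
    mag = np.hypot(gx, gy)                              # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180          # unsigned edge orientation
    h, w = image.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            hist, _ = np.histogram(ang[i:i+cell, j:j+cell],
                                   bins=n_bins, range=(0, 180),
                                   weights=mag[i:i+cell, j:j+cell])
            feats.append(hist)
    return np.concatenate(feats)    # feed this vector into logistic regression / an SVM

demo = np.random.default_rng(0).integers(0, 255, (32, 32))
print(hog_like(demo).shape)         # (144,): 4x4 cells, 9 orientation bins each
```

Shifting the camera by a pixel or two mostly moves gradients around within the same cells, so this feature vector barely changes-- that is the invariance the lecture describes.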
Well, here's how well people were faring with this approach. So there is a competition called ImageNet. In this competition, to participate you send in a program. And when you send in your program, the organizers will run the program on a secret stash of images, and they will report back out a number: how many errors your program made on classifying what's in the secret stash of images. And so you can see in this competition, in 2010, the best entry in the competition had an error rate here of about 30%. And people, of course, designed new histograms of edges and so forth, hoping to do better. And indeed, in 2011, somebody might have had like half a percent better error rate. But this wasn't really moving very fast. Then, in 2012, things changed. Out of Geoff Hinton's laboratory at the University of Toronto, there was an entry into the competition that landed right there. So even though traditional approaches were essentially flat-lining, by training a deep neural network called AlexNet, it was possible to halve the error rate. And it wasn't just possible to halve the error rate-- people realized this general style of approach might have more promise than what they had been doing so far and switched approaches, and there was a big acceleration in progress. And in fact, by now, this competition has been retired, because deep neural nets have been trained to achieve human-level error rates. And so to do more interesting research, people moved to different kinds of challenges than this one. It's still one that people often benchmark on to just sanity check things, and also often benchmark speed on, because, for example, the training of AlexNet here took six days of training time. That's a lot of gradient steps, you can imagine. Now, these days, the fastest training times I believe are around 10 minutes-- between 10 and 20 minutes somewhere-- which is a lot faster. They don't necessarily use a whole lot less compute, but they just parallelize a whole lot more, and so the wall clock time is a lot faster. But so this is a drastic change. And at a high level, you might wonder, how is this possible, that there was this kind of jump so far forward? Why were there not, let's say, some neural nets maybe here already and maybe here also, doing better? And how come it was just a big leap forward? The big reason is that these neural nets just need a lot of data and a lot of compute to train with that much data. And if it took six days in 2012 for one of the best programmers in the world to program a GPU to train a neural network on this data set, if he had tried it a year before, it would have taken even longer. And as you imagine, with anything machine learning, when you do a run, it usually doesn't succeed. You go in, you look, you change something, you run again. And if that turnaround time is long-- six days is already pretty long, but if it's even longer than that-- it might be hard to get everything right so you get a good result. And so it's a confluence of a very large data set, which existed since 2010, and enough compute-- GPU compute in those days-- and the ability of someone, namely Alex Krizhevsky, to program the GPU to train neural nets on the GPU at that time. Today, you might say, oh, that sounds easy. Just fire up TensorFlow, PyTorch, and if it's on a machine with GPUs, it'll train with GPUs. But back then, those things didn't exist.
The reason they exist is exactly these kinds of successes, which motivated Facebook and Google to say, oh, we should build a general tool that can do this in more generality, so not everybody needs Alex Krizhevsky to get a neural net trained. And so Alex, of course, didn't call this network AlexNet. That would be pretty crazy to do. But everybody else in the community was like, oh, well, Alex made this happen, and they called it AlexNet. So, big leap forward, and then big acceleration. Have we covered everything to understand what happened in that paper? There is one part, one fundamental part, we haven't covered, but at a high level it's easy to cover. This network is gigantic-- it has 60 million parameters. But actually, in some sense, the way we've been looking at networks, it would be even more. So there's a lot of parameter sharing happening in this network. When we talk about all the entries in the weight vector, each entry in the weight vector corresponding to one connection-- in this network, many of the connections have the same strength. So the number is the same on a bunch of connections, on a lot of connections. The rationale here is that if you're looking for a diagonal edge in the image, whether you're looking at top, bottom, left, right-- it doesn't really matter-- you should have the same pattern recognizer. So you might want to share what you do in terms of calculation over the entire image in some translation-invariant way. Those are called convolutional layers. And all of this really is parameter sharing, such that you do the same calculation in every region of the image. You have to do a little bit of trickery to get that working, but that's essentially it-- and then, of course, everything we covered. And run the optimization, in this case, originally for six days. Now, at the time when this happened, this was a big breakthrough in image recognition. And you could think of it that way: OK, now we can categorize into 1,000 categories. That's pretty cool, because before it didn't seem like it was going to work any time soon, and now maybe it's going to work quickly. But you could also ask the question, can we do more? And so what more could you do with images? Well, maybe instead of just recognizing what's in an image-- which is 1,000 output neurons in this case-- maybe you can generate a sentence describing what's in the image. You might wonder, well, what do we have to change? Well, what we have to change is we have to have a new data set. So now the data set is not just image and label, but image and sequence of words. What does it mean, a sequence of words? Well, there are 200,000 words in English, so it's like having 200,000 labels to choose from. Then again 200,000 labels, then again 200,000 labels. And that's effectively how the network is set up to train for captioning. Here are some example captions automatically generated by neural networks. So this is pretty impressive, right? Look at this one here. "Girl in pink dress is jumping in air." That's a very good caption for what's going on. "Black and white dog jumps over bar." Again, very good caption. Not everything is perfectly accurate, but keep in mind this was achieved in 2015, whereas in 2012, it was impractical to just say anything reasonable about what might be in an image.
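Going back to that parameter-sharing point, here is a minimal sketch of what a convolutional layer shares: one small set of filter weights is reused at every image location, instead of separate weights per position. The 3x3 filter values are an arbitrary illustration, not AlexNet's learned weights.

```python
import numpy as np

def conv2d(image, kernel):
    """Slide one shared weight kernel over every position (stride 1, 'valid')."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kh, j:j+kw] * kernel)
    return out

# A diagonal-edge detector applied everywhere in the image with the same 9 weights,
# instead of a separate pattern recognizer with its own weights at each position.
edge_filter = np.array([[ 2., -1., -1.],
                        [-1.,  2., -1.],
                        [-1., -1.,  2.]])
response = conv2d(np.random.default_rng(0).random((64, 64)), edge_filter)
print(response.shape)   # (62, 62) activations from just 9 shared parameters
```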
Interestingly, this actually happened in eight research labs at the same time. So eight labs within a month of each other had the same results-- a lab at Berkeley, a lab at Stanford, a lab at Toronto, Montreal, NYU, Facebook, Baidu, Google. How is it possible that eight labs have essentially the same result, with roughly the same methodology-- not identical, but pretty close? Well, it was the time that this data set came out. And it's a total change in mindset, in some sense. At this point, if somebody has a data set that's big enough, and you have some experts who can train neural networks on that data set, then you can have a network that recognizes the pattern. Now, it depends on the kind of challenge how much data you might need, but here the data sets came out, and soon thereafter many, many labs had figured out how to train from image to caption. Now, some skeptics at the time would still say things like-- especially in 2012, would say things like, oh, well, it's nice to have another trick in our bag. This whole neural net thing has better features than we could design by hand. My histograms of edges look pretty good, but I guess this has found better features than I could, so I'm going to use those features. But that's a misunderstanding. It's not just about learning some features you can use-- which is of course important. It's about the fact that you can actually, for any data set, start pattern recognizing pretty reliably with this methodology. You might then wonder, does that solve computer vision? And by no means is computer vision solved. But you could ask the question, what would it take to convince somebody that computer vision is solved? And can pattern recognition solve computer vision? Well, what would it mean? What if we do the following? What if we say, we have a neural network, and it gets to look at an image, and then you get to ask a question about that image. And if it answers the question correctly, that's good. If it can do that for essentially any image and any question about any image, then maybe we can call that network capable of computer vision-- and even some natural language processing, to know what's being asked about. So people who strongly believed that this might work out started building a data set for exactly that setting. So now the input to the network is an image and a sequence of words, and the output would be an answer. I particularly like this one over here. The question is, how many school buses? The neural net says two. And the ground truth indeed is two, because there's one hiding in the back here. So it knows that, and it answers correctly. What sport is this? Baseball. And so forth. By no means does this work at the level at which humans can answer questions about images. But if you look at this and you're like, wow, this is pretty good-- and then you go back to early 2012, before the deep learning revolution kicked in, people were like, it might take another career's worth of work before we can recognize what's in an image. Things moved a lot faster than expected. And in parallel, the same thing happened in speech recognition. So in speech recognition, right around the same time, things weren't exactly progressing with the more traditional approaches. The deep learning approach-- which means train a gigantic network on a large amount of data, of course requiring a large amount of compute-- was able to learn a mapping from a sequence of numbers corresponding to the pressure wave coming in, to a transcription of what was said. Where did this one happen? Also out of Geoff Hinton's lab. A different student, but also out of Geoff's lab in Toronto. Now, this is not the only pattern recognition task. How about machine translation?
And we might get into more detail on these tasks in one of the next lectures. But just at a high level, it's essentially the same problem. You have a sentence in one language going in, and you need to produce a sentence in another language going out. What do you do? You set up a large neural network, feed it enough paired-up sentences of language one and language two, and it learns the pattern. And again, what does it mean to take in a symbol? Taking in a symbol means, let's say there are maybe 10,000 symbols-- then you can have a one-hot vector where all entries are 0 except for the one indexing into the symbol being fed in. Same for words. If there are 200,000 words, then outputting a word means that you output a one-hot vector-- effectively an activation for which word you want right now, out of 200,000 choices. There are also some transcriptions-- not only for machine translation but definitely for speech recognition-- that don't output words, that just directly output character by character. And then you have a smaller output channel, but you need to use it more frequently to get all the characters sequenced together. This was actually put in production for Google machine translation in August 2017. The original result I believe happened in 2015, by Ilya Sutskever and collaborators at Google, training a neural network for machine translation that was better than other approaches. And then it took another two years to really productionize it, make it stable for release at large scale, and that happened in 2017. So that's a few highlights of what is possible with neural networks and the related ideas: essentially, you're going to learn a function that goes from input to output, and the neural network formalism gives you a general function class where you can learn any continuous function to arbitrary accuracy. And so if you have enough data, enough compute, you can run gradient ascent long enough to find a good setting of the parameters and get a good result. Any questions about neural nets before I move on to some learning theory and decision trees? Yes? STUDENT: How should I or anyone learn about stuff like back propagation? PROFESSOR: So I suspect 189 and 182 cover back propagation. I would be surprised if not. Just to clarify, at a high level, back propagation is the efficient algorithm to compute all the derivatives that you need to do a gradient update. And so, essentially, there's some kind of backward pass through the network that gives you that. You could check the 189, 182 lectures. You could also check Mr. Andrej Karpathy's course at Stanford, which is a PhD-level course but which also covers it in one of the earlier lectures in the course. So those would be the resources I'd suggest if you want to learn it soon. Yes? STUDENT: Isn't it general that the more layers you put in, the more accurate [INAUDIBLE] you don't have a lot of [INAUDIBLE]? PROFESSOR: So it's a good question. Is it the more layers, the better? Some of the most successful networks have a large number of layers, but maybe five years from now we'll call it not large. People thought eight layers, like AlexNet, was deep. But then ResNet-101 has 101 layers, which is much deeper than eight. Maybe soon people will have like 1,000-layer networks or 100,000-layer networks. Who knows? In some sense, there are a few factors at play here. One is that the bigger the network, the more you can represent, both by being deeper and by being wider.
You can think of the network as essentially running a computer program. And the width is the parallelization of your program, and the depth is what has to be serialized in your program. And so if you think about, OK, if I need to do a certain calculation, how much can you parallelize versus how much will really need to happen one after the other? That is one way, one analogy, to think of how much width and depth you might need. Another way to potentially think of it is, what if we just want human-level performance? Maybe we want to just do human-type tasks. And then people study the brain and they say, well, the human visual system might have maybe, I don't know, four or five layers. Each layer actually has, well, four or five sheets. Then each sheet might have six layers in it. So then maybe you say, well, 24 should be enough. But we don't know exactly how it's wired, and it's wired a lot more randomly than the typical networks people set up in artificial networks. So in practice, networks deeper than just 24 layers have worked better than restricting to 24. Initially, people were worried that, if the network's really, really big, maybe you won't generalize as well. And we'll touch upon that a little bit when we do learning theory. But at a high level-- it is true. The bigger your network, the more you can represent. And in principle, you might overfit more badly, but it turns out that if you are careful about early stopping, that ends up not being an issue in practice. And so even though your network is big, you're not using all the capacity of your network, because once the held-out data accuracy starts decreasing, you stop. And so you don't get into the outskirts of the funky functions that you could learn in principle with this gigantic network. You stay far away from that. You stop long before that. And so in practice, actually, people have often found that the bigger the network, the easier to train, because somehow the optimization landscape becomes simpler. One way to think of it is that if you optimize directly over all possible functions in the world, then you could maybe very easily steer onto the correct one, whereas if you're restricted and you can only have certain types of functions, then you've got to find your path from one to the other, which is more difficult to do. Another way to think of it is that if you have a really big network and it's all randomly initialized, and it's really, really big, like infinitely big, then some subnetwork of that network is already the thing that you want. And if that's already present-- nobody has that big a network, but if that's already present, then you can imagine when you run the back propagation, which says in which direction to adjust the parameters, it will just say, oh, just pay attention to this subnetwork. And that'll make the biggest improvement, and you'll learn very quickly. Of course, there are all kinds of caveats to that, and you need to worry about over-fitting, but in general, the optimization landscape is seemingly better conditioned with bigger networks. There are some tricks you have to play with initialization. Because of the non-linearities, if you have a sigmoid, it's flat and flat and then transitions. You only get signal on the transition, not on the flats. Same for the [INAUDIBLE]: you don't get signal on the flats, only in the transition; for the ReLU, only on the steep part. Now, you might wonder, well, should we then just keep it all just linear? That doesn't work either, because we need a non-linearity in there.
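As a minimal sketch of the early-stopping idea just described-- stop once held-out accuracy starts decreasing-- consider the loop below. The accuracy curve is made up for illustration; in practice each value would come from evaluating the network on held-out data after an epoch of gradient updates:

```python
# Early stopping on a hypothetical held-out accuracy curve.
held_out_accuracy = [0.62, 0.71, 0.78, 0.81, 0.80, 0.79, 0.77]  # made-up values

best_epoch, best_acc, patience, bad_epochs = 0, 0.0, 2, 0
for epoch, acc in enumerate(held_out_accuracy):
    if acc > best_acc:
        best_epoch, best_acc, bad_epochs = epoch, acc, 0   # new best: checkpoint here
    else:
        bad_epochs += 1                                    # accuracy went down again
    if bad_epochs >= patience:                             # stop before the outskirts
        break
print("stop training; keep the weights saved at epoch", best_epoch)  # epoch 3
```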
And it's OK if, let's say, 90% of the time you're in the flat part, but maybe 10% of the time you should be in the steep part, so you get some signal for that neuron, what it should be doing. And so there are initialization tricks and re-normalization-of-the-data tricks that ensure that usually-- for most neurons, ideally all neurons-- you're at least a certain fraction of the time in the regime where you get signal, and your optimization will work well. Was there a question there? STUDENT: Yeah. My question is how do we [INAUDIBLE]? PROFESSOR: Yeah. So that goes back to what we just talked about, with a slight variation. So the question here is, what happens if your network is really, really deep? When you compute the derivative, we saw the chain rule for derivatives shows that the derivative of a function of a function of a function is a product of derivatives. And so if you have many, many layers, then that means you'll end up with a product of many, many derivatives multiplied together to get a derivative with respect to, especially, the weights early on. That product, if all the numbers are above one, will explode to infinity as your network gets deeper and deeper. If they're all below one, it'll start going towards 0 easily. And so that's one of the tricky parts. The deeper your network, the more relevant it becomes to play tricks that help your optimization. So one standard trick is, essentially, rather than having by default just what the network does as you set it up, you have an identity function passing everything on, and the network is just an adjustment to the identity function. And that way you get centered around the identity, which is a good place to be centered around, and you have less to worry about in terms of the propagation blowing up or shrinking too quickly. In really deep networks that are, let's say, recurrent-- like HMMs, where you can keep running them forever, they never really stop-- people often just cut it off. They'll say, I'm at time 30 for this weight here; I'm only going to back propagate over maybe the last 20 steps and not anything further. You can imagine doing some discounting like we do in reinforcement learning-- different things you can do to try to attenuate the effect of what comes earlier. Yes? STUDENT: Quick question. You mentioned that [INAUDIBLE] data [INAUDIBLE]. Is it possible that [INAUDIBLE]? PROFESSOR: Yeah, it's definitely possible. That would be a little quirky. But if your held-out data set is on the smaller side, if you have some kind of funny random-seed effect where that happens, it's definitely possible. So often, you would actually not literally stop when it starts going down. You'd run for quite a while and then go back and see where it was highest. And at some point, you'll see it doesn't make sense to keep running, because it's been going down for such a long time that it's very unlikely to come back up. That's it for neural nets. Let's think a little bit about learning theory. So one reference frame you could potentially have is scientific discovery. Let's say you do physics. We never directly measure the laws of physics. We cannot go measure the equations. We just measure data points, and those data points then allow us to conclude, well, maybe this equation is a really good equation to use to predict what future measurements will be like. And so that's the analogy that we're going to be thinking about here: OK, well, how can we think about this?
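A toy computation makes the chain-rule point concrete: multiplying many per-layer derivative factors together explodes when they sit above 1 and vanishes when they sit below 1. The factors 1.1 and 0.9 here are arbitrary illustrations:

```python
# Product of 100 per-layer derivative factors.
depth = 100
grad_up, grad_down = 1.0, 1.0
for _ in range(depth):
    grad_up *= 1.1     # each layer's derivative slightly above 1
    grad_down *= 0.9   # each layer's derivative slightly below 1
print(grad_up)    # ~1.4e4: exploding gradient
print(grad_down)  # ~2.7e-5: vanishing gradient
```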
And are we really doing the right thing? We get some data points. And are we getting out a hypothesis about how this data set works, or in science how physics might work, that might have a chance of being correct? So formalizing learning a little bit: in the simplest form, what we're doing is we learn a function from examples. So we have a target function g, which we try to learn from pairs (x, g(x)). g(x) we have often called y, the label. And then x could be an email, and g(x) is spam or ham. x could be a house, and g(x) could be the selling price. So that's a standard machine learning problem we've been looking at for a few lectures now. We're trying to find this target function, but we never have direct access to the target function. Nobody allows us to look at g. But what we get to see is samples. The problem we're trying to solve is we want to somehow find g or something close to g. Since we don't know g, we might not be able to know what representation we should use to be able to find it. So maybe we have a hypothesis space, shown by the blue set here. And we say, within this hypothesis space, those are all the functions we're willing to consider. For example, we have a three-layer network. Each layer has 10 units. All functions representable by any choice of weights in this three-layer network with 10 units in every layer-- all those functions that are possible to represent are our hypothesis space. The real function g might be outside of that. And then we're trying to find an h as close as possible to the real function g. And what we have to try to find that h is the training examples. And in this drawing here, then, we'd hope we find this h over here. We've mostly looked at the context of classification, where there are discrete outputs of the function, but it could also be with real numbers. The same ideas apply. So how do the ideas fit in that we've covered so far? I've alluded to the neural network version, where the hypothesis space contains all the functions from input to output you can represent with your choice of network. How about naive Bayes? How does that fit in? Well, in naive Bayes, we train a Bayesian network, and then we make a decision to go from features to a label y prediction. So what would be the hypothesis space for naive Bayes? It would be something along the lines of: I'm willing to consider any network where there's a y here and then maybe feature 1, feature 2, feature 3. I'm willing to consider any hypothesis that is of the form p of y given some feature values under my choice of the parameters-- we'll call them theta-- of this Bayes net. If we chose a different Bayes net structure, that hypothesis space could become larger or smaller. For example, if we added additional connections, maybe here and here and here, now we have a fully connected Bayes net. So now the hypothesis space will be maximal for anything we can represent with a Bayes net over those variables, and bigger than what we can represent with just the naive Bayes network. You can imagine that learning theory is essentially about the notion that, OK, just using the biggest hypothesis space and using the best fit to the training data is not actually the best thing to do-- if it were, things would be simple. You need to be a little more careful than that. And so just fully connecting the Bayes net might not be the right thing to do. Even though it grows your hypothesis space, you might not find a better result in terms of predictive capabilities after learning.
So for example, let's say we're doing regression. If we have this set of points, fitting a line-- yeah, it's not perfect, that's for sure-- it's missing a bunch of points-- but maybe that's OK. It has good predictive capability for future things we might want to do. Maybe this is a little better. Maybe it's worse. Hard to say. Ideally, we'd have kept aside some hold-out data, and we then compare: is the parabola or the line better on the hold-out data? And maybe we'd go with whichever one is doing better. This, probably worse. It would be very unlikely that this would be the way to go for this data set, but it does hit more of the points more closely. And so what this shows is that choosing from a wider hypothesis space something that then fits more closely your data might not be the thing you want to do. This one, even worse. So there is a trade-off here between consistency and simplicity. Usually, simple functions we expect will generalize to new data better. And consistency-- well, things that are consistent with the training data, that's a good thing, of course. But often, achieving both at the same time is hard to do, and you need to make a trade-off. A standard trade-off people try to make is called Occam's Razor. What it says is: try to find the simplest explanation of your data. So if a line fits your data well, well, then a line might be a better bet than a parabola, which might be better than a high-order polynomial. Of course, this is a little subtle, because who says a line is simpler than a parabola? And how do you even quantify the notion that a line is simpler than a parabola? They both are in some sense simple. That often ties into how many parameters you get to learn. A parabola has three coefficients. A line only has two coefficients. So a parabola has a bigger hypothesis space, and if you choose from a bigger hypothesis space to find your solution, then it's more likely you're over-fitting than if you choose from a smaller hypothesis space. So there's a trade-off involved: bias versus variance. If you take too small a hypothesis space, you have something called bias. It means that you can't even match the training data. You are guaranteed to be off, because you're not rich enough to even hit the training data points. But you might have low variance. Variance here refers to the fact that, if you look at how well you did on the training data, it's likely you're going to do about equally well on future data, because your hypothesis space was so small, and so it's likely that whatever you picked from that small space, whatever performance it has, it'll be representative of the performance it'll have in the future. Now, if you take something very complex-- let's say the biggest, most massive neural network ever-- to fit your data, then your bias will likely be very small. It'll fit your data perfectly. You might have even zero bias. But what you expect could happen on some other data will have very high variance, because you chose from such a massive hypothesis space. And you chose the one that looks best on your training data, so you really picked a very particular one to get it to fit your training data. That might have some weird quirks, like we saw here, on other data. So most algorithms tend to favor consistency, or tend to try to drive bias as low as possible. Why is that? Well, usually the way that we define the algorithms essentially drives training error to 0. So essentially, just by the way we define the algorithms, we're asking them to drive error to 0.
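A hedged sketch of the line-versus-parabola comparison, using numpy's polynomial fitting; the data here is made up (a noisy line), so the degree-1 fit should win on the hold-out points even though higher degrees hit the training points more closely:

```python
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0, 1, 8)
y_train = 2 * x_train + 1 + rng.normal(0, 0.1, 8)   # noisy line: made-up data
x_hold = np.linspace(0.05, 0.95, 5)
y_hold = 2 * x_hold + 1 + rng.normal(0, 0.1, 5)

for degree in [1, 2, 6]:                            # line, parabola, wiggly polynomial
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    hold_err = np.mean((np.polyval(coeffs, x_hold) - y_hold) ** 2)
    # higher degree: lower training error, but degree 1 should do best on hold-out
    print(degree, train_err, hold_err)
```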
So to make sure we don't have too much variance, we need to operationalize this Occam's Razor, this notion of keeping things simple. And this will be a little different for every learning approach that you consider, and we'll see something yet different with decision trees in the second half of this lecture. But it could mean two things. Reducing the hypothesis space, that's a hard decision. You say, these are the only functions you're allowed to learn when you're fitting to this data. And if you make that space small enough, then it's less likely you're over-fitting. So in naive Bayes, this was done by, well, leaving out a lot of edges, so the network wasn't fully capable of representing any distribution. Those are assumptions you make that reduce your hypothesis space, hence favor simplicity. Now, when you train with maximum likelihood, you'll still try to favor accuracy, but the hypothesis space is possibly too small to over-fit. Another thing people might do is reduce the number of features, because the more features you have, the higher the chance you might over-fit. The polynomial fitting essentially is an expansion of features. A line has as features the bias term 1 and just the x. A parabola has as features 1, x, and x squared. A cubic polynomial has as features 1, x, x squared, x to the third, and so forth. The more features, the more chance of over-fitting. You might also want to use other structural limitations; we will look at this in decision trees, or in neural networks you might say, I want my network only to be this big, not bigger than that. And that will limit your hypothesis space. What's the other thing you can do? In regularization, you have in principle a large hypothesis space, but you keep away from the weird parts of that hypothesis space in some sense-- the outskirts, where you think it's more likely you're over-fitting. So what's an example? In naive Bayes, we did smoothing of the counts, because we thought probabilities of 0 and probabilities of 1 are not very likely to be right. That's in the outskirts of the hypothesis space, and we want to avoid that. Today we'll see something about pruning with decision trees. In neural nets, you essentially get the last thing here: you stop training early. If you have a gigantic neural net, you could get anywhere, in principle, if you keep training forever. But if you stop training after, let's say, 100 updates, you have not had the chance to hit that many points of the hypothesis space you in principle would get to with infinitely many updates. And so your hypothesis space becomes restricted by the number of updates you allow, which you could either set by hand-- only this many updates-- or, often, of course, you do it adaptively. Whenever your held-out accuracy starts going down, you say, OK, at this point I'm probably starting to hit parts of the space that I should not allow anymore, and you effectively keep your hypothesis space small by stopping early. Any questions about these ideas? So the coverage here is short, but I want to get the main intuitions across. OK. Let's take a break here, and then in the second half let's look at decision trees. STUDENT: Excuse me, Professor? PROFESSOR: All right. Let's restart. Any questions about the first half? OK. Let's look at decision trees then. Actually, let's take one step back for a moment. So decision trees are an alternative to things we've already covered. We've covered naive Bayes. We've covered perceptron. We've covered logistic regression. We've covered neural networks. Now we'll cover decision trees.
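As a minimal sketch of the count-smoothing form of regularization mentioned above (Laplace smoothing with a pseudo-count k, in the spirit of what was done for naive Bayes):

```python
def smoothed_probability(count, total, num_values, k=1):
    """Laplace-smoothed estimate: add k pseudo-counts per possible value."""
    return (count + k) / (total + k * num_values)

# 0 heads out of 3 flips of a two-sided coin: the raw estimate would be 0/3 = 0,
# an outskirts-of-the-space value; the smoothed estimate stays away from 0.
print(smoothed_probability(0, 3, 2))   # 0.2 instead of 0.0
```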
So it's one of several methods available to us to do classification, or potentially regression, though we don't really cover regression in 188. OK. So let's say we want to solve this problem here. What is this problem? We're trying to build a predictor for whether somebody is willing to wait to get seated at a restaurant or will just go somewhere else. So there is an example with a first patron, a second patron, and so forth. And then there is whether they waited or didn't wait to get seated. And then there's a bunch of attributes that we have. Is there an alternative option nearby? Is there a bar at the restaurant they can hang out at while waiting? Is it Friday or Saturday? Is it a hungry patron or not? Is it a restaurant where nobody's in there and you're made to wait? You might not want to wait if, even though nobody else is there, they're not going to let you in-- versus when there are some people there, or it's completely full. Price might matter. Whether it's raining or not. Whether they had a reservation or not. The type of restaurant. And maybe also what they say: an estimated wait time of 0 to 10 minutes might affect whether they wait or not. So you could imagine that you collect a bunch of data. This happens to be a relatively small data set that fits on a slide, with 12 samples of whether people waited or not and the attributes in each situation. So in principle, we could train naive Bayes, or we could train a perceptron, or we could train logistic regression, or we could train a neural network to try to learn the mapping from attributes to target value, the label. But now we'll use this as a running example to explain decision trees. And in the process, we'll also touch a bit on learning theory and also information theory. What does a decision tree look like? Well, it's something like this. It's a lot like maybe the way you might write a program by hand to decide what a pattern is in some data. You have an if statement. Based on what value patrons takes on-- none, some, or full-- you go down a branch. Then it seems like when there are no patrons and somebody has to wait, they never wait in this decision tree. Just not taking that for an answer. Then for some, it seems like people were always waiting, according to this decision tree, the way it describes it. And then when the restaurant is full, it's not easy to say whether people will wait or not. You need to look at more attributes. And it might depend then on the wait estimate. After you know the wait estimate, then you might be able to predict pretty well: they'll leave if it's more than an hour, they'll stay if it's less than 10 minutes. If it's between 10 and 60 minutes, well, then there are other questions to be asked to be able to predict what might happen. And so that's how this tree works. You can refine how many things you need to know before you come to your decision based on what you've found out so far. What this represents is a function: for any vector of attribute values, this decision tree will tell you whether the decision is true or false. In principle you can make them probabilistic, too. You could have a leaf of a decision tree that says it's 50% true, 50% false, or 70/30. But the key here is that for any combination of attribute values, you can use them to traverse the tree to hit a leaf node, and the leaf node tells you a value. If this is really what's happening, and if this is really how we believe people decide whether to wait or not, then this decision tree represents the true function.
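The top of that tree, written as the hand-coded program it resembles; this is a sketch following the lecture's description, with the deeper 10-to-60-minute branch elided:

```python
def will_wait(patrons, wait_estimate_minutes):
    if patrons == "none":
        return False                      # empty restaurant: never wait
    if patrons == "some":
        return True                       # somewhat busy: always wait
    # patrons == "full": need more attributes to decide
    if wait_estimate_minutes > 60:
        return False
    if wait_estimate_minutes <= 10:
        return True
    return None                           # 10-60 minutes: more questions needed

print(will_wait("some", 0))    # True
print(will_wait("full", 90))   # False
```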
And if we allow any decision tree over these attributes, then our hypothesis space is big enough to capture this decision tree, and we can realize consistency with any training data we get. So let's think about how expressive these trees are. How about the XOR function? Yes, a decision tree can represent an XOR function. It's a nice example because it's something that a perceptron could not represent but a decision tree can. Here is the table representation, and here's the decision tree representation of an XOR. The first bit is false, the second bit is false: then the overall answer is false. False and true is true. And similarly on the other side. Now, what we did here is build a tree that is as big as the table. If our trees are as big as the tables, we don't gain anything. Tabular representations tend to be notoriously bad, actually, for learning a pattern, because if you have a table representation, you allow for every possible hypothesis that could possibly exist. Then it's less likely you generalize. So we'd like to find small trees that capture the patterns in there. So let's revisit this notion a little bit and compare it to perceptrons. Let's say we have the same problem again. What is the expressiveness of a perceptron over these features? Well, for a perceptron, every feature is either weighted positively or negatively. And so a feature can either contribute positively or negatively, but there's no interaction between features. Something like patrons equals full and wait equals 60-- if that combination means one thing, maybe a positive, but otherwise, independently, each is a negative-- a perceptron cannot capture that, because there are independent contributions from each feature. Same for naive Bayes: everything contributes independently. A neural network can combine things with multiple layers to contribute through combinations. Same thing for decision trees. We can have combined feature values have one sign, maybe a positive contribution, whereas each individual one might have a negative contribution. With decision trees, as we build the tree, effectively you are inventing the features as you go along. You're saying, first I'll look at this attribute, and then if this one is true, I'll also look at this one. And you find a combination of features that allows you to make a decision. So it's very, very different and much more expressive than the linear classifiers. It's much more comparable to neural networks. So let's count how many hypotheses we might consider. Let's say we are allowed to build decision trees with n Boolean attributes. So we have n Boolean attributes, and we say any decision tree over these n Boolean attributes is OK. How many different trees can we build? Or maybe, asked in a different way-- this is not so much about how many trees we can build, it's about how many functions we can represent. How many functions exist over n Boolean attributes? Well, what does it mean to write out one function? To write out one function over n Boolean attributes, we need to write a table. And that table will have 2 to the n entries, because every possible combination of attributes will need to have a function value. So that's 2 to the n rows. Then, for each of those rows, you get to choose either true or false. So 2 to the n times, you get to make a choice of true or false. And so the number of functions: we get to make a choice for every row out of two possibilities, so it's two to the power of the number of rows, which written out fully is 2 to the 2 to the n.
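The XOR tree from the table, as code: a depth-2 tree captures XOR exactly, which no single linear threshold over the two bits can:

```python
def xor_tree(a, b):
    if not a:           # first split: on attribute a
        return b        # a false: answer equals b
    else:
        return not b    # a true: answer equals not-b

for a in (False, True):
    for b in (False, True):
        print(a, b, xor_tree(a, b))   # reproduces the XOR truth table
```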
You might wonder, is that a big number? A small number? Well, we can see. If n is just 6, this is how high that number is. So this number grows pretty quickly, because you exponentiate the exponent, making it doubly exponential, which makes it grow a lot faster even than a regular exponential. So you might say, oh, any decision tree over 6 attributes is a pretty big hypothesis space already. We need to be careful about over-fitting even when we just consider 6 attributes and allow for any decision tree. So maybe we should not allow for any decision tree. Maybe we should only allow for short decision trees, where you can't condition on as many attributes. How many trees are there of depth 1, often called decision stumps? Well, depth 1 means you only get to choose one attribute, but to choose that attribute, you get to choose out of n. So you have n choices of that one attribute. Then, after you make that choice of the attribute, you have to choose an output for the true value of that attribute and for the false value, so there are two rows. For each of those rows, you have to choose true or false-- two choices, which you make twice. And so we have 4n possible decision stumps with n attributes. So for 6, that would be 24, which is a lot smaller than this gigantic number we have on top. So that might be one way to go. And often people do this. They limit the depth of the tree to make sure the hypothesis space doesn't grow too large, because with a hypothesis space that's too large, you might over-fit and not generalize to new data. So going back to the learning theory: if you have a large hypothesis space-- you allow for any tree, let's say-- then you have low bias, because you're not forced to stay away from fitting your training data. But the flip side is you might have pretty high variance, because if there were a small change in your training data, you might have come up with a completely different answer when fitting to your training data. What's the procedure for decision tree learning? Let's start with the main loop over here, and then we'll look at the exit conditions later. You look at the set of attributes you have and the set of examples, and you choose a best attribute. What might that be? We don't know yet. It's a bit like in A* search, where we had a heuristic to decide which node we should expand. Same thing here: which attribute should we split on? Then, based on an attribute, we're going to build a new tree in that location. That's going to be a sub-tree of what we've already built so far. Then we'll look at each value that attribute can take on. We'll look at the examples that correspond to that value, and we feed them into a sub-tree. We're going to build a sub-tree that uses only those examples. So as we build a tree, we start with all examples. We have a split. We channel examples based on the attribute values and keep repeating that. Keep going around this loop. At some point, we're down a branch-- maybe, I don't know, the restaurant is empty and the wait time was 60 minutes or more, and it's a French restaurant and this and that-- and all of a sudden, there are no examples there. Then, of course, you stop building the tree. There's nothing you can do there. If all examples have the same classification, there is no need to keep splitting on more attributes, because they all have the same label, so you can just stop and put all those examples in that spot in the tree.
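The two counts just derived, computed for n = 6 Boolean attributes:

```python
n = 6
num_functions = 2 ** (2 ** n)   # one true/false choice per table row, 2^n rows
num_stumps = 4 * n              # n attribute choices, 2 leaves, true/false each
print(num_functions)            # 18446744073709551616, about 1.8e19
print(num_stumps)               # 24
```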
Then, if the attributes are empty-- you've looked at every attribute; let's say there were 10 attributes. You worked your way down building that tree, and you're somewhere here in the tree. You've looked at all 10 already. There's nothing you can do anymore. You already looked at all 10, and it is what it is. If you still have a 70/30, then it is 70/30 for that leaf. That's just what you can do. Or you need to go find new features. How do we choose the attribute? Well, let's look at this example here. Which one would do better, left or right, as a split? Well, clearly we'd prefer the left, because on the left there's a little bit of information coming out of it: if the answer is none, it's all red; if the answer is some, then it's all green. And for full, it's a mix. Whereas on the right, it stays 50/50 all along. That's a completely non-informative attribute split, the type split there. So intuitively, this is easy for this example. But the question, of course, is can you write a piece of code that can look at splits and decide, by looking at the potential split based on one attribute versus another attribute, which attribute performs the better split? And ideally, we'd have some kind of number. We'd be able to say, this attribute has a quality of, whatever, 5 or something, and this one has a quality of 7. And higher quality is better, so we'd go with 7, or something along those lines. So, some kind of metric quantifying the quality of a split. OK. To get to that metric, we're going to do a little detour into information theory. Information theory is a pretty big field. A lot of interesting things there. We're not going to do it justice in two slides, because it could be multiple semesters of only information theory. But there are some interesting ideas in here, even in just two slides. So let's think about communication theory. Let's say you want to answer a Boolean question. What do you need to do to answer that question? You need to send either a 0 or a 1. So you need one bit of information to go across the channel to answer that question. What if it's a four-way question? How many bits do you need to send across? Well, for a four-way question, if you only send 1 bit, you can only send a 0 or 1. You cannot distinguish between sufficiently many things. But if you get to send two bits, you can send 00, 01, 10, 11. You have four options, and you have enough. So here we need one bit. Here we need two bits. How about a four-way question where you know ahead of time that the information that's going to be sent is always the fourth entry? Well, we don't even have to communicate. If you know the answer here, 0 bits is enough. What this shows is that how many bits you really need depends on your distribution over the possible things you might want to send. For example, let's say we have a three-way question. With this distribution, what could be our best strategy here? You could say, oh, well, if I'm sending either one or two bits, one is not enough, so it's going to be two bits, and then I just always send two bits-- maybe it's 00, 01, and a 10. That's not optimal. You can instead say, hey, this one's really likely. I'm going to send a 0 for this one. These ones are less likely. I'm going to use a 10 and a 11 for those. And when I do that, how many bits do I need on average? Well, half the time I send one bit, then one fourth of the time I send two bits, and another one fourth of the time I send two bits. And so this comes down to 1/2 times 1 plus 1/4 times 2 plus 1/4 times 2, which is 1/2 plus 1/2 plus 1/2, so 3/2 bits on average.
Do we ever send 3/2-- that is, 1 and 1/2 bits? No, it's always one or two, but on average we send 1 and 1/2. That's a more efficient encoding scheme than a fixed-length encoding. There is a generalization of this idea. And I'm not going to claim this is entirely obvious from just this one example, but the generalization is that if you have a distribution and something has a probability p, then you can get away with sending a code word of length log of 1 over p. So if something is extremely likely, p is close to 1. If p is 1, you send 0 bits, because always getting the same information is no information, really. If p is close to 0, then 1 over p goes to infinity, and log of infinity is still infinity-- it's going to be a very large number. But that's OK, because the probability of that thing is so low that the number of times you have to send a large code word is very small. And so you get a notion here where, on average, you send the sum over all possible things of the probability of that thing times the log of 1 over the probability of that thing. That's how much you have to send. This is the length of the message, the encoding of message i, and this is how often you have to send message i. And so it's a redistribution of bits where you use a very small number of bits for highly likely things and a large number of bits for unlikely things. So that measure is called entropy. So the summation I just wrote-- and log of 1 over p_i is minus log of p_i-- this thing here is the entropy, denoted by H. And so every distribution has an entropy. And there's a theorem that says that, if indeed your data comes from that distribution and you choose the optimal encoding scheme, on average the number of bits you have to send is equal to the entropy, that is, the weighted sum of the log 1 over p's for each one of them. Look at these. OK. Which one has high entropy? Which one has low entropy? Well, high entropy means that you need to send a lot of bits, meaning that there is a lot of uncertainty about things. So this one's high entropy, because you know nothing really ahead of time. This one is really low entropy, because you know everything ahead of time. And this one here, entropy is somewhere in the middle. You know a little bit ahead of time, but not everything ahead of time. So what we have now is a way of quantifying the amount of uncertainty we have in a distribution. That is the criterion we're going to use to decide which attributes to do splits on. Because what we want, as we do splits, is to reduce uncertainty. And that was the intuition here. We liked the one on the left because there was more certainty in the different pockets, whereas the one on the right retained all uncertainty in every pocket. We now have a way to measure that. Entropy is the way to measure how much uncertainty there is when we know we're going to get a sample from a distribution but we don't know what the sample is going to be yet. So it's very easy to compute, just a weighted sum of log 1 over p's. And so when we look at this, we can say, OK, what is the resulting entropy here versus here? Well, this guy has 0, this guy has 0, and this guy has something in the middle. And these are all somewhere in the middle. Now of course, there are multiple pockets, and when you choose an attribute, you generate all those pockets. So we're going to take an average of the entropy in each one of the pockets, in each one of the branches, to decide which one has the better resulting entropy.
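A minimal entropy function matching that formula, checked against the distributions just discussed (the three-way distribution from before gives back the 3/2 bits computed earlier):

```python
import math

def entropy(ps):
    """H = sum_i p_i * log2(1 / p_i), skipping zero-probability entries."""
    return sum(p * math.log2(1 / p) for p in ps if p > 0)

print(entropy([0.5, 0.25, 0.25]))   # 1.5 bits: the earlier three-way question
print(entropy([0.25] * 4))          # 2.0 bits: uniform four-way, high entropy
print(entropy([1.0, 0.0, 0.0]))     # 0.0 bits: known ahead of time
```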
So we compute the expected entropy. From that we produce something called information gain, which might remind you of the value of perfect information. Here's what it is: information gain is the entropy of this guy minus the weighted entropy of all of these guys. And if the entropy drops by a lot, it means you chose a good split, because your uncertainty is reduced. If entropy goes up, that would be pretty bad, but that typically shouldn't be possible, or we shouldn't let that happen. Here on the right, it stayed the same, everything uniform. That's as bad as it can be for entropy. That's definitely the worst possible attribute to pick. So now we can do this. We can decide that splitting on patrons makes more sense, because the entropy drops-- we have information gain. After we do that, we can repeat. We can run through the algorithm that we covered a few slides ago. The only thing we'd left open there was which attribute to pick; we now do it based on information gain. Here is the tree that we end up with. Remember the tree at the very beginning of the second half of lecture, the true hypothesis? It was bigger. It was not exactly like this. It was a little different. But we didn't have enough data to recover that tree. If we had infinite data, we might have found that tree, but we didn't have infinite data. We ended up finding this tree. So there's no guarantee when you run this that you find the ground-truth hypothesis-- just like any machine learning, you're just fitting the data. It's actually a lot simpler, which might be a good thing, because it might generalize well, but it's not correct for all scenarios. Here's another example. Let's say we want to predict whether a car has good miles per gallon. And we can predict it from cylinders, displacement, horsepower, weight, acceleration, model year, and maker. Again, in principle, if this is an out-in-the-wild problem you're trying to solve, you would think about, oh, maybe I'll try naive Bayes, maybe I'll try logistic regression, maybe I'll try a neural network, or maybe I'll try a decision tree. In this lecture right now, we're going to do it with a decision tree to cover more ideas centered around decision trees. So we look at the information gain for each attribute. We could say, OK, if we split on cylinders, what do you end up with? Here are the distributions shown, and then this is the information gain, the entropy we had before the split, and the weighted entropy we have after the split. Information gain if we split on displacement, on horsepower, on weight, on acceleration, on model year, on maker. Information gain is highest if splitting on cylinders, so we split on cylinders. Now here is what we end up with as we run this algorithm. We started with all the data at the root, but this data now has been distributed over different branches depending on how many cylinders was the value of the attribute for that data sample. Over here, there's zero data, so nothing left to be done. Over here there's still a lot of data. Here there's not much data left. We can probably stop branching there. There's only one left. Here also, everything's the same. Everything's bad. So that's the prediction we stick with. Here-- well, here it's more debatable whether you should continue or not, because 9 to 1 is already pretty extreme. It might be over-fitting if we do one more split, or not. Not clear. Let's ignore that for now. Let's think about this one here and see what we want to do there. Let's split it. And here, maker had the highest information gain.
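A sketch of that computation: information gain is the parent's entropy minus the weighted average entropy of the branches. The counts below mirror the patrons-versus-type comparison from the restaurant example (12 samples, 6 wait and 6 don't) in its standard form; treat the exact numbers as illustrative:

```python
import math

def entropy(ps):
    return sum(p * math.log2(1 / p) for p in ps if p > 0)

def information_gain(parent_counts, children_counts):
    def dist(counts):
        total = sum(counts)
        return [c / total for c in counts]
    n = sum(parent_counts)
    weighted = sum(sum(c) / n * entropy(dist(c)) for c in children_counts)
    return entropy(dist(parent_counts)) - weighted

# patrons: none -> (0 wait, 2 leave), some -> (4, 0), full -> (2, 4)
print(information_gain([6, 6], [[0, 2], [4, 0], [2, 4]]))           # ~0.541 bits
# type: every branch stays 50/50, so the gain is 0
print(information_gain([6, 6], [[1, 1], [2, 2], [2, 2], [1, 1]]))   # 0.0
```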
And then over here, horsepower had the highest information gain. You have more splits to do to get all the way to where everything is deterministic. Let's see. Do some more splits. After you keep running the algorithm I described, this is what you end up with. Definitely you should be worried about over-fitting here, because we fitted all the training data perfectly, except for this guy over here. That one is unexpandable. What does that mean? It means that no matter which attribute we split on, it's not going to make any difference anymore. We can't improve this anymore. The function just happens to be stochastic, apparently, which can happen. Not all functions are deterministic. And that's what this would be modeling. Or maybe somebody mislabeled something. So this is our big final tree. Will it generalize? Your guess should be no. You're probably over-fitting at this point, because you fit every pattern that's in the training data. In naive Bayes, we did smoothing to avoid that. In perceptron we did early stopping. Well, let's take a look here. Percentage wrong on training data: 2.5%. Percentage wrong on test data: 21%. Clearly we're over-fitting. We do much better on the training data than we do on test data. That's exactly what over-fitting means. OK. So what now? Consider this split over here. We put an extra number in here, p chance. What does that thing mean? You could do a calculation and say, let's say I had an attribute and I assigned values to that attribute randomly. So for each row in my table, I just have this extra column, and I just assign the value of that attribute randomly. And I do a split on that random attribute. How much information gain would I get? If that information gain is really good too, it means information gain can come from just memorizing the training data, not from a pattern in your data. We're not getting into the specifics of how you compute this probability, but the intuition should be clear: if somebody introduces a random feature, and splitting based on that random feature is equivalent to splitting on this thing that you think is a real feature, well, maybe it's not that real a feature. Maybe you should think of it as also a random feature and not split on it anymore. So you can do a comparison. What is the information gain when you split on a random feature versus when you split on one of the real attributes? And if it's comparable, maybe you should stop splitting. So we're not going into the math of how to do this, but we don't want to do extra splits when just chance would also have led to similar information gains. So we'll attach significance values, p chance, which were in the tree, which is the probability of a split that turns two blues and one red into one blue, one blue, one red, and an empty. What's the probability of this kind of outcome if it was a chance attribute split? And then, if that probability is too high, too close to chance, then we just don't do the split. Now, in practice, people do it slightly differently. What they do is they build the entire tree and then do a second pass up the tree, removing splits that don't satisfy the criterion. You might say, why not just do it on the way down and stop early, saving the work? Think back to, for example, the XOR. With the XOR, in the first split, you gained nothing. It's only when you did the second split that you could see the correlation between the two attributes.
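The lecture skips the math of computing p chance; as one hedged way to get the same intuition in code, the sketch below estimates it by shuffling the attribute's values many times-- turning it into a chance attribute-- and asking how often a chance split matches the real split's information gain. The labels and candidate attribute are made up, and real implementations typically use a closed-form significance test rather than this permutation-style estimate:

```python
import math, random
from collections import Counter

labels = ["yes"] * 6 + ["no"] * 6          # made-up toy labels

def entropy2(yes, no):
    h = 0.0
    for c in (yes, no):
        if c:
            p = c / (yes + no)
            h += p * math.log2(1 / p)
    return h

def gain(attr_values):
    branches = {}
    for v, y in zip(attr_values, labels):
        branches.setdefault(v, Counter())[y] += 1
    n = len(labels)
    after = sum(sum(c.values()) / n * entropy2(c["yes"], c["no"])
                for c in branches.values())
    return entropy2(6, 6) - after

real_attr = ["a"] * 4 + ["b"] * 8          # made-up candidate attribute
real_gain = gain(real_attr)
chance_gains = []
for _ in range(1000):
    shuffled = real_attr[:]
    random.shuffle(shuffled)               # same value counts, random assignment
    chance_gains.append(gain(shuffled))

# fraction of chance splits at least as good as the real one:
p_chance = sum(g >= real_gain for g in chance_gains) / len(chance_gains)
print(p_chance)   # high -> the split looks like chance; don't split
```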
And so whenever there are attributes that might be correlated, and that's the information you need to get, you need to allow for two consecutive splits before you see any signal. And so by allowing yourself to go all the way to the bottom, you over-fit your training data, but you make sure you don't miss any signal that way. You at least capture all the signal, but also some non-signal, and then work your way back. You might get something better that can capture more of the pattern in your data while still not over-fitting. So you would start from the bottom, and for each split, check: OK, what's the probability of this split compared to the probability of getting it under chance? And if it's on the wrong side of that equation, then just get rid of the split and combine all the data into the node above. Here's this example in action. What it ends up with is a very short decision tree which only splits on cylinders, with no other splits. Training set error used to be 2.5%. Now it's 12.5%, which is worse. But test set error used to be something pretty bad-- I forgot what the number was. Let's see, what was it? 21%. And now it is 16%. So we actually improved our decision-making capabilities on future data, or at least we expect that to be the case on future data. This is a form of regularization. We've seen a few different forms of regularization now, and that's really one of the key ideas that you should keep in mind as you do anything machine learning: you want to make sure your hypothesis space isn't too big. You either keep it small by design-- you say, no bigger trees than this, no bigger network than this, no more features than this-- or you regularize. In principle, your hypothesis space is big, but then you don't let things get all the way to the outskirts of that hypothesis space. For decision trees, the outskirts are fully built-out decision trees with branching on every possible attribute along every path down to a leaf. That's too big. That's likely over-fitting. You want to prune that back. That's the regularization. So the further you go in training, the higher the variance, the bigger the tree you end up with; and then as you go back pruning, you might introduce a little bit of bias-- you might lose a little bit of signal that you could have extracted-- but you'll reduce variance. So there are two ways of controlling over-fitting: limiting the hypothesis space, or regularizing how you select from the hypothesis space. Both of them are used. Often when you want something really expressive, it's the second one. You make it really expressive and regularize to make sure that you don't get into the weird parts of the space. OK. That's it for the core materials of 188. You have a midterm on Thursday, and in the next couple of lectures, we'll look at applications of the materials we already covered. OK. Good luck on Thursday. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181108_Machine_Learning_Optimization_and_Neural_Networks.txt | PROFESSOR: OK. Welcome, everyone. Welcome to the 22nd lecture of CS 188. A couple of announcements-- actually, the same ones as last lecture. Your project four is due tomorrow at 4:00 PM. Your homework 10 has actually been released, I hope, and will be due on Tuesday. And then your midterm, perhaps the most important thing to keep in mind, is going to be next week Thursday. There will be a prep page coming out later today or tomorrow with a practice midterm two and also a list of office hours and sections dedicated to midterm two. Any questions about logistics? OK. So today's topic is Optimization and Neural Nets. In general in 188, we try to cover topics that are pretty timeless, so that if, 20 years from now, after 20 years of no AI, you decide to get back into AI, the materials you covered are still relevant. I think we do that pretty well. It happens that today the material we cover is also the fashion of the day. So it's not only what we think of as a timeless topic, but also the most fashionable topic in AI for the past five years, and maybe for another five years to go. Who knows how long? So we'll cover both the most fashionable topic and what we think of as a timeless topic today. It's a lot more mathematical than most of what we've done in 188. That was already the case in the last lecture. So I'm going to rework through some of the things we covered last lecture to make sure we have the right foundation, and then from there build up to neural networks. So, reminder: what are linear classifiers? Linear classifiers have inputs, which we call feature values, which might come in, let's say, here, here, here, and so forth. Each feature has a weight, which is how much attention you pay to that feature. It could be a positive weight or a negative weight. And the weighted sum of the inputs is the activation of the unit. So mathematically, we have a weighted sum happening here. i is indexing over the different input channels. And so we have, let's say, features 1, 2, 3, and so forth, all indexed by i, and a weighted sum of those features. In shorthand notation, it's this dot product notation. Then if the activation is positive, bigger than 0, we might say it's the positive class. If it's smaller than 0, we might say it's the negative class. Pictorially, it might look something like this. So that's a linear classifier. Then last time we said, well, if all we do is this kind of deterministic decision making, it's not always obvious what the optimal choice of w is. It's harder to define, and there might not be one that gets everything right. So we transitioned to a probabilistic decision-making process, where we said, OK, we still have an activation just like before. That hasn't changed. We call it z. And it's the inner product of the weight vector with the feature vector that's coming in. If it's very positive, we want the probability of the positive class to go to 1. If it's very negative, we want the probability of the positive class to go to 0. So the question is, can we find a function that maps a real number that can be anywhere between negative infinity and positive infinity to the range 0-1? In fact, there are many such functions that can do this. The one that we picked is the sigmoid. What's key about it is that it's monotonically going from 0 to 1. The higher z becomes, the closer to 1. The more negative z becomes, the closer to 0.
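As a minimal sketch of the activation and the sigmoid squashing just described, with a made-up three-feature weight vector:

```python
import math

w = [0.5, -1.2, 2.0]    # one weight per feature (toy values)
f = [1.0, 0.7, 0.3]     # feature values for one input

z = sum(wi * fi for wi, fi in zip(w, f))   # activation: dot product w . f
p_positive = 1 / (1 + math.exp(-z))        # sigmoid maps (-inf, inf) to (0, 1)
print(z, p_positive)                       # 0.26, ~0.565
# deterministic rule: predict positive iff z > 0
# probabilistic rule: P(positive | x) = sigmoid(z)
```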
There were some questions after lecture: could you use another function there? You definitely can. The key property is this transition from 0 to 1 that's monotonic. One reason people use this one a lot is because when we optimize, we'll need to take derivatives, and this one happens to have convenient derivatives. And it's convenient to compute in itself. So there are some practical reasons why you might want to pick this one. But for what we've covered so far, the main reason you'd pick something like this is that it goes from 0 to 1 as you go from negative infinity to positive infinity. So that's the sigmoid. Then the question was, what's the best w when we're going to make probabilistic decisions? Well, we've seen that principle in a previous lecture when covering naive Bayes. It's the parameter vector that maximizes the likelihood of the data. In this case, the maximum likelihood estimation comes down to maximizing the conditional probability of the label yi. So we have a data point i, which consists of a label yi and an input xi. And our model will predict a distribution over possible labels. And we want the probability placed on the correct label, the one that's in the data, yi, to be as high as possible. There's a log in front of that. The log does not change things in terms of whether we want it to be high or low. It's just a monotonic transformation. And what this is is a sum of log probabilities of all the data points' outputs given the inputs. And the score here, the value achieved, depends on w. Different choices of w will achieve different sums of log probabilities. And what we'd like to find is a w that maximizes this, because that would be the w that best explains the data. Diving into the specifics: the probability of the positive label, given the input xi and some weight vector w, is this thing over here. It's the linear classifier calculation, the dot product between weight vector and feature vector. And then the sigmoid is applied to that, which is the remainder of that expression, 1 over 1 plus e to the negative activation. And then the probability of the other class is 1 minus the probability of the first class, when there are only two classes. Any questions about this? This notation will keep coming back and back and back the entire lecture. OK. That's two classes. How about multiclass? So this is two-class logistic regression. For multiclass, we could say, well, we can now have a weight vector per class. And the weight vector points in the direction that you think the data points for that class inhabit. So then for multiclass linear classification, you'd say, well, let's look at the weight vector for each possible class i. We can compute the dot product with the feature vector, which describes the input. And then we can see which one maximizes that dot product. And that's the label we assign. But again, that's a deterministic decision rule. And if things are not perfectly linearly classifiable, then it becomes less clear what your right choice is. And so we're again going to define an objective function that characterizes how good a choice of weight vectors is. So we're going to turn this into probabilities. How do we turn, let's say, a three-class problem's three activations z1, z2, z3 into probabilities? This is the equation we're going to be using, which is called a softmax. Let's again interpret this. The first three numbers are the original activations, just the results of the inner products. And then these are the softmax activations.
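A sketch of that objective for two-class logistic regression-- the sum over data points of the log probability of the correct label. The three data points and candidate weight vectors are made up; a learner would search over w for the highest value:

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

data = [([1.0, 2.0], +1), ([2.0, 0.5], -1), ([0.5, 1.5], +1)]   # (features, label)

def log_likelihood(w):
    total = 0.0
    for f, y in data:
        z = sum(wi * fi for wi, fi in zip(w, f))
        p_plus = sigmoid(z)                       # P(+1 | x; w)
        total += math.log(p_plus if y == +1 else 1 - p_plus)
    return total

print(log_likelihood([0.0, 0.0]))    # 3 * log(0.5), about -2.08
print(log_likelihood([-1.0, 1.0]))   # higher: this w explains the data better
```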
Let's imagine z1 is a lot bigger than z2, and z1 is also a lot bigger than z3. Remember, an exponential looks like this and grows very quickly as you move further along that horizontal axis. So if z1 is a lot bigger than z2 and a lot bigger than z3, what will happen? e to the z1 will be way, way, way bigger than e to the z2. And e to the z1 will also be a lot bigger than e to the z3. And what will happen is-- and what we have at the bottom here-- e to the z1 will dominate. And we'll have-- it's really a big difference-- a probability close to 1 on the class label 1, and close to 0 on the others. No matter exactly how these numbers, the z1, z2, z3, are configured, what happens here is they all get turned into a positive number, because exponentiating makes them positive. And by dividing by the sum of all three, we normalize. And so these three numbers that we get out will always sum to 1 and will always be positive. And the more you're activated, the higher your probability will be in this resulting softmax. Now, what z1, z2, z3 will be for a given data point will depend on our choices of the weight vectors: w for class one, w for class two, w for class three-- altogether, w. What we'd like to find is the best w. So again, we can use the same principle of maximum likelihood estimation, which is the principle where we say we want to maximize the likelihood of the data over all choices of parameter vector available to us. So in this case, maximize the likelihood of the data. Because we're interested in classification, making a decision will mean maximizing the likelihood of the labels conditioned on the input. Remember, in naive Bayes, you maximize the likelihood of both x and y together. That's a different optimization. Here we're just interested in y given x, and we'll try to find w's that are maximally good to optimize this objective over here. Underneath, what's happening? These probabilities are all softmaxes. That is, you have activations-- the weight vector for, let's say, the label's class in an inner product with the feature vector-- exponentiated at the top. And then at the bottom, you normalize by looking at all possible class labels: you take the inner product of the weight vector for each possible class label with the feature vector, exponentiate, and sum it all together to normalize. That's multiclass logistic regression. And that's what we'll be building on most directly in this lecture. Any questions about this? OK. Then what we want to do now in this lecture is find a way to solve for w. The naive way to solve for w is to assume that you have infinite compute, and you try every possible w. And if you do have infinite compute, that is actually a possibility. You can just cycle through every possible w, and then figure out which one maximizes the score. And that would give you the same answer as we're hoping to find with the procedure we're going to describe. But the procedure we'll describe in this lecture will not require infinite compute. So we've actually seen something similar before. When solving CSPs, we have to find assignments to variables. Think of w as a vector with each entry being a variable. And we're trying to find an assignment to each entry in the vector w. With CSPs, the score would have been how many of the constraints are satisfied. The more constraints satisfied, the better. And we saw something called local search, which was fairly scalable, where we start with a random assignment of the variables, and then we just do a random perturbation of a variable.
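A minimal softmax matching that description, with the common max-subtraction trick so that large activations don't overflow the exponential (subtracting a constant from every z leaves the result unchanged):

```python
import math

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]   # shift for numerical stability
    total = sum(exps)
    return [e / total for e in exps]       # positive, sums to 1

print(softmax([5.0, 1.0, 1.0]))   # z1 much bigger: ~[0.96, 0.02, 0.02]
print(softmax([1.0, 1.0, 1.0]))   # equal activations: uniform thirds
```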
And we see if maybe it's better: more constraints satisfied, let's keep it; fewer constraints satisfied, maybe let's reject it, and repeat. So that could be a starting point. And it's, philosophically speaking, the starting point for this lecture. You could just pick a random w, and then perturb an entry. See if it's better or worse. If it's better, keep it. If it's worse, go back to what you had before, and repeat. That's actually an algorithm you could run. And you might find a pretty decent w if you run it long enough. Some trickiness, though: what we have here is not a discrete domain for each variable. We have continuous domains. So when we pick another value for a variable, there are infinitely many choices. And so we need to maybe be a little more clever about how we're going to pick a reassignment. We have infinitely many choices here, unlike back when we did CSPs, where there might have been three colors to choose from to color a map. So how do we do this efficiently? One of the things that is actually pretty interesting here is that even though it seems like continuity is our enemy, in that it makes things infinite, and there is so much more to choose from and to search over, actually, it'll help us a lot, because the beauty of continuity is that you can actually locally get some signal and see in which direction things are getting better, and in which direction they're getting worse. Whereas with CSPs, you have to make a discrete change. It doesn't give you as much local signal as we will get here. So even though for now it looks like, oh no, continuous is bad, you'll see actually it's a good thing for our optimization process. Let's start with a very simple continuous optimization, one-dimensional. We have a function g of w. And we'd like to find where this function is maximized. If it's really a one-dimensional function, you could just plot it like we did here. That's effectively having the infinite compute. You evaluate the function everywhere. You look at it, and you say, oh, wow, this looks good. What's the w? This is my w star, the best possible choice. So sure, if your real problem is 1D, just do that, and you're good to go. I'm not advocating against doing that. But what we want to do now is build some intuition from the 1D case that we can reuse in the higher-dimensional cases. And in the higher-dimensional cases, just plotting it and looking at it is not going to work out. So what else can we do rather than just plotting and looking at it? Let's say we randomly initialize. And this is our w0, the random initialization. We can look at the function value g of w0. You might say, well, it's good enough. I call it done. My log likelihood has achieved a high enough score. But you can also look around and say, what if I change it? Can it become any better? Maybe you can perturb it by a small amount h to the right and to the left. So it would be w0 plus h over here, achieving this value here, and maybe w0 minus h achieving this value over here. You might say, well, moving to the right seems better than moving to the left, because the value went up and we're trying to maximize the log likelihood. So then let's step in that direction and maybe repeat this process. Now you could probably do this. Then again, if you did exactly minus h and plus h, you would end up here and here. And then you'd choose to go here. Next time you'd end up here. Next time here. And then maybe next time, if you're lucky, you land right on the top.
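A sketch of that perturb-and-keep local search on a continuous w; the score function here is a made-up stand-in for the log likelihood, with its peak at (1, -2):

```python
import random

def score(w):                                    # stand-in objective
    return -((w[0] - 1) ** 2 + (w[1] + 2) ** 2)

w = [random.uniform(-5, 5) for _ in range(2)]    # random initialization
best = score(w)
for _ in range(10000):
    i = random.randrange(len(w))                 # pick one entry to perturb
    old = w[i]
    w[i] += random.uniform(-0.1, 0.1)
    new = score(w)
    if new >= best:
        best = new                               # better: keep the change
    else:
        w[i] = old                               # worse: go back
print(w)   # ends up near [1, -2]
```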
And after you do that, you look in both directions. You'll see it doesn't get better. You call it done. You found a local optimum. It doesn't mean you've found the global optimum. In this case, we did. But you could imagine if you started, let's say, here and you did the same procedure, you'd end up somewhere over here. We'll be OK with that in the optimizations that we're doing. We're going to be fine with finding local optima. We might just randomly initialize a few times and see if we find local optima that are better or worse with different initializations, and then keep the best one. So in this case, we initialize twice; once here, once here. We would have found this one and this one. And we'd have retained this one here and said, that's the one that maximizes our score. So that would mean we can discretely perturb a little bit in each direction. You might wonder how much we should perturb in each direction. And do we really need to do it in both directions? Because if we look in one direction, doesn't that already tell us something about the slope and tell us it's going uphill in that direction, so then it will be downhill in the other direction if this function is smooth? So maybe we can only look in one direction, or maybe we can actually do something a little different, which is compute a derivative. Compute a derivative, which is this equation over here. What is this doing? It's saying we are at w0. And we're taking the limit of-- taking a step to the right, to the left, looking at the difference, and dividing by how far we stepped. So this is computing the slope. So it's the linear approximation to the function right here that we're getting. That's what the derivative is doing. You probably covered derivatives a long time ago. Maybe you've seen them again since. The intuition that matters the most for this lecture is that it's computing a linear approximation to your function locally. And it turns out that it typically is not-- we'll see more about this later-- it's not that hard to compute a derivative. There are rulebooks for that that tell you, if this is your function, this is your derivative. So you can just kind of use that rulebook and say, OK, this is my derivative. Let's say you do this for this function. Then you could say, well, it's sloping up to the right. So I'm going to make a step to the right. And you would make your step to the right. Maybe now you're here, then here. Again, you compute the derivative. You see it's sloping up to the right. You do another step. Now you're here. Compute the derivative. Again, it's sloping up to the right. Oops. I'm not there yet. You take another step. Now you're here. It's starting to flatten out a little bit. You'll take another step. And then if you land at the top, you'll see that the derivative is 0. It's flat there, which means in no direction can you improve. And you might call it done. So that would be doing a kind of procedure that will generalize to what we're going to do in higher dimensions. At every point we compute the linear approximation. And we just trust the linear approximation to tell us in which direction to step. We take a step, and then we repeat this process. And notice that this would not be possible if your variables were discrete. If you have discrete variables that take on the values red, green, blue, there's no way to take a derivative against red, green, blue the way you can with continuous variables.
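To make that concrete, here is a minimal sketch of the 1D procedure just described, using a finite-difference estimate of the derivative. This is an editor's illustration, not code from the course; g, alpha, h, and the step count are all made-up placeholders.

import math

def gradient_ascent_1d(g, w0, alpha=0.1, h=1e-6, steps=1000):
    # Climb g starting from w0 by repeatedly following the local slope.
    w = w0
    for _ in range(steps):
        slope = (g(w + h) - g(w - h)) / (2 * h)  # symmetric derivative estimate
        w = w + alpha * slope                    # step uphill; near the top the slope is ~0
    return w

# Example: g(w) = -(w - 3)^2 has its maximum at w = 3.
print(gradient_ascent_1d(lambda w: -(w - 3) ** 2, 0.0))  # prints approximately 3.0

As in the lecture, this only finds a local optimum; you would rerun it from several random initializations and keep the best result.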
So it should start to become clear that the fact that this is a continuous space actually helps a lot in doing this kind of optimization. Any questions about this? Yes. STUDENT: Do you have a particular reason [INAUDIBLE]? PROFESSOR: Say it again. STUDENT: [INAUDIBLE] PROFESSOR: So there are different definitions of derivatives. The question is, does it matter how we define it here? Not really. If you prefer a right derivative or a left derivative or a symmetric one, as shown here, it doesn't matter too much. Symmetric maybe makes the most sense. It's kind of cleanest. But don't worry about those differences. What's key is that the derivative is something that is a local approximation, a local linear approximation, to your function that tells you which direction is up and which direction is down. And in fact, it also tells you whether it thinks you're going up quickly, or only gradually, or it's flattening out. OK. How about a 2D version of this? And 2D will be the highest dimension that we're able to draw. But once we have 2D, hopefully the intuition will generalize to higher dimensions in your minds. One thing I want to point out here is when we have a 2D function we're optimizing, often the way we're going to visualize it is by looking at contour maps drawn underneath here. So if you look at those, these are concentric circles shown at the bottom underneath this hill shape. They're showing height contours. On a given line, the value of the function is the same. And in fact, the more you go to the middle, the higher the value of the function. And the more you go to the outside, the lower the value of the function. And often we won't even draw the thing on top. We'll just show those contours. And that should tell you something about where the good parts are. If I look at those contours, somewhere underneath here in the middle should be the right spot, because that's where things get higher, higher, higher. The procedure we'll use is a generalization of what we just did for one dimension, gradient ascent. Now, if you read a bunch of literature, you'll see gradients appear. Often, instead of ascent, you'll see something appear called gradient descent. That's if you want to find the bottom of a function. You descend if you want to find the bottom. You ascend if you want to find the top. We're going to try to find the top. So for us, it's gradient ascent. But if you see gradient descent, it's essentially the same kind of idea. It's just applied in reverse to go to the bottom, rather than going to the top. In many ways they're equivalent. It's just a choice whether you're maximizing or minimizing the objective you care about. So we want to perform an update in the uphill direction for each coordinate, because we have multiple coordinates now. The steeper the slope, so the higher the derivative along a coordinate, the bigger the step we're going to take for that coordinate. That's a choice. That's just the way this is set up. But some intuition as to why this might make sense: if it's very steep uphill in a direction, that's a very promising direction. You might want to put a lot of effort into going in that direction, whereas in some other direction, things are really, really flat. Well, why would you spend a lot of time in that direction? It seems like there's not much action in that direction. That's the intuition underneath here, but we'll see a little more formal intuition later. So consider a function with two variables, w1, w2.
And we'll do updates that are just of the form that we saw in the previous slide, effectively. We'll say I'm updating my first entry, w1, by whatever it was before. I'm keeping that, plus a small change. There will be a learning rate again, like we saw in Q-learning. And this thing here is the derivative with respect to w1. So it's when you are on this landscape here. Let's say this is w1. And you're at some point over here. You would say, well, let's see what happens as I move in the w1 direction, which would be effectively along this curve here. Keep moving in the w1 direction. You move along that curve. You measure the slope along that curve. And that tells you whether you should move to a higher w1 or a lower w1, depending on whether it's an upslope or a downslope aligned with w1. And then you take that step. The same thing for w2. So going back to the picture, we're still here. Let's assume this is w2. We will then consider walking along this curve, but we're not considering the entire curve, of course. We're just looking at the local derivative over here to see what the slope is along the w2 direction. And that will inform us whether we want to increase or decrease w2. And then let's think about this. Let's say the derivatives are all positive, meaning if we move in a positive direction for w1, things go uphill. Then indeed we increase w1. The same for w2. If we move in a positive direction for w1 and that happens to bring us downhill, that means the derivative is negative. Then that negative derivative here will ensure that we move in the negative direction for w1. Let's do this in vector notation. So we have a weight vector w. We update it by some learning rate times this thing over here. This might be new notation to you. What does that mean? It means it is a vector with entries of the vector corresponding to derivatives. In this case, it's a two dimensional function. So we have a derivative with respect to the first variable and a derivative with respect to the second variable inside this vector. But keep in mind this thing here is nothing more than a shorthand for saying, I'm going to build a vector with all the partial derivatives. And this thing is called the gradient. So when we say gradient ascent, what it means is we compute the gradient, which is a vector with all the coordinate-wise derivatives. And then we take a step in the direction of that gradient. And based on the reasoning we did here, we know that taking a step in the direction of the gradient moves us uphill, because if for a certain coordinate the entry is positive, that means we'll move in the positive direction for that coordinate. And that's the right direction. And if for a certain coordinate the entry is negative, it means we'll move in the negative direction for that coordinate. And that's also the right direction for that coordinate. So pictorially, looking from the top, we have some contours here. The highest scoring point is in the middle there. If we run gradient ascent, then we start here. We'll take a step, and end up over here. Another step, and end up over here, here, here, and so forth. These steps happen to become smaller. There could be two reasons, when you see a picture like this, that the steps are becoming smaller. One reason could be that your learning rate was decaying. And so that makes your steps smaller. And that's often going to be the case. Another reason could be things are flattening out. And as things flatten out, well, your derivatives are smaller.
And so the steps that you compute based on those derivatives are smaller steps. Now, let's ask a question. We've seen the procedure. What is the steepest direction? And I put this at the bottom here just to make sure we keep the notation in mind, which might be new to you. You're hill climbing. You're in the fog. You're on a mountain. And you're saying, well, it's good to measure in each coordinate direction what the derivative is, and then maybe step in that direction or the opposite direction based on that, but I'd rather follow the steepest direction uphill rather than doing this per-coordinate thing. Can I find the steepest direction? Because that might lead me along the shortest path to get to the top of the hill rather than all these coordinate-wise calculations. Well, we can try to compute that. We're going to be at some point w right now, which is some coordinates that we're at. And we're considering changing our coordinates. And we're only allowed to change our coordinates a little bit. What is this thing here? Delta 1 squared plus delta 2 squared. That relates to the amount that we move. Remember, if you have, say, delta 1 on this axis, delta 2 on this axis, you have a point over here. This distance over here is the square root of delta 1 squared plus delta 2 squared. So how far we move is the square root of the sum of the squares. So we're limiting how far we're allowed to move. What this is defining is effectively a circular region around where we're currently at. And we're only allowed to move within that circle. Now, I'm going to say, we're asking the question here, which move is going to increase our objective the most, as long as we're required to stay within that circle? And if that circle is really, really small, then effectively what we're finding based on that is the direction in which the slope is the steepest, because we can make only a very small change. And it has to have maximal effect. We've got to take the steepest direction. Also, if we're only making a very small change, we can approximate our possibly very complex function with a linear approximation. Assuming it's a smooth function, we can have a linear approximation locally. So our original function g, evaluated at w plus delta, is equal to g at w plus these two terms, which correspond to the linear approximation to the function locally. And notice that these are derivatives again. It's how much does the function change when I change w1, times delta 1, which is how much we changed w1, plus how much does the function change when we change w2, times delta 2, which is how much we are changing w2. We're looking for the delta that has the most effect here while keeping delta small, because we can't move very far. Notice this guy does not have a delta. So it will not affect our calculations. So I actually want to find the steepest ascent direction, not descent. So we want to maximize over deltas, where we stay within a small epsilon circle, this linearized approximation over there. Remember, if we have two vectors-- let's say we have a vector a here, and now we have another choice of vector delta, which could be maybe here, or maybe we put delta here, or maybe we put it here. But if delta has to stay within a circle, the way we can make the inner product between delta and a the highest is by making delta completely aligned with a. So we should pick delta pointing this way. And of course, we can't leave the circle. But this is our optimal choice of delta right there to maximize this inner product. This here is our a1. This is a2.
And a transpose delta is a1 delta 1 plus a2 delta 2. So what we see is we have to somehow choose a delta of this form to maximize the objective. What are these a1 and a2? They're actually entries in the gradient. In fact, our a is equal to this guy over here. And so what we find is that we should point epsilon-- point delta in the direction of the gradient, and it can be epsilon long. So the solution will be this over here. It's a normalized vector, the gradient over the norm of the gradient. So this thing here is normalized. It has norm 1, scaled by epsilon, because we cannot move more than epsilon. Which direction is it pointing in? Well, the direction of the gradient. What does that mean? The update we did on the previous slide, where we said we move in direction one based on the derivative along direction one and move in direction two based on the derivative along direction two, is actually giving us the optimal local improvement direction. So even though we were thinking about it with coordinates and we said, "Well, let's look at this coordinate. Let's look at that coordinate," the way we happened to scale the steps along each coordinate-- namely, we scaled them by the derivatives-- gives us the direction of steepest uphill. So we were already doing the right thing to go in the steepest direction. So that's great. We have this optimization problem. We want to find a good w. We can compute a vector of derivatives. That vector of derivatives tells us which direction to step. We step, and we repeat. And over time we climb the hill until we find, hopefully, some good local optimum. We've seen it in two dimensions. In more dimensions, it's essentially the same. The gradient vector will now have more entries. In n dimensions, it will have n entries. But other than that, procedurally, it's the same. You compute a derivative with respect to w1, with respect to w2, all the way to wn. Those are all derivatives with respect to just one variable. If you know how to take a derivative of a function with respect to one variable, you know how to do this. You just have to do it many times, because there are n variables to do it for. You build your vector, take a step in that direction, and repeat. So this is the procedure. We initialize maybe randomly, and then we iterate. We compute the gradient at the current position that we're at, w, take a step in that direction, and we keep repeating. And the closer we get to an optimum, the lower the gradient entries will become, because things will start to flatten out. And this will start moving around less and less and less, and hopefully converge. You might also wonder about the learning rate. It is a tweaking parameter. You might have to play with that a little bit to make it work. What would be a rule of thumb when you're, let's say, optimizing a new function? You're like, OK, what should my alpha really be? Because if you think about it, if you think of your function g of w, and somebody else gives you a new function that is 10 to the 6 times g of w, you're solving the same problem in many ways. But if the function is 10 to the 6 times g of w, your gradients will all be scaled up by 10 to the 6. And your steps will be a million times bigger, even though we're effectively trying to solve the same problem. So you can't just have a general learning rate that will work all the time, because somebody could come in and just rescale your function. That learning rate will not work anymore for you. So how do you get this to work out?
It's by looking into the w space. So you might say, ah, let's compute a gradient update. Let's see. OK. How do I rescale this gradient vector such that my update changes w by about 0.1% to 1%? If it does that, then it's reasonably sized, and I'll repeat this, repeat this, repeat this, and over time get to a good point. If you do this to decide on your alpha, somebody can rescale your function by a billion, a trillion. And it will not affect what you do with your w. It will all be the same update, because you'll measure the size of your update based on what happens in w space, not on however somebody might have rescaled your function. So now we know how to optimize. In fact, we've covered something much more general than maximizing likelihoods. We've covered the notion that if you have a function g that depends on some variables that are continuous variables, you can start from a random point and from there improve, improve, improve to land at a better point for that score that you have. Let's go back to learning. We want to maximize the log likelihood of the data. In this case, labels given the inputs. OK. Same thing. We can write out the gradient ascent procedure. Our g is now the sum over all data points of the log probability of label given input under the current w. Other than that, nothing has changed. Now, the gradient of a sum is the sum of the gradients. So actually, that's rewritten this way over here. That's our general procedure. Now, if you look at this, you might say, well, it looks like we compute a bunch of gradients, summed together, to update w. So I have to go through all my data and compute these gradients, and then I do an update. But what if I just computed one gradient on one data point? Wouldn't I already have some information that I can use to improve w right away, and only then go look at the next data point, then go look at the next one? So that, let's say I have a million data points, I don't have to do a million gradient calculations before I can make a change to w. In fact, you'd be right. You can do that. That would be called stochastic gradient descent. What you do there, once you compute a gradient, you right away incorporate it. And the upside here is that when you compute the next gradient, you're computing it at a better point in expectation than where you would have computed it otherwise. So you don't waste your time computing a million gradients at the same point. Those last gradients are computed at a way, way better point than you started from. So what does that look like? You still have a kind of infinite loop here till you decide you're happy. You now pick a random data point, and just compute the gradient for that data point of the conditional log probability of label given input. Do an update. You repeat. Now, you might say, well, that means I compute this gradient. And I'm waiting there before I can compute my next gradient. But maybe you're computing on a computer that has a lot of parallel compute power. Maybe your GPU can feed through many things in one go for which you do effectively the same calculation. Then you'd be waiting and not utilizing that. So actually what people mostly use in practice is something in between the two versions we just saw. We first saw batch gradient descent. You just look at all your data. Sum the gradients together. Take a step. Then we looked at stochastic gradient descent, where we look at one sample, do an update, another sample, do an update.
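Side by side, the two update rules just described look like this. A minimal sketch; grad is a hypothetical stand-in for the per-example gradient of the log likelihood, and w and the gradients are assumed to be numpy arrays.

def batch_update(w, data, alpha, grad):
    # Batch gradient ascent: sum the gradients over all data at the same w, then one step.
    total = sum(grad(w, x, y) for x, y in data)
    return w + alpha * total

def sgd_update(w, x, y, alpha, grad):
    # Stochastic gradient ascent: incorporate a single example's gradient right away.
    return w + alpha * grad(w, x, y)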
And in some sense that's optimal, except that if you have parallel compute, you're not utilizing it. If you have parallel compute, you'd say, well, let's take a batch of things. So instead of taking one, I take a batch of examples. Let's call that batch j. And then I sum together the gradients for each data point in that batch, which I can compute all in parallel, those gradients. Then I just reduce, sum it all together, and do an update based on that. So that way, if you think about wall clock time, you're not wasting wall clock time waiting for one to be done and then being able to do the next one. You can keep running and utilize your GPU to the maximum cycles it provides you. And so typically, if you ask, how big should this batch be? Typically, the batch will be determined by how much you can compute in parallel. If you can feed 100 examples in parallel, you might feed 100 through. If you can compute gradients on 1,000 in parallel, you'll feed 1,000 through. If you can feed a million in parallel, sure, a million go through. Whatever you can feed through in parallel is what you would feed through to optimize your wall clock time when you do this. And this one is called mini batch gradient descent, where mini batch refers to the fact that batch means everything, your entire training data. Mini batch means that you still take a batch, but smaller than your entire batch of training data, on which you do the gradient calculations to get your update. Any questions about this? How about computing all the derivatives? Because we've been assuming that we can just say, hey, give me a gradient, and then we just take a step in that direction. We'll look at that later. So let's not worry about that for now. I'll revisit that at the end of lecture. So once we've also seen neural networks, which are a generalization of logistic regression, we'll ask the same question. OK. Let's take a two-minute break here, and then let's generalize this all to neural networks. All right. Any questions about the first half? Yes. STUDENT: Can we use something like linear programming to solve for this? PROFESSOR: So the question is, can we use linear programming to solve this kind of problem? It's an interesting question. So we think about linear programming. What is linear programming? Linear programming is a methodology to-- well, if you think about it, it's a methodology, or a problem formulation. So linear programs are optimization objectives like we saw here, but where the objective has to be linear. So the function itself you're optimizing is linear. Now, the reason it still is interesting-- because if it's just linear you just run to infinity, and that's where the highest number is-- is that there are also constraints. You have to stay in a certain region. And the boundaries of that region are also linearly defined. And so linear programs are optimization problems like this with an objective, but also with constraints. And the format that is available to you is very limited. The objective has to be linear, and the constraints have to be linear. If your objective is of that form and possibly you have constraints, then definitely you can use linear programming. No problem. But for many, many problems, the objective will not be linear. And if you have constraints, which we don't have here-- but if you had constraints, they might not be linear, which might limit how much you can use the kind of off-the-shelf linear programming toolboxes.
But if their assumptions apply, they're highly optimized to do a great job at solving those problems. If you look at what's underneath, some of them use very specialized ideas, like simplex. Others use gradient-based methods underneath, too. And then it would come back to something very similar to this. STUDENT: [INAUDIBLE] PROFESSOR: Yeah. So interior point methods are ones that turn your linear program with constraints into an optimization problem where there are no constraints anymore, but the objective has changed from linear into something that rises to infinity at the boundaries. And then you're back. And you're minimizing in that case. And you're back to a similar setting we have here, where a gradient method can find the optimal solution. And so it would end up being extremely similar to what we cover here. You'd have to first do a transformation step. Turn some constraints into objective terms. And then you'd be in a very similar situation. Any other questions about what we saw so far? Because we're going to strictly generalize this now. OK. Neural nets. We've kind of already seen a neural network. Namely, the multiclass logistic regression reshown here, drawn out more explicitly as a network, is an example of a neural network. We have inputs; feature 1, feature 2, feature 3, up to feature k. They then get somehow fed into this network thing that computes activations z1, z2, z3. Remember, what was z1? z1 is just some weight vector for class 1, inner product with f of x. And so what does this drawing mean? Well, weight vector 1, the first entry lives here. Weight vector 1, the second entry lives here. Weight vector 1, the third entry lives here. Weight vector 1, k-th entry lives here. Those weight vectors are used to weight the input features. Compute a weighted sum. Get your z1. Something similar is true for how z2 is attained, and z3. These activations can be anywhere between negative infinity and positive infinity. We want to turn them into probabilities. That's where the softmax comes in. And we'll get out conditional probabilities for each possible class given the input. Now, we still have some work to do here in practice. I mean, we now know how to optimize, but when you apply this in practice, you'd have to think about what should be good features. Should I count the number of loops that I detect in my image to do a digit classification? Or should I count the number of edges that are vertical in this part of the image? Should I have as a feature whether this pixel is dark or not, and so forth? So those are choices you make. And those define your feature vector. You might wonder, can we maybe more automatically figure that part out, too? And that's where bigger, deeper neural networks come into play. So instead of just having this, what we just have here, we're going to make it a lot bigger. We're going to make it a deep neural network. And we're going to learn the features. So we have our original-- whoops. What happened? We have our original inputs over here. Let's say pixel values. And we want to turn that into interesting features on which we can do a multiclass logistic regression calculation to classify what's in the image. What needs to happen there? Traditionally, what would happen is you would think very hard about what you think matters, then write a piece of code that turns your x's into f of x's. But ultimately, that piece of code, that's all it is. It's some code. It's some calculation.
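Such a hand-written feature program might look like this. A made-up sketch with hypothetical features, just to fix ideas; nothing here comes from the course.

import numpy as np

def features(x):
    # Hand-engineered features for a flattened grayscale image x (a numpy array).
    # Every line here is a human design decision.
    f1 = x.mean()                  # overall darkness
    f2 = (x > 0.5).mean()          # fraction of bright pixels
    f3 = np.abs(np.diff(x)).sum()  # a crude edge count along the flattened image
    return np.array([f1, f2, f3])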
The main idea in deep neural networks is to stop counting on people coming up with that calculation. Just let it figure it out on its own. Let's put a massive amount of calculation there. That looks a lot like what we already saw. And that massive amount of calculation, we're not going to program by hand what's happening there. We're going to leave it flexible. And hopefully, it results in good features that can then be used in a multiclass logistic regression to find a good classifier. And so the massive amount of computation is what lives over here. It turns the input values x into feature values f of x. What's actually happening here? Each circle corresponds effectively to a neuron, loosely inspired by neurons. So it's a network of neurons. And each one of them does a calculation that looks like this. So it's saying, I'm taking a weighted sum-- so a sum that's weighted-- of my inputs to compute an activation of a neuron in layer k. We use the neurons in layer k minus 1 and take a weighted sum of the activations in layer k minus 1. You might wonder, is this a good choice? Is this really what we want? If we want to compute features, do we really want kind of just a bunch of neurons that take weighted sums of what's in the previous layer and pass that on and repeat? If you just take weighted sums, pass on, and repeat, that's actually not going to work very well. That's why there is still this little g up front here. g looks at what comes out of the original calculation, which could be anywhere between negative infinity and positive infinity, and then rescales it in some nonlinear way. Why? If everything is linear, if you don't have g and everything is just linear, everything stays linear. And you get nothing new by having many, many layers. Because linear times linear times linear times linear, however many times linear, is still linear. But linear followed by this g thing is non-linear. And then you do a linear again followed by g. It's more non-linear, more non-linear, more non-linear, more and more flexibility in what you calculate. And we'll see something later that if this thing is big enough and indeed you have a g in there, you will get a very expressive program living there. And that's the best way to think of this. This is your feature calculation program. If you prefer to calculate the features by a program you write by writing lines of code, that's fine. That's the traditional approach. The more popular approach these days is that you don't write lines of code for that feature calculation program, you just give it a massive network. And how does it get programmed? Well, you find the w's, the weightings on all the connections, that result in good features such that you do get a good classifier on your data. To unify notation a little bit, instead of calling these f1 through fk-- actually, two things. First, we will unify notation. These are all z's in the earlier layers. And that's f. They're the features, but to unify, we'll call those the z's of the n-th layer. So the n-th layer in the neural network happens to be the features that go into the multiclass logistic regression. Then when you look at the bottom here, we have capital L inputs-- input numbers, or pixel values, whatever it is. We have layer 1 with capital K1 number of hidden units z. Layer 2 has capital K2. Layer n minus 1 has capital Kn minus 1, and then layer n has capital Kn. So it could be that you have 10 inputs, then the next layer has maybe 100 z's.
The next layer might have 1,000, then a million. Then maybe again 10,000, 1,000, down to 500. And then maybe at the end, you're down to three that are over here that are your output activations, which get turned into a probability through the softmax. So even though the drawing here is very symmetric, keep that in mind. It doesn't have to be the same number in each column of hidden units. In fact, it can vary wildly. You might wonder about this g. What might it look like? The popular choices in practice look like the blue curves shown over here. These figures also have a yellow curve. The yellow curve is the derivative of the blue curve. Don't worry about it too much. As you can imagine, given we're going to have to compute derivatives, it matters that these functions have derivatives that we know about, because we're going to need derivatives. But look at the blue one, the sigmoid. We've looked at them a lot before. That's how to squash something between 0 and 1. Negative infinity goes to 0, positive infinity goes to 1, and there's a gradual transition in between. How about hyperbolic tangent? Actually, it looks very similar. Look at those two. Look at them very carefully. You'll see they actually kind of look the same, except that one of them is just shifted a little bit and expanded. This one here, hyperbolic tangent, goes from negative 1 to positive 1 instead of 0 to 1, but otherwise has the same shape as the sigmoid. You might wonder why people might care about having both of those. If you have one of them, isn't that enough, if they're the same shape? Sometimes when you think about your problem, it makes more sense to think about things as being on versus off. And that's where the 0-1 can make a lot of sense, because if you're multiplying by a 0, that turns something off. Nothing comes through. Sometimes it makes more sense to think of it as negative versus positive activation. And then the hyperbolic tangent can make more sense. There are also some practical reasons why often the hyperbolic tangent works a little better. It's because your numbers tend to be centered around 0, and for a lot of the optimizing you do with gradient descent, it tends to work a little better when things are naturally centered around 0 than when they're offset away from 0, which you have in the sigmoid. So your default choice might more likely be this one here than the one on the left, but both can have their merits. You might wonder, if I have to compute derivatives-- it's a lot of work-- what's the simplest thing we can do to make sure things are not linear, but as simple as possible? That's the rectified linear unit. The function is shown here. If you're below 0, you become 0. And if you're higher than 0, you stay what you are. So this is very similar to linear. Any positive thing essentially just passes through as is, and any negative thing becomes zeroed out. So that's also very fast. You don't have to compute any exponentials to compute this thing. You just check whether you're below 0 or not, and zero out as needed. So it can be computed very fast. Derivatives are very easy to compute. No exponentials involved either. I mean, this thing contributes either a multiplication with 0 or a multiplication with 1 to your derivative calculation. So very, very simple. It is probably the most popular one right now for people to use. But all three of these are reasonable. What's key about them is that none of them is just a line. If it's just a line, it's not going to work, because that is linear.
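The three nonlinearities just described, as a minimal numpy sketch (an editor's illustration):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))  # squashes to (0, 1): off versus on

def tanh(z):
    return np.tanh(z)                # squashes to (-1, 1), centered around 0

def relu(z):
    return np.maximum(0.0, z)        # below 0 becomes 0; above 0 passes through unchanged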
And if things are linear and linear and linear, you don't get enough flexibility to capture non-linear patterns in your data. So, training the weights-- question. Yes. STUDENT: [INAUDIBLE] PROFESSOR: Sure. STUDENT: If we need a [INAUDIBLE], we'd just multiply the sigmoid function by 2 and then minus 1. [INAUDIBLE]? PROFESSOR: That's pretty much-- I don't think it's exactly the same thing, but it would get pretty close. STUDENT: So it'd just be [INAUDIBLE] or there are special features for [INAUDIBLE]? PROFESSOR: It might even be the same. You'd have to do some calculation. But there is-- yeah, it's roughly something like that. You scale it by a factor of two and shift it. So you might be right that times 2 minus 1 does it. Or it might be something very close to that. So we define w, which is now not just the w for the multiclass logistic regression, but the w's that live in the entire network. We can just have the same objective again. We've already seen this pattern many times now. We just try to maximize the probability of the labels in our data given the inputs for the corresponding data points. Different choices of w will lead to different scores. We want to find a w that maximizes the score. This again is a continuous optimization, because the w's live in continuous space. So we can follow the same procedures. The w will be a lot larger a vector. So computing a derivative with respect to every entry of w will take a little more time than when you have a smaller w. But it's the same principle. You just run gradient ascent, and you stop when the log likelihood of your holdout data starts to go down. That's always what we do in this kind of training. We train. We train. We train. And our training data, that's what gives us our gradients. We've got to be careful we're not memorizing our training data. We've got to learn the pattern in the training data that generalizes to other data. So once things start going down on the other data, the holdout data, we call it done. That's our result. What are some properties? Here's a very interesting property. And this is part of why neural nets are sometimes pretty well justified and so popular. It's the universal function approximation theorem. That says that even just a two-layer network-- just one layer of z's added on to what we used to have-- if that layer is big enough, can approximate any continuous mapping from input to output to arbitrary precision. So no matter what function you're trying to learn from input to output of your data, if your network is big enough, you'll be able to learn that pattern. Now, again, do keep in mind you don't just want to memorize that pattern, of course. You want to stop the training once the holdout accuracy starts going down. But the network, in principle, can learn the entire pattern in your training data if your network is big enough. What's happening underneath? Effectively, you're learning the features. You're turning your original raw pixel values or your word counts or whatever it is into something more meaningful to do a classification on. More formally-- and the star here means that you're not expected to understand even the phrasings of these theorems, let alone the proofs-- there are a bunch of formal theorems saying that if your network is big enough, it can capture up to arbitrary precision any mapping from input to output that's continuous. And there's a bunch of papers that came out around the same time, late '80s, early '90s, stating this.
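To connect this to code, here is a minimal numpy sketch of such a two-layer network. The shapes and names are made up for illustration; this is not the course's implementation.

import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # subtract the max for numerical stability
    return e / e.sum()

def two_layer_net(x, W1, W2, g=np.tanh):
    z = g(W1 @ x)            # hidden layer: weighted sums, then the nonlinearity g
    return softmax(W2 @ z)   # output layer: multiclass logistic regression

# Hypothetical sizes: 10 inputs, 100 hidden units, 3 classes.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(100, 10))
W2 = rng.normal(size=(3, 100))
print(two_layer_net(rng.normal(size=10), W1, W2))  # three probabilities summing to 1

The theorem says that with enough hidden units in that one middle layer, this form can approximate any continuous input-to-output mapping arbitrarily well.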
If you've taken a signal processing class, you might actually have seen something similar. If you saw Fourier analysis, you would say, OK, if you have a function that's periodic, you can approximate it to arbitrary accuracy with a sum of sinusoids. Well, the same thing is happening here. We're saying if you have a function that's smooth-- nothing about periodic-- if you have a function that's smooth, and you have a big enough network, that network can approximate that function to arbitrary precision. So that's a good thing, because that means that if somehow our network is not capturing the pattern, we know what to do. We can say, let's just make the network bigger. Now, again, you might end up overfitting. You ought to be careful there. But if your network does not capture the pattern of what's in your data and the accuracy doesn't really go up anywhere, you might just want to make your network bigger. Let's play around with this. So here's a demo site where we can look at some examples. So here is our-- let's choose a linearly separable dataset here. Two classes, linearly separable. Now, it's not finding-- the initialization does not have the right classifier. We have two inputs, x1 and x2. And this is the output of the classifier. Let's train it. So we take a step. Well, after one step, it's already gotten pretty far. And it's aligning itself a little more to make those probabilities higher for those data points. And let's run this faster. You see it's making those probabilities even higher. It's going to make that sigmoid steeper. And it got everything right. Now, let's take a look at what happens if we add some noise to our data. So let's put in a bunch of noise. So now there is no linear separation for the data points. We can again run logistic regression, which is happening underneath. We do an update, update. The mini batch size here is 10. So every update is taking the gradient according to 10 data points, doing an update, repeat. And so it keeps going. It starts aligning this thing. It's not going to find a solution that's perfectly precise on the training data. But it is finding a solution that you might argue captures the correct pattern from that training data. Now, let's reduce noise again all the way to 0. Let's take a more difficult dataset. This one here. What if you run logistic regression now? Well, let's run it and see what happens. It doesn't do too well. You see nothing is very dark blue. Nothing is very dark yellow, meaning nowhere is it very confident about the labels. It's never confident. And rightfully so, because it's often wrong about the label. So it's splitting the probabilities closer to 0.5 for most of them. And that's the best it can do here. But we've seen that if we have a neural network, we might be able to do something more expressive. Let's add some layers. Let's add a hidden layer. How many units do you want here? Well, let's start with two units. Let's train and see what happens. You find something where for two of the regions it does pretty well. And then the other region, it assigns kind of like a 0.5 probability-- it doesn't know. Well, you've seen the universal function approximation theorem. It says if your network is big enough, it should be able to capture the pattern in the data. Well, if you're not happy with this, let's make the network bigger. Let's add more units. Actually, let's keep the same number of layers for now, but add more neurons in this particular one layer we have. Actually, think about it.
How many neurons do you think you need to capture the pattern in this data? Four. Why? Well, intuitively, four makes a lot of sense, right? Think about it. There are four regions. So you kind of want to be able to point in all those four directions. That's what your first layer could do. And then the next layer could say, well, if one of these two is active, it's blue. If one of these two is active, it's yellow. So the first layer splits the space into four, and then the next one consolidates. So let's see what happens when we have four. Let's run the training. And we see that it's indeed starting to capture the pattern in the data pretty well. What's this blue one doing at the top here? Not clear. That's probably overfitting. There's nothing blue there. There's also nothing yellow there. That's the kind of thing where probably we've overtrained at this point, that it popped out with something blue over there. What happens if we make the network even bigger? We add more neurons, more neurons. Let's train again. So far the non-linearity we've been using is tanh. That was one of the choices. What if we change this to linear? We've got a big network. Everything's linear. The g function is doing nothing, just the identity. What'll happen when we train? Let's see. Your prediction, hopefully, was that it's not going to find a good solution, because if everything is linear, it's like having a linear classifier that has only just one layer, no hidden units, and that's the best you can do. And it assigns roughly 0.5 probability most places here. How about another non-linearity, ReLU, which was the thing where below 0, you become 0. Above 0, you stay what you are. The minimal thing to become non-linear is to have essentially two lines, right? Let's see what happens. ReLU, no problem. It finds a nice solution here. How about another dataset? This one here, concentric circles. Let's remove some hidden layers. Let's think about what will happen when we just try to learn a linear classifier. It's not going to do too well, right? No matter how long we train. And it'll probably make sure that it's not too confident anywhere, because that would give it a very bad log likelihood score. If you're confident about the wrong thing, that gives you a bad score. So you want to be somewhat unconfident in most places to make sure you don't make bad mistakes. What if we add hidden layers? Let's see. Let's add one hidden layer. We have ReLUs. Let's train. And it's able to parcel out that middle region. If it hadn't been able to do it, then we could have just grown the network more. What if we only have one unit here? What will happen if you just had one unit in your hidden layer? That doesn't allow you to do much, because that one unit, effectively, is like one classifier, one linear classifier. You're saying, I'm going to first run one linear classifier on my data. And all I get to do later is use the result of that one linear classifier. So if I do something like that-- if I run that training-- it's not going to do very well. It's not going to help much to have that one hidden unit, because it's not giving you additional expressiveness. It's essentially still just a linear classifier. You need a lot more to be able to get the expressiveness to fit to this kind of data. Not a lot more. In this case, just three more. How about something even more complicated? This spiral, can we fit that? Well, let's see. With four hidden layer-- four hidden units, can we do it? Let's try. It's trying.
It's getting some of the points, but not that many. Well, let's just make the network bigger. Let's add some neurons here. Let's add some more layers. Let's make those layers big enough, because this is a pretty complicated pattern to capture. And let's see what happens. Train this thing. It's not easy. This is a complicated function to fit. So you see it takes a long time for it to make progress. It's hard to figure out how to move w. And it's very subtle, what you have to do to make progress on this, but it is making some progress. If you look at the learning curve over here, going down is good, because they count the number of errors rather than the accuracy. You see that over time it's really starting to capture the pattern. There's still some weird overfitting happening, for sure, if you look at this, like the blue ray that kept shooting out of here. But at this point, it's actually pretty much there. So even though it is a very complicated non-linear pattern, having a network where each unit has its own local non-linearity is enough to be able to capture this. So this is publicly available, this demo. It's linked from the slides. It's a lot of fun to play around with. You get intuition for what a neural network can capture or not capture in terms of data patterns. So how about computing all the derivatives that we need? Well, you probably have all taken classes at some point that resulted in a table like this. This table effectively tells you, for every function that's in the table, what's the derivative. We might say, well, nowhere in that table do I see my neural network. It's just not in there. Will there be a book that has your neural network in that table? Probably not. Usually, these tables have sines, cosines, and x squared, and x to the third, and so forth. They don't have this massive neural network in there with, next to it, here's the derivative. What these books do have, though, is something called the chain rule. The chain rule says if you have a function that consists of the composition of multiple functions, you can compute the derivative of that composed function by computing derivatives of the components and multiplying them together. And that's what a neural network is. You have a sequence of functions being composed together to go all the way from the input to the output. And you just apply the chain rule many, many times. If the network is big, you'll have to apply the chain rule many times-- if it's, let's say, 100 layers deep, you'll have to apply the chain rule 100 times, because you're composing 100 times. But that's all you need to do. So there are rules for this. It doesn't mean it's painless to work through this. If we give you a network and we say, give us the derivative with respect to weight three in layer two of this network given this data, that would be a lot of work for you to write out by hand. But the rules are very well defined. So actually, there's no reason to be doing this by hand. These rules are very well defined. Any composed function that reduces to only the things that are in the table here, you can just apply the rules and call it done. It'll automatically generate a solution. If you go to Mathematica, something like that, and input a function, it often outputs a massive derivative expression for that function. So there's something called automatic differentiation, which does that for you. It'll say, you give me a function. I will look it up in those tables, and I'll apply the chain rule for you. And here is the result for you. That's what these tools do.
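As a minimal sketch of what that looks like in practice, using PyTorch, one of the tools named next; the function itself is made up for illustration:

import torch

# A composed function of w: g(w) = log(sigmoid(w . x)).
w = torch.tensor([1.0, -2.0], requires_grad=True)
x = torch.tensor([0.5, 3.0])
g = torch.log(torch.sigmoid(w @ x))

g.backward()   # automatic differentiation applies the chain rule through the composition
print(w.grad)  # dg/dw, the whole gradient vector, from one backward pass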
Some tools, or brand names of the tools, are Theano out of the University of Montreal, TensorFlow out of Google, PyTorch out of Facebook, Chainer out of Preferred Networks. So these are alternative tools. There are, of course, subtle differences, but at a high level, they do the same thing. You can feed in a function, and out will come the derivative of that function with respect to whichever variable you want the derivative with respect to. In fact, if you have a function, in our case of the form g(x, y, w)-- x the input, y the output, and w all the weights-- you can ask for the entire gradient vector in one go, and there's an efficient way to reuse calculations you do for the different entries such that actually you can compute the entire gradient vector at the computational cost of roughly two to three times the cost of just the forward calculation through the function. So you might have thought derivatives are expensive: we're going to have to wait a long time to get those out. No, if you can do the forward calculation through the network, there's a backward calculation that only takes two to three times longer that will give you the entire gradient vector. For this class, you just need to know that this exists, that this is the case. You don't need to know the details of how it's done. I mean, if you want to go look it up, there are a lot of tutorials online telling you about what backpropagation is, what automatic differentiation is, and so forth. We're not going to quiz you on that. We do not expect that you know that, but know it's possible. We can do a forward calculation. Just at two to three times the cost, you can do a backward one. Summary of the key ideas that we've covered at this point. The way we've formulated learning classifiers is by optimizing the probability of labels given the input-- optimizing the probability of all the labels. Optimizing the product of the probabilities of the labels is equivalent to optimizing the log of the product of the probabilities, which is the sum of the logs, and that's what we've been doing. So we've maximized the sum of the log probabilities of all the labels given the inputs. Against what? Against w, which is all the weightings that we have in our network. w is a continuous vector. It turns out that helps us. We can do gradient ascent. We can locally look at the derivative of our objective with respect to each of the entries in w and use that to choose a direction in which to step. And that direction, the gradient direction, is the direction of steepest increase in our objective. So it's the fastest way to increase our objective with a local step. We keep doing this till we hit early stopping, meaning we hit a point where on the holdout data accuracy starts dropping, because that means we're memorizing, rather than learning the pattern. What does it mean to have a deep neural net? It means that your last layer would still be a logistic regression in what we covered, but now there are many, many more layers coming before that. And those replace, in some sense, the human ingenuity of deciding what is a feature: do you use a word count, the log of a word count, do you do loop counting, do you do connected-components counting. All that stuff gets replaced by: here is a massive neural net, and we know it can approximate any function from input to output. Well, let it find the one that has the right features from looking at the data. That's what the universal function approximation theorem tells us.
If the network is large enough, then the neural net can approximate any continuous mapping, including the one that we want. And hopefully, this gradient ascent procedure will find that one. Sometimes we need multiple initializations, multiple runs. But often it will find a very good one. The gradient itself, well, that's just a vector of derivatives. A vector of derivatives-- in principle, nothing special about that. You just take a derivative with respect to each of the entries, each of the weight vector entries. In practice, it could be a lot of tedious work to do that if you're going to do it by hand, but luckily there are automatic differentiation tools, which allow you to just input the forward calculation, specify the network, and then it'll do the backward calculation that gives you the derivatives. How is that done? That's outside of the scope of 188. How well does it work? Let's see. We've got three minutes left, which is not enough to really cover how well this works. So let's shift those slides to Tuesday. All right. That's it for today. See you on Tuesday. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180830_A_Search_and_Heuristics.txt | [SIDE CONVERSATIONS] PROFESSOR: All right, let's get started. Welcome to the third lecture of 188. Today's topic is Informed Search. Couple of announcements first. Homework One, Search, has been released. It's due on Tuesday. But keep in mind that typically homework will be due on Mondays. So don't fall into the pattern of thinking it's going to be due on Tuesdays. It's pretty much always going to be Mondays. But because of Labor Day, it's on Tuesday next week. There are two components, electronic and written. You should do both. It's not a choice of one or the other; both are for you to do. Electronic is on Gradescope. You solve things. And you can try as many times as you like until you get it right, up until the deadline hits. The written component is an exam-style two-pager. We highly recommend you print it out and try to work it on your own. And then after you've done that for, let's say, half an hour or an hour on your own, feel free to discuss with other students and so forth to try to get a better understanding. But then still, you have to submit your own work, not some other student's work, of course. These will be graded on effort/completion, which means that we will check if it looks like you tried as hard as you would try on an exam to solve these problems. That's what it means. And the next week, you'll get to make corrections to anything you got wrong by grading it yourself. And again, grading yourself will be graded on whether you did it precisely, not on whatever grades you would have given yourself. Project One, Search, has been released. It's due next week, Friday at 4 PM. You might wonder why 4 PM. It seems pretty arbitrary. We picked 4 PM because a good fraction of students like to work last minute. And if we pick 4 PM, then-- I mean, we don't recommend working last minute, but if we pick 4 PM, the students who do work until the last minute still have a Friday evening free for doing something else, and the weekend. So that's our rationale here, to try to help you have Friday evening free. One important thing is that Project One is representative of the projects we'll have throughout the remainder of the course. Project Zero is not. So if you thought Project Zero was easy and you got it done very quickly, Project One is different, and it's representative of everything that follows. If you're worried about your coding skills, your coding background, Project One will tell you how ready you are. The entire teaching staff is there to help you if you have questions. Post on Piazza, come to office hours. We want you to solve these problems. You can work in a team of two. We recommend that you pair code, or in some other way work together. We don't recommend you just split the work, because you'll only learn half of the material that way. When you submit on Gradescope, one person submits and has to mark the other student as their partner, if you worked with a partner, to make sure that they get a grade too. So don't forget to mark your partner if you had one, or they will not get that grade. Sections started this week. As I mentioned before, you can go to any section, but priority goes to the one you signed up for on Piazza. Any questions about logistics? Yes? There's only-- so there's only one type of discussion per week. There are many sections, and they're meant to be the same.
But they're at different times and with different GSIs, so there will always be a little bit of variation in exactly how each GSI will do things. And also on Fridays, we will try to record a webcast of the material covered in the section that week. So you'll also have a webcast covering whatever was that week's section. Any other questions about logistics? Over there. STUDENT: Can homework be submitted as a picture [? or LaTeX? ?] PROFESSOR: So for homework, we want you to submit something that matches the template that we have. If you want to LaTeX it, that's fine. But we want it to match up with the template that we have, because that's important for our grading. We recommend you print it out and write on it, because that's most similar to exams. But ultimately, if you prefer to type it up, that's fine too. But follow the template structure that is in our PDF. Other questions about logistics? OK. Today's topic is Informed Search. What that means: we're going to cover something called heuristics, Greedy Search, A Star Search, and then, towards the end of lecture, we'll see something called Graph Search, which will be an improvement to the Tree Search we've already seen. Let's first recap what search is. Well, in search, what we're typically interested in is somehow capturing something about the real world inside a computer. And the way we'll do that is by defining a search problem. The search problem is defined by a set of states, which correspond to the configurations of the world--not necessarily all details of the configurations of the world, but the abstraction of the world that is relevant for the agent's decision making. There are actions that can be taken, and there are typically costs associated with those actions. There is a successor function, which defines for each state and each action that's available in that state where you would end up, and also the cost associated with it. So that's modeling how the world works, again, at some level of abstraction that you choose as right for your problem. And then there's a start state. That's where the agent starts. And there is a goal test, which allows you to check if a current state that you feed into that test satisfies the goal condition or not. If it does, it's a goal state and you can declare success if you get there. That's the problem structure.
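To make that structure concrete, here is a minimal sketch of a search problem as a Python interface. The class and method names are illustrative placeholders, not the actual API of the course projects.

```python
class SearchProblem:
    """Minimal search problem interface (illustrative names only)."""

    def get_start_state(self):
        """The state the agent starts in."""
        raise NotImplementedError

    def is_goal_state(self, state):
        """Boolean goal test: does this state satisfy the goal condition?"""
        raise NotImplementedError

    def get_successors(self, state):
        """The successor function: a list of (next_state, action, step_cost)
        triples, one for each action available in this state."""
        raise NotImplementedError
```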
The computation we tend to do is building up a search tree--not fully, but partially, hopefully. The nodes in a search tree correspond to paths to states. We'll see some examples again soon. And these paths or plans have costs associated with them, which is the sum of the costs of all the actions you took along that path. A search algorithm is an algorithm that systematically builds out the search tree--hopefully only a fraction of the entire search tree, but maybe, worst case, the entire search tree. And it has to choose an ordering of what to currently expand. The set of nodes that are ready to be expanded is called the fringe, and you have to choose which one to expand first. Then an optimal search algorithm is one that finds least cost plans. So not all search is about pathing. So we'll use a slightly different example here: pancake flipping. What is the setup here? This is a setup where there are four pancakes. They have different sizes. And the goal is to get the pancakes stacked with the biggest one at the bottom, the second biggest one on top of that, and so forth, smallest one at the top. Your action space is: you can put your spatula between two pancakes, or underneath the bottom one, and then decide to flip everything that's above your spatula. For example, if the spatula goes right there where it's shown, it would flip the top two, and the bottom two would stay in place. So for a stack of four pancakes here, you effectively have three actions available to you. You can go between number 2 and number 3, between number 3 and number 4, or below number 4. Going between 1 and 2 doesn't do anything for you, because you're just keeping the top one on top. So those are the successor states from this particular state. The cost of a flip will be the number of pancakes you flip. You might wonder, who might care about pancakes? Pancake flipping robots are one type of species that cares a lot about this. But there are actually other people who cared about this--people you definitely wouldn't guess, unless you looked at the slides ahead of time. Bill Gates and Christos Papadimitriou wrote a paper, published exactly 40 years ago this past Tuesday, about pancake flipping as an abstraction for sorting things. So now back to the problem. Here is the state space graph--well, part of the state space graph. There are more states. How many states do you think there are in the state space total, if we drew the entire graph? Any thoughts? Anyone? Over there. Did you say 24? 24? OK, how do you get to that? STUDENT: Oh, [INAUDIBLE] anywhere and then the [INAUDIBLE]. PROFESSOR: Exactly. So let me say that again, just because of the acoustics. It may be hard for people behind you to hear you. The answer was 24 total. So you can see, we're not showing them all on this slide. And how do we get to that? Well, you first have four choices as to what goes at the bottom. Then you have three choices left for what goes on top of that, two choices left for what goes on top of that, and then one choice left for what goes on the top, and that multiplication is 24. So this is part of the state space graph; the costs here correspond to how many pancakes you're flipping. And we can imagine that maybe the robot has some energy cost depending on how many pancakes it has to lift, so it expends energy and prefers to lift as few as possible to get to a goal. And the goal is to get into this configuration here where they're nicely lined up. So not all search is pathing. It could be pancake flipping. And there will be other examples in the future. How does Tree Search work? Let me illustrate with that pancake flipping problem. Tree Search goes through a loop. It initializes, before going into the loop, by putting the start state--the current situation in the world--on the fringe. Then it checks: are there any candidates for expansion? That is, is there anything on the fringe? And the answer is yes. This is our fringe. Then it decides to expand that. Well, it'll first check if it's the goal. If it's not the goal, then it will decide to expand it. This will be our new fringe. And then it'll go back around. It'll check: is there anything left on the fringe? The answer is yes. Then pick something with some strategy, one of those three. Then it will check, does it achieve the goal condition? The answer will be no. It will then expand, and this process repeats. And then at some point, we might expand this one over here and declare success. This algorithm is underneath all the search algorithms we've seen so far--Depth-first, Breadth-first, Uniform Cost. Today we'll see Greedy and A Star. They will still use this.
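As a rough sketch, that loop might look like the following in Python, with the expansion strategy left abstract. It assumes the hypothetical interface above and a fringe object with push, pop, and is_empty methods (one possible priority-queue fringe is sketched a little further below).

```python
def tree_search(problem, fringe):
    """Generic tree search: the fringe's pop order encodes the strategy
    (DFS, BFS, Uniform Cost, Greedy, A*). A node is (state, path, cost)."""
    fringe.push((problem.get_start_state(), [], 0))
    while not fringe.is_empty():            # any candidates for expansion?
        state, path, cost = fringe.pop()    # the strategy picks the next node
        if problem.is_goal_state(state):    # the goal test happens on pop
            return path
        for nxt, action, step in problem.get_successors(state):
            fringe.push((nxt, path + [action], cost + step))
    return None  # fringe exhausted: no solution exists
```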
The only difference is the choice of strategy used to pick things to expand from the fringe. And at the end of lecture, we'll see a modification to this called Graph Search, where we'll put in one extra check here. But otherwise, it will be the same, and we'll also apply it to all the other strategies that we've seen. So in this case, the total cost would be 7 to achieve the goal state, if this is how you got there. As for how you can implement this: no matter what the strategy is, you can implement it with a priority queue. Your fringe is stored as a priority queue. When you pick something from the priority queue, your strategy determines what the priorities are, and you pick based on whoever has highest priority. Now, if you are specifically interested in Depth-first search, you could also use a stack instead of a priority queue. If you're specifically interested in Breadth-first search, you could also use a regular queue instead of a priority queue, which will make it slightly more efficient but will make your implementation less unified. For project purposes, either way is fine. You can choose whether you want to do it in a unified way or have slightly special purpose versions for Depth-first and Breadth-first.
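Here is one way such a priority-queue fringe might be realized with Python's heapq; only the priority function changes between strategies. Again a sketch with assumed conventions, not the projects' actual utilities.

```python
import heapq

class PriorityQueueFringe:
    """Fringe as a priority queue: pop returns the node with the lowest
    priority_fn value (lowest value = highest priority). A counter breaks
    ties so the heap never compares the nodes themselves."""

    def __init__(self, priority_fn):
        self.priority_fn = priority_fn
        self.heap = []
        self.counter = 0

    def push(self, node):
        heapq.heappush(self.heap, (self.priority_fn(node), self.counter, node))
        self.counter += 1

    def pop(self):
        return heapq.heappop(self.heap)[2]

    def is_empty(self):
        return not self.heap
```

With nodes as (state, path, cost) triples, uniform cost search would then be tree_search(problem, PriorityQueueFringe(lambda node: node[2])).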
So last lecture, what we covered was Uninformed Search. What that meant--let's say it was Uniform Cost Search--was that the strategy corresponded to expanding the lowest path cost node from what's currently on the fringe. The good news: it's complete and optimal, which means if a solution exists, it will find it, and it will actually find the optimal solution, meaning the way to achieve the goal at lowest cost. The bad news: it searches in every direction equally hard. For example, in this 2D space over here, it would expand in all directions at equal pace, even though the goal is on this side. It doesn't have any information about the goal. All it does with the goal is every now and then check: do I satisfy the goal condition? But nothing else about the goal is used, so it doesn't know whether it's going in the right or wrong direction. So pictorially, what this corresponds to is: let's say we run Uniform Cost Search on an empty grid. The way we visualize this is, whenever a state gets expanded for the first time, we show it highlighted here. Uniform Cost would just equally radiate out in all directions and then finally find the goal. And then if we have a maze--that ran so fast that we didn't see it in action. So what happened here is that Uniform Cost Search was run, and it highlighted in red the nodes that were expanded during the search. You are bright red if you're expanded early on. You're darker red if you're expanded later on. So the last one expanded would be around here where-- does that cursor show up over there? Let's see, yeah. So it would be around here. That's the goal state. That's the last one expanded, and we declare success. But what you see here in this maze is that it actually expanded every single reachable state, except for just one of them over here. Why did it do that? Well, Uniform Cost Search doesn't know where the goal is going to be, since it uniformly explores in all directions. And if you want to know whether it will have expanded a given state or not, and you want to quickly guess that for a maze like this, you would say, OK, let me eyeball the shortest path to the goal. I find the shortest path from Pac-Man to the goal. I measure the length. And I know that Uniform Cost Search will expand every state that is at less than that distance from the start, which in this case means every state except for this loner over here, which is actually further away from the start than the goal. But everything else gets expanded. So we're hoping to make that more efficient today. The way we're going to make it more efficient is by infusing information about where the goal is while the search algorithm is running. These are called heuristics. A heuristic is a function that estimates how close a state is to a goal. And it will be designed for a particular search problem. With everything we've seen so far, you build a search problem abstraction and you can run your algorithm. If you want a heuristic, we have to take one additional step. We have to come up with a heuristic function for the current problem we're trying to solve. So we'll build the heuristic function. There are some different choices that you can make for these heuristic functions, so we'll dive into that a little bit today. So for example, for Pac-Man needing to find the one pellet here in this maze, what could be a reasonable heuristic function? A reasonable way of measuring, how close am I to the goal? Any thoughts? Over there. STUDENT: Is it Manhattan distance? PROFESSOR: So the suggestion was Manhattan distance. Why would that make sense? Well, you can only move North, East, South, West, so your motion is always along that grid pattern. And so Manhattan distance--well, in this case, the path whose length Manhattan distance measures would have to go through walls. But it gives you an estimate. And that's what heuristics are about. It's a way of quickly getting an estimate of how far away the goal might be. And Manhattan distance is quick to compute, and it's very much tied into how you move in this space, except that it ignores the walls. You can also use Euclidean distance. So there's Manhattan distance; Euclidean distance would look like this. It's probably not as good a fit for this problem, because Pac-Man cannot move diagonally, and so it doesn't measure as well how far away you are from the goal.
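As code, these two grid heuristics are one-liners. A sketch, assuming states are (x, y) tuples and goal is the goal position:

```python
import math

def manhattan_heuristic(state, goal):
    # |dx| + |dy|: matches four-directional movement, but ignores walls.
    return abs(state[0] - goal[0]) + abs(state[1] - goal[1])

def euclidean_heuristic(state, goal):
    # Straight-line distance: also an underestimate, but a looser fit
    # when the agent cannot move diagonally.
    return math.hypot(state[0] - goal[0], state[1] - goal[1])
```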
How about for this pathing problem we saw last time, finding a path in Romania from Arad to Bucharest? Well, again, a heuristic is something you want to be able to compute easily, and it gives us a guess of how close we are to the goal. So straight line distance could be something, because all we need for that is the GPS coordinates of each city. From those we can compute the straight line distance to other cities, and that's a good measure of how far away we are. If we want to go to Bucharest, then it would be all straight line distances to Bucharest, and this would be our heuristic function. And this is, in a tabular format, what it could look like. A heuristic function, if there's only a finite set of states, assigns a number to each state, which is how far you think that state is--or how much cost you think you'll still incur--from that state to get to the goal. How about for pancake flipping? Any thoughts on heuristics for that? Here. STUDENT: Maybe the number of pancakes that aren't in the correct position. PROFESSOR: So the suggestion was the number of pancakes not in the correct position. And the reason that could be a good heuristic is because every pancake not in the correct position has to be flipped at some point, at least once. And whenever you do an action, the cost is the number of pancakes flipped. So the number of pancakes out of place is definitely an estimate of how much cost you're going to encounter before you can get everything in place. So that's one choice: number of pancakes out of place. Any other thoughts on other heuristics? This is a bit of an art. For many problems there are many, many heuristics you can come up with. STUDENT: Longest chain of correct, like, ordered pancakes. PROFESSOR: Say it again. STUDENT: Longest chain of correctly ordered pancakes. PROFESSOR: Longest chain of correctly ordered pancakes. And so can you say a little more about how that measures how far we are away from achieving the goal? STUDENT: So if you have 1, 2, 3 together, that would be, like, a 3. But if you have 1, 3, 5, that gets 0. So your final case would be 1, 2, 3, 4. PROFESSOR: Mm-hm. STUDENT: So if you have, like, 2, 3, 4 somewhere in there, but not closer to the [INAUDIBLE].. PROFESSOR: So the idea here is that you look at essentially a subset of the pancakes and see how many are already in the right order, because that means they are ready for whatever we need to achieve at the end. Now, one subtlety here is that usually heuristics are measured in a similar unit as cost. So instead of measuring how many are already ordered nicely--which in the case of the goal state would be all four--when we're at the goal state, we want the heuristic to be 0. So we want to do maybe something like 4 minus the function that you proposed, to make sure that it measures distance from the goal, rather than getting higher when closer to the goal. But it's a good suggestion, with that small caveat. Any other thoughts? Here's another one, one that we put on the slides: the ID number of the largest pancake that is still out of place. How does that work? Well, which is the largest pancake still out of place here? It's the one at the top there. It's pancake number three. So then this heuristic would say 3. Why does that make sense? Well, that pancake is still out of place. That means that everything from 3 up still needs to undergo some flipping operation in the stack. Otherwise there's no way to get this one in place. It measures something about how deep you'll need to go into the stack for some operation before you could ever complete. So that's what we're showing here. Later we'll also look at what might make one heuristic better than another heuristic. And, of course, a lot of it has to do with accuracy. And so in this case, let's compare the one that's on the slide with the one that was proposed earlier, the number of pancakes still out of place. So we'll look at this one here. How many pancakes are still out of place? That is 2. So what we see is that these are different numbers. And that's going to happen often with different heuristics. One of them may be more precise than the other one. Because actually there is an action here that takes you straight to the goal state, and it has a cost of 3. And so this heuristic here is more precise than this one. Now, there are other trade-offs in heuristics at times, related not just to precision but also to the speed of computing your heuristic. But what we see here is that, in this case, we might prefer the number of the largest pancake that is still out of place over the number of pancakes still out of place.
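For concreteness, the two pancake heuristics just compared might look like this, assuming a state is a tuple of pancake sizes from top to bottom (a hypothetical representation):

```python
def pancakes_out_of_place(state):
    """Number of pancakes not in their goal position. Each such pancake
    must take part in at least one flip, and a flip of k pancakes costs k,
    so this underestimates the remaining cost."""
    goal = tuple(sorted(state))  # goal: smallest on top, largest at bottom
    return sum(1 for have, want in zip(state, goal) if have != want)

def largest_pancake_out_of_place(state):
    """ID (size) of the largest pancake that is out of place: everything
    from that pancake up must still be involved in some flip."""
    goal = tuple(sorted(state))
    wrong = [have for have, want in zip(state, goal) if have != want]
    return max(wrong, default=0)
```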
So now, with heuristics in hand, we can start doing something called Greedy Search. In Greedy Search, the strategy for picking something from the fringe is to pick the thing that looks like it's already closest to the goal. So it's like you're searching, and you look in your fringe and you say, oh, well, this one's already pretty close--closest of everything on the fringe. Let me pick that one. So what does that look like? On this map of Romania, well, we still start with the start state. Then we expand. At this point we have three nodes on the fringe. And the numbers shown are the straight line distances to Bucharest, our heuristic function. And Sibiu has the shortest straight line distance to Bucharest. So we started here in Arad. Sibiu is over here, shortest straight line distance. So Greedy will say, expand that one. Then we now have a fringe with 6 members. We again look at which one of these is closest to the goal. It's Fagaras over here, with a heuristic of 176. We expand that one. Now we have a fringe of 7. And in that fringe of 7, one of them actually has a heuristic of 0. That's as good as it gets. It's actually the goal. We would pick this one, expand it, and declare success. And we would have found this path over here. What can go wrong here? And we might already have gone wrong. Over there. STUDENT: You might choose a path that's a local optimum but not a global optimum. PROFESSOR: So the answer was, you might choose a path that's a local optimum, not a global optimum. Or, maybe to rephrase it a little bit into the terminology we're using here: for this specific example, you see Fagaras here, and you expand it, and you end up at the goal, and you found the path. But it turns out that this other path is actually better. The lower path is the better path. And you didn't find that. Why did we not find that? Well, we didn't find it because from Sibiu we compared Fagaras and Ramnicu Valcea, and with Fagaras we kind of took a very big action. We covered a lot of ground. It was a bit off to the side, but we covered a lot of ground, and we ended up closer to Bucharest after that action. But it was a costly action. But Greedy Search ignores that this was a costly action. It doesn't pay attention to how much cost has already been incurred. It just looks at what's left. And so when you take a costly action like this that brings you closer to the goal, you're likely to go for that node again, and then maybe again, and ignore something that could have been more promising, which is to go this route, which was a cheaper option, and you could have found a cheaper path to the goal. So what goes wrong is that you kind of go down a rabbit hole that looks good and keep going, rather than carefully considering other opportunities that might still exist. So Greedy Search is trying to expand the node that you think is closest to a goal state. The heuristic is the thing that measures the distance to the nearest goal for each state. A common case is that this best-first approach takes you straight down some path, possibly to the wrong goal, but often to a goal still. Worst case is that it behaves like a badly guided Depth-first search, and the heuristic sends you the wrong way, wrong way, wrong way, and makes you explore everything except for where you need to be. Of course, this depends on your heuristic and so forth. But with a poor heuristic, this can absolutely happen. Now let's take a look at how well this works on our two examples. So think for a moment. What do you think Greedy Search will do in this scenario? STUDENT: Straight line. PROFESSOR: Straight line. I think so too. Now there's a slight caveat there. It really depends on the heuristic it's using. All right?
So if I don't tell you what heuristic it's using, you can't know for sure what it's going to be doing. But here the heuristic is measuring straight line distance to the goal. And so what we expect to happen is that it'll keep expanding towards the goal. Let's see if that actually happens. It does, and it finds a solution very quickly, at very low compute cost, and actually the optimal path in this case. Now let's take a look at Pac-Man in the small maze. Let's-- actually, this is Uniform Cost. We need Greedy. So again, this runs so fast we don't see it highlight. But what happened here is that as we run the search, whenever we expand a state for the first time--call the successor function--we color that corresponding square red. The brighter red you are, the earlier that happened. The darker red you are, the later that happened. And black means it never happened. So Greedy did not expand nearly as much as Uniform Cost Search. So it was a faster calculation. But the path it finds is actually suboptimal. It goes off to the left, down, back to the right, it comes down, and then off to the left again. It's a suboptimal path, but it was found quickly. You might wonder, why does it not try to move to the right? Why is that never being expanded? It's because from that spot one to the right of where Pac-Man started, the heuristic value, which in this case is Manhattan distance, is higher than for any of the nodes we did expand. And so it's still on the fringe, waiting to be up next. But it never gets its turn. We expand the goal before we get to it. It's on that fringe, waiting, but it just never got called upon. So now we're going to see something that hopefully can bring together the best of both worlds. Let's think about this a little bit pictorially. Who knows about the fable of the tortoise and the hare? Most of you, not everybody? Who doesn't know about the fable of the tortoise and the hare? You don't know about it. OK, great. Or do you have a question? Or you don't know about it. It's a great fable. Let's see if I can do it justice explaining it here. But the gist is that the tortoise and the hare are in a race, and they have to get to a destination before the other. And the hare runs off, and it's way ahead of the tortoise early on, but then gets off on a side track, takes a nap, does all kinds of things, because it feels like it's winning anyway. And in the meantime, the tortoise just--slow and steady, slow and steady, slow and steady--keeps moving, keeps moving, keeps moving, and actually gets to the destination first. The fable is meant to teach people a lesson, but we're going to draw some analogies here. Uniform Cost Search is our tortoise. It's slow and steady. It tries out every single thing that might be cheap enough, and will only declare success when it's tried everything that's cheaper than the cheapest path to the goal. Greedy is like the hare. It just goes off, tries to find a path to the goal, but it might go down the wrong rabbit hole and might not do so well after all. Now, wouldn't it be beautiful if we could have something like this? So we're bringing together slow and steady with greedy and fast. But it's one thing to make a cartoon. Now we need to put an algorithm behind this cartoon. It's called A Star Search, the main topic of this lecture. So let's start comparing Uniform Cost, Greedy, and then A Star on a simple example here.
And again, keep in mind, these examples are not meant to be representative of problems you would really want to solve; they're just to illustrate the algorithms. So what does it mean to run Uniform Cost Search? Well, on the left we have the state space graph. On the right here, we have the search tree. I have shown the search tree here. What will Uniform Cost do? It will go in tiers through that search tree. Initially s is on the fringe, then a, and s is off. Then a will come off and we'll have b, d, e on the fringe. And then, what will happen next? We now have choices to make. There are three options. We will go by lowest cost so far. To get to b, the cost is 2. To get to d, the cost is 4. To get to e, the cost is 9. So then Uniform Cost will go here--2 is lowest--expand b, and we'll have found this path to c, and so forth. Now, what we see here, if we look at the graph: it goes start state, a, b, c. That's kind of going off in the wrong direction. But Uniform Cost Search doesn't know. It does not have access to any information about the goal, except for a Boolean check: am I at the goal or not? So it doesn't know, and so it doesn't do so well. Now, it's slow and steady, and ultimately it will explore that search tree until it finds the shortest path to the goal. But it's going to waste a lot of time on things that are not promising. How about Greedy? Well, Greedy orders by forward cost, the heuristic function. So Greedy would also start with the start state, expand that, get just a on the fringe, expand that. Now there are options. And it would check h. h is 1 here, 2 here, 6 here. It would say 1 is best. Let me expand this one. Now we have this. Now we have 2, 2, and 6. There's some tie-breaking there. It would pick one of them. And then next it finds one of those two goals and declares success. But so what happens with Greedy is that you might end up finding this longer path to the goal than you would find if you were more careful about how you expand. How about A Star Search? A Star Search will consider both g and h. And keep in mind here, g is the cumulative cost so far. The reason g is 2 here is that it's 1 plus 1, which gets us 2. So A Star means you order by g plus h. So how will it expand? Well, we'll start with s. No choice there, expand. Then a, no choice there, expand. Now we have b, d, e. That's our fringe. Which has the lowest g plus h? It would be this one here, for 6 total. Expand d. Then it would be over here. It would not declare success yet. Remember, we don't declare success just when we put something on the fringe; it's when we pop it off. Now we'll look at what's lowest in g plus h. This one has 8. This one has 6. This one has 10. This one is lowest; we declare success, because we popped a path that ends up in the goal state. So that's A Star Search, and we expand fewer nodes than Uniform Cost Search expands, yet still, in this case at least, find the optimal path.
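In terms of the earlier fringe sketch, these three strategies differ only in the priority function; everything else stays the same. A sketch, with nodes as (state, path, cost) triples as before:

```python
def ucs_priority():
    # Uniform Cost: order by backward cost g only.
    return lambda node: node[2]

def greedy_priority(heuristic):
    # Greedy: order by estimated forward cost h only.
    return lambda node: heuristic(node[0])

def astar_priority(heuristic):
    # A*: order by f = g + h, combining backward and forward cost.
    return lambda node: node[2] + heuristic(node[0])

# e.g. tree_search(problem, PriorityQueueFringe(astar_priority(h)))
```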
So, question: when should A Star terminate? Can we stop when we enqueue a goal? No, we can't. And I've tried to emphasize this a few times. This is the most commonly occurring bug in Project One. We have to wait until we dequeue. Now, if you want to show something like that--that you have to wait until dequeueing, that it's not enough to declare success when you enqueue--the way to prove it is by showing a counterexample: showing that if you do it wrong on this example, things go wrong. You get the wrong solution. That shows that your algorithm is wrong, and you should change it. So here's a small example showcasing that we cannot stop when we enqueue a goal. What happens in this example is, we start with s on our fringe. Then next we have a and b on our fringe. Then what would be next? Well, by g plus h, we have 4 total here, and we have 3 total here. So we would expand b first. We then enqueue g. If we declared success at this point, we'd have found the path of length 5. But there is actually a path of length 4. We need to wait. We now have a and g on the queue. g has a score of g equals 5, h equals 0, so 5. a has 2 plus 2, which is 4. We need to expand a. And now we have on the fringe, at this point, s to a to g, as well as s to b to g. This one has 4. This one has 5, both plus 0, because the heuristic at the goal is 0. And then we pop this one, and then we can declare success. So: only stop when we dequeue. Is A Star guaranteed to find the optimal solution? Let's do a raise of hands. Who thinks yes? OK. Who thinks no? So it's a bit of a mix. And it's kind of a mixed answer also. But in general, it's not guaranteed to be optimal. Some extra conditions have to be met before that guarantee is satisfied. So we will have that guarantee with extra conditions. Now, why is it not generally true? Well, the way to show it is by, again, showing a counterexample: come up with a graph where A Star Search finds a suboptimal solution. And then that's your proof right there that it's not guaranteed to be optimal. Here's an example of such a graph. If you run A Star Search on this graph, what do we get? We start with s on the fringe with a 0 plus 7. It comes off the fringe. We have s to a with a 1 plus 6. We have s to g with a 5 plus 0. What's next? This one is next. Sorry about those vertical lines, not sure why. This one is next. And that would mean we declare success. And we found this path here, the bottom one. Why did that happen? STUDENT: Because the heuristic is not on there. PROFESSOR: So the answer is, this heuristic here is a very poorly chosen heuristic. It says that it thinks it's still 6 away from the goal, but it's only 3 away. And so because it thinks 6, well, it's on the fringe, and it has to wait its turn much longer than it really should. And it doesn't get its turn soon enough for us to find it before we declare success. So that's also the intuition about what goes wrong when A Star is suboptimal on this specific problem. It's whenever the heuristic is too pessimistic and forces things that are actually promising to stay on the fringe for too long, and you find something else in the meantime, and you declare success. So the actual cost was smaller than the estimated cost from the heuristic. That's not a good thing. We need it to be the other way around. So this is called an Admissible Heuristic. If your heuristic satisfies the property that it's optimistic--meaning that it estimates how much more cost you're going to incur before reaching the goal from a state as a number lower than or equal to what it really is--that's an optimistic heuristic, which is admissible, and then A Star Search will be optimal. If your heuristics are too pessimistic, then they are inadmissible, and A Star Search's optimality guarantees will break, because good partial plans will be trapped on the fringe and not expanded, because the heuristic is holding them back. So, formal definition: h star of n is the optimal cost with which you can get from node n to the goal. Typically we don't have access to that, but mathematically it exists.
And a heuristic h is admissible if, for every node, it estimates something that's lower than or equal to the exact cheapest cost to get to the goal from that state. What are some examples? Manhattan distance: it's lower than or equal to the actual cost. If there were no walls, it would be the actual cost, and when there are walls, it is an optimistic estimate of the actual cost. Pancake flipping: what we had there, the ID number of the largest pancake that's still out of place, is optimistic, because at the very least you still need to go as deep as that pancake to rearrange things, which means flipping at least that many pancakes to get to the goal. So it's, again, an Admissible Heuristic. Coming up with Admissible Heuristics is actually a lot of what's involved in using A Star in practice. Because once you've coded up A Star, it's there. You can use it. But when you now want to solve a new problem, you need to think about what's a good heuristic for this new problem. So let's now formally prove that if we have an Admissible Heuristic--again, which means that for every node we estimate the cost to the goal as less than or equal to what it really would be--then A Star Tree Search will be optimal. Assume A is an optimal goal node. So we hope to find this one in the search tree. Assume B is a suboptimal goal node. So failure would be if we expand B and declare success before we get to A. And let's assume h is admissible. Our claim is that A will exit the fringe before B. If we can prove that claim, we're done. Because if we have any optimal goal node A and any suboptimal goal node B, and we can show that A is guaranteed to exit the fringe before B, then that is true for any such pair of optimal and suboptimal goal nodes. And that means we'll always first expand the optimal one before we might consider expanding the suboptimal one. So this claim is what we need to prove in order to get optimality as a result. How do you prove that A pops off the fringe first? Let's see. Imagine B is on the fringe. If B is never on the fringe, we have no trouble anyway. But imagine B makes it onto the fringe. This might be our fringe. What do we know? We know that when B is on the fringe, then also some ancestor n of A is on the fringe--maybe A itself, maybe an ancestor. How do we know that? If no ancestor of A is on the fringe anymore, then it means we already expanded A. Like, when all your ancestors are gone and you're gone, you've been expanded. There's no way around it. So we know this condition to be true: some ancestor n of A, or A itself, will be on the fringe too. Then our claim will be that n will be expanded before B. So let's see how we can show that. f of n is less than or equal to f of A. Why is that? Well, let's expand this. f of n is the backward cost plus the heuristic forward cost, and f of n is smaller than or equal to g of A. That's because of what it means to be an Admissible Heuristic: the heuristic underestimates how much it will cost to get to the goal from n--and n sits on the optimal path encoded by A, so the true remaining cost from n along that path adds up exactly to g of A. So that's using the condition of admissibility. This is where we use it. If the heuristic were not admissible, we could not take this step. So we need admissibility to take this step. g of A equals f of A because A is a goal node, so h is 0. The only way to be admissible at a goal node is with a heuristic value of 0--heuristics need to be non-negative. So this is now true. The next thing we'll claim is that f of A is less than f of B.
Why is that the case? Well, the cost to get to A is less than the cost to get to B, because that's what we said at the beginning: A is the optimal way of achieving the goal, and B is a suboptimal way of achieving the goal. That's exactly what this is: B is suboptimal. That also means f of A is smaller than f of B, because the heuristic h is 0 at all goal states, including A and B. Now, we have f of n smaller than or equal to f of A, and f of A smaller than f of B, which means f of n is smaller than f of B, which means n will be expanded before B. Since n expands before B, and this was for an arbitrary ancestor n of A on the fringe, we can repeat this argument for the next ancestor on the fringe, and the next one, and the next one. This will keep happening. Ancestors of A will continue to be expanded before B, until finally it's A itself on the fringe, which will also be expanded before B. And we've found A before B; hence, we found the optimal path to the goal, not the suboptimal one at B. So A expands before B. A Star Tree Search with Admissible Heuristics--remember, we needed that condition somewhere in our proof--is optimal.
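For reference, the whole argument compresses into one chain of (in)equalities, for n an ancestor of A sitting on the fringe: the first inequality is admissibility, the second holds because n lies on the optimal path encoded by A, and the final strict inequality is exactly B's suboptimality (h vanishes at both goal nodes).

```latex
f(n) = g(n) + h(n) \le g(n) + h^*(n) \le g(A) = f(A) < f(B) = g(B)
```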
I'll leave this up for you to maybe think over during a small break, and we'll start again in two minutes. [SIDE CONVERSATIONS] STUDENT: Do you consider A Star to be a Greedy algorithm? PROFESSOR: No. So terminology-wise, Greedy for us is when you use the heuristic as the only thing to determine your strategy of what to expand next. STUDENT: Oh. PROFESSOR: You could think of it as maybe greedy in the way many people use the word in the everyday world. But in the technical terminology we use here, Greedy means something very, very specific. It means you just look at the heuristic, expand based on whoever has the lowest heuristic value, and repeat. STUDENT: So it's, like, specific to the, like, AI in the [INAUDIBLE] field? PROFESSOR: Yes. STUDENT: OK. PROFESSOR: Hey. STUDENT: Hi. Just had a question on, I think, one of the previous slides. So you said the-- so f is the total, like, value received for n, like-- PROFESSOR: f is g plus h. STUDENT: f is g plus h. g is-- PROFESSOR: Backward cost. So the cost of all the actions so far to get you to n. STUDENT: Right. PROFESSOR: Accumulated together, summed together. And then h is the heuristic function that you choose to use. STUDENT: OK. PROFESSOR: It's the estimate of the cost to get to the goal. STUDENT: Right, right, right. So then when you say f of n is less than or equal to g of A, for the cost all the way to get to A, we're saying that-- PROFESSOR: So this step here, from here to here, that's what you're asking about, right? STUDENT: Yeah. PROFESSOR: That is saying that this heuristic is admissible. STUDENT: Like this-- PROFESSOR: That's the definition of admissible. We're saying that f of n includes how much we estimate it takes from n to get to the goal--and we know A is the optimal goal, right? STUDENT: A's the optimal goal. PROFESSOR: So we're saying it's the cost encountered so far plus our estimate. And we know that that estimate is less than what it's really going to be. And g of A is what it's really going to be. STUDENT: OK, from the start. PROFESSOR: In total, from the start. STUDENT: OK. PROFESSOR: So that's why the sum of these two is less than g of A. STUDENT: OK, OK, yeah. PROFESSOR: It's good to go through this a few times. I've gone through it at least 100 times by now. STUDENT: OK, thank you. PROFESSOR: Good question. STUDENT: So we've proved that there are two goals, and we were going to use the optimal one-- PROFESSOR: Mm-hm. STUDENT: --before it comes. But I don't think we have to prove that. We were going to the optimal path to achieve A. PROFESSOR: OK, so that's maybe a slight terminology thing. STUDENT: Yes. PROFESSOR: So when we're looking at the search tree, A corresponds to a path. A is a node, which corresponds to a sequence of actions and a sequence of states you traverse that ends in the goal. And it's one that-- STUDENT: Oh, yeah, that's right. PROFESSOR: --corresponds to the shortest-- STUDENT: OK. PROFESSOR: --path to any goal. STUDENT: Oh, I see. So-- PROFESSOR: So that's why we're trying-- STUDENT: --just one goal in the original search graph, whereas in the search tree that same graph node-- PROFESSOR: It could be that the-- STUDENT: [INAUDIBLE] different branch. PROFESSOR: Yeah, it could be that it's both A and B. Or there could be multiple states that satisfy the goal, and A and B correspond to different ones. STUDENT: Right. PROFESSOR: Either way could be true. But the proof holds when A is the one that is the cheapest goal you can get to from the start state. It encodes the path to get there. STUDENT: And the tree cannot convert [INAUDIBLE].. PROFESSOR: Correct, mm-hm. STUDENT: Hi. PROFESSOR: Hey. STUDENT: I was wondering how necessary the CS 70 prereq is for the course. I took [INAUDIBLE] class-- PROFESSOR: So I think you will find out by doing the math self-diagnostic, which is essentially a way of measuring: do you have the math background? STUDENT: The first homework? PROFESSOR: Homework 0, yeah. STUDENT: OK, I did that one. PROFESSOR: So if that goes well, then you should be fine. STUDENT: OK. PROFESSOR: Yeah. STUDENT: OK, cool, awesome. Thank you. STUDENT: What do you mean by suboptimal over there? Suboptimal-- PROFESSOR: Suboptimal means that you get to the goal, but the path you followed has higher cost than another path to a goal that has lower cost. STUDENT: This is a little bit of an unrelated question, but what do you think of the significance of memory networks [INAUDIBLE]. Because, like, use tools [INAUDIBLE] memory-- PROFESSOR: Absolutely. So memory will play a big role in everything we do, as humans and in AI. It's the second half of the class--not this lecture, but the second half of the semester. STUDENT: OK. PROFESSOR: We'll look at those things. STUDENT: But, like, in terms of neural networks and, you know, gradient-based optimization-- PROFESSOR: I might have to restart. And this doesn't tie too much into this lecture. But happy to talk about it in office hours-- STUDENT: Yeah. PROFESSOR: --or in the second half of the semester. STUDENT: OK. Thank you. PROFESSOR: Hi everyone, let's restart. So, one quick clarification that ties into some of the questions that came up during the break. When we talk about A as an optimal goal node, what does that mean? Remember, this is a search tree. So what is A? A encodes a sequence of actions, as well as a sequence of states that you traverse, that gets you from s to--in this case we've assumed it achieves the goal--a goal state. And we said A is an optimal goal node, which means that, of all paths to all possible goals in your state space, A encodes the shortest path to any goal. That's what A is. That's what it means to be an optimal node in the search tree. There could be many goal states, and there could be many paths to each of the goal states.
The optimal one is the one that is shortest from the start to any of the goal states--the goal state that's closest to the start state, and the shortest path to that particular goal state. To be suboptimal means you are a path to a goal state that is not as short as the path encoded in A. Any questions about the first half? Yes? STUDENT: When we were writing [INAUDIBLE].. PROFESSOR: Mm-hm. STUDENT: Then [INAUDIBLE]. PROFESSOR: So your question is: once we're here, why are we not done yet? STUDENT: Right. PROFESSOR: The reason we're not done yet with the proof, even though at this point intuitively it's clear that we're going to get there--and I agree with you on that, that once we know this, we expect it to be true--is because we need to show that the process of how we pop things from the fringe will also get A before B. And essentially, what this entire reasoning is showing, if you look at it carefully, is that you go in order of f cost. Right? You keep expanding in order of f cost, and anything lower in f cost will happen before things that are higher in f cost. Those are the kinds of ideas that are at play here. And that's why, when you look at this, you might say, oh, maybe we're already there. But it turns out that we actually need to carefully look at the process, because this alone is not enough. There are some subtleties to think about, and we'll get to some of those later in lecture. And it is important to step through the algorithm and see what happens. And so the reason we say at this point that this is not enough is that A might not be on the fringe when B is on the fringe. So we need to say: well, when A is not on the fringe, we know an ancestor of A has to be on the fringe. And then we say, well, are we guaranteed that that ancestor will be expanded before B, such that we can get A on the fringe before B gets expanded? And so that is what we still need to do. STUDENT: Thanks. PROFESSOR: Another question there. STUDENT: And it assumes that the heuristic is accurate? PROFESSOR: It does not assume it's accurate. It assumes that it's admissible. And admissible is this very specific mathematical condition that the heuristic value at each node is lower than or equal to the cost of the cheapest path to the nearest goal from that node. That's what it means to be admissible. You can be admissible without being accurate. And then you might not be very informative. You might not help the A Star Search much in terms of being computationally efficient. But you will still have it be optimal. In fact, special case: what if I said h equals 0? If I said h equals 0, this condition is satisfied. We can still run A Star Search. But it actually becomes equivalent to Uniform Cost Search. And so the proof we just showed here also proves that Uniform Cost Search is optimal, because it is a special case of this. But it's not as efficient as a more accurate heuristic might be. Other questions? Over there. STUDENT: If we prove that our heuristic is admissible, then wouldn't we already know, like, h star of n? PROFESSOR: How do we know our heuristic is admissible? Let me see if you still have that question once we're a little further into this lecture. But right, you don't want to have to rely on knowing h star of n. Because if you know h star of n everywhere, and you're going to explicitly check against that, you might as well use h star of n. So that's not what we want to do.
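That said, on a tiny test problem where you can enumerate all states, a brute-force check of admissibility can be a handy debugging aid: compute h star with a uniform cost sweep and compare. A sketch over an explicit graph (a dict of successor lists), not something you would ever do on a real problem:

```python
import heapq
from itertools import count

def h_star(successors, goals, state):
    """Exact cheapest cost from `state` to any goal, via uniform cost
    search over an explicit graph: successors[s] = [(next_state, cost), ...].
    Returns None if no goal is reachable."""
    tie = count()
    fringe, seen = [(0, next(tie), state)], set()
    while fringe:
        g, _, s = heapq.heappop(fringe)
        if s in goals:
            return g
        if s in seen:
            continue
        seen.add(s)
        for nxt, step in successors.get(s, []):
            heapq.heappush(fringe, (g + step, next(tie), nxt))
    return None

def is_admissible(successors, goals, h, states):
    """h is admissible iff h(s) <= h*(s) at every state s from which
    a goal is reachable."""
    for s in states:
        star = h_star(successors, goals, s)
        if star is not None and h(s) > star:
            return False
    return True
```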
We'll get to what we actually want to do. Over there. STUDENT: So how do we know, when reading the first part of the claim, that f of n is less than or equal to g of A? PROFESSOR: How do we know in the first part that f of n is less than or equal to g of A? That's the definition of admissibility. That's the key place where we need the assumption that the heuristic is admissible. h being admissible means that h of n is less than or equal to the cost of the cheapest way to get from n to the goal. And we know that this thing here, g of A, is the cost to get to the goal encoded in node A. And so we know that the cost encountered so far to get to n, plus our heuristic--which underestimates the extra cost still yet to come--together should be smaller than the actual cost. That's what this is, and that's the admissibility. And if we don't assume admissibility, we cannot go through with this proof. And we expect to need it somewhere, because we've seen counterexamples: when heuristics are not admissible, A Star Tree Search may not find the optimal solution. Yes. STUDENT: So if [INAUDIBLE] heuristics [INAUDIBLE] put 0 rather than [INAUDIBLE]. PROFESSOR: So how to choose a heuristic--we'll see more of that soon. OK. So what are some properties of Uniform Cost Search versus A Star Search? Uniform Cost goes equally fast, in terms of the expansion, in all directions, whereas A Star will zone in towards the goal if you have a reasonably good heuristic. Pictorially, it looks like this. Let's also take a look at the maze environments. So we'll do demos one and five. So remember, this is Uniform Cost Search. What do we expect for A Star Search? It will, again, depend on the exact heuristic we're using. Here we're using a heuristic that says: take the straight line distance to the goal and then take half of that value as my heuristic. So things that are closer to the goal get favored, but maybe not favored as much as you might want. And here is the result in action. So it still expands a bit in all directions, but favors the direction towards the goal and doesn't need to expand as much as Uniform Cost Search. Remember, Uniform Cost expands this many nodes, compared to A Star this many nodes. How about the Pac-Man environment? Well, here is the result of A Star Search. We see what it expanded again. The color coding: red means expanded at some point--the brighter red, the earlier we're expanding; the darker red, the later. Black means never expanded. Let's do a comparison with the other things we've seen. Greedy expanded very little, but the path found was suboptimal. Uniform Cost expands everything below the cheapest path cost to the goal, which was everything but one square. And then A Star Search is somewhere in the middle. It does a bit more work than Greedy, but as a result it also finds the optimal path. How do you know what will get expanded by A Star Search if you look at this? Can you just, like, eyeball what will be expanded? Well, it depends on the heuristic. What if the heuristic is the Manhattan heuristic? Then you can say, well, what is the shortest path to the goal? What's the length of the shortest path to the goal? And then you can say: every state that I can reach with f cost lower than that will need to be expanded first. So anything that has cost plus heuristic lower than the cheapest path cost to the goal will be expanded. And that's exactly what we're seeing here. This is used in many, many applications.
A Star is typically the go-to planning algorithm, whether it's for video games, or routing, or decoding in speech recognition, machine translation, and so forth. Let's take a look at this in action on the tiny maze first, demos 6 and 7. So let's run Uniform Cost Search. Look at the console here to see how many nodes are being expanded, the work that's being done. The search expands over 9,000 nodes before it finds a path. With Uniform Cost we'll find the optimal path; it clears the board optimally. So remember, over 9,000 nodes expanded. Now let's take a look at A Star Search. How many nodes does it expand? It expanded, well, less than 1,000. Only 175 nodes expanded to find the optimal solution. So thanks to this heuristic, it could find a solution a lot more quickly. Now let's do a few demos of different algorithms in action. This is our water maze. Dark blue means more expensive to traverse. Light blue is cheaper to traverse. Let's guess all the algorithms. So how about this one? Anyone? Breadth-first, because it goes out in every direction equally fast and finds the path with the smallest number of steps. How about this one? STUDENT: Greedy. PROFESSOR: Greedy. It finds the thing that's closest to the goal based on the heuristic and keeps expanding that, and doesn't have to do much work, but doesn't find the best path either. How about this one? STUDENT: Uniform cost [INAUDIBLE].. PROFESSOR: Uniform Cost. How do we know? Well, it moves forward more quickly in regions where the cost is lower. But it does expand everywhere, and it does not actually orient itself much towards the goal. How about this one? STUDENT: [LAUGHTER] PROFESSOR: Depth-first search. Yes. How about this one? STUDENT: A Star. PROFESSOR: That's A Star. It pays attention to costs incurred so far, and also to what looks like it's getting closer to the goal, and can do a relatively small amount of work to find the optimal path. OK. So part of the art is creating good heuristics. We saw straight line distance as one, Manhattan distance as one for Pac-Man. And inadmissible heuristics can be useful at times, too. You might lose optimality, but they might be informative, and just like Greedy, you might find your solution faster--just not the optimal one. Sometimes you just want to find a solution fast. Let's do some practice here in designing heuristics. OK, this is the 8 Puzzle. What are the states in the 8 Puzzle? Well, they correspond to all possible configurations of the board. How many states are there? Well, there are 9 positions for the first tile, 8 for the next one, 7 for the next one, and so forth. So there are 9 factorial states. What are the actions? You can move a tile onto the empty spot. Or, another way to think of it: you can move the empty spot around. How many successors from the start state? Well, there are 4 tiles you can move into the empty spot, so 4 successors. What should the cost be? Well, that's for you as the designer to choose. But maybe you don't want to put too much effort into sliding these things into place, and so every time we have to move a tile, that's a cost of one. So now let's think about heuristics. What would be a possible heuristic for this problem--an estimate of how many more actions, how many more steps, you need to get to the goal in this case? Any thoughts? Over there. STUDENT: The number of tiles that are in the wrong position. PROFESSOR: So, the number of tiles in the wrong position. Why could that be a reasonable heuristic? Let's think about it. Might it be admissible? Yeah, it is admissible.
Because every tile in the wrong place will need to undergo an action--and likely more actions will be needed--it's an underestimate of the cost you'll incur to achieve the goal state. So we've reasoned through the fact that it is admissible without having to compute what the actual optimal cost to the goal state is. We just used abstract reasoning that told us this is indeed admissible. For example, what's the heuristic for the start state here? Well, it looks like all 8 are out of place, so 8. If you run Uniform Cost Search on a problem where the goal is four steps away, it expands 112 nodes; eight steps away from the start state, 6,300 nodes expanded; 12 steps away, 3.6 million nodes expanded with Uniform Cost Search. A Star with this heuristic: only 227. So a lot of time saved. Another way to think of this heuristic is that it's a relaxed problem heuristic. What do I mean by that? One way to ensure that your heuristic is admissible is to introduce new actions. You take your original problem, you add new actions, and in this new hypothetical problem space, you find the optimal solution. And because in this new space you have more actions available to you, the optimal solution there will be cheaper than or the same as the optimal solution in the real scenario. So you know the optimal cost in this relaxed space is an admissible heuristic. How can we relax this problem? Well, like this. Essentially, what we think of here is that if you have available as an action to just grab a tile and place it onto its destination, then in that space the number of actions you need is equal to the number of misplaced tiles. And it's an easier problem than the original one. It's a relaxed problem. So the optimal solution to the relaxed problem is an admissible heuristic for the real problem. Let's see, can we do better than this? Can we have an even better heuristic, meaning something that's closer to the true cost? STUDENT: It's the largest number [INAUDIBLE] PROFESSOR: So one suggestion is the largest number that's misplaced. Any other suggestions? Over there. STUDENT: Sum of [INAUDIBLE].. PROFESSOR: So, the sum of Manhattan distances for each of the tiles, from where they are at the beginning to where they need to end up. That is the heuristic we have on this slide here. And how does that correspond to a relaxed problem? It's as if you can slide the tiles without them constraining each other. So it's a relaxed problem, less constrained. For Pac-Man, a relaxed problem would be something like: ignore the walls. Here, it's: ignore the other tiles. It's a relaxed problem. We call it total Manhattan distance. It is admissible because it's a relaxed problem solution. Another way to think of it is through more abstract reasoning: it's admissible because every tile has to undergo at least that many steps before you can achieve the goal. What is h for the start state? Well, we need to look at every one of these pieces, see how far away it is, and sum it up. And this brings it from 227 expansions for the misplaced-tiles heuristic from the previous slide to only 73. Remember, Uniform Cost Search was 3.6 million? Having a good heuristic allows A Star Search to be much more effective than Uniform Cost Search.
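A sketch of both 8 Puzzle heuristics, assuming a state is a tuple of length 9 in row-major order with 0 for the blank, and the goal layout is (0, 1, ..., 8); the representation and goal layout are assumptions for illustration:

```python
GOAL = tuple(range(9))  # assumed goal layout, blank (0) in the top-left

def misplaced_tiles(state):
    """Number of tiles out of place (blank excluded): the relaxed problem
    where a tile can be picked up and placed directly on its destination."""
    return sum(1 for i, tile in enumerate(state) if tile != 0 and tile != GOAL[i])

def total_manhattan(state):
    """Sum of Manhattan distances of each tile to its goal square: the
    relaxed problem where tiles slide through each other."""
    total = 0
    for i, tile in enumerate(state):
        if tile == 0:
            continue
        goal_i = GOAL.index(tile)
        total += abs(i // 3 - goal_i // 3) + abs(i % 3 - goal_i % 3)
    return total
```

Note that total_manhattan dominates misplaced_tiles: every misplaced tile contributes at least 1 to the Manhattan sum.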
Of course, you can run A Star Search with the heuristic that's always 0. It'll, again, have to expand 3.6 million nodes, because it's just like Uniform Cost. How about using the actual cost as a heuristic? Would it be admissible? STUDENT: Yes. PROFESSOR: Yes, the actual cost is less than or equal to the actual cost. So that's satisfied. Would we save on nodes expanded? STUDENT: Yeah. PROFESSOR: Yes, a lot. What's wrong with it? STUDENT: We don't know that. PROFESSOR: We don't know it. If you knew it, you'd be done. You would have already solved the problem. So there's a trade-off here between the amount of computational cost required to compute your heuristic and the resulting number of nodes expanded, which also takes time. The closer the heuristic gets to the true cost, the fewer nodes you tend to expand. But usually, the closer you want to get to the true cost, the more work you have to do to compute the heuristic. And so that's the trade-off. You can actually define a semilattice of heuristics. So earlier we were talking about whether one heuristic is better than another one and so forth, for example, for pancake flipping. Well, you can define this as follows. You have two heuristics, and we say h_a dominates h_c if, for all nodes, h_a is at least as high as h_c. Now, not all heuristics can be compared this way. Sometimes one heuristic is higher at one node and lower at another node; then they will live kind of next to each other, like here. But if you are at least as high at all nodes, you can put yourself above another node. For example, a is above c, and they're all above 0. And you can build this partial ordering, shown in this graph here, that at the bottom has the all-zero heuristic and at the top has the exact heuristic. And the further you go towards the top, the more informative your heuristic is, but possibly the more expensive to compute. And one last little trick is that if you have two heuristics, you can take their max, which dominates them both. And so if you have two heuristics that are both admissible, the max will be admissible and will be at least as informative as either one of them.
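That trick is a one-liner; a sketch, with both inputs assumed admissible:

```python
def max_heuristic(h1, h2):
    """Pointwise max of two admissible heuristics: still admissible
    (still an underestimate), and it dominates both inputs."""
    return lambda state: max(h1(state), h2(state))
```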
In the last 10 minutes or so that we have here, I want to switch up the algorithm that's powering all of this. So far we've done Tree Search, and what's been different across approaches is the strategy for what to expand next from the fringe. Now we're going to change Tree Search into Graph Search. Why? Well, on the left, this is the state graph, and on the right there's a Search Tree for it. If you look at this, well, b appears multiple times, c appears multiple times. There will be exponential growth in terms of how often a node appears. And do we really need that? Do we really need to keep track of all paths, from a to b to c and so forth, or might it be that just one of them is enough? Like, might it be that just this c is enough, and once we've looked here, we don't need to look here or here or here anymore? Because if we could do that, we could save a lot of time. And that's exactly what Graph Search proposes. Graph Search will keep track of a set of nodes you've expanded and not expand them again. For example, in Breadth-First Search-- this is the Search Tree, and we should not expand these nodes. Why not? Why should we not expand this one over here? E is also over here. And in Breadth-First Search, this way of getting to e is better than this way, because this one takes one step and this one takes two steps. So we already got there in one step. We should not, later, look again at what's underneath e, because we got there in a worse way. Anything we find underneath here is worse than anything we find underneath here, because we got to e with more cost. And what's underneath is the same anyway. We're not going to find anything new here; we're just going to have gotten there in a worse way. There's no point in doing all that work again. So the idea in Graph Search is that you never expand a state twice. How do we implement this? It's just like Tree Search, but you have a set of expanded states, which we'll call the closed set. Whenever you are about to call the successor function on a state, you first do a quick sanity check: have I already called the successor function on this state? The closed set will tell you. If you have, you skip it; you just don't do it. If you haven't, you do call your successor function, but then you add the state to the closed set. It's important to store it as a set, not a list. Some literature even calls it the closed list; when they call it a closed list, it's the same thing, but it's not great terminology. Because if you store it as a list, the time it takes to traverse the list and find whether you're in there or not is much worse than when it's stored as a set, and your algorithm will be really slow. So that's why we explicitly call it the closed set, and you should code it up as a set. Then, what about completeness? Well, let's think about it. What does it mean to be complete? It means that if a solution exists, you'll find it. What do we not do? Which part of the Search Tree do we exclude here? It's parts of the Search Tree that we already have expansions for. So if we exclude something by doing Graph Search, it's something we already have somewhere else anyway, so we're not losing access to the goal by what we're excluding from our expansions. So it's still complete. How about optimality? Oh, that one's trickier. So let's look at A Star Graph Search gone wrong, suggesting that it's not always optimal. So here are some heuristics, here are some costs. Search Tree: expand, expand, expand, expand. We build our Search Tree. What do we do now? Well, we found the bad path to the goal. But luckily we don't expand it yet. We're going to expand this one first. And now we've gotten here and here. But what will Graph Search say? My closed set already has c on it, so I'm not expanding this. And now the only thing that's left is this thing here, and we declare success. So what happened here is that we first expanded along this path, and then later, when we found the better path to c, we said, we already expanded from c, let's skip it this time. So we need to be careful in Graph Search that the first time we expand a state is when we reached it in an optimal way. Because if not, we might never do it the optimal way. In fact, we will never do it the optimal way. So the issue here is a poor choice of heuristic, in some sense guiding us down the wrong path at that time. So what we need is more than admissible heuristics. We need consistent heuristics. The main idea is that the estimated heuristic costs are not just below the actual cost to the goal; everywhere, locally, things need to be representative. So admissibility was just about the cost to the goal versus the heuristic, and that's what we have here. Consistency is about locally checking that the heuristic is consistent with the actual costs. So here, in that graph we just looked at, we have h of 4 over here, h of 1 over here, and a cost of 1 here. That is inconsistent. If we think from a it's going to be 4, but then after one step, at a cost of 1, we think it's 1, that doesn't make sense. If we really think it's 1 from c, we should think it's at most 2 from a. There's an inconsistency in our heuristic choices there. If we make it 2, it becomes consistent. And if we revisit the previous example, it'll find the optimal solution.
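Pulling the closed-set idea together with the caveat about consistency, here is a sketch of A Star Graph Search in Python. It is an illustration, not the course's reference pseudocode; successors(s) is assumed to yield (next_state, step_cost) pairs:

    import heapq, itertools

    def a_star_graph_search(start, is_goal, successors, h):
        """A* Graph Search with a closed set. Optimal when h is consistent;
        with a merely admissible h, skipping re-expansions can lock in a
        suboptimal path, as in the example above."""
        counter = itertools.count()   # tie-breaker so the heap never compares states
        fringe = [(h(start), 0, next(counter), start, [start])]
        closed = set()                # states we have already expanded
        while fringe:
            f, g, _, state, path = heapq.heappop(fringe)
            if is_goal(state):
                return path, g
            if state in closed:       # already expanded once: skip it
                continue
            closed.add(state)
            for nxt, cost in successors(state):
                if nxt not in closed:
                    g2 = g + cost
                    heapq.heappush(
                        fringe, (g2 + h(nxt), g2, next(counter), nxt, path + [nxt]))
        return None                   # no solution exists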
A consequence of consistency: the f value along a path never decreases. Why is that? Consistency means the heuristic value at a node is no more than the step cost plus the heuristic value at the successor: h(a) <= cost(a, c) + h(c). Then we just add the backward cost g(a) to both sides, and together that makes f: f(a) = g(a) + h(a) <= g(a) + cost(a, c) + h(c) = g(c) + h(c) = f(c). So what we see is that f of a, the node we started from, is no more than f of c, the node we got after the expansion. Consistency implies that as we expand, f will go up and up and up. And that goes back to the original intuition in one of the earlier questions: if we have consistency, which is a stronger condition than admissibility, f will keep going up as we expand. And that means that if we have consistency, whenever we expand the goal state, we know we're done and we found the optimal path, because everything else on the fringe will have a higher f cost; otherwise we would have expanded it first. Whenever we expand the goal, everything else has a higher f. And thanks to consistency, we see that after we expand a node, f goes up, so in the future, every f will be even higher. So consistency implies f costs keep going up as we run the search, which in turn implies that when we expand the goal, there is no option left for us to ever encounter something with a lower f cost. There is a more formal proof of this in the slides. But since we only have two minutes left, I'm just going to leave you with that main intuition, which is really important: if you have a consistent heuristic, f cost will keep going up, which in turn means that when you expand the goal, nothing else could ever be lower again in the future. These are the optimality properties for both Tree Search and Graph Search, with different conditions; one implies the other. That's just applying consistency along the path to the goal. And here's a summary: A Star uses both backward and forward cost. It's optimal for Tree Search with admissible heuristics, and for Graph Search with consistent heuristics. Heuristic design is key to making it good, and in Project One you'll do a lot of heuristic design to make your algorithm go fast. Often you'll use relaxed problems to get there. The slides have pseudocode for you, which will be useful for your project, and a more formal proof of the optimality of A Star Graph Search, which you can go through in your own time. Thank you. [SIDE CONVERSATIONS] |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181016_Bayes_Nets_Inference_Variable_Elimination.txt | PROFESSOR: OK. That's low. All right. I can't seem to change the volume down here, so I either have to talk really quietly or hope they can fix that in the back. If I get carried away and you all get blasted into the back of your seats, I apologize in advance. It's easy to get carried away, because we're talking about Bayes' Nets again today. OK. All right. So today we're going to talk about inference in Bayes' Nets, which is the process of taking your lovely Bayes' Net, which describes your domain, your variables, and the probabilistic interactions between those variables locally, and answering global questions, like: what does this variable over here do if I observe something clear on the other side of the network? So let's first remember what Bayes' Nets actually are, because we're going to be using this semantic definition formula over and over again today and in the next lecture. So a Bayes' Net-- they're also called graphical models-- is a model of your domain. The nodes in the Bayes' Net represent variables that you care about in your domain, and those variables have values, which are the domains of those variables. The network is also a directed acyclic graph over those variables, where the arrows encode something about direct influence between variables. And, in particular, underneath each variable in the network lives a conditional probability table that specifies what that variable does-- meaning, what is its conditional distribution over all the values in its domain-- given the various settings of the parents. And that's where we get to say-- actually, let's just pull up an example. OK. So from a couple of lectures ago, we had built this tiny little Bayes' Net, and it's still there. And in this Bayes' Net, each node represents a variable: low pressure causes rain, rain causes the drip in my roof, and it also causes traffic. There's no arc between drip and traffic, because there's no direct influence between them, in a way that we made mostly precise in the last lecture when we talked about conditional independence. But if we take a look inside here-- so underneath low pressure, which doesn't have any parents, there's just a little probability over that variable that says, oh, there's low pressure 10% of the time. But if I go to a more interesting node, like traffic, there is a description of how likely traffic is under all its conditions. So here it says: when there is rain, 80% chance of traffic; when there's no rain, 30% chance of traffic. And, in general, the more parents you have, the more different behaviors can exist at that node, because you get to specify a conditional probability for each setting of the parents. So maybe we'll actually go look at that. Let's say we want traffic to also depend on-- I guess before, we had a ball game. So there can be a ball game or not. And I'm going to make an arc between ball game and traffic. And now what I have to do-- suddenly, this sort of becomes a pain, because I need to specify how likely traffic is for each combination of the parents. So maybe if there's rain and a ball game, almost guaranteed traffic. If there's rain but no ball game, 80%. Ball game but no rain, maybe that's 50%. And neither rain nor ball game, maybe that's 20%.
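One plausible way to store that traffic node is a dict keyed by parent assignments. "Almost guaranteed" is pinned at 0.95 here as an assumption; the 0.8, 0.5, and 0.2 entries are the ones read aloud:

    # (rain, ballgame) -> P(traffic = True | rain, ballgame)
    P_TRAFFIC_TRUE = {
        (True,  True):  0.95,   # rain and ball game: assumed value
        (True,  False): 0.80,
        (False, True):  0.50,
        (False, False): 0.20,
    }

    def p_traffic(t, rain, ballgame):
        """Conditional probability of a traffic value given both parents."""
        p = P_TRAFFIC_TRUE[(rain, ballgame)]
        return p if t else 1.0 - p

With two Boolean parents there are four rows; a third parent would double the table again, which is exactly the pain being described.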
And so the more parents you have, the more action there is inside that node, because you've got to specify what it does for each combination of the parents. So that's written here: underneath each node X is the following conditional probability family. It's a distribution over X, but it's not one distribution over X; it's a distribution over X for each combination of values of the parents. A Bayes' Net, which is this graphical structure along with a conditional probability family for each node, encodes a joint distribution as a product of local distributions. So if the nodes in your network are X1 through Xn, here is the definition of the joint probability your Bayes' Net encodes. The joint probability of some complete assignment, x1 through xn, is the product, over each Xi, of the probability of xi given its parents in the network: P(x1, ..., xn) = P(x1 | parents(X1)) * ... * P(xn | parents(Xn)). If no node has parents, that's a bunch of independent probabilities; that would be a very simple network. If nodes have all kinds of parents, then this starts to become a complex expression, and in the limit, this will look like the chain rule, and it will represent an arbitrary class of probability distributions. In general, you want to build a network that has all of your variables connected up, but with a small number of parents for each, so that each of these terms is relatively simple, even though the whole network it defines may be encoding all kinds of interesting interactions. All right. So this is the definition of the probability of an entry in the joint probability table defined by a Bayes' Net: you just multiply together all of the conditional probabilities that match on those values. Sometimes I'll call this the Bayes' Net reconstitution formula, because it lets you take these little conditional probabilities and inflate them into a giant joint distribution, where each and every entry in that high-dimensional table gets filled in according to this recipe. It's sort of a wasteful thing to do, but we're going to have to do it for a while, until we have a better algorithm, which we'll get today. OK. Here's an example of a Bayes' Net. Hopefully, this network is not only familiar, you're getting mildly sick of it. This is the alarm network. It's a classic. This one says: burglaries happen, but rarely. It's up here. There are no parents to burglary, so it's just a little marginal distribution. Earthquakes happen, but rarely. That also doesn't have any parents, so it's just a marginal distribution. The heart of this network lives under the alarm node, which specifies how likely it is that the alarm goes off when there's a burglary and an earthquake, when there's a burglary but not an earthquake, and so on. And so, for example, if there's a burglary and an earthquake, the alarm goes off 95% of the time. OK. Similarly, John has a node and Mary has a node that specify how often they call, with and without the alarm. If I started adding parents to these things-- like, maybe Mary always calls me when there's an earthquake-- then suddenly Mary's distribution family is going to be more complicated, because now I have to specify what she does with and without an earthquake, with and without the alarm, in all the different combinations. OK. So this is what's in the Bayes' Net, and it defines any entry in the joint distribution over these variables b, e, a, j, and m.
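As a concrete check of the reconstitution formula on the entry worked through next: the 0.10 and 0.70 below are read aloud in the lecture, while P(+b) = 0.001, P(+e) = 0.002, and P(+a | +b, -e) = 0.94 are the standard textbook values for this network and are assumed here:

    p_b      = 0.001          # P(+b): assumed textbook value
    p_not_e  = 1 - 0.002      # P(-e): assumed textbook value
    p_a_bne  = 0.94           # P(+a | +b, -e): assumed textbook value
    p_nj_a   = 0.10           # P(-j | +a): John doesn't call despite the alarm
    p_m_a    = 0.70           # P(+m | +a): Mary calls when there's an alarm

    entry = p_b * p_not_e * p_a_bne * p_nj_a * p_m_a
    print(entry)              # about 6.6e-05: a small number, as promised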
So, for example, if I come up to you and I say: you've got this Bayes' Net, which is a bunch of little local pieces, but I'm intensely curious how likely it is that there is a burglary but not an earthquake, the alarm goes off, John does not call, but Mary does. That's this event here. And now you're thinking, that's an awfully specific event. But, remember, all the things we actually care about, like how likely it is that there's an earthquake if Mary calls me, are derived by summing together and dividing lots of these elementary events. OK. So if I really am intensely curious about this, I look around and I see: you know what, my Bayes' Net doesn't actually list an entry for that element of the joint distribution, but it gives me a recipe to reconstitute it. So I can multiply together the relevant pieces. Let's see, so we need the probability of a burglary. All right. Times minus e, no earthquake. And now I need the alarm, plus a. Well-- but it's plus a in an environment where it's plus b and minus e, so I go find that here. So that's plus b, minus e, plus a. That's here. OK. And I look up all the relevant things. So John does not call, despite the alarm, 10% of the time. Mary does call when there's an alarm 70% of the time. And I can multiply these little pieces together, and I'll get some number. I don't know what that is. It's some small number that represents that exact event. And if you say, well, actually, I just want to know how likely it is Mary calls-- well, how do you find that? You take this event, and a whole bunch of other events where Mary calls, and you add them up. Which events? All the events that are plus m; you sum together everything on all the other axes, in all combinations. And that gives you the total marginal probability of Mary calling. And with this Bayes' Net, you can answer questions about the whole distribution by assembling it as you go from the Bayes' Net definition. So if I told you, I've changed my mind, I'm now intensely curious about no burglary, no earthquake, the alarm goes off anyway, but nobody calls me-- you could go and be like, all right, I'll circle different entries and multiply them together, and I can get you that probability, too. And whatever probability you ask for, you can fetch it on demand. You're fetching probabilities from a high-dimensional table of exponential size, but you're building them out of local little pieces of, basically, linear size, assuming all of the conditional probabilities are of constant size. OK. All right. So what do we have with Bayes' Nets so far? We've seen representation: what probability distribution does a Bayes' Net represent? We've talked about conditional independence: what conditional independences hold for sure in a network, based on the structure? And you saw last time how we can do a kind of search operation, looking at various kinds of paths in the network, to detect independence. Today what we're going to talk about is probabilistic inference, and we actually already spoiled this lecture a little bit two lectures ago, when we talked about inference by enumeration. Unfortunately, that's usually intractable, so we're going to replace it with a much better algorithm today, which is also intractable, because this is an AI class and everything we do is NP-hard. But it's usually better. All right. So we're going to talk about inference by enumeration again.
We're going to talk about variable elimination, which is usually much better, even though in the worst case it's just as bad. We're going to talk about why these things are so hard and, in particular, why inference in a Bayes' Net is NP-hard. And then we'll talk about sampling, but not today. And then, finally, learning these networks from data will come later. All right. Any questions before we start in on inference? OK. Let's do it. So what is inference? Inference means a lot of different things to a lot of different people. Here in this class, it means calculating some quantity from a joint probability distribution. So imagine the given is some collection of probabilities. Maybe it's the whole joint probability table in all its exponential-size glory. More often, it's a bunch of conditional probabilities that define a Bayes' Net. OK. Some examples of things you might calculate from those givens: the canonical thing is a posterior probability. I care about variable Q. There's a bunch of evidence variables whose values I know and, unfortunately, there's going to be a bunch of other variables, called hidden variables, that I don't care about and also don't observe. And we're going to have to sum them out, and that creates a lot of time complexity. So a posterior probability query is: give me a distribution over Q given the evidence I have. Another classic canonical query is a most likely explanation query, where you say: I have some evidence; I would like to know the most likely value of one or more variables given that evidence. We're only going to talk about the top kind of query today. But the bottom ones are solved in basically the same way; in most cases, the changes are limited to sums turning into maxes. All right. So let's figure out how to answer posterior probability queries. We have this algorithm called inference by enumeration. It barely fits on the slides. Let's go through it again. It's really simple; it's just really, really excruciatingly slow in practice. So the general case: we have a Bayes' Net. What does our Bayes' Net look like? It's got a bunch of circles, and those all represent variables. And there's one that we care about. Let's call it Q, right there. You care about that variable. Actually, often the variable you care about is at the top. Then there are some things we know. Maybe we know the value of this variable, and the value of this variable, and the value of this variable, and the value of this variable. And, of course, the whole reason this is interesting is because we've also got all kinds of arrows going between these variables in various patterns. OK. So there's your Bayes' Net. You have a query variable that you care about. That's your query variable. We have evidence variables, which are-- oh, you know, I know that variable takes on the value 7. And then there is the bane of your existence: the hidden variables. There are all those variables in your network that, at the moment, you don't observe, and you also don't care about their values. So if I give you a joint distribution-- I say, hey, here's the probability distribution over x and y, and I'd like to know the probability that x takes on the value big-- what do you do? You say, well, that's not in my network. That's not in my joint distribution.
But I'll find all the entries that say big, and I'll add them up, and that will give you your marginal. OK. So we talked about that. We've had that for a while. In this case, x is the query variable and y is the hidden variable, because we have to get rid of it by adding it up, by summing it out. And there are, in that case, no evidence variables. OK. So evidence, query, and hidden: that's all your variables. And we'd like to have the probability of Q, the query variable, given all the evidence variables. So what do we do? If I give you the whole big joint distribution in all of its exponential-size glory, step one is you take the entries consistent with the evidence. Everything that doesn't match your evidence, you just delete from your table, and your table shrinks. All those entries that don't match the evidence, they're gone. You have a smaller table. Your table was a big joint distribution, so it used to add up to 1, because it was a happy, well-formed distribution. As soon as I knock out all the rows that don't match the evidence, it doesn't sum to 1 anymore. What does it sum to? So if I said, stop, stop doing inference, add everything up-- let's say that my evidence was e-- what does this add up to? Any guesses? It's not 1 anymore; it's smaller, because I knocked out all the rows that said something other than e. Like, maybe I know it's sunny, so knock out all the rainy rows, knock out all the windy rows, or whatever else is in there. OK. What's left is all the rows that say sunny. And when you add them up, you get the probability of sunny. So this thing is going to be the probability of your evidence. It's like this magic space I can't actually draw in. OK. So that's the probability of your evidence. But we're not going to do that. That's not why we're here. We didn't come for the probability of the evidence. So we're going to keep only the rows consistent with the evidence, and then we're going to sum out all of the variables we don't care about. That means take this still-large-dimensional table and get rid of the dimensions of the array that correspond to variables we don't currently want. And what we'll end up with is a tiny little-- so, in mathematics, that's this. We have the whole distribution here, and we're going to sum out the hidden variables. All those rows that differ only in their values of the hidden variables are going to collide and get summed together into the smaller array. And the smaller array represents the probability of Q and the evidence. For each value of Q, you're going to have a separate entry. So what does this thing look like? It's now a one-dimensional array, and the difference between the entries in that array is that they have different values of Q. Do they add up? Is it a distribution over Q? It's not. They still add up to p of e. But if you normalize it-- which is the last step, meaning you add up all the entries and then multiply them all by 1 over that number, so that they stay proportional to their values but now sum to 1-- you end up with p of Q, comma, all the e's, divided by what we know to be p of e, which is p of Q given all your evidence. And that was the query you wanted. So that is inference by enumeration.
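A compact sketch of those steps over an explicit joint table. The data layout here (an ordered variable list plus a dict from value tuples to probabilities) is an assumption for illustration:

    def infer_by_enumeration(variables, joint, query_var, evidence):
        """Return P(query_var | evidence) from a full joint table."""
        qi = variables.index(query_var)
        unnormalized = {}
        for assignment, p in joint.items():
            # Step 1: keep only entries consistent with the evidence.
            if any(assignment[variables.index(v)] != val
                   for v, val in evidence.items()):
                continue
            # Step 2: sum out hidden variables by collapsing onto the query value.
            q_val = assignment[qi]
            unnormalized[q_val] = unnormalized.get(q_val, 0.0) + p
        # Step 3: normalize; the pre-normalization total is P(evidence).
        total = sum(unnormalized.values())
        return {val: p / total for val, p in unnormalized.items()}

    variables = ["W", "T"]
    joint = {("sun", "hot"): 0.4, ("sun", "cold"): 0.2,
             ("rain", "hot"): 0.1, ("rain", "cold"): 0.3}
    print(infer_by_enumeration(variables, joint, "T", {"W": "sun"}))
    # {'hot': 0.666..., 'cold': 0.333...}; the pre-normalization sum was P(sun) = 0.6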
You take your whole table, you select the evidence-matching rows, you sum out the variables you don't want-- we'll see that more in a second-- you normalize, and you're done. OK. Problems with that algorithm, aside from the fact that it barely fits on the slide: one, we don't have the giant joint distribution, we've got a Bayes' Net. And while we could, in principle, build the giant joint distribution, that would be a waste of space and time. So that's basically enough problems. We'll stop there. OK. Let's get some more intuition about why this is a bad idea, so that we can come up with an algorithm that's a better idea. So, if you have unlimited time and unlimited space, inference in Bayes' Nets is easy. You take your Bayes' Net and you get your query. And so maybe somebody comes along and says something a lot more reasonable, which is: I'd like to know how likely it is there's a burglary given that John and Mary have called me. OK. This is a reasonable query. It's not in your Bayes' Net. Your Bayes' Net tells you things like the probability that Mary calls given the alarm. This is a case, by the way-- side comment-- of observing variables at the bottom, which is very common. To the extent these Bayes' Nets represent causal domains, the stuff at the top sort of causes the stuff in the middle, which causes the stuff at the bottom; you tend to observe the effects at the bottom and try to reason about the causes. It's not the only thing you can do with a Bayes' Net, but it's a pretty common thing that people do. We shade nodes to indicate that they're observed. So, in this case, I've observed the values for J and M. They're both positive. And I'd like to know the value or, in this case, the distribution over B. What about E and A? I wish they weren't in my network. What can I do about it? Nothing. I've got to work around it. You say, why didn't you give me a network that doesn't have E and A? Two reasons. One, maybe next time you're going to want E and not B, and you want this network to be sort of multipurpose. And the second thing is, if you start deleting nodes from your network, the nodes that are left end up all pointing to each other in all kinds of crazy ways, because simple interactions that were mediated by those hidden variables are now just a free-for-all of direct interactions between the remaining ones. So, usually, you're going to have some hidden variables, for one reason or another. So you're in this situation: I'd like to know the distribution over B-- two values, one for plus and one for minus-- given that J and M are both true. Let's figure it out. We've got laws of probability. The first law of probability says that that conditional distribution over B is proportional to the version where we have joint probabilities instead of conditionals. So let's stop and figure out what this little fish-shaped symbol with a B means-- this proportionality with respect to B. It says these two things are almost equal. They're not actually equal, but they'd be equal if we summed up the values over B and multiplied by the inverse. So that means they're proportional. OK. And so what this says is: if you want a conditional probability, you compute the equivalent joint probability, and then normalize. All right. We know how to normalize. If I gave you this-- so what is this? Like, what is this?
I don't know, but it's something that says: plus b is 0.2, minus b is 0.1, and when you normalize, you get two-thirds and one-third. So let's compute this joint value here. So what is that? That is for each value of B. I don't actually know those joint probabilities, either. But if I introduce e and a with specific values, suddenly the network tells me how to get them. So I want those joint probability entries. I now have to look for the probability of B, and e, and a, and plus j, and plus m. We're going to do it for each value of e and a, and we're going to have to sum them. OK. Because those are variables we introduced; that's how marginalization works. And this capital letter means we're going to keep a vector around. That means we're going to be doing sort of vector math. We're going to be doing this exact same computation once for plus b, and once for minus b. In general, we're going to be doing the exact same computation for every value in the domain. So when you see capital letters involved, it means vector math: the same operation is happening for each value in the domain. All right. So: the conditional quantity I wanted; the joint quantity I wanted; this is a full entry from the joint distribution which, in principle, the Bayes' Net gives me; and this is the rewriting of that according to the Bayes' Net reconstitution formula-- the chain rule as it applies to this Bayes' Net. So it's not the full chain rule, it's just the multiplying together of the conditional probabilities of this Bayes' Net. Written out, P(B | +j, +m) is proportional to the sum over e and a of P(B) P(e) P(a | B, e) P(+j | a) P(+m | a). What's that? It bottoms out in taking a whole bunch of individual probabilities, multiplying them together in a bunch of ways, and adding them. Don't stare at that too hard or your eyes will roll out of your head. OK. Get the idea? All right. Great. So we have our algorithm. Here's why we don't use that algorithm. Here's-- by modern standards, what's the technical term? An extremely tiny network. But even in this network, if I wanted to know something like, hey, what is the probability that you have-- I don't know-- an extra car, given your driving history and your age: OK, I observed two variables, great; I'm interested in one variable, great; and the other 24, or whatever, are hidden variables. So when I do that same formula, I'll be like, oh, I'm computing the probability of extra car given some particular value of the age and some particular value of the driving history. And I don't know that, so I'm going to get the joint probability of e and h and a instead. I don't know that either, but I know how to do that: it's going to be a sum over, like, the whole rest of the network-- p of e, h, a, lots of things. And because we're summing over lots of things, this is going to be an exponential-time thing, and we can just go get lunch while it runs. OK. So already in this tiny, tiny network, it's bad news to do this in an exhaustive way. All right. So why is this so slow? The reason it's so slow is that somebody was nice enough to give you this conceptually giant network in tiny little pieces. These tiny little pieces multiply together to define this exponentially large thing. And they have this warning on them, which is: do not multiply us together to form the exponentially large thing. So what did we do? Step one, multiply them together to form the exponentially large thing. And then once we get the whole thing, what's the first thing we do?
We collapse out all of those dimensions we don't care about, so it gets small again. Maybe we should not inflate the whole thing before we start deflating it. And so what variable elimination is going to be is: instead of joining up the whole joint distribution and then summing things out, we're going to interleave those operations. We're going to join on some variable, and then immediately sum it out, and then join on the next variable and sum that out. And so we control that growth. And you say, why does that really matter? If you're going to do 10 joins and then 10 sums, why does it help to interleave them? The reason it's good to interleave them is that when we let that network grow, it grows exponentially. So if you join 10 times, things are growing exponentially, and by the time you sum them out, it's sort of too late. Whereas if we catch those joins early, and sum out hidden variables as soon as we legally can-- according to the laws of probability and the conditional independence assumptions we have available-- then we can avoid that exponential blowup. OK. So we're going to interleave joining and marginalizing. This is called variable elimination. It's still NP-hard, but it's usually much, much faster. And for certain kinds of graph structures, it's in fact efficient, sometimes even polynomial. All right. So we need some notation. The notation we're about to get is what are called factors. You've already seen the basic types, but there's sort of a whole zoo. There's like a menagerie. There are simple factors, like joint probabilities and conditional probabilities. And then there's the weird stuff. I'm not even going to show you all the weird stuff. When this algorithm runs, it's taking little pieces of the network, multiplying them together, and summing out variables. And what you're left with is often some weird hybrid of a joint probability and a conditional probability. Let's take a look at what you can get, and build up some intuition. But just know that if you do this on a big network, the intermediate stages can be hard to think about. All right. So, the factor zoo. Let's see what's in our factor zoo. There's the relatively innocuous stuff. You know what a joint distribution is. What is p of X and Y? Well, it looks like this. It contains a probability for each value of x and y. When you sum it up, it sums to 1, because it is a joint distribution over those two variables. And it's just an enumeration of the whole cross product of their domains. OK. So, in this case, what's the probability of hot and sun? It's 0.4. Great. We know about joint distributions. Often, we are selecting rows of a joint distribution. For example, somebody tells you, oh, hey, it's sunny. And so I might say, all right, it's sunny, let's knock out all the rows that do not correspond to sunny. Once I've knocked out rows, semantically I still have the same kind of thing. I still have entries of a joint distribution. They didn't become a conditional probability or anything, but the dimensionality got reduced, because on that one axis-- in this case, the axis of sun-- it's no longer all values, it's just the one. OK. This is called a selected joint. You can think of it as a slice of a joint distribution. These are still probabilities of x, y, but now x is fixed and y still ranges. This doesn't sum to 1. So if I look over here and I sum these things together, it's going to sum to the probability of sun. OK.
So it's going to sum to the probability of whatever you selected. So there's a probability of cold, comma, W: you knock out everything that doesn't correspond to cold, and you're left with a slice of the joint distribution. OK. This is actually an important point: stop and think about what the numbers mean. These are still joint probabilities over T and W. 0.2 is still the joint probability of cold and sun. But as a data structure, it went from being a two-dimensional array to a one-dimensional array, because I picked a value for one of those dimensions. How do I know what this is as an array, as a data structure? Capital letters indicate dimensions of the array. So T, comma, W, in caps, is a two-dimensional object, like what's shown here. Cold, comma, W-- well, there's only one capital random variable whose value has not been fixed, so that's now a one-dimensional object. And if I go to cold, comma, sun, it's still a joint probability, but now it's a zero-dimensional object, a scalar. OK. That's important. We can reduce the size of the data structure without changing the semantics of what's in it. OK. You want to know what changes the semantics of what's in it, from a conditional probability to a joint probability? That's the laws of probability. You multiply things together and you cross things off according to the product rule or the chain rule. All right. The number of capitals is the dimensionality of the table. All right. Here is a relatively harmless thing from our zoo that you've seen before. This is a conditional probability. This is the conditional probability distribution over Y. See, Y is a capital, so as a data structure, this is a vector over all the values of Y. This is the probability of Y taking on its various values, given the evidence x. Each entry in this is a conditional probability of Y given x. They all have the same x, but they have different values of Y. If I sum this up, what am I going to get? Well, it's a distribution over Y, so if I sum up all the values, it's going to sum to 1. It is a probability distribution over Y, for the condition that x takes on some particular value. All right. That's great. We know about conditional distributions. They look something like this. This is the distribution over W given that T is cold. And so there's a value for each element in the domain of W here, and these things sum to 1. All right. You can also get a family of conditionals. This isn't too weird, but it's a little weird. This is distributions over Y, one for every value of x. It's like you've taken p of Y given this x, and p of Y given that x, and glued them together. It's a two-dimensional data structure. It's got a value for every value of x and every value of Y. And everything in there is p of y given x, for some x and some y. If you add them up, each little distribution over Y adds up to 1, and there are as many of them as there are values of x, so this whole thing together now sums to more than 1. That's weird. OK. So let's look at one through the glass. This is getting a little weird. Here's what it looks like when you see a probability of, say, W given T. You look at it, it looks two-dimensional, just like W comma T. But if you peek a little closer, you'll see that for each fixed value of T, like these ones, we get a distribution over W. Here's one distribution over W; here's another distribution over W. They're just bolted together. It's a two-dimensional object, with dimensions for W and T. All right. Now we get to the weird stuff. OK. You'll end up with things like specified families.
What does it mean when I hand you something and I say, oh, what's in here is the probability of a fixed value of lowercase y-- like the probability of rain-- given capital X? That means there's going to be an entry for every value of capital X. So this is a whole bunch of probabilities of rain. Here's an example: there's a probability of rain when it's hot, and a probability of rain when it's cold. What happens when you sum them up? It's not a distribution over rain and sun; it's a bunch of probabilities of rain. And so when you add them up, you get-- no one knows. They could be all zero, they could be all 1; it just depends on what those conditional probabilities are. This is no longer a distribution, and it's no longer a family of distributions. It is a one-dimensional array, because there's an entry for every value of X, but each of those entries is the conditional probability of some particular value of Y, which is usually an evidence value. OK. So that's weird. You'll see those, too. OK. How do we get these things? We get these things because somebody hands us little pieces of a Bayes' Net. We're going to be selecting the value of the evidence, wherever it sits relative to the conditioning bar-- it could be on the left or the right. And then we're going to start multiplying things together. So we're going to get all kinds of stuff. Any questions? All right. So, in general, we're going to be writing things that look like this. We're going to be writing conditional probabilities of some number of variables, possibly zero-- that's not going to happen today, so don't let it freak you out-- given some number of other variables, possibly zero. That happens all the time; those are marginal probabilities. This is what's called a factor. Data-structure-wise, it's always going to be a multidimensional array, where every capital letter indicates a dimension of the array, and we're going to have the whole cross product of all of those entries. The values in this factor are always going to be specific probabilities of the stuff to the left of the conditioning bar, given the stuff to the right. Anything that's assigned-- meaning we write it with lowercase-- is a dimension that's missing, or selected, from this array. So we're going to have all kinds of factors, and we're going to generate them algebraically as we go. All right. Let's do a specific example now that we have our factor zoo. This is a tiny example. This example is so small that it would actually be fine to just build out the full joint distribution-- but resist. So the random variables are: R, is it raining or not? Boolean random variable. T, is there traffic or not? Boolean random variable. L, am I late for class or not-- which I nearly was today. Boolean variable. OK. What's living in my Bayes' Net? Well, I have p of R, because it doesn't have any parents. And it says: 10% of the time, it rains. OK. What's under T? This encodes the probability of traffic given rain, and given no rain. So, in fact, this is a family of conditional distributions. I eyeball it, and I see it adds up to 2, because there are, in fact, two distributions over T-- one for plus r, one for minus r. You can see it's a two-dimensional data structure. They're all conditional probabilities of t given r. Same thing under L: there's a bunch of conditional probabilities given the parents. OK. Here's the distribution over L given plus t, and given minus t.
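A minimal factor data structure matching the "capitals are dimensions" convention: an ordered tuple of variable names plus a table from value tuples to numbers. This is a sketch, not the course project's classes; the CPT numbers other than P(+r) = 0.1 are the classic values for this example and are an assumption:

    class Factor:
        def __init__(self, variables, table):
            self.variables = tuple(variables)  # one name per dimension
            self.table = dict(table)           # value-tuple -> number

        def select(self, var, value):
            """Fix var to value: keep matching rows, drop that dimension."""
            i = self.variables.index(var)
            new_vars = self.variables[:i] + self.variables[i + 1:]
            new_table = {vals[:i] + vals[i + 1:]: p
                         for vals, p in self.table.items() if vals[i] == value}
            return Factor(new_vars, new_table)

    # The family of conditionals P(T | R): two distributions bolted together.
    p_t_given_r = Factor(("R", "T"), {
        ("+r", "+t"): 0.8, ("+r", "-t"): 0.2,
        ("-r", "+t"): 0.1, ("-r", "-t"): 0.9,
    })
    # p_t_given_r.select("R", "+r").table == {("+t",): 0.8, ("-t",): 0.2}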
Let's say somebody comes to me and says: I'm intensely interested in the probability distribution over L. That is, forget this raining stuff-- how often are you late for class? And I say, well, that's annoying, because if you had asked me how often it rains, I'd have that answer at my fingertips. This answer, p of L, I need to compute from what I know. It's not just sitting in there. I can tell you the probability I'm late when there's traffic, but in order to tell you the overall probability, I sort of need to know how often there's traffic. I don't have that either, so I need to combine it with rain. And that's what the Bayes' Net mathematics lets you do. So I could look at this and say: well, because this is just a distribution over these variables, I can always write this. The probability distribution over L is just the sum, over all the values of r and t, of the probability distribution over R, T, and L. OK. I just introduced some variables, but I'm collapsing them out. So that's the law of marginalization at work. Now, I don't actually know that, either; that's an entry of the joint. But because it's a Bayes' Net, I can break that down into a product of things I do know. I do know p of r, because it's sitting there under R. I do know p of t given r; it's sitting there under T. And I do know p of l given t. In symbols: P(L) is the sum over r and t of P(r) P(t | r) P(L | t). OK. All right. So, inference by enumeration, which you already know-- we're going to see it again through the lens of factors, which will let us get to variable elimination. All right. We're going to track these factors from our zoo. The initial factors, the givens, are going to be conditional probability tables. OK. You get one per node. And if I have some node sitting here in my network that's called A, and it's got a parent called B and another parent called C, my initial factors are going to look like P of A given B and C. OK. So the network structure gives me my initial factors. And so, for the computation of p of L in that little lateness network, these are my initial factors. You see, that's just the Bayes' Net. That is just the Bayes' Net. OK. All right. Step one. Just like our original inference by enumeration, we have to select values that match the evidence. So let's say we're going to do a query, and I'm going to ask something like: what's the probability that it's raining, given that I am late? Well, if we know L is plus l, then the initial factors are going to be smaller. p of R doesn't change, because it's valid for both plus l and minus l. But all the factors that actually mention L get pared down to just the part that is consistent. Now, conceptually, I had this whole giant joint distribution, and conceptually, I just knocked out half the rows-- all of the minus l rows. But in the Bayes' Net, I knocked them out locally. The consequence is that if I inflated the whole thing, half of it would be gone now. OK. All right. So, inference by enumeration: initial factors are local conditional probability tables, one per node. Any known values are selected; that's your evidence. Join all your factors, whatever that means. Eliminate all your hidden variables; we know what that means, it means sum them out. And then normalize what's left over. So, operation one. What does it mean to join factors? I've got all these little pieces. I've got factors, and I need to merge them together. But I don't just want to multiply everything together.
That would give me the whole Bayes' Net. So we need an operation for joining a selected set of factors. It's sort of just like a database join. OK. So in a database join, you've got some table that talks about A and B, and then you've got some other table that talks about A and C, and you join them together. And what do you have now? You've got this big table that talks about A and B and C, except now it's big. I guess it depends what join you do. OK. So this is like a database join. When we join, we talk about joining factors-- in particular, about joining on a variable. So maybe we'll join on a variable x. We get all the factors that mention that variable. If you leave one out, weird things will happen. Don't do it. So if somebody says, join the factors on x, I summon unto me all of the factors that mention x. Some will mention it on the condition side of the bar, and exactly one will mention it on the left-hand side of the bar-- whatever was living under x to begin with. So we summon all the factors that mention our magic variable, and we build a new factor over the union of the variables involved. So let's do an example of that. Let's say I would like to take my little R, T, and L Bayes' Net-- over raining, traffic, and lateness-- and I want to join on the variable R. Step one, I gather all of the factors that mention R. What mentions R? p of R. Great. And p of T given R. And if R had other children, they would mention R too, and we'd summon them as well. But at the moment we just get these two. What does it mean to join? It means we create a new array that's going to represent the product of these things, whose dimensionality is the union of all these variables. In this case, the union isn't actually any bigger, and we'll end up with R and T. And because we multiplied p of T given R with p of R, I know from the product rule that what's going to live there is p of R comma T. I don't have to know that to join factors-- this is a mechanical operation-- but because I'm tracking what these things mean, and I know the laws of probability, I can infer that what's actually going to live here is p of R comma T. I can do the algorithm anyway without giving it a name. So what do I do? For each entry of this new thing-- which is still a two-dimensional array; it looks just like this one, only the numbers are going to be different-- I do a pointwise product. So, for example, in order to get the entry for minus r plus t, I select everything that's relevant-- minus r here, and minus r plus t here-- and I multiply them together. And I do that for every entry. It's a pointwise product. All right. That's what joining on a variable means. We grab all of the factors that mention the variable, I create a possibly higher-dimensional array to hold all of those variables, one per axis, and then I do a pointwise product of everything relevant. Once I've done that, I sort of don't have a Bayes' Net with an R pointing to a T anymore. I've merged them together into one mega-variable called R comma T. We've never seen that in our networks, and we actually won't, but in this lecture I'll draw it that way to evoke the fact that R and T have been conflated by this joining process.
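Here is the pointwise product as code, using the same (variables, table) representation as the Factor sketch above; an illustration under those assumptions, not the project's API:

    import itertools

    def join(f1_vars, f1, f2_vars, f2):
        """Pointwise product of two factors over the union of their variables."""
        out_vars = list(f1_vars) + [v for v in f2_vars if v not in f1_vars]
        # Collect each variable's domain as observed in the input tables.
        domains = {}
        for vars_, table in ((f1_vars, f1), (f2_vars, f2)):
            for vals in table:
                for v, x in zip(vars_, vals):
                    domains.setdefault(v, set()).add(x)
        out = {}
        for combo in itertools.product(*(sorted(domains[v]) for v in out_vars)):
            assign = dict(zip(out_vars, combo))
            k1 = tuple(assign[v] for v in f1_vars)
            k2 = tuple(assign[v] for v in f2_vars)
            if k1 in f1 and k2 in f2:
                out[combo] = f1[k1] * f2[k2]   # the pointwise product
        return out_vars, out

    p_r = {("+r",): 0.1, ("-r",): 0.9}
    p_t_given_r = {("+r", "+t"): 0.8, ("+r", "-t"): 0.2,
                   ("-r", "+t"): 0.1, ("-r", "-t"): 0.9}
    vars_rt, p_rt = join(["R"], p_r, ["R", "T"], p_t_given_r)
    # p_rt[("+r", "+t")] == 0.1 * 0.8 == 0.08, i.e. P(+r, +t)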
All right. Let's do an example of multiple joins, where we take all these little pieces, stick them together, and get something higher-dimensional. So here is that network: p of R; raining causes traffic; traffic causes lateness. Here are the local probabilities involved. Let's go crazy joining things up. So I can join on R. What's going to happen? I'm going to gather everything that mentions R-- that's p of R and p of T given R-- and I'm going to get something that multiplies them together. So it's going to be something with T and R, because that's the union of the variables. And we know it's going to be p of R comma T, because if you multiply a conditional by a marginal, the product rule says you get a joint probability. So we already did that. We're still going to have this other factor; it is unchanged. It is not harmed in the joining of R. But there's this new factor, R comma T, that we got on the last slide. I've taken one step away from the Bayes' Net and toward the joint distribution by collapsing together R and T. Let's go crazy. Let's join on T now. All right. If I join on T, I gather everything that mentions T-- and, look, everything mentions T. When I do this, I'm going to get a new factor with dimensions for R and T and L. And I know, by the chain rule and what I'm multiplying together, that what I'm going to end up with is the probability of R and T and L. All right. What's it going to look like? It's going to be a three-dimensional array, with an entry for every combination of a value of R, a value of T, and a value of L. And for each element here-- like, if I want plus r plus t minus l-- I grab plus r plus t from here, and plus t minus l from here, and I multiply them together. So, again, it's a pointwise product, and now we've made something bigger. In general, if you go crazy joining stuff together, these tables get higher-dimensional, and as you add dimensions, things grow exponentially fast. OK. Which, hopefully I've said enough times, is not what we want. All right. So that's multiple joins. If you just go crazy and start joining on variables, pressing the join button like a maniac, you're going to end up with the full joint distribution, and it's going to be big. So how do we control this growth? We have an operation that basically doubles the size of things by joining them, or perhaps much worse. We also have an operation that halves the size of things by eliminating a variable. It takes a higher-dimensional object and deletes one of its axes by collapsing everything along that axis and summing it together. This is called marginalization. It's just like marginalizing from before: it takes a factor and sums out a variable, so a factor becomes a smaller factor. It's a projection operation. So, for example, if I have my factor over R and T-- which is also just a joint distribution over R and T-- and I want to get rid of a variable, I say: you know what, I don't care about R. I just want to know the probability of traffic. And, unfortunately, my factor mentions R. I'd like it not to mention R, but I can't just scribble it out, because there are going to be a bunch of t entries that collapse. If I scribbled it out, I would say, huh, I guess I've got to put together those two plus t's, and I guess I've got to put together the two minus t's that have now collapsed. And if you do that, you'll end up with a distribution over T, and it'll look like that. So this two-dimensional thing became a one-dimensional thing, and where there was collapsing, I added things together. That's marginalization.
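And marginalization as code, a sketch with the same representation and caveats as the join above: delete the axis, and add together the entries that collide.

    def sum_out(var, vars_, table):
        """Eliminate var from a factor by summing the colliding entries."""
        i = list(vars_).index(var)
        out_vars = [v for v in vars_ if v != var]
        out = {}
        for vals, p in table.items():
            key = vals[:i] + vals[i + 1:]
            out[key] = out.get(key, 0.0) + p
        return out_vars, out

    # P(R, T) -> P(T): the two +t entries collapse together, same for -t.
    p_rt = {("+r", "+t"): 0.08, ("+r", "-t"): 0.02,
            ("-r", "+t"): 0.09, ("-r", "-t"): 0.81}
    vars_t, p_t = sum_out("R", ["R", "T"], p_rt)
    # p_t == {("+t",): 0.17, ("-t",): 0.83}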
And this is the operation that reduces the size of the objects we're constructing. So, in general, we're going to join, and then, as soon as possible, we're going to marginalize out everything we don't need that's legal to marginalize, so that we keep things small and controlled. OK. So here's an example of multiple eliminations. Here's a joint distribution over R, T, and L. It's a three-dimensional object, a joint distribution over three variables. Where did it come from? We reconstituted it from a Bayes' Net-- but for present purposes, it doesn't matter where it came from. OK. What can I do with this thing? Well, I can answer questions like: hey, what's plus r plus t plus l? OK. It's 0.024. Great. I can also start projecting things out. So I could sum out R. That means I get rid of this axis, and all the plus t plus l's that collapse get added together. And now I have something that's just two-dimensional, over T and L. Each one of these entries is backed by a whole bunch of entries that also say plus t plus l, but used to vary over R and have now been collapsed. All right. That was summing out R. OK. We're going to pare it down further. It's going to be brutal. We're going to sum out T. What's left? We're just going to have p of L left. That's just a plus l and a minus l. And there you have it: my probability of lateness is 13.4% in this model. Hopefully, real statistics do not bear that out. OK. So we can join a bunch of things, and in the limit we'll get the whole joint distribution. We can sum things out of the joint distribution, and in the limit we'll end up with just one variable left. And it turns out that that's all you need to do inference by enumeration. Somebody gives you your conditional probabilities and your Bayes' Net; you select your evidence; you join, join, join, join, join, until you can join no more; then you eliminate everything you don't care about, and then you normalize. That's it. And all variable elimination is going to do is interleave those operations. OK. So, variable elimination, and then we'll take our break for today. Variable elimination is simply marginalizing early. You say: I have a really good idea. I'm going to marginalize everything at the beginning-- just project it all out. The problem is, you can't actually marginalize things out until there is only one factor that mentions them. So if you sit down with your Bayes' Net and you see, oh, I've got a p of A, and then I've got a p of B given A, and you're like, I'm going to marginalize out A-- OK, don't get too excited. You can't marginalize that out yet, because you haven't yet done the joining that's necessary to connect everything that A mediates. Here, A influences B; you would have to join these together before you could marginalize it. If you also had p of C given B and A, let's say, you would have to join all of these on A before you could marginalize on A. And so the rule is: you can't marginalize something out unless you have only one factor that mentions it-- one factor that represents the distribution over that variable, perhaps jointly with other things, and under some conditions. So the basic recipe is going to be simple. You can't marginalize a variable until you've joined all the factors that have it. So you're going to pick a variable, you're going to join on that variable, and you're going to immediately marginalize on it. You say, what about my query variable?
I don't want to marginalize over that. OK. Don't pick that one. Leave that one for last. And what about my evidence variables? I don't want to marginalize over them either. Well, the first thing you do is select them, so they're basically gone in a data-structure sense. So let's take a look in the traffic domain and see what this is going to look like. p of L. OK. We're not going to actually go through p of L again-- we've already done it a couple of times-- but just take another look at what variable elimination is. Eyeball the inference by enumeration. It says: oh, you want p of L? Well, too bad. We don't have that. Instead we have p of L comma T comma R, which decomposes into this product, because it's a Bayes' Net. So we're going to write that product, and then we're going to do lots of summing. The summing is all out here, and this represents joining those factors on r, joining the factors that remain on t, then eliminating r, then eliminating t. All variable elimination is going to do is move the sums in: P(L) = sum over t of P(L | t) times the sum over r of P(r) P(t | r). So you're going to join on r, but then immediately eliminate it. And what's going to be sitting there after you do that is a little p of T factor. And now you can join on t and eliminate that. So you didn't grow it completely before you projected it down. OK. So you could say: why did you teach us about factors? Why was there a factor zoo? Why are there database joins? Why don't we just write algebraic expressions like this, and move the summation around according to the laws of algebra and probability? You can do that if you want. In fact, some people teach it that way. Enough said. OK. So, let's do marginalizing early. OK. At this point, don't even track the individual numbers, though feel free to go back and actually go through these slides on your own. Instead of join, join, join on R, T, L, and then marginalize, marginalize, marginalize, let's marginalize aggressively. OK. So we're going to join on R. Boom. The R and T nodes become one big factor, R comma T. OK. So these two factors get consumed to produce their join. But now we can sum out R, because we don't care about it, and it only appears in one factor. See, back here it was too early to marginalize it. OK. So we sum out R. Boom. R comma T becomes just T, and this factor just got half as big. OK. All right. Now we can join on T. We get something two-dimensional over T and L, but then we sum out T, and it's back down to L. Is this better? Yeah, it is actually better, because when we did join, join, we got the big three-dimensional thing, and here it never got that out of hand. We never got anything bigger than two-dimensional in this order of variable elimination.
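Here is that interleaved run end to end, specialized to this chain so it stays tiny. The CPT numbers besides P(+r) = 0.1 are the classic values for this example and are an assumption here; they do reproduce the 0.024 and 13.4% figures quoted in the lecture:

    p_r = {"+r": 0.1, "-r": 0.9}
    p_t_given_r = {("+r", "+t"): 0.8, ("+r", "-t"): 0.2,
                   ("-r", "+t"): 0.1, ("-r", "-t"): 0.9}
    p_l_given_t = {("+t", "+l"): 0.3, ("+t", "-l"): 0.7,
                   ("-t", "+l"): 0.1, ("-t", "-l"): 0.9}

    # Join on R, then immediately sum R out: the intermediate is just P(T).
    p_t = {}
    for (r, t), p in p_t_given_r.items():
        p_t[t] = p_t.get(t, 0.0) + p_r[r] * p        # P(+t) = 0.17
    # Join on T, then sum T out: we're left with P(L).
    p_l = {}
    for (t, l), p in p_l_given_t.items():
        p_l[l] = p_l.get(l, 0.0) + p_t[t] * p

    print(p_l)   # approximately {'+l': 0.134, '-l': 0.866}

Nothing bigger than a two-variable table is ever built, which is the whole point.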
And then after you join them, that variable will appear once in one factor. And it turns out it'll appear on the left side of the conditioning bar, and so you can marginalize it out. So in order to actually have the full algorithm, we need to talk about what happens with evidence, what happens with your query, what are the inputs, what are the sequences, what are the choice points. So we're going to do that now. So your inputs are a bunch of local factors, the conditional probabilities that come with your Bayes' Net, plus the evidence: you know which variables have specific values, because they've been observed. Like maybe you've observed that the patient has this temperature. So now that random variable is evidence. So what do you do? You start with your factors. If you have no evidence in this running example of p of r, p of t given r, and p of l given t, these are the conditional probabilities that come with your Bayes' Net, no observations. But if I'd like to compute what's the probability that I'm late given that it's raining, my initial factors change. p of r changes, because the minus r part is gone. You say, I only saved one line. But, remember, you've conceptually just knocked out half your joint distribution. So that's great. You've just knocked it out locally. And then t given r becomes t given plus r. So it's still probabilities of t given r, except now instead of being a two-dimensional object, it's a one-dimensional object. And p of l given t doesn't change, because it doesn't mention r, which is our evidence. So you start with the same factors as before, but everywhere your evidence appears, you delete all of the rows that correspond to values other than the observed evidence. All right. And now what you do is you eliminate all the variables; you're not going to eliminate the evidence we already took into account. We're happy. The evidence is happy. We don't need to eliminate it. We don't need to join it. OK. And we're not going to eliminate or join our query just yet, because that's what we want to have left in the end. So what we're going to do is we're going to go to every other variable that isn't our evidence or our query. Here our evidence is r, and here our query is l. So we're going to go to all the others, which is just t, and we're going to eliminate them. What does eliminate mean? Join and marginalize. OK. So if we do that, and we'll do an example in a second, here's what you'll end up with. If you do probability of late given plus r, meaning you select all of the plus r rows, and then you find everything like t that isn't mentioned in your evidence or query, and you join and marginalize, you'll end up with one factor at the end. So there'll be a join at the end, and you'll end up with one factor that looks like this. It'll be a selected joint of the evidence and the query. So the evidence has one value, and the query has all its values. So it has plus l and minus l, but just plus r, because that's your evidence. As a data structure, that's a one-dimensional object. As a semantic object, those are all joint probabilities of r and l. So if I normalize that, meaning divide it by its sum, I'll end up with a probability distribution over l. There's an entry for every value of l and it sums to 1, so it's a probability distribution. Moreover, it'll be the conditional probability of the query variable given the evidence. OK. So let's do some examples. That's it.
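Putting that recipe into code, here is a hedged sketch building on the join and marginalize helpers above; the function names and factor representation are assumptions for illustration, not the course's actual API.

```python
def select_evidence(f, var, observed):
    """Delete every row whose value for var disagrees with the observation."""
    return {row: p for row, p in f.items()
            if dict(row).get(var, observed) == observed}

def variable_elimination(factors, query, evidence):
    """Select evidence, eliminate each hidden variable by joining then
    immediately marginalizing, then join what's left and normalize."""
    for var, val in evidence.items():
        factors = [select_evidence(f, var, val) for f in factors]
    all_vars = {v for f in factors for row in f for v, _ in row}
    for h in all_vars - {query} - set(evidence):   # the order chosen here matters for cost
        mentioning, rest = [], []
        for f in factors:
            (mentioning if any(v == h for row in f for v, _ in row) else rest).append(f)
        joined = mentioning[0]
        for f in mentioning[1:]:
            joined = join(joined, f)
        factors = rest + [marginalize(joined, h)]
    final = factors[0]
    for f in factors[1:]:
        final = join(final, f)
    total = sum(final.values())                    # normalize the selected joint
    return {row: p / total for row, p in final.items()}
```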
We're going to say what it is in general again, and then we're going to do some examples. Your query has a query variable. You can actually have multiple query variables, but for now we'll pretend there's just one. There's evidence. OK. You can have anywhere from 0 to everything else in the network as evidence. Your initial state is the local conditional probabilities instantiated by the evidence. While there are variables which are neither evidence variables nor the query, pick a hidden variable h, join everything mentioning h, and then sum out h. So you join on h, you sum on h. Once that's done and you can find no more hidden variables, what's left in your factors? All the hidden variables are gone. What's going to be left is q, which will appear as a vector, and a bunch of evidence values, which will not appear as axes. Countdown to goodness. I think we can wait an hour for the goodness. OK. All right. So the last thing you're going to do is you're going to join all your remaining factors and normalize. So you select your evidence. You hunt down the hidden variables in an order of your choosing, and then at the end you join and normalize. Here's an example. All right. There's going to be a lot of little conditional probabilities with conditioning bars and some capitals and some lower case. Just try to track the flow of how these objects are changing algebraically. So at the beginning, here's my alarm network. Forget variable elimination and my query for a moment; there is my network. That means we're going to have a model here over a, b, e, j, and m. OK. Great. And that means in principle, there is a joint probability over a, b, e, j, and m. But we're going to try our very best to not actually assemble that five-dimensional object. Somebody comes along and says, hey, I want the probability of burglary given that John and Mary are both calling, let's say. So let's pretend those are plus j and plus m. OK. They're just written as j and m. Pretend they're plus. All right. So I say, all right, let's do it. Well, what I'm really going to do is calculate this joint probability instead. It's the same thing but unnormalized. And then I'll normalize as a last step. So let's calculate it. Well, rather than actually going through and introducing variables and summing and trying to do this arithmetically, we're going to do variable elimination. So here we are with our five factors. Each one corresponds to a node. So, for example, this j node has probability of j given a, and it's been instantiated, selected to match the evidence. So I start with five. p of b, p of e-- those are the two top nodes-- and p of a given b comma e, that's the big node in the middle. It's already a three-dimensional object, so that one might be trouble. And then I've got j given a, but it's already been selected to the value plus j that I have. And m given a has also been selected. These are-- remember, we talked about them-- selected families. They are not probability distributions over a. They're probabilities of plus j, one for every value of a. What do they sum to? Who knows. OK. All right. So here are my five factors, which are my original conditional probabilities from the Bayes' Net instantiated by the evidence. So I pick variables in some order. Let's pick a. So I choose a. I summon together all the factors that mention a, of which there are three. All right. So I've got to join all these factors together.
What am I going to end up with? Well, whatever I end up with, it's going to involve a and b and e. And, actually, also j and m in terms of what it means, but not as a data structure. They're not axes here. These are going to be three-dimensional objects. So I'm going to have something that's going to be a factor over a, b, and e. And what it's going to correspond to is a bunch of products of p of a given b e values, times p of j given a values, times p of m given a values. What's that? That's going to be the probability of a and j and m given b and e. So if I put all those things together-- you say, how did I know that? Well, that's the only thing they could be. You union up everything that's on the left of the conditioning bars, and then whatever is left is on the right of the conditioning bar. OK. All right. So I'm going to join them up, and then I'm going to sum out a. I joined on a, and now I'm going to sum it out. And I'm going to end up with a table that represents joint probabilities of j and m given b and e. Except j and m have single values, which means how big a data structure is this? Dimension one, dimension two. OK. This is going to be a two-dimensional array, for values of b and for values of e, and for every entry for a b and an e, it'll give you the probability of j and m, given that b and e. Well, here we are. We've got factors again. I dispose of these. They will confuse me, so I throw them out. Besides, I don't need them anymore. I already have their replacement. So now I've got p of b, because it was not harmed in that joining. We've got p of e; it was not harmed in that joining. And these three here have been replaced by this one. OK. Almost done. I want p of b given j and m. There's one more hidden variable, which is e. So I'm going to join and eliminate. So I say, I will pick e. There are two factors that mention e: the original p of e, and this new thing that I just made that has e as one of the conditioning variables. When I join these things together, I'm going to get another two-dimensional object. You'll see e has moved from one side to the other, because I multiplied by p of e. And then no sooner does it move than I eliminate it, and I end up with the probability of plus j plus m given b. So there's going to be a value for plus b, and a value for minus b. Now, I've got two factors left. I've got the original p of b, and I've got this new p of j comma m given b. All right. I'm all out of hidden variables. My evidence variables were taken into account when I instantiated them. All that's left is my query variable. I could join and eliminate one more time, but that would be silly, because, actually, I don't want to eliminate b. So what I'll do is join but not eliminate. So the last thing I do is a big join. I join everything that's left. And I will end up with the probability of my evidence jointly with my query variable, because this is a vector. There's one value of this for plus b, and one value for minus b. And if I normalize that, I will get the conditional probability of b given that evidence, and I'm done. OK. So that's how the variable elimination flows. You start with a bunch of little factors, you join. And when you join, you might grab seven things and build some mega factor, and then you eliminate one thing. And so you have to choose wisely about what you join, because for bad orders, you might build things faster than you're shrinking them. OK. All right.
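To see that flow end to end, here is the alarm example pushed through the sketch above. The CPT numbers are the classic textbook ones and may well differ from the slides; the point is the flow, not the values.

```python
def table(var, rows):
    """Build P(var | parents) from (parent-assignment, P(var=True)) pairs."""
    f = {}
    for parents, p in rows:
        f[frozenset(parents | {(var, True)})] = p
        f[frozenset(parents | {(var, False)})] = 1 - p
    return f

P_B = table('B', [(set(), 0.001)])
P_E = table('E', [(set(), 0.002)])
P_A = table('A', [({('B', True), ('E', True)}, 0.95),
                  ({('B', True), ('E', False)}, 0.94),
                  ({('B', False), ('E', True)}, 0.29),
                  ({('B', False), ('E', False)}, 0.001)])
P_J = table('J', [({('A', True)}, 0.90), ({('A', False)}, 0.05)])
P_M = table('M', [({('A', True)}, 0.70), ({('A', False)}, 0.01)])

posterior = variable_elimination([P_B, P_E, P_A, P_J, P_M],
                                 query='B', evidence={'J': True, 'M': True})
# With these numbers, the row containing B=True comes out around 0.284:
# a burglary is much more likely than baseline, but far from certain.
```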
Do not process this slide. Don't do it. This slide is simply saying that instead of working with these factors, I could have worked all that out using products, and sums, and laws of distribution, and all of that. OK. Variable elimination is a mechanical way of figuring out how this is going to work in a way that's much easier to reason about than these sort of algebraic arithmetic expressions. OK. But feel free to eyeball this later and see how it's doing the same thing. But don't eyeball it now. I'm going to make it go away. It's gone. Oh, man. OK. All right. So variable elimination. Somebody hands you a network, and in that network, there is a whole bunch of variables that are maybe observed. So observe, observe, observe, observe, maybe observe all the Y's. This is pretty common, right? You've got some underlying cause you'd like to infer something about, and you see a bunch of distal facts, and you want to do a diagnostic inference here. And so I'd like to know what's the probability of something given the rest. Maybe I want to know what's the probability of xn given all those Y's. OK. Now, first of all, let's just figure out what's going on. These guys, these are the evidence. OK. So those are the evidence variables; the Y's are the e's. The query variable is x sub n. Everything else-- z, x1, x7, all that stuff-- that's all hidden variables. So the evidence, I'm going to instantiate that. Fine. We're not going to eliminate the Y's. The last thing left standing is going to be xn, and we'll join and normalize that in the end. And in between is all this other stuff; put a green box around it. All the stuff in the green box is hidden variables. These are the h's, and these need to get eliminated. So we have an algorithm. We're going to point to one, we're going to summon its factors, and then we're going to multiply them together, and then marginalize out over that axis. So let's actually try something. Let's try two possible worlds here. One is where we eliminate X1. Let's just do it real quick. So we're going to summon all the factors that involve X1. Well, there's this p of y1. That's evidence, so it's going to be a lower case y1 given capital X1. OK. That's cool. What else mentions X1? What other factors in this network have an X1 in them? There's only one more: the one living under X1. So it's p of X1 given z, its parent. OK. So we're going to join them. So let's do it. We're going to do a join on these things. We're going to end up with some new data structure. It's going to contain the probability of little y1 and big X1, given z. It's going to be, as a data structure, two-dimensional. There's a dimension for x and a dimension for z. And every entry is a probability of y1 and X1 given z. So that's the join. And now we're going to marginalize out over X1, and we're going to end up with probability of y1 given z. That's OK. No harm done. And then I'll do it for X2 and for X3. That'll be OK, right? Let's think about another path we could take. So that was where we pick X1 as our first variable to eliminate. So now let's eliminate z instead. OK. Let's see what happens. Step one, we're going to summon every factor that mentions z. Let's figure out what factors mention z. Well, there's p of z; that mentions z. That lives in here. All right. What other factors mention z in this Bayes' Net?
So think about which nodes either are z (we got that one) or have a parent that is z. Well, p of X1 given z, that's one of them. What else? p of X2 given z. p of X3 given z. p of xn minus 1 given z. All right. Do you have a bad feeling? I have a bad feeling. So we're going to take all these things and we're going to join them together. What do we get? We're going to get a monster array, right? This is going to be some factor over X1 all the way through xn minus 1. Actually, it's going to be all of those products together, times p of z. Then we're going to eliminate z. Let's do it this way. OK. So we're going to form this joint probability, and then we're going to eliminate z. All right. What are we left with when we eliminate z? We're left with a giant factor over all of its children. That's kind of bad, right? We just built something exponential. Sorry. OK. That was a bad choice. We should not eliminate z. Why? Intuitively, what's going on here? In this network, you've got z, which is a cause. And there's a whole bunch of things which are independent given z. Well, what happens if we eliminate z? They're not independent anymore. Suddenly, they all kind of rise and fall together. But it's not mediated by z anymore, because we just nuked z. So what's left is a big joint factor over everything that's left, which has now been coupled. So we have to be very careful when we do variable elimination that we don't pick variables early on that cause everything left to couple. Because that's going to end up with exponentially large things right away. So there are good variable orderings and there are bad variable orderings. Is there always a good ordering? No. This is NP-hard. How hard is it to find the best ordering? NP-hard. Right. So, you know, good luck. Good luck finding a good ordering. OK. So this can be exponentially better if you pick a good ordering. You can't always find a good ordering, but you can try. OK. So the ordering affects efficiency. OK. We have an algorithm, variable elimination. You join things and eliminate immediately. Some orders are better than others. Finding the best ordering can be hard. The computational and space complexity of this algorithm is going to be determined by how big these things that you build are. If you just build up the whole joint probability, it's going to be bad news. So it's going to be determined by how quickly you let those factors grow. The elimination ordering can vary greatly in how big those factors end up being and, therefore, how expensive this is going to be. So in the previous slide, there was an exponential difference. All right. So we already knew this. Is there always an ordering that results in only small factors? No. Can you find the best ordering? No. Any questions on that before I show you why? OK. In practice, often there are reasonably good orderings that are reasonably easy to find, and everything ends up being quite efficient. But as always in AI, the worst case is very, very bad news. To see why the worst case here is so bad, we can actually go directly to thinking about a CSP and, in particular, a satisfiability problem. So how could I use a Bayes' Net to solve a satisfiability problem? If a Bayes' Net can knock out the solution to any satisfiability problem efficiently, then we just found a way to solve SAT efficiently.
We don't think that's possible, and so, therefore, we know that we've got sort of bad news with these Bayes' Net problems in general. Because even though they might not encode satisfiability, they might, or something else equally bad. So here's how you could, given a satisfiability problem, build a Bayes' Net. Somebody drops a satisfiability problem on you. What do those look like? There are a bunch of variables, and then you have a bunch of disjunctions, ors, of those variables or their negations. And then those little clauses are all anded together. How could I write that as a Bayes' Net? Well, here's variable X1. It's either true or false. And I could stick that in my network, maybe with a distribution that says 50-50, true or false. And here's X2, 50-50, true or false. And then each of these clauses is going to say, well, what's y1? Well, guess what? This is y1. OK. What is y1? y1 is true if X1 is true, or X2 is true, or not X3 is true. How do I encode that? Well, remember, hidden inside this Bayes' Net node for y1 is the probability of y1 given various assignments. And so you just set it up so that y1 is false if I have a bad configuration of X1, X2, and X3, and y1 is true otherwise. So I can do that for y2 and y3, and so on for each of the clauses. These are the variables; this row is the clauses. What's this structure? This structure is going to be the ands. This is the thing that's going to say all of those Y's have to be true. OK. And you similarly fill in their probabilities so that they enforce the conjunction: they're true only if both of their parents are true. Then you go and you say, hey, what's the probability of z? Well, if there's some way to assign those X's so that everything percolates down to true, then the probability of z won't be 0. And if there isn't, it will be 0. And so by looking at this marginal probability, you can answer whether or not this thing is satisfiable. And so you say, that's a weird sort of construction. Why is there this tree-shaped object? Why don't we just have variables, disjuncts, conjuncts? OK. Let's do it. So let's say instead all those y1 through yn went straight to a z. Well, they'd all have to be parents of the z, like this. And if z has a ton of parents, what's living in there? Well, it's something that says all the parents have to be true, and it has exponentially many entries, which is why you don't want to have a lot of parents. So this is a construction that's designed to stay compact while specifying that conjunction. If you made everything a parent of z, it wouldn't be compact, and then it's not actually that surprising that it takes a long time to solve, because it's a big thing. So, in general, for these reductions, you need to have something small that is hard. This keeps it small. All right. We now know Bayes' Net inference is NP-hard. There's no way around that. But in practice, often things are not that bad. So let me give you some examples of things that aren't that bad that maybe will remind you of things you've seen in CSPs. Well, before we talk about polytrees, one thing that's not that bad is a Bayes' Net that's a chain like this. OK. Why is that not so bad? Well, if I have to do variable elimination on this, it turns out that if I eliminate them from left to right, nothing will really explode in size. OK. Really, the thing that's bad news is when you have lots of things that get coupled together.
Bad news is a parent with a lot of children where you eliminate the parent. Here, in a chain, things don't have that many parents and don't have that many children. In general, a polytree is a directed graph that does not have undirected cycles. You can think about it, basically, as a tree, but there are some other cases, like two parents pointing at the same node. For polytrees, you can always find an efficient ordering. And, in fact, that example we saw with the z's and the y's and the X's-- not the satisfiability one, but the one with z on the top-- that's an example of a polytree. And for polytrees, there can be bad orderings, but there's always a good one. So what does that remind us of from CSPs? Remember, in CSPs, if you try to do your backtracking search, that might be expensive. But if you arranged things just right, then you could be greedy, if you did this sort of message passing algorithm for constraint satisfaction-- for enforcing arc consistency. This is a very similar thing. They're very deeply related, in fact. And there's also notions like cutset conditioning. So just like in a CSP: if you don't have a polytree, but you would if a couple of nodes went away, you can do the same thing here. You instantiate the problematic nodes in all the different ways, which creates a number of subproblems that's exponential in the size of the cutset. But if what's left is a polytree, then you have an efficient algorithm over what's left. So you can see the same idea from before. Take a CSP, find a cutset that leaves a tree-shaped residual, and now everything's efficient. You can do the same thing here. Take a Bayes' Net, find a relatively small set of things that leave a polytree residual. You have to work out what residual means here. And then you do something efficient on what's left. OK. So you can think about the specifics there. All right. What do we have? We talked about representation. What is a Bayes' Net? What joint probability model does it represent? We talked about conditional independence. What can you conclude about which variables do and do not influence each other given other evidence? We talked about various things in probabilistic inference: how to do it exactly in exponential time by enumeration, and how to do it exactly in exponential time, but often faster, with variable elimination. We talked about why this thing is NP-hard. Next time, we're going to talk about sampling, which is a way of doing inference really, really fast and maybe getting the wrong answer, which is another way of attacking these problems. So we'll talk about sampling next time, and then get to learning later in the course. All right. We'll stop there. Thank you. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181030_Particle_Filtering.txt | PROFESSOR: OK. All right. I'm too quiet this time. All right, everybody. Today, we are going to continue talking about hidden Markov models. And in particular, we're going to extend what you saw last time about inference in hidden Markov models to a sampling based technique called particle filtering. And we'll also see some applications of these methods. We'll see even more applications towards the end. The overall structure of this course: remember, the beginning was about choosing actions. The middle has been about reasoning about uncertainty. The last chunk of the course will be about machine learning. And then at the very end, there will be a section on applications where we'll see a bunch of different techniques that we've seen throughout the semester show up. Hidden Markov models will feature prominently, because this basic idea of reasoning about underlying phenomena over time on the basis of noisy observations is a really important one. And that overall framing backs a lot of modern problems in AI, though the specific techniques that we use do vary. So today, we're going to quickly recap hidden Markov models from last time. We're going to talk about particle filtering as an approach to doing inference. Particle filtering partially is something we do-- yes? STUDENT: [INAUDIBLE] PROFESSOR: All right, can people hear me in the back? OK. All right, I think for once, I'm just quieter than expected. All right, how's that? I also just got louder. All right, so if everyone can hear, let's find that train of thought. Particle filtering is partially something we do in the real world because it's efficient. Especially when your event spaces are very large or continuous or infinite, it can be a very powerful technique. But it's also a really useful way of understanding what's happening in the exact case. Because sometimes, it's easier to think about tracking a particular particle or a particular sample instead of tracking the exact space in aggregate. We're going to see a bunch of demos to illustrate what goes on in these exact and approximate techniques. And if there's time, we'll start talking about most likely explanation queries, which are used for a lot of things like speech recognition, where you're interested not only in tracking something over time, but in reconstructing the trajectory it took over time. We'll see some applications today. Robot localization and mapping are going to feature prominently, as well as some ghost busting. And later on in the course, we will see a little bit of information about speech recognition. So to recap, last time, you saw a couple instances of reasoning about how a random variable evolves over time. The simplest instantiation of a random variable evolving over time is a Markov chain, like what's shown here. In a Markov model, you have some variable x that replicates at every time step, and you're interested in watching how its value evolves probabilistically over time. In order to do this, we need to supply you with a couple pieces of information, so that you can ask questions like, what's the probability of this variable on day three? And so we need to give you an initial probability distribution, so some initial marginal probability over x at the first time step. And we also need to give you a transition probability matrix.
These are sometimes referred to as the dynamics of the model, and this represents the probability distribution over this variable x given its value at the time before. So what this tells you is how this variable evolves probabilistically. If this variable is unchanging, this will be a big diagonal matrix that just says, whatever you were before, with probability one you'll be the same thing at the next time step. Usually, something more interesting is happening. So the example you saw last time is over here on the right, and this is a very, very simple Markov model. Here, the random variable x takes on the value rain or sun and represents a simplification of weather. And this thing on the right, if you look at it really quickly you say, that's the strangest Bayes net I've ever seen, because the arrows form cycles, and some of them are even pointing at themselves. It's not a Bayes net. The thing on the left, that's a Bayes net. So even though we talk about hidden Markov models and Markov models as special cases, because we feed them to you in time slices, and then they unroll over time, this here is a Bayes net. Over here, this is not. This is, in fact, more like a finite automaton. The circles on the right represent the states of this Markov model. And in particular, they represent elements in the domain of the random variable x. So here, x can take on the values rain or sun. And the probabilities are represented here by the arrows, which show what state transitions are possible, meaning probability non-zero, and the numbers or weights on those arcs, which represent the probability. So it says, for example, if it's rainy today, 70% chance of rain tomorrow and 30% chance that it will transition to sun. This kind of a network structure is really useful for specifying the structure within the different states and how they can interact. This is useful for things like speech recognition, where not all states can connect up to other states. When you see these, don't confuse them with Bayes nets or try to run variable elimination on them. They are representations of probability tables. All right, so you have a Markov model. And if I give you a Markov model, shown as a Bayes net on the left, and I give you the specific probabilities that live inside it, shown on the right, what can you do? There's not a lot you can do. You can do things like say, if yesterday it was sunny, what's the probability of rain? Or what's the probability of rain five days after it was sunny? These are computations you can do on a simple Markov model like this without any evidence or observations, like we'll see in a hidden Markov model. There are other uses for Markov models that you talked about last time, I think. Like, for example, looking at the stationary distribution, which backs some algorithms like PageRank, where that quantity is important. In a hidden Markov model, which is much, much, much more useful, you have not only the random variable x, which tends to be the variable that you're trying to track or reason about, you also have another class of variables that you see at every time step, which are your noisy observations of something that's connected to x. They may be an observation of x itself. They may be a sensor reading.
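Before the observation machinery, the plain Markov-chain update described above fits in a few lines. A minimal sketch of the rain/sun chain: the rain row matches the 0.7/0.3 arcs just described, while the sun row (0.9/0.1) is an assumed value for illustration.

```python
# Transition model T[today][tomorrow]; each row sums to one.
T = {'rain': {'rain': 0.7, 'sun': 0.3},
     'sun':  {'rain': 0.1, 'sun': 0.9}}

def elapse_time(belief, T):
    """One step of the dynamics: P(X_{t+1} = x2) = sum over x1 of P(x2 | x1) P(X_t = x1)."""
    return {x2: sum(T[x1][x2] * belief[x1] for x1 in belief) for x2 in T}

b = {'rain': 1.0, 'sun': 0.0}    # suppose it is definitely raining today
for _ in range(5):
    b = elapse_time(b, T)         # the probability of rain five days out
```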
What you have now is, in addition to knowing at each time step how the variable x evolves, you also have evidence, and you're armed with a table that shows you, for any value of the underlying state x, which you don't observe, what the probability distribution over evidence values is, which in general you do observe. There are other questions you could ask of this Bayes net shown here as a hidden Markov model, but the one you usually ask is, given evidence at every time step, either what is the current distribution, the current posterior distribution over the variable x given all my evidence to date, or, what sequence of x values was most likely given the evidence I've seen? In this case, p of e given x says something like, when there's rain, you usually see the umbrella. And when there's sun, you usually don't see the umbrella. If you remember, this was a case of something like, I don't know, maybe a security cam robot that doesn't get to see the weather, but sometimes sees you walk in and out with or without your umbrella and has to reconstruct something about its belief over the variable that represents weather. All right, so these are Markov models, which we won't say much more about, and hidden Markov models, which we're going to say a lot about. Let's see an example. OK. All right, so here is our ghost buster grid. What is this showing? This is going to show at all times our belief distribution over the random variable ghost location. What's a belief distribution? It's the conditional probability of that variable given all the evidence to date. And right now, there is no evidence to date. So how can I change this? Well, in a hidden Markov model, you'll notice that there's a very regimented structure, which you'll have to modify to make more flexible in your projects. In a hidden Markov model, every time step, you get some sort of evidence, and then time elapses, and then more evidence comes in, and then time elapses. So if I were going to simulate that here, I could say, I'm going to get some evidence. And what that does is it reshapes my posterior distribution, knowing that the ghost is medium far from that point, probably. Now if I let time pass, this ring that represents my belief distribution, what's going to happen? It's going to get moved around through the dynamics of the model. And you can't see in this visualization what the probability of ghost position is given the previous ghost position. But let's imagine I know that the ghost tends to wander around clockwise in a circle. So you can see that diamond of higher probability has been pushed along that whirlpool there. So in a hidden Markov model then, I will get more evidence. Notice that when I get this evidence, it tends to sharpen my belief distribution. It's easier to localize the ghost after I get information about its position. That makes sense. But then when time elapses, not only does this probability swirl around a little bit, it also flattens. And this makes sense too, because I'm not totally sure where the ghost is. And as time passes, I become even less sure, because its movement isn't deterministic. So in a hidden Markov model, I would get another piece of evidence. Time would elapse. Get evidence. Time elapses. And you can see that I'm slowly accumulating, in this case, some certainty about the position, but I'm never totally sure. OK, now, I'm pretty sure. And if I stopped collecting evidence and I treated this like a Markov model instead of a hidden Markov model, what's going to happen?
Right now, I'm pretty sure where the ghost is. OK. I'm super sure where the ghost is. If I let time pass, my belief distribution gets flatter and flatter, because the possibilities that could unfold drift further away from that most likely position. All right. So that gives you a sense of a little bit of what happens when time passes and what happens when you get evidence. And those are the two building blocks that we're going to use, both in exact inference and in sample based inference with particle filtering, to construct and track our belief distributions over the random variable x. So there's basically two pieces we can break that into, and we want to wrap our heads around the math. Because it's basically these two little building blocks that get used over and over. So the first case is this passage of time. I don't exactly know where the ghost is. I don't exactly in general know what value x1 takes, but I have some distribution. And I'd like to say something about that random variable at the next time step. I'd like to predict forward one layer. Predicting forward involves taking my belief distribution and advancing it through the dynamics of the model. We'll dive into that. That's going to show up over and over. The second base case is I have some distribution over the random variable x. I think the ghost is about here. Then evidence comes in. That evidence will also reshape my belief distribution. And from these two building blocks, you can do any kind of updates of getting a bunch of evidence or letting time pass for a while without getting evidence. You can do it in the exact case, and you can do it in the sampling based case. So let's dive into these two cases. All right, the first one is: I'd like to know the distribution over x based on what? Right? So let's say I want to know the distribution over x at time two. Well, what do I know? In general here, I'm going to know the distribution at the previous time step. So what do I know? I happen to already know P of x1. Why? Well, it could actually be that I've observed x1. That would be extra easy. But maybe I know it because it's given to me on the first time step as a known in the problem, or maybe I've already done inference up to this point. And I know it conditionally on some evidence, which isn't shown here. We'll expand this base case to condition on evidence as well. But let's say I'd like to know p of x2, but what I'm given is p of x1. How am I going to do that? Well, the way I'm going to do that is I'm going to break that into two steps. The first one: I say, well, for any given value of x2, I can rewrite p of x2 as p of x1 comma x2, summed out over x1. Remember, x1 is for the first time step; x2 is for the second time step. So this rewrite here that says p of x2 equals p of x1, comma, x2, summed over x1-- whenever I write something like this, what you should be thinking is, why was this allowed? Otherwise, you start to feel like you can throw in variables and commas and conditioning bars everywhere and sum things out and whatever you write is going to be true. That's not true. Almost anything you can write about probabilities will be false. Why is this one true? You want to think: is that because of something I know about an HMM, or is that just the laws of probability? What's this one? This is just the laws of probability. You can always take any p of a, and say, hey, that's the same as p of a, comma, b if I sum out all the values of b and whatever random variable b belongs to. That's always true.
OK, that's introducing a variable and marginalizing it out at the same time. All right, well, how does that help you? Well, that helps you because p of x1, comma, x2, that is the probability of x1 happening and then x2 happening, is just the probability of x1 happening in the first place, which remember we know something about, and the probability of x2 given x1, which is an element of the transition probability that was given to us when we were handed this hidden Markov model, or in this case, just Markov model. OK. So when can I do this? I look at this and I say, is this because of a property of an HMM, or is this because of a law of probability? In this case, this is just a law of probability. When we do this in general with evidence, we're going to need to know conditional independence properties from the HMM. But right here, I've just written the chain rule in its simplest form, the product rule. All right, so what does this mean? This means that if I know the probability of x1 and I'd like the probability distribution over x2, for each value of x2, I say, well, you had to come from somewhere. You had to come from some x1. I know how likely each x1 is. So I write the probability that you started at that particular x1 and then went from there to the current value of x2 we're considering. So these are all the ways you can end up in x2. You can start at this x1, that x1, that x1, and then transition here. And then you repeat this computation for every x2. OK? So the transition probabilities say, hey, if your ghost was here at x1, here's all the places it could be at x2. But this computation says, hey, what's the likelihood that the ghost will be here at x2? And then you consider, well, you could've started here. You could've started here. You could've started here. You could've started here. And you compute all the ways of getting to that x2. And for each one, you weight it by the probability of being there at x1 times the transition probability. This idea that you think about how likely it is to reach a certain state by considering the ways you could have gotten there, that's going to be the backing of the dynamic programs like the forward algorithm, which you saw quickly last lecture. OK, any questions about that? All right, so that one we wrote basically without using the fact that this was a Markov model or a hidden Markov model. The hidden part won't show up until we have evidence. So we're going to have to use somewhere the conditional independence of a Markov model, which says that the past is conditionally independent of the future given the present. All right, so let's say we know something other than p of x1. Let's say we have a belief about xt. You say, what's b? OK, this is just shorthand. It's actually kind of sloppy shorthand that people use to indicate that this is a belief about variable xt. But with a probability, when I write p, I have to be very, very careful what I condition on. Because p of xt, that's a marginal probability of xt. P of xt given e1 through t, that's a conditional probability, conditioned on the evidence. They are not the same. With belief distributions, when I write b, sometimes I'm sloppy. If you're ever confused on a homework or exam and it's not clear in the context, please ask. But here on the slide, imagine we have a belief over xt, which is equal to some already computed conditional probability over xt given the evidence up to time t. So I've been tracking the ghost. Here is the probability.
All right, then I want to know what my belief should look like after one more time step. Well, what is that? As a probability, I now want to compute the distribution not over xt, which I already know. So this thing here in the box, that's known. That's sitting there in a vector in memory, ready to be accessed. Given the same evidence, e1 through t, I'd now like to compute the probability distribution over x at time t plus one. What does this look like visually? This looks like I have a hidden Markov model here, and now evidence is coming into it. I have a hidden Markov model. Here is x1. This went all the way up to xt. There was evidence at each time step. So there's all this evidence. Here's evidence one all the way up to evidence t. All the evidence is shaded, so I know all those nodes down there. And I currently have the probability over that node. What is going on with the ghost right now at time t? And what I'd like to do is I'd like to project that forward one time step to xt plus one without observing any new evidence. The evidence will come. So time is going to pass. I'd like to compute this new conditional probability. How am I going to do it? I'm going to write down a bunch of conditional probability statements that are licensed either by the laws of probability or by something I know about the hidden Markov model's conditional independence, until I can reduce the thing I'm looking for in blue, which I don't know, to some computation involving known things. So the first thing I can do is I can introduce this variable xt. I'm curious about the value at t plus one. I can throw in the value at xt and sum it out at the same time. This is just introducing a variable. So xt popped in here, and it got summed out. The next thing I can do is decompose this conditional probability. I can say the probability, given my evidence one through t, of generating xt and then xt plus one is just the probability of generating xt, and then, once I've done that, generating xt plus one. This isn't exactly how you usually see the chain rule, but this is also just true from the laws of probability. If you write all these conditional probabilities, everything will cancel out. So far, what have I accomplished? I've done some manipulation. And this value that I already know, which is the distribution at time t given the evidence up to t, has appeared, and now I can plug those in. I don't need to simplify that anymore. But this part over here, the probability distribution of xt plus one given xt and e of one to t, this still needs to be simplified, because this is not provided by my hidden Markov model. However, what my hidden Markov model tells me is that xt plus one is conditionally independent of all the evidence before it given xt. And that lets me simplify that expression down to a transition probability. So this says something pretty intuitive. It says, if you want to know how likely it is, given some fixed evidence, to be at some xt plus one, you just think, what are all the different ways I could have gotten there? Well, I had to be somewhere at the previous time step xt. So I'm trying to figure out how likely is it to get to xt plus one. So I think of all the ways I could have gotten there. I could have been at all these different values for xt. Each one has this probability given the evidence. And then the probability of doing that transition is the transition probability. So I had to start at a certain place. I had to transition to my current xt plus one. And then, I'm done.
I add those up. All right, so to write that compactly, you'll sometimes see something like this. This says, if I have a belief distribution for time t (yes, it's conditioned on the evidence; I just didn't write that out), and I'd like to know the belief at time t plus one, then without any new evidence, I just take that old belief distribution, multiply it by the transition probabilities, and add things up appropriately. This is sometimes called pushing b of xt through the dynamics. Yep? Why do we have this b prime? Oh, yeah. xt plus one. Sloppy PowerPoint. Sorry. xt plus one would be more correct. Thank you. All right, so the basic idea here is you have some belief distribution, and it gets pushed through the transition. If you actually knew where the ghost was, if you knew the value of x at time t, it would be just the transition probability from there. But since you don't know, you get a superposition of those transition probabilities weighted by the likelihood. All right, so what happens as time passes in the absence of any observation? As time passes, uncertainty accumulates. So even if you knew exactly where that ghost was, the exact value of variable xt, if time starts to pass, well, after one time step, all you know is where it would go. And so the sharper your transition probabilities are, the more deterministic they are, the more you're going to know after a time step. But there's a lot of things that could happen. Even after one time step, you're starting to be unsure. And if you push things five, 10, 100 steps into the future, you stop being as certain as you were, because of the simulation. This is a general tendency. Is it possible to be uncertain, and then let time pass and have your certainty increase? It's definitely possible. Suppose I had transition probabilities that said, wherever you are, you go towards the lower right corner. And I said, the ghost is somewhere, I don't know where: uniform probability. Right? It would say 0.2 everywhere or whatever. And then if I advanced time, that probability would slowly push itself into the corner. And after 1,000 time steps, I would be sure that by now surely the ghost had made it to the corner. So there are transition probabilities for which the stationary distribution, which is basically what this is computing, is actually very peaked, even if you don't have a sharp transition, even if you don't have a sharp initial probability distribution. Normally, it goes the other way. Normally, you have some knowledge, and it decays over time. And so the picture here is more like you have your robot, and it starts off with some clear sensor reading. And if some time passes without any more sensor readings, it gets a little confused. And then if enough time passes, it has no idea. We saw that in Ghostbusters as well. OK, here's the other base case, and then we'll translate these into this new particle filtering algorithm. The other base case is I have some probability. So here, we imagine the known is p of x1. I have some distribution over a random variable, like the ghost position. Maybe it's conditioned on a bunch of evidence. We'll get the evidence here in a couple slides. But I currently have a belief distribution and evidence comes in. When time passes, I take my belief distribution and I reshape it through the dynamics, which in a fuzzy way pushes it forward through that transition probability. Here, time isn't passing, but I'm going to get some evidence.
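One quick code note before the evidence case: the passage-of-time update just derived is exactly the elapse_time sketch from earlier. Conditioning on the old evidence changes nothing computationally, since that evidence just rides along on the right of the conditioning bar.

```python
# b currently holds B(X_t) = P(X_t | e_1..t); pushing it through the
# dynamics yields B'(X_{t+1}) = P(X_{t+1} | e_1..t), the same arithmetic
# as in the no-evidence case.
b_next = elapse_time(b, T)
```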
So I'm going to go from p of x, which is a known quantity, and try to compute p of x given some evidence. Well, in a way, this is easier. And in a way, it's more complicated. It's easier in that we're still going to work with x1. All we have to do is figure out how the evidence reshapes our beliefs, and this should be fairly intuitive. Because if I'm not sure where in this room my robot is, and it takes a, I don't know, how-close-am-I-to-the-wall reading, and the reading comes back really close, suddenly I don't think it's in the middle of the room anymore. So evidence should reshape my probabilities, and it should do it in an intuitive way. When I get evidence, the underlying states x that are more compatible with that evidence should come to dominate. How exactly that works is a little more complicated, because there's some renormalization involved. So again, we're going to use the laws of probability to rewrite x1 given e1 in terms of the things we actually know. In this case, we know p of x1. And we know the evidence probability, so we know p of e given x. OK, the HMM gives you that. All right, so let's write that out. OK, that's too much. Probability of some x1 given e1. Well, I can rewrite that as the joint probability of x1 and e1 divided by the total probability of the evidence e1. This step is just the laws of probability; it's the definition of conditional probability. All right, well, how does that help us? This helps us because to compute p of capital X1, that's going to be a vector. I'm going to get a probability for each and every value x1 could take on given e1. And so I'm going to do the same computation for every single x1, which means I can ignore this constant, because this constant is present for each value of x1. And I can do the computation being off by that constant and then renormalize at the end. So what that means is it's actually enough to compute the joint probability of x1 and the evidence, provided I do that for every value of x1 and then renormalize. And the probability of x1 and the evidence is just the probability of x1, which I already know, times the probability of the evidence given x1. This has a fairly intuitive interpretation as well. This says, each value of x, each x1, gets weighted by the probability of the evidence given that underlying state. So if e1 is likely given x1, this doesn't change very much. If e1 is very unlikely, this p of x1 will then be reduced to a very small number. Each p of x1 gets reduced. Some get reduced more than others. And then you renormalize. And the ones that were reduced less in proportion will grow. This is why, for that robot, if I have some distribution that sums to one over robot positions, let's say it's flat over this whole room, and I get a sensor reading saying the wall is nearby, all of the positions for which a nearby wall is likely would have their joint probabilities fairly close to just p of x1. All of the positions in the center would have their joint probabilities very close to zero. And then when I renormalize, all those probabilities that are towards the walls will then go up, because we divide by some number that's less than one. All right, so that's the key bit. Now I'll write it out here, but some of these equations are easier to go through on your own mechanically. So here's how observation works. Imagine you have some current belief distribution. You have p of x given some previous evidence.
So for example, we might have a belief distribution over the value x takes on at time t plus one given all the evidence up to t, but not including t plus one's evidence. This is actually just the same if you're getting multiple readings of evidence at the same time. You can modify the Bayes net for that as well. You get the same update. And so we have this b prime, which is a belief over xt plus one, but it doesn't take into account the current evidence. And we're going to update it by feeding it evidence at the current time step. So we're going to feed it e of t plus one. So evidence comes in. And now, I want to compute what's the probability distribution over xt plus one given my old evidence, but also e of t plus one. So that evidence chain is now longer. So I can write that out. I can shuffle some conditional probabilities here and say that, up to a constant, it's proportional to the probability, given the old evidence, of xt plus one happening and then the new evidence happening. And so if I write that out, I can remove the constant, because it's constant over all values of xt plus one, as long as I renormalize later. And now, I'm computing the probability, given the old evidence, the evidence from time one to t, that, first, I transition to xt plus one and then I observe evidence e of t plus one. OK? What's that? That is the probability, given the same evidence, that, first, I transition to xt plus one, and then, given that I've done that, I observe the new evidence. This here is a known, so I don't have to simplify it any further. I've said, my new belief distribution is actually just my old belief distribution times this quantity here that I still have to simplify, and then renormalized. All right, let's simplify that quantity. This is the probability of the new evidence given xt plus one and the old evidence. Well, in the hidden Markov model, the old evidence doesn't matter. And so I get the simpler statement, which now has this very intuitive form, which says, if you want to include the new evidence at t plus one, you take your belief distribution not including that evidence, and you weight every term in that distribution by the conditional probability of the evidence. Now, it's not going to be a probability distribution anymore. Because where it was a p of xt plus one, now it is a p of xt plus one and your evidence t plus one, given all of your other evidence. But then if you renormalize, that transfers this to the other side of the conditioning bar. And now, you have a belief distribution again. So compactly, you could write this as: the new belief distribution is the old belief distribution weighted by the evidence and then renormalized. And that makes sense, because you believe what you used to believe, except the evidence shifts things. And then your beliefs still have to sum to one. So all that stuff that got deleted because it was incompatible with the evidence, that probability just moves over proportionately to the other events. OK. There's a key piece here. Both of these basic operations, which are the building blocks of all these inference procedures, have a simple intuitive interpretation. Passage of time is you take your vector and you move it through the dynamics. Easy. Observation is tricky.
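Tricky or not, the observation update is as short in code as the time update. A minimal sketch, where E is an assumed emission table for the umbrella example (the 0.9 and 0.2 are illustrative numbers, not the slides'):

```python
# Emission model E[state][observation].
E = {'rain': {'umbrella': 0.9, 'no_umbrella': 0.1},
     'sun':  {'umbrella': 0.2, 'no_umbrella': 0.8}}

def observe(belief, E, evidence):
    """Weight every state by the likelihood of the evidence, then renormalize."""
    weighted = {x: belief[x] * E[x][evidence] for x in belief}
    total = sum(weighted.values())   # unnormalized mass; dividing by it renormalizes
    return {x: w / total for x, w in weighted.items()}
```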
It makes sense that you should weight all of the different outcomes by the evidence, but then you have to renormalize, and that's the part that's a little less intuitive. And that will also be less intuitive in terms of how it shows up in the algorithm. It's going to show up as a step either in the forward algorithm or in the particle filtering algorithm where it might be less obvious why you're doing a certain thing. And we'll see that in a minute here. So here's an example. I can pull up the app and do this in the app. You'll see these in your projects as well. As we get observations, what happens? Your beliefs get reweighted. And the caricature is, typically, you're sort of a little confused, and then some evidence comes in, and you're a little bit less confused. Let me see if I can make this happen here. So right now, I'm maximally confused. If I get some evidence, I'm less confused. I still don't know where the ghost is, but my belief distribution is concentrated on this diamond now. Now if I get more evidence, what will typically happen is my posterior distribution will sharpen, and it did. Now my belief is concentrated on these six squares. Let's see if it sharpens again. OK, it sharpened. Sharpened. Sharpened. Thank you, random number generator. That's what normally happens. If you play with this app enough-- you can do this in your projects-- what you'll see is that every now and then you get some evidence which is actually inconsistent with your current beliefs. And that actually causes you to become a little more confused. Because some things that you thought were unlikely, now they seem actually a little bit more likely. And those things that you were sure of, they don't really match all the evidence anymore. And so it's possible for evidence to come in and for you to get more confused. But the typical thing that you will see-- I'm going to bust just for fun. Nice. OK. The typical thing that you'll see is that every time you get evidence, your uncertainty decreases. And every time that time elapses, your uncertainty increases again. Neither of those is actually guaranteed to happen. The only thing that's actually guaranteed to happen is that if you elapse time a lot without getting any evidence, your belief distribution will converge to the stationary distribution of the matrix that's represented by your transition probabilities. Which means, among other things, it'll be independent of what you started out believing, assuming everything's connected in a sufficiently probabilistic way. All right, let's see a little tiny zoomed in example, and then we'll start getting into particle filtering. So there are these two building blocks. And in your code, you're actually going to have functions with these names in them. This is not the only way you can represent this stuff. You're going to have one action, which is to take your belief distribution and let time elapse. And we saw that that is: you take your old belief distribution, you multiply it by the transition probabilities, and then you sum things up in the appropriate way. There's also going to be a step where you observe evidence. In this case, you take your old belief distribution, you weight each term by the evidence, and then you renormalize, so it's a probability again. That's why this one is an equals sign: it turns out you don't have to renormalize there, because this is a stochastic matrix here. Here, you do have to renormalize, because you took a bunch of numbers that add up to one.
You multiplied them each by a probability. So they each got either a little smaller or a lot smaller. And so you have to renormalize to get a probability again. So here's how it would work. You would start off with some belief about, say, the probability of rain versus sun. So let's say x here now is rain and sun, and e is umbrella or no umbrella. So you might have some prior on x1 that's given to you as input to the model. And then some evidence comes in. You see that e1 is umbrella. OK, so I can shade that in here. Now we do some computation. And we see, OK, there's this 0.5 and 0.5. They both get multiplied by the probability of umbrella. One of those probabilities is very high. One of them is very small. And when you renormalize, you end up with something like this. So now I think it's rainy. But then I flip to the next day. I let time pass. I look at the probability of x2 given e1 is umbrella. All I have to do is use this elapse-time update, and things will flatten out. Because if it was rainy, well, maybe it's still rainy. And if it was sunny, well, maybe it's rainy, and things tend to flatten out in this particular little model. But then I get new evidence that I see an umbrella again, and now I'm even more sure than ever that it's raining. OK? All right, any questions there before we do that basically again sideways with particles? OK. So one common application that people use to illustrate HMMs-- we've been using ghost tracking-- in general, you could think about tracking position. Certainly not all hidden Markov models have the random variable x being position. We've had one here where it's weather. You can imagine in speech recognition, it has something to do with the words that you're saying. For medical diagnosis, you might be tracking some oxygen level. In general, we are trying to construct hypotheses with probabilities associated with them over the underlying variable x, which in a localization problem is positions on a map. What we've done so far is say, here's the map. And in each square, meaning for each value of x, we're going to write a probability. That's the whole vector of probabilities. The thing we track is a mapping, a vector, from hidden states x to probabilities. That is a distribution over x. In particle filtering, you flip that around. You don't necessarily track the entire distribution. You track particles or samples, which are individual hypotheses of what's going on. You might have two samples of the same hypothesis. Some samples might not have a hypothesis, or sorry, some hypotheses might not have a sample. And the data structure you manipulate isn't a map from all the different outcomes of x to probabilities, but rather a list of samples. So for example, here, you might have a bunch of samples hypothesizing you're here, you're here, you're here, you're here, you're here. And then there's other places where you have no samples representing that hypothesis, which means that in the empirical distribution it has probability zero. So you might need a lot of samples for this to work, and there might be a lot of hypotheses which have probability zero. So this is an approximate solution to the filtering problem. Remember, filtering is one name for the problem where I ask, given all of my evidence, what is the current hidden variable xt? Where is the robot now? That's different from the most likely explanation problem where you want to know, basically, what was the robot's road trip like that got it here?
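As a concrete sketch of those two building blocks, here is a minimal Python version of the rain/umbrella computation just described. The function names follow the lecture's description, but the transition and sensor probabilities below are illustrative assumptions, not values given in lecture:

```python
def elapse_time(belief, transition):
    """New belief after a time step: B'(x2) = sum over x1 of P(x2|x1) B(x1)."""
    states = list(belief)
    return {x2: sum(belief[x1] * transition[x1][x2] for x1 in states)
            for x2 in states}

def observe(belief, evidence_prob):
    """Weight each state by P(e|x), then renormalize back to a distribution."""
    weighted = {x: belief[x] * evidence_prob[x] for x in belief}
    total = sum(weighted.values())
    return {x: w / total for x, w in weighted.items()}

# Assumed numbers, for illustration only.
transition = {'rain': {'rain': 0.7, 'sun': 0.3},
              'sun':  {'rain': 0.3, 'sun': 0.7}}
p_umbrella = {'rain': 0.9, 'sun': 0.2}     # P(umbrella | x)

belief = {'rain': 0.5, 'sun': 0.5}         # prior on x1
belief = observe(belief, p_umbrella)       # see umbrella: rain ~0.82
belief = elapse_time(belief, transition)   # flattens out: rain ~0.63
belief = observe(belief, p_umbrella)       # second umbrella: rain ~0.88
```

Note that elapse_time needs no renormalization, since each transition row sums to one, while observe does, exactly as described above.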
There are a couple reasons why you might want to do particle filtering. One is this event space x of robot positions might be too big to use exact inference. It might just be really big, and you don't want to store it, or it might be slow to work with, or x might be continuous. If I want to track robot positions in the real world, I can't enumerate all the pairs or triples of real numbers. Right? But I can say, all right, I'm willing to have 600,000 samples, go. And there's one sample that's here and here and here and one over by the wall. And there, I get to decide how much compute I do and what the trade-off is between compute and accuracy, even if x is an infinite space like a continuous space. It's also a solution because we can now do approximate inference, which comes with trade-offs but usually buys speed. So here's how it works. Here's how particle filtering works. Rather than writing down belief distributions explicitly, where for every outcome x I write a probability-- a vector the size of the domain of x, which could be very big-- instead, I'm going to track samples of x. Maybe I track a million samples, maybe I track two. I'm going to track samples of x, not necessarily all values. These samples here are called particles. They're just samples. The amount of time I'm going to spend is linear in the number of samples. Because my basic flow of this algorithm is, here are my samples. They represent my current belief, because, hey, all the samples are over there by the wall. I think the robot's probably over there. I'm going to do operations on these samples to give me new samples, and I'm going to do it basically by scanning over the old samples. So that's great. It's linear in the number of samples, but the number of samples you need might be very large. The thing you actually store in memory-- and this is really important, because some of the PowerPoint figures will suggest that all the states, even the ones with zero, are tracked; they're not-- in memory, you have a list of samples, a list of particles. There is not a list of all possible states. That's important, or there'd be no point doing this. In practice, robot localization works this way, because it happens in continuous spaces. A lot of other tracking problems also work this way. And particles are just samples. OK, are we ready? Here is a distribution over this three-by-three grid, represented in the explicit way. For each state, I have a number. In this case, some of them are zero. This is the way I'm going to draw a collection of 10 samples. Of those 10 samples, five of them are in the lower right corner. Some states have no samples. If I write out the state space, I'll have to write out nine numbers. If I write out the samples, I have to write out some list, oh, OK. There's one in two, one. There's one in three, two, whatever. And I have to write them all out. I'll have to write out ten numbers. In this case, it's cheaper to write out the state-based representation. In general, it's going to be the other way around. So here's my representation. My representation of my distributions over x-- which will in general condition on evidence; they'll be belief distributions-- whenever I have a distribution over x, it's going to be a list of n particles. Normally, n is much, much, much smaller than the size of x, except on these slides where it's just going to be a tiny bit bigger. But imagine the grid is huge and the number of samples compared to that is relatively small.
So we're not going to store a map that says at this point, this place, we have five samples. We might as well just draw the probability. We're going to store this in memory, a list of particles. When we see a green particle here in these slides, that's going to be the particle that I'm going to talk about. We're going to talk about the journey of the green particle and what happens to it. Particles aren't actually distinguished. They're just a list of them. They each have the same status in the computation, so p of x will be approximated by the fraction of particles that have the value x. So many, many, many values of x have p of x equals zero in the sampling distribution. If we have more particles, we get more accuracy. Right now, imagine all the particles have equal weight. So if I show you this list of particles here, well, I could ask you a question like, hey, what's the probability that the robot is at this location? Which is what is that? Three, two. So I would look through, and I'd say, well, there's a three, two. And there's a three, two. So two out of 10, that is my approximation. Is that right? It's probably not right. This is like any other sampling method. Unless you have a lot of samples, you're going to have some error due to the sampling noise. All right, so I am going to be carrying a list of samples, a list of particles which represent my current hypotheses and in aggregate represent my belief distribution over where the robot is. It could be here. It could be here. And at three, three, I have five samples. That represents that I have a higher probability there. How should I take this list of particles, which represents my probability distribution, and advance time? What happens when time advances? The dynamics come into play. And for every xt, there will be some distribution over xt plus one, which represents what happens next in the dynamics. And so what I'll do is I'll go to each particle and I'll just move it-- think about moving it around the game board here. And for each sample x that I have right now, I'm going to take a sample of its future. Well, if it's at value little x, I'm going to look up the transition probability and say, oh, here are the probabilities of the various x primes. And then I'm going to sample. I'm going to use my multinomial sampling that we had before. I'm going to take a uniform distribution from zero to one. I'll map it onto the various outcomes, and let's say it'll go to the square below. That is my new value x prime. So for example, this green particle here is at three, three. I will look up and I will say, in my HMM, where do things go from three, three? And it'll say, well, 60% of the time, they go to the square below, and 20% of the time, they go to the square below and to the left or whatever it is. And then I'll flip a coin, and maybe it goes to the square below. That particle will be replaced by a new particle at this newly sampled position. So each particle before turns into a particle after, and each one takes its own journey. So if I have five different particles that all happen to be at three, three, for each one, I'm going to pick it up and say, hey, you, where do you want to go? I want to go down. OK, great. Where do you want to go? I want to go down. Great. Where do you want to go? I want to go to the left. And so they will in general scatter a little bit. But each one is independently picked up. You go through this list, pick up each particle, and sample a future for it.
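A minimal sketch of this representation and the time-elapse step; the helper names here are hypothetical, and transition[x] is assumed to be a dict mapping successor states to probabilities:

```python
import random

def belief_from_particles(particles):
    """Empirical distribution: P(x) is the fraction of particles at x."""
    counts = {}
    for x in particles:
        counts[x] = counts.get(x, 0) + 1
    return {x: c / len(particles) for x, c in counts.items()}

def sample_from(dist):
    """Multinomial sampling: map one uniform draw onto the outcomes."""
    r, cumulative = random.random(), 0.0
    for outcome, p in dist.items():
        cumulative += p
        if r <= cumulative:
            return outcome
    return outcome  # guard against floating-point round-off

def elapse_time(particles, transition):
    """Each particle independently samples its own future."""
    return [sample_from(transition[x]) for x in particles]
```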
Even if two particles are the same, they will in general not have the same future. And you can have two particles that are different that end up having the same future, because there are multiple ways to get to a certain square. This should remind you of prior sampling. Things that are more likely in my distribution, that increased likelihood is reflected in the increased count of samples. So the sample frequencies here are going to reflect the probabilities. And then each variable, I'm just going to sample a future for, just like prior sampling. Oh, you chose sprinkler on. Let's sample for wet grass. And so for each of these samples, I'm going to be, oh, you're in three, three. Let's sample a future for you. So each particle before becomes a particle after, in general evolved according to the dynamics, and this is going to capture the passage of time. If you have enough samples, these will look pretty much like the exact values before and after. If you only have a couple samples, it'll be super noisy. OK? Any questions? So this is actually, I think, a lot easier to get intuitively than the actual exact inference, because I don't have to think about marginalizing probabilities or anything. I just have to think, all right, here's a hypothesis. Maybe it's at three, three. If it were, where would it go? I'll flip a coin. The fact that there are now a bunch of particles here, well, how did they get here? Well, they all started at a different location. They all flipped a coin about their future, and then they transitioned here. That is just the summation weighted by the transitions of all the previous probabilities, but it's done in a more sample-based way. OK, here's the trickier step. Then we'll look at some examples and take a break. The trickier step is observation. So I've got a list of particles. So here they are. This is what is actually in memory. This is what I put in PowerPoint so we can visualize it. OK? So you've got this list of particles. And I could look at this list of particles, and I could ask questions like, how likely is it that you're at two, comma, two? Well, I've got one particle there out of 10. Looks like 10%. I could ask questions like, what's the most likely location of the robot? And according to these particles, it's three, comma, two. I can also use these particles to incorporate evidence. So let's say I get evidence that this square here measured a reading of red, meaning ghost very close. Well, if this were prior sampling, or more to the point here, rejection sampling, what I would do is I would pick up three, comma, two, and say, all right, time to sample a future for you. What reading do you think should happen? And I'll be like, oh, I think orange. Dead. You pick up the next one. What do you think should happen? Yellow. Dead. And you reject all of these samples by asking them what evidence they would like. And then when it doesn't happen to match the evidence you have, which is red, they're out. This is a way to reject all your samples. OK, so we don't do this. Instead, we do something that's like likelihood weighting. I pick up each particle and say, hi, particle. I'm here to inform you that there is this evidence red at this square. And the particle might say, great. That's probably what would have happened anyway. So this particle maybe says that. There are going to be other particles that say, I think that's extremely unlikely.
And so each particle will stay there in your list, but it will pick up a weight, because we're now doing likelihood weighting. We're not giving them the option of being rejected. We're forcing them to stay around. And so they can only stay around fractionally. They can stay around with a fraction that represents how likely it would have been to generate the evidence in question. So now suddenly, we have the same particles, but weighted. You can think about that as some particles stay size one, but most particles shrink. Some particles shrink a lot. Maybe some particles will shrink down to zero. So suddenly, instead of 10 particles, I've got three fractional particles. But I do still have 10 things in my list. What does this correspond to? This corresponds to taking my belief distribution, my old belief distribution, and weighting everything by the evidence. Except remember, when I weight everything by the evidence, I have to renormalize. Why do I renormalize? I renormalize because I don't have a probability distribution anymore. Here, I have samples, but a lot of them are sitting there with tiny, tiny weight, maybe even zero weight. So I need new samples. And so there is another step that we add after we've down-weighted all of the samples by the evidence probability, where we draw new samples from the old weighted samples. OK? So what we're going to do is we're going to take these weighted samples, which are the old samples we had, and some have shrunk a lot and some have not shrunk, and I'm going to say, these samples are no good to me, because their weights are starting to shrink. And if I do this for too long, their weights will all go to zero. And so what I'll do is I'll create new particles. The new particles, I sample with replacement here from the old weighted samples. So I say, all right, what particle should I have for the first thing on my list? Well, I come back up here and I pick one of my old particles in proportion to its weight. So this one here at three, two, which has weight 0.9, it's going to get picked more often than this one at one, comma, three, whose weight's approaching zero. And so you might find that particle at three, two, which was one highly weighted particle, it might show up now as multiple unweighted particles. And so we, by resampling, shifted the information about their relative probabilities from their weights over to their multiplicity. So how's that work? We had n particles. We're going to choose n new particles from our old particles, with replacement. And in the process, we'll ditch the weights. The weights were necessary to reflect the fact that, say, this one here was present, but basically shouldn't contribute very much to the belief. That same situation is reflected in the fact that with high probability, it won't be selected. And now your update's complete. So when you observe evidence, you take your particles. You weight them by the evidence. And then you resample them. And now, you have another set of 10 particles, which are composed of your old particles, some of them more than once, and some of them have dropped off. Then time will pass, and those particles that are cloned from each other will start to diverge and so on. So let's do a recap. We'll see some demos. We'll take a break. All right, particles track samples of states or hypotheses of unobserved variables rather than a distribution. So instead of tracking a distribution, I track this list of particles.
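Continuing the sketch from before, the observation step might look like this; the names are again hypothetical, and evidence_likelihood(x) stands in for P(e | x) for the reading actually seen:

```python
import random

def observe(particles, evidence_likelihood):
    """Likelihood weighting: each particle picks up weight P(e | x)."""
    return [(x, evidence_likelihood(x)) for x in particles]

def resample(weighted_particles):
    """Draw n new unweighted particles, with replacement, in proportion
    to their weights; random.choices does this multinomial draw."""
    states = [x for x, _ in weighted_particles]
    weights = [w for _, w in weighted_particles]
    return random.choices(states, weights=weights, k=len(states))
```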
When time elapses, I move my particles around the game board probabilistically by picking each one up and picking a future. I don't have to renormalize. I don't have to resample. It's just each particle moves. Multiple particles that had been in the same place can diverge. Multiple particles that had diverged can come back together. It's whatever you sample. When you get evidence, you weight the particles. Some of the particles maintain high weight, because they're compatible with the evidence. Some of the particles are starting to shrink. They're almost gone. You don't want to keep around particles that are basically not contributing, because they don't represent very much mass of your belief distribution. And so you take those weighted particles and resample, which gives you a new set of particles, once again unweighted. There's some multiplicity, and they represent your new distribution. Elapsed time-- this represents what happens to your distribution if time passes. Weighting and resampling together represent what happens to your distribution when you observe evidence. OK, let's see some examples. We'll take a break. All right, so Ghostbusters. We will run the same Ghostbusters app where we can elapse time or take observations or bust the ghosts, except this time instead of exact inference, we will be doing it with particles. Each particle will represent a hypothesis of where the ghost is. So in memory somewhere is some list of 30 positions where the ghost might be. What's being visualized? What's being visualized is, according to those samples, what is the probability implied over the space? Luckily, the space is small enough I can do that computation. In general, I would just have my particles. You'll notice that some of the positions have probability zero, because there's no particles that live there. And others have probability greater than 0.02, and that's because instead of sampling everything uniformly, the samples fell where they fell, and there's noise in that process. So here is what a uniform distribution looks like with a moderate number of samples. It's not all that uniform, right? What happens when I elapse time? Well, all these particles will swim around in a circle in a stochastic way here. So there they go. If I do it fast enough, you can actually see the circular motion there. I can gather some evidence. Like, oh, this one looks reasonably like-- let's sense here. All right, so you can see what happened when I gathered evidence? A whole bunch of particles were wiped out, because they didn't match that evidence. And then a bunch of new particles were sampled from those weighted particles. A disproportionate number of them happened to be right here. Maybe I'll sample here. All right, so a bunch of particles that were all piled up there that are now inconsistent with that evidence just basically got deleted. They got small weights, and then they didn't get resampled. So where did my mass go? It went to wherever there were particles that were still reasonably likely to have survived the previous round. So at this point, the particles are starting to clump together, but they're probably not in the right place. This is why it's dangerous to have a small number of particles, and it's always dangerous to have a small number of samples, because you're going to get noise from that. But in particle filtering, there's this extra tendency of particles to accumulate in weird places and get overly peaked in their distribution.
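Putting those pieces together, one full step of the filter, reusing the hypothetical helpers sketched above, is just:

```python
def particle_filter_step(particles, transition, evidence_likelihood):
    particles = elapse_time(particles, transition)      # time passes
    weighted = observe(particles, evidence_likelihood)  # weight by evidence
    return resample(weighted)                           # back to unweighted
```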
All right, let's do that exact same thing, but with one particle. So what do I do? I'm going to have my initial list of particles, which will represent a uniform distribution over the space. There it is. That's as uniform as you get with one particle. All right, I can elapse time. What's going to happen? Well, the particle is going to sort of swim around in a circle. I can take a measurement. What will happen? Let's measure over here. All right, where's my particle go? It stays right there. There's nowhere for it to go. It gets down-weighted, and then you sample a new one. And guess what? You've only got one choice, and your particle's back. And you can't shake this one particle. It's never going to split and become two particles. Each particle just does its thing. Only when you have multiple particles can you have one get resampled a few times and another one get lost in the resampling step, and this causes your distribution to refocus. All right, let's do it again, but with a ton of particles. OK. Now, there's a lot of particles here. And now you can see there's some noise. Right? In fact, there's a 0.01 here where it really should be 0.02. But this is a lot closer to the actual distribution, because the number of samples is higher. So let's grab some evidence. All right. Now, if I let time pass, all of those particles will start to diverge. And now, they're smeared all over the place. Now if I take some readings, almost all the particles are living here. But if time passes, those particles start spreading out. And all those things that happen to particles, the same thing happens to the probability distribution. OK. We'll bust. OK. It's approximate. OK, so let's now take a break. And then we'll see some demos of what these techniques can do. Two-minute break. OK, let's start again and see some demos of what this looks like in practice. So most of the demos we're going to see now are robot localization demos in one way or another. Sometimes, the robot will be a ghost in Pac-Man. In robot localization, you imagine you know the map, but you don't know the robot's position. So think, what is a hypothesis in this case? That will usually tell you what the underlying variable x is that you're trying to track. If the hypothesis is, I think the robot is here, in a known map, that's robot localization. So x here, the underlying variable that you'd like to reason about, is the robot's location on the map, and e, the evidence or observations, might be something like range-finder readings. So in a lot of these cases, as shown here, you have some robot. You're not really sure where it is, and the evidence you get is you have a 360-degree range finder that says, far from an obstacle, far from an obstacle, close, close, close, close. That tells you there's a wall behind you and to the right. In these cases with real robot localization, the state space and the readings are typically continuous. There's some range-finder reading, which is a real number. And your state space is some map with maybe polygonal objects or something, depending on how that's specified for you. And you're somewhere in there. The robot can be at any real location. So what's really appealing to you is particle filtering. But of course, your particles can't be everywhere in a real space. So it might look something like what's shown here. There's a map. The gray areas are obstacles, walls, furniture, and so on. And all the little red dots are places the robot might be. This is a case where the particles are smeared all over the map.
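In a continuous space like this, the same two steps carry over directly; here is a minimal sketch in which the Gaussian motion noise, the single range sensor, and the map lookup true_range(pose) are all illustrative assumptions:

```python
import math
import random

def motion_update(particles, dx, dy, noise=0.1):
    """Commanded motion plus Gaussian noise -- uncertainty blooms."""
    return [(x + dx + random.gauss(0, noise),
             y + dy + random.gauss(0, noise)) for x, y in particles]

def range_likelihood(pose, reading, true_range, sigma=0.2):
    """Gaussian sensor model: how plausible is this reading at this pose?"""
    err = reading - true_range(pose)
    return math.exp(-err * err / (2 * sigma * sigma))
```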
And if you stare really hard and the resolution is high enough, you can see there's a lot of places that don't have any particles. And what if the robot's there? Well, in general, in particle filtering applications, you're going to have a cloud of these things. And you're going to ask questions like, well, what's the average location? So even if there's no particle in the exact correct hypothesis, it's still very useful, because you're working with a whole cloud of them. So particle filtering is a main technique. Let me show you a couple examples. This is an example from the University of Washington where you're going to see a robot. So there's actually a robot in this map that's shown. You don't know where it is, so there's particles everywhere to begin with and a whole bunch of them. And the robot is going to get readings that say there's an obstacle in front of you, but nothing to your right or left. And so it's going to get these distances, which are shown here as blue lines. And of course, not only don't you know where the robot is, you also don't know which direction it's pointed. So your hypothesis is not just a dot. It's sort of a dot with an arrow. And as the robot moves around, new readings come streaming in. So if I say, oh, hey, I'm pretty sure there's a wall behind me. Well, I could be here. I could be there. I could be there. I could be there. And then as I move around, I start to see, hey, wait a minute. There's not a stage here that I expected, and so I know I wasn't at that wall. And so I'm going to run this, and you can watch the particles. And each particle, think of it like a hypothesis. The robot might be here. And as you move around and you start to get evidence that's not consistent with all of those hypotheses, a lot of the samples are going to die out. They're going to die out by getting down-weighted. And then during resampling, in general, they're going to not get reselected. OK, so let's go. Right now, it has no idea where the robot is. There's some most likely hypothesis, but it has no idea. The robot's actually going down the corridor, and it's getting all these readings that say, wall to my side, wall to my side, wall to my side, so you'll notice that already it's gotten a whole bunch of readings of wall to my side as it's been moving forward. So what does it know? It knows that there's a wall next to it, and that it was able to move forward along the wall for a while. And all these red dots that have survived are ones that are in locations where you would have moved past a wall for a while. So all the little rooms that aren't big enough to support that reading of a wall as you move forward, they don't have very many particles in them. And you'll see this distribution is going to start to collapse as you get more and more evidence as to where you are. So once you've seen a wall for a very, very long time off to your side, you know you're not in the square off to the left. So now, it's getting to the point where it knows it's along a long corridor. But it's not really sure whether it's on the left end or the right end. And then watch what happens here. OK, now, he knows where he is. Let me back him up. OK. Right here. Can I stop it? OK. Right here, if you had to guess, you would say, well, the densest cluster of points is maybe over there on the right. And you can see that's where the visualization has guessed the robot position is. The robot's continuing to go forward. One of two things will happen very soon.
Either it will get a range-finder reading that says, there's a wall in front of me. At which point, a whole bunch of dots to the left are going to disappear, because they're not consistent with that evidence. Or it's going to see more corridor. And then a whole bunch of dots to the right are going to disappear, because they're not consistent with that evidence. And as soon as that happens, it's going to be pretty much localized. And it turns out, no wall appeared. Now, it knows where it is, and it can wander around. And there's this cloud around it. You'll notice anytime it turns, there's a bloom of uncertainty around its position. But now, we're basically tracking it. So that first phase was really localization. Where am I in this map? And then once we were localized, monitoring our location once it had collapsed down from enough evidence is an ongoing thing that's more like robot tracking. First, you're figuring out where you are. Then, you're tracking where you are as you go. But mathematically, it's all the same thing. All right, here's a laser range-finder version of this. It goes by pretty quickly. But imagine each of these hypotheses is consistent with the evidence. And the robot knows by now, based on its movement pattern, that it's somewhere in the hallway. And it knows it's about to turn into one of these two symmetric rooms. This is how I feel whenever I get off the Soda elevator. And then it's either going to see a piece of furniture or not. If it sees a piece of furniture, it's going to be in the upper room. And if not, it'll be in the lower room. And it wipes out the lower particles, because it saw the furniture. So this is sort of the way this goes. You have a bunch of particles. They represent, in this case, two primary groups of hypotheses. And then evidence comes in that deletes a whole bunch of those particles. OK, any questions on that before I talk about-- Yep? STUDENT: [INAUDIBLE] PROFESSOR: That's a great question. What if you know where the robot starts? You still want to do this, because you won't know where it is for long. Imagine it's basically like how we localize. I know exactly where I am. If I close my eyes and I just start walking, I will have a reasonable sense for a while, and then I'll probably hurt myself. And what happens is the robot, you give it a command like, go forward a foot. And it goes forward 11.3 inches. And then you have it turn, and it turns almost as expected. And these little errors start to accumulate. This is equivalent to elapsing time. Over time, your uncertainty is going to bloom. And that distribution is going to flatten. Because your transition probability says wherever you are, if you move forward, you're going to be a unit forward with some noise. And that with some noise is going to add and add and add. What you want to do is you still want to use something like particle filtering to track how that uncertainty grows and is then mitigated and reduced by the readings you get. And so you could think, well, if I really know where I am, maybe I should use another technique. But in fact, the value of particle filtering is especially high when you know where you are. And you'll notice in this here, the number of particles they use is adaptive. And so you start with a bunch of particles, or relatively speaking, a bunch of particles here. You start with a bunch of particles, because you have to cover all these hypotheses. You might as well actually have enumerated a grid of probabilities.
But once you basically know where you are, you can just keep a cloud of hypotheses around you, which is basically like, I'm still there. I've drifted left. I've drifted right. Except instead of enumerating the hypotheses, you just blanket your nearby area with random samples. And that's really good, because it means you don't have to think about the local geometry or anything. You just blast samples everywhere. OK, you'll notice the particles now, if I can make this go away, the particles now are starting to drop, because the more peaked your distribution, the fewer particles you need. But you've got to be really careful, because at some point, the math is going to tell you you are completely, completely sure you're at this particle. And remember that lone particle? It's bad when you have a lone particle. So you need to generally maintain a cloud of hypotheses to catch that uncertainty as it blooms. Any other questions? That was a great question. Yep? How do you determine the number of particles? That's a really hard question. There's no hard and fast rule. Often in these environments where you're repeating these things, you can figure out what the right number of things is to track. It's going to depend on the complexity and the level of uncertainty. And there's no hard and fast rule. There's some signs you don't have enough, like distributions collapsing. And there's signs that you have too many, like particles are all lining up on top of each other. But there's no-- I mean, there are some things that can be said about the required number of samples. But in practice, it's hard to know. Yep? Great question. How does the robot move? In all of these things I've shown you, the robot's intended motion, meaning the dynamics, is known. Somebody is saying, move forward, or turn now, or whatever. So there's no action selection. It is simply tracking what's going on. In actual robot planning, you would have to make the decision based on some other goals. Maybe you're trying to move a package from here to there. For this tracking, you know where the robot is trying to go in the form of the expected behavior under that transition probability. If the robot's trying to stand still, you're not going to get any new evidence, and you're never going to figure out where you are. On the other hand, you won't actually accumulate any true uncertainty. OK, let me show you something else, which is pretty cool. It's called SLAM. It stands for simultaneous localization and mapping. This is where your hypothesis is not, I'm here on a known map, but, I'm here and I think the map looks like this. So you don't know the map. That means each particle is a map with a dot drawn on it. OK? Instead of a grid of these things, there's some high-dimensional space of maps and dots on them. So let's take a look. In this case, how do I visualize it? Well, I can't have each dot be a map anymore, but I can take all the hypotheses, each of which is a map with a red dot in it, and I can superimpose them as a visualization. And so what you'll see is, in addition to the ballooning out of where I am on the red dots, the map is also going to be blurry when I'm not really sure exactly where that wall was or which direction it went. And so as you go, you're mapping, but you're not totally sure how long this corridor is. And notice that flash-- that was a whole bunch of particles that diverged. So you can see here, your space, your estimated position is in red, but you're not really sure where you are.
And that's leading to not really being sure how far this wall in front of you is, because you don't know where it is with respect to the rest of the map. And as you start going around this loop, what's going to happen? Right? Right now, you've gone around in a circle. And as soon as you see your starting point, a whole bunch of hypotheses that perhaps this is a big spiral are going to vanish, and everything's going to snap tight right about now. So this is a case of maintaining a whole bunch of different kinds of uncertainty over multiple variables and using them to disambiguate. Let me show you another vehicle doing this, in another video. So you can see here, again, this is showing its trajectory. It's on one of these trajectories. But as it goes around, it gets less and less sure exactly where it is, until it sees its starting position. And suddenly, everything goes rectilinear here, and a lot of those hypotheses, which correspond to having drifted when in fact you didn't or whatever it is, go away. And you can see there's a bunch of snapping to rectilinear shape that happens when you see reference points. Localization and mapping-- it's super cool. Any questions on the localization and mapping? And then I'm going to leave you with one thought on how this is going to apply to your projects, where you'll have more than one variable. Let's quickly take a look at projects here. The general case of a hidden Markov model-- so what is a hidden Markov model? It's a super simple Bayes net where you have x and evidence, but it's interesting because it's replicated across time. You can also take more interesting Bayes nets, which have a bunch of variables with dependencies, and replicate them across time. This is popular for things like medical diagnosis, where there's a bunch of different underlying things. They have connections between them. They all evolve over time-- oxygen levels, heart levels, whatever. For DBNs, dynamic Bayes nets, you want to track multiple variables over time, which have correlations amongst them, using multiple sources of evidence. The idea here is, instead of taking this tiny little Bayes net and replicating it, you're going to take whatever fixed Bayes net you have and replicate that at each time step. And you're going to specify not only how things interact within time, but across time. So here's an example of that. Let's say you had two ghosts instead of one, and you have sonar readings that say, I am very far from the red ghost sound, but it seems like I'm really close to the blue ghost sound. Well, you could have ghost a and ghost b, which are two different random variables. Each of which has an observation. And this could replicate across time. So here's time one, which has the variables. And then at time two, those variables repeat. And you can unroll this network as far as you want. This is what you're going to have in your projects, where the thing you're going to reason over is, where are multiple ghosts, given the evidence of multiple sonar readings? So I'm going to show you what this looks like in two demos, and then we'll wrap up for today. All right, so here is Pac-Man. I'm going to control it. And what you're going to see is sonar readings. These are down in the lower right. Actually, this is super hard to play. Those numbers in the lower right show you for each ghost color how far you are. It's a noisy signal. So I'm going to try to hunt down the blue ghost. OK, that's the wrong way. That's the wrong way. That's really the wrong way. This is super hard.
OK, he's here somewhere. Got him. OK. Now I'm going to go try to track down another ghost. Anyway. It's very hard to know where the ghosts are given these readings. The best I can do is, it looks like I'm getting close. It looks like I'm getting far. I can play hot and cold with these ghosts. But you can take these readings and a model of the ghost behavior and synthesize them into a localization of the ghost. And that's going to look something like this. All right. As you can see here, these are clouds that represent your beliefs over the ghost locations. And you might think, well, that's great. You're running four HMMs. OK, it's still super annoying. You're running four HMMs. In fact, the ghost dynamics in the project will be such that they sometimes do things like go towards each other or disperse and go away from each other. So the hypotheses you are tracking are multiple ghost locations simultaneously. Each hypothesis is, this ghost is here. This ghost is here. This ghost is here. So the event space is very big, but a particle is very simple. Here's what I think is going on. Also, here's what I think is going on. Also, here's what I think is going on. And you have 1,000 of those hypotheses. And even though it's a high-dimensional space, they cover a lot of different ghost positions and let you compute these marginals that you can use to chase the ghosts down. All right, that's your project. We'll see you next time for starting our unit on machine learning.
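A minimal sketch of what the observation step might look like for a joint particle over two ghosts; the helper names and the per-ghost sonar likelihood functions are hypothetical assumptions, and the resample step sketched earlier carries over unchanged:

```python
def observe_joint(particles, sonar_a, sonar_b):
    """Each particle is one joint hypothesis (pos_a, pos_b); its weight is
    the product of the two sonar likelihoods, assuming each reading depends
    only on its own ghost's position."""
    return [((pa, pb), sonar_a(pa) * sonar_b(pb)) for pa, pb in particles]
```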
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181129_Advanced_Topics_Summary.txt | DAN KLEIN: Hi, everyone. Welcome to the final lecture of CS 188. We have a lot to do today, and it's totally different than what's come before. So to start with, we would like to take a look at the contest results-- and not just the results for the final contest, which we're going to go through in some depth, but also the contests up to this point. So hopefully everybody who won or placed in one of the mini contests is here right now. Ready? PIETER ABBEEL: Ready. DAN KLEIN: All right. So contest winners, get ready to come up. Everybody else, get ready to clap all day. Remember back to P1. Remember way back, search. And we had a multi-agent formulation of Pac-Man. Eat as many dots as quickly as possible under time pressure with multiple agents. I'm going to call up the contest winners. We went through what they did and what their awesome entries looked like at the beginning. But now they're going to come up. And we have for you something extra special. We have CS 188 medals. PIETER ABBEEL: Woo. DAN KLEIN: So we said these contests would be for glory. They're actually for glory and bling. So if you are here, we'll start with third place-- Team Winnie the Pooh. Philip and Winnie, come on up. Are you here? Come on up. [APPLAUSE] [LAUGHTER] STUDENT: [INAUDIBLE] DAN KLEIN: In second place, Team JasonL. Are you here, JasonL? Otherwise, we're just going to put the medals on Pieter. [LAUGHTER] All right, moving on. First place, team Yushang. Are you here? All right. Clap-- clap for Pieter getting their medals. [APPLAUSE] All right. Now remember all the way back to the P2 mini contest. OK, again, there was a complicated board, multiple agents, but this time you had adversaries who were trying to eat the dots as well, and quickly. So we had winners from this mini contest as well. Hopefully you're all here. In third place, we have DON'T FORGET-- REGISTER TO VOTE. Are you here? Did you vote? STUDENT: No, I'm not a citizen, but I wish! [LAUGHTER] DAN KLEIN: We're going to give you a medal anyway. All right. (CHUCKLING) Come on up. STUDENT: [INAUDIBLE] [INAUDIBLE] DAN KLEIN: All right, Sean and Ham. STUDENT: I guess my partner isn't here. DAN KLEIN: OK. You want to take one for him too, or should we give it to Pieter? STUDENT: We're going to work on projects tonight, so. Thank you. DAN KLEIN: In second place, Team [INAUDIBLE].. [? Yuchen ?] and [? Zuzushu, ?] are you here? Come on up if you are. And in first place, I will just make the expression. [LAUGHTER] Philip and Winnie, come on-- [LAUGHTER] [APPLAUSE] --right where you are. [APPLAUSE] All right. And now Pieter is going to take over for the main event. Congratulations to all the mini contest winners. [APPLAUSE] PIETER ABBEEL: So, final contest. In the final contest, your agent was supposed to work together with another agent that you did not control. So the other agent would communicate their plans to your agent, and you're supposed to collaborate to eat as many food pellets as possible, as quickly as possible, while avoiding the ghost, which is the opponent. We have 32 teams, thousands of matches played-- great work by everyone. We have a few cool names-- pacmantaughtmelife. [LAUGHTER] Stupid Pac-Man is not Ready, Broken Bot, Basic Bot. [LAUGHTER] Not sure how to pronounce this one. HE NEVER LISTENS, SPAM, Pac-Man is Ready, debug fixed. All right, moving on to the results.
In 10th place, we have Team Mark-- Winnie Gau and Philip Zhau. [LAUGHS] Congratulations again. [APPLAUSE] Ninth place-- First Attempt Version 4.6, Frederick Roaming. Frederick, are you here? Frederick here? Anywhere? No Frederick. Eighth place-- Opening Eye Five Candidate, Martin Lee. Martin, are you here? Over there. Congratulations. [APPLAUSE] Seventh place-- DieGhostDie, Xi Mao and [? Jhibo ?] [? Fan. ?] Are you here? Congratulations. [APPLAUSE] Sixth place-- Nine 9 V3, Wilson Wu. Wilson, congratulations. [APPLAUSE] Fifth place-- debug_fixed. We did not have a name, but we have a handle of an email address. Mm-hmm, Sheldon Ma. That's you. STUDENT: That's actually one of our [INAUDIBLE] PIETER ABBEEL: (CHUCKLING) I see. You submitted twice. OK. We might have to take you out of this ranking. Your debug version is outperforming your non-debugged version? STUDENT: [INAUDIBLE] PIETER ABBEEL: I see. [LAUGHS] Well, congratulations again. [APPLAUSE] Fourth place-- Watney the Fearful, Alexander [? Kazatski. ?] Alexander, are you here? Alexander? Not here. OK. Top three, which is for the medals. In third place, we have WhenMonaSmiles by Victor Cheng. Victor, are you here? Congratulations. Can you come up front? [APPLAUSE] STUDENT: All right. PIETER ABBEEL: This bot is based on the reflex capture agent, using a feature-based evaluation function. The features are teammate distance, distance to food, and ghost distance. Basically, the agent aims for the furthest food when it's within a certain distance of the teammate. Otherwise it aims for the closest food. DAN KLEIN: (WHISPERING) Good job. PIETER ABBEEL: The agent tries to get away from the ghost when it's close to the ghost. The agent values food more than the danger of the ghost, as getting the attention of the ghost would potentially help the teammate. Thresholds of distances to the teammate and ghost need to be tuned, like the weights of the features. Optimally, they would be tuned by RL or other learning methods, but they are tuned manually this time. OK, in second place we have-- oh, actually I forgot that we have a video showcasing each of the top three. Let's see if we can play this. DAN KLEIN: [INAUDIBLE]. PIETER ABBEEL: This thing? So yellow is the team bot. Orange is the collaborator bot that you're supposed to work together with. You're supposed to clear the board up to the last two pellets while avoiding the ghost. And the score is based on how little time you manage to spend in getting this done. Cool. Next one, second place-- Yihe Huang. Yihe, are you here? Over there. Can you come up front? Congratulations. [APPLAUSE] The strategy here is based on approximate Q-learning. Features include three distances and two scores. Distances include the maze distances from their own bot to the ghost team and to the nearest food. Scores include a successor score and a score for exploring and exploiting to avoid deadlock. DAN KLEIN: Here you go. PIETER ABBEEL: Also, very impressively, I believe this is the first time a reinforcement-learning bot is near the top of the rankings in one of the final contests. So, very cool. Congratulations, [INAUDIBLE]. Let's see. [APPLAUSE] So let's watch this bot in action. What you'll notice here is a really nice divide-and-conquer approach, splitting up the work while avoiding the ghost at all times. [CHUCKLES] And in first place, we have Rudy Zhang and Feng Xu. Rudy and Feng, are you here? Over there. Congratulations. [APPLAUSE] This bot's strategy is based on a map called a reward density map. It's calculated in the following steps.
First, calculate food density, like in Minesweeper. Lower the reward of an area if the teammate might approach it, using particle filtering to update teammate position beliefs. Adjust the reward of a position according to the Pac-Man's distance to it. And lower the reward of a position if a ghost is near it. Using the computed reward density map, here is the strategy if the ghost is not nearby. DAN KLEIN: And it's ready. PIETER ABBEEL: Go to the position with maximum reward density. Then collect the food locally optimally, using A* search. Else, if the ghost is close, use a minimax strategy to avoid the ghost. Reward the Pac-Man for approaching the max reward density position. Some special calculations-- cache the actions from the start position to the first position with more than one legal action. OK. Let's watch this one in action. [CHUCKLES] So what we see here is one of the Pac-Men. DAN KLEIN: I like the one that's kiting the ghost. PIETER ABBEEL: Distracting the ghost. There we go. Congratulations. [APPLAUSE] DAN KLEIN: And also, in addition to a first of getting reinforcement learning to work on this final contest, I believe this is the first time that we have a winner wearing their own team shirts. So check it out. PIETER ABBEEL: [LAUGHS] DAN KLEIN: Good job. Good job, everyone. [APPLAUSE] PIETER ABBEEL: OK, go back. This is the full top 10. We will release the full ranking on the website with all teams that participated, but we thought in lecture, we'd highlight the top 10. Dan? DAN KLEIN: All right. So another thing we wanted to do today is acknowledge what we think is one of the stars of CS 188, which is-- how many of you have noticed the artwork throughout the semester? Raise your hand. Yeah, pretty amazing. So here's the story behind this. This artwork first appeared actually quite a while ago, when the artist was in CS 188 and would leave behind these beautiful, beautiful drawings that were like these amazing, inspired drawings on the chalkboard. There was chalk. It's like a whiteboard, but different. And there were these beautiful drawings, and they'd be gone the next time. And so when we decided to do an online version of this course, we reached out to her. Her name is Ketrina Yim. She is an amazing artist. And she was kind enough to help us put together all of this artwork. So that's why it seems so connected to the material-- she worked really closely with us to come up with some amazing stuff. And I think it really makes a great difference to the course. So I'd like you to join me in thanking her remotely, because I think it's amazing stuff. Thank you, Ketrina. [APPLAUSE] All right. So now the moment that you have not been waiting for, because you were not aware of it. Are we good? All right. PIETER ABBEEL: I think we're good. DAN KLEIN: OK. PIETER ABBEEL: I was just going to coordinate [INAUDIBLE] DAN KLEIN: I was going to vamp otherwise. OK. All right. So we have for you some more Pac-Man stuff. In this case, it looks a little different from what you're used to seeing on the screen. It looks a little bit like this. It's Pac-Man, the cookie. So we're going to take a little break. And as we take the break, we would like all of you to come up over there. Get your copy of Pac-Man the cookie. How many? Do we have a count? PIETER ABBEEL: Up to three. DAN KLEIN: Up to three. [MURMURING] You could take Pac-Man, ghosts. PIETER ABBEEL: Can you [INAUDIBLE] DAN KLEIN: And also, there is CS 188 the laptop sticker.
[MURMURING] [APPLAUSE] PIETER ABBEEL: Can you set them up here? Can you set them up over here? DAN KLEIN: We're going to take a couple minute break. Come up. Get cookies. There are plenty. So take your time. Eat the cookies. Don't eat the laptop sticker. PIETER ABBEEL: Somewhere on the floor is fine. Oh, stickers-- maybe stickers here. STUDENT: Stickers! I'll take one. Thank you. DAN KLEIN: Stickers are on the podium. STUDENT: You got more stickers? DAN KLEIN: Yeah. [INAUDIBLE] PIETER ABBEEL: Maybe you can help [INAUDIBLE] If you can do that, and you want to help distribute. DAN KLEIN: Yeah. Can you just [INAUDIBLE]. PIETER ABBEEL: [INAUDIBLE] DAN KLEIN: Sure. All right. While you work your way through Pac-Man the cookie, we can do a little bit more on special topics. So last time, Pieter gave a little bit of a flavor for some special topics on the robotics side. I'm going to talk a little bit about one particular area of natural language processing. And then we've got some other Pac-Man stories to share with you. So here we go. Let's take a step back. There are a lot of areas of AI. One area is natural language-- understanding language and building technologies that handle natural language in various ways. So you use a bunch of these, right? How many people here use speech recognition? At least on your phone? OK, how many people use some kind of machine translation? OK. How many people here interact with some kind of dialogue system or smart speaker or something like that? OK. So these are all examples. To a certain extent, search is also an example of a natural language technology. So these are all examples of things we have today, which are technologies enabled by processing or interacting with human language. And when we think about the things we're trying to build, I like to characterize it by the following Far Side cartoon. Some of you may have seen this. So this is what we say to dogs. OK. You say to the dog, OK, Ginger, I've had it. You stay out of the garbage. Understand, Ginger? Stay out of the garbage, or else. This is the NLP system we would like to have. Here's what they hear. Blah, blah, blah, blah, blah, Ginger, blah, blah, blah, blah, blah, blah, Ginger, Ginger, Ginger. OK. So if you have dogs, you know what I mean. This here is sort of our ideal of an NLP technology. I mean, not necessarily that we want our NLP technology going into our trash, but we want it to understand, in context and with nuance, what we say. This dog on the left is running NLP. This dog on the right, what's it running? This dog here-- this dog is running grep. OK, it is grepping for its name. And at the lowest level, that's sort of an NLP technology too. You're looking for words and patterns that you've seen before. So the goal of NLP is to have deep understanding and to do sophisticated modeling. This requires context, understanding things about linguistic structure, what things mean, what things mean in context, the pragmatics of it. But there's a lot of technologies out there that are still useful that fall short of this, but are getting better all the time, where, in reality, we do a lot of shallow matching. And this really is sometimes a game of robustness and scale. But there have been some amazing successes and also some fundamental limitations. What I'm going to talk about today, in particular, of all of the neat language technologies we talk about-- we're going to do a little bit of a dive into speech recognition. So whole courses are taught on speech recognition.
I have a one-hour lecture on speech recognition. We're going to do it in 15 minutes. So I'm going to talk really fast, which would not be nice to the speech recognizer. So how does speech recognition work? And most of you raised your hands that you interact with speech recognizers. How many people think speech recognition is pretty good, pretty useful? How many people think it's sort of terrible? How many people think it's somewhere in between? OK. I would say that makes it a pretty successful technology. Why is speech recognition hard? We're going to talk a little bit about how speech recognition works, how it's formulated, and how it connects to the ideas, really, at the core that we've seen in this class. So I'm going to show you this clip. Many of you may have seen this clip before. This is real-time speech recognition running on broadcast news, which is a very difficult audio source, partly because of background noise, partly because it's all sort of done on kind of low-quality mics and things like that, and it's speaker-independent. So we're going to run it. This is a story. And I'll pause it in the middle of the transcript. [VIDEO PLAYBACK] - Friends, family, and classmates said their final goodbyes yesterday at her funeral in East Falls. [END PLAYBACK] DAN KLEIN: OK, so friends, family, classmates said their final goodbyes yesterday at her funeral in East Falls. OK, pretty close, except for what? The "good buys," kind of like Best Buy, right? And what this shows is one of the reasons why speech recognition is hard, is because the sounds are challenging to process, but even once you figure out what those sounds are, there are multiple sequences of words that are compatible with those sounds. And deciding between those options is not just a function of processing the sound. It's also a process of contextually deciding what language is most appropriate of the things that sound basically the same to the model. And in this case, the model didn't have a good enough context. You say "goodbye," but you shop at "Best Buy," OK. And so let's get a little bit into what is the system underneath it that's able to do this transcription largely accurately, but maybe with some gaps where the language model isn't that strong, and why that might be the case. OK, so I'm going to take you on a quick journey about how speech works. And that's going to tell us basically how to model it using techniques we've seen in this course. How does speech work? Speech begins, from the standpoint of somebody understanding speech-- there's this whole other conversation of how you would generate it. That's speech synthesis. That's fascinating. It's a whole other topic. How do you understand speech? Well, the speech is in the air, right. Speech is in pressure waves in the air. And so those pressure waves need to be put into a form that a computer can handle. That's done through a microphone. So the pressure in the air moves the pressure on the microphone, which causes a transduction. And you get this electrical signal that zigs up and down. And that's the raw input. So what does that look like? Speech input-- how many people have seen a WAV file that looks something like this? It zigzags up and down, looks a little bit like an EKG. OK, looking at this WAV file, it is nearly impossible to tell anything about what's going on. It is very hard to understand speech in the time domain. So here is the phrase "speech lab." There is the WAV file. And what's interesting is, what does an S look like? It looks like a bunch of zigs and zags.
Interestingly, E looks like a bunch of zigs and zags, as does every other sound. You can tell visually there's some qualitative changes in there. One of the most interesting things is right here. What does a P look like when you say "speech lab"? It looks like silence. There is no energy at all at that point because your mouth is closed, because you've put your lips together. So interestingly, in continuous speech, the gaps aren't actually between words. They're inside the sounds themselves. It sounds very P-like to you, but in fact it's silence. And so this is a hard domain to do modeling in. Because if you look, if you zoom in, all you really see is stuff going up and down rapidly. OK. So people don't do speech recognition in the time domain. You don't take that sort of amplitude as your observation. Instead, what we do is we look at it really closely, and we realize that all the interesting action is happening in the frequency domain. So if I zoom in on that "ae" from "lab," what I can see is a bunch of zigs and zags, but they happen in a certain sort of almost periodic way. So there's this complex wave here that repeats nine times and a smaller wave on top of it that repeats about four times as fast. And that gives certain frequencies that are present. Now, those frequencies are characteristics of the sounds that are being produced. And so the frequency domain is much, much more interesting. So what people do, if you want to recognize speech, is you don't operate on the WAV form, where amplitude would just give you sort of an instantaneous pressure reading. Instead, you transform to the frequency domain. You take a Fourier transform. You take little windows. And in each small window, you do a Fourier transform, to a first approximation, and detect what frequencies are present. Now, in the frequency domain, things look pretty different. So this "ss" from "speech," from S, is high-energy frication. It's turbulence that's in high frequency. But "ch" from "speech lab," that's also frication. That's turbulence. But it's at a lower energy. And when you get vowels like E, they have these characteristic frequencies that tell you what you're hearing. And so what we're going to do is we're going to operate on a representation much like this, where we'll be able to map from the frequencies present at each time slice to the underlying, unknown words which are being spoken. That's the job of the speech recognizer. So a couple of things to talk about, about what's going on here in the signal. And then we'll talk about how this is modeled using CS 188 techniques at the core. So why do you get these shapes where, for example, in this time slice, there are certain frequencies that are present, certain frequencies that are absent? And why are those characteristic of what you're saying? Well, what you're saying, the sounds you're making, really have to do with the position of your articulators, which is basically, where is your mouth? How open is it? Where is your tongue? Where is that in your mouth? And kind of position of lips and things like that. And here's basically the process of how speech comes into being. You have lungs. They're really there for breathing. But they also push air out past your vocal folds. And when that happens, it's a little bit like blowing on a kazoo. The vocal folds, which are muscles in your neck, shut, and then they're forced open, and they shut, and they're forced open. And that creates basically a resonance. A hammer lands every time.
And that hammer is the fundamental frequency. Think about it. It sounds a lot like a kazoo vibrating. That creates not only that frequency, but also all of its harmonics. So here's frequency here. Here's amplitude. And then these are all the harmonics. And there's lower energy as you go to higher harmonics, right. The fundamental frequency of the vibration of the vocal folds is right there. So that's what happens. And if somebody kind of, like, split you in two at the head and you tried to talk, it would just be buzzing at the fundamental frequency. But luckily, you've got your whole head there to shape those frequencies into a filter function here, which lets some frequencies through and some frequencies die. The frequencies that come through are received, and the frequencies that are attenuated are gone. And you get an output spectrum that looks like this. It has a shape. And then it's got these striations, which have to do with the fundamental frequency. What's important here is the shape. So speech recognizers concentrate on this outline, this envelope, rather than the actual striation here, which is just a predictable pattern coming from the pitch. That would be important for things like prosody, but not for what words you're saying. Why does it look like this? If you take a big step back, the human body is a very complicated thing. But fundamentally, your vocal apparatus is an accordion attached to a tube. And that tube goes up-- this is your throat-- and to your mouth. It's basically a tube about 17 centimeters long. And from that, you can do some physics and compute what frequencies should resonate in an open tube. And if you did that, you would get very close to the frequencies which, when produced, sound like "uh," which is basically what language sounds like, right? "Uh?" Yeah. OK. So now of course, we don't walk around just going, "uh," and that's because we can shape that tube. So we can close it off with our tongue. And then you have a tube and then a little narrow constriction and then a tube in front of a different size and shape. And that causes this to reshape. So let me show you how that all works. And then we'll see what the implications of that are for speech recognition. So what are we going to look at right now? What I'm going to show you now, this is a simpler story. This only holds for vowels. Consonants work in a different way. If you're interested, take a phonetics course or a natural language processing course. This is the space of vowels. Vowels to a first approximation can be characterized by two frequencies called formants, which are sort of the peaks in the envelope of the frequencies that are allowed through. And by default, the middle is "uh." [TONE PLAYS] [TONE PLAYS] "Uh." Yeah, "uh." OK, so how do you get the other vowels? Well, you basically move those frequencies around. So just by building signals which are sort of dominated by these two frequencies and playing them, I can make a lot of sounds. Now, they're not going to sound super human-like, and I can't make every sound with such a simple description. But you can start to see how this captures language. So here are some vowels, like "ee." [TONE PLAYS] "Ah." [TONE PLAYS] "Ooh." [TONE PLAYS] Now, if you only have "ee," "ah," and "ooh," you can't really say much, right? I guess you can say I-- [TONE PLAYS] --owe-- [TONE PLAYS] --you-- [TONE PLAYS] --a-- [TONE PLAYS] --yo-yo.
[TONE PLAYS] [LAUGHTER] You can't say a lot other than "I owe you a yo-yo," but you can see this is basically what's going on in the speech signal. And it really is the envelope that's important. This is somebody singing the vowel E at lots of different pitches, from here where there's a low frequency. And those multiples are very closely packed. You can see as the frequencies go up, the shape of the envelope sort of stays about the same. But the actual, specific frequencies that are present change. So you can't just look at what frequencies are present. You've got to sort of extract the shape of the envelope of the frequencies. Now it's getting interesting over here, because it starts to get hard to see that shape, right. These are pretty high frequencies. For example, you might encounter these frequencies at the opera. How many of you go to the opera? OK. How many of you understand what they say at the opera? All right. So why is the opera hard to understand? OK, it's in Italian. Good point. [LAUGHTER] But setting that aside, once you get to high enough frequencies, these points of energy are so spread out that you can't tell what the heck the envelope is. And in fact, people singing at those frequencies will, consciously or not, shape the envelope to enhance the frequencies they're actually trying to sing with power. So things get hard to understand there. But the point is, it is the envelope you care about. So what are we going to have in a speech recognizer? Every time slice, say every 15 milliseconds or so, we're going to have a signature of that envelope. Over time, exactly what that means has varied. But think of it like something having to do with a Fourier transform. So you're going to have a vector at each time slice-- every 20 milliseconds, something like that. And that's your evidence. So this is going to be a big HMM. And what's the hidden state going to be? The hidden state has to be something like which words were spoken. But that's not really going to work, because the evidence changes every 15 milliseconds. But words last for a lot longer than 15 milliseconds. Words are going to last over 100 milliseconds. So these states are going to last for a while. So we need to figure out what the state space is. But once we do that, we'll have a hidden Markov model. And therefore, we can build a speech recognizer. All right. So let's talk about what the HMM for speech might look like. Well, what do we need for HMMs? We know that from this class. You need a transition model. And you need an emission model. What's the emission model going to be? It's going to be some probability distribution which says, given the underlying state-- like, I'm saying this-- what sounds are more or less typical? So what spectral envelopes are appropriate or anomalous for that sound? You could train that from data. Then there's the transition function. When I'm in the middle of this word, what comes next? So we've got to figure that out. So what's the state space? This is the tricky part. The states aren't actually words, because the words only change very infrequently at this resolution. What you have instead is you have a state not for each word, but more like each sound in a word. So imagine the state is like, you have a dictionary and you're pointing to this word and you have a cursor into it that says, I'm in the middle of saying artificial in-- --telligence. And I'm right on that T right there. What's typical in the acoustics? What does a T sound like? I'll make one. It sounds like silence.
So there'll be a density there. You say, but T doesn't sound like silence to me. All that stuff that sounds like "tuh" is either going into the T or coming out of the T. The T itself sounds like silence. OK. So we have states which are basically pronunciation cursors. What do the states do? Most of the time, you advance through the word doing the next sound you're supposed to do. And if the acoustics don't match the next sound, it's probably not the right word. And the probability mass and the posterior will shift somewhere else. So we're going to build a little state graph. Remember, we talked about there was the Bayes net, where the circles were random variables. But there are also finite automata that we drew that represented each state pointing to the states where it had legal transitions, with weights that indicated the probabilities. We're going to have to build one of those. So in our Pac-Man tracking, we might have had a state space the size of the board. It's much bigger for speech recognition because you have one state at least for every position in every word in your vocabulary. So inside a word-- this is the word "need"-- you sort of walk through the word from left to right. And maybe you can skip over some sounds. And maybe you can linger on a sound for a while. And then you get to the end of the word. So a big part of the HMM's hidden state is the structure of all the words and the sound sequences that correspond to them. But what happens when you get to the end of the word "need"? Well, what do you need? Whatever you need, you're going to say next. And you're going to move from the end of the word "need" to the beginning of the next word. So you can imagine this giant automaton where you walk through the word, more or less from left to right. And then at the end, you have this big choice point. What word comes next? And that probability distribution over next words given previous words is where most of the action is. That's the language model. And that's the part that determines not what words sound like, but what sequences of words-- "goodbyes" versus "Best Buys"-- make sense. Where do these come from? This doesn't come from acoustic data. This comes from text data. For example, I can go to the web. And I can say, what follows the word "the"? Well, I look it up. Apparently, the most common thing is "the first," but also "the same," "the following," "the world," blah, blah, blah, blah. There's some probability that I will say "the door." One way I can get this language model is I can add up all of those numbers, compute the score, and I could say 0.06% of the time, the word "the" is followed by the word "door." There are my weights in my automaton. There are other ways to build these models. But that would be the easiest. Now, this is actually really important. Because that mistake we saw had to do with a language model that didn't have enough context. Because the word "buy"-- B-U-Y-- might well be more common than the word B-Y-E. So for example, when I say "the" and I say, what follows "the"? You probably wouldn't be like, oh, I know-- "the door." Who knows what you would come up with? There's all kinds of things that can follow "the." But when I say what follows "close the," suddenly "door" is seeming like a pretty good choice. And you can see that in the data. So "door" only follows "the" 0.06% of the time. But after "close the," 1 in 20 times, it is a door that you're closing.
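To make that counting story concrete, here's a minimal sketch of how a count-based language model like this could be estimated. The toy corpus is made up for illustration; a real system would count over web-scale text and add smoothing for word pairs it has never seen.

```python
from collections import Counter, defaultdict

def bigram_model(corpus):
    """Estimate P(next word | previous word) by normalizing raw counts."""
    counts = defaultdict(Counter)
    for sentence in corpus:
        for prev, nxt in zip(sentence, sentence[1:]):
            counts[prev][nxt] += 1
    return {prev: {w: c / sum(nexts.values()) for w, c in nexts.items()}
            for prev, nexts in counts.items()}

# Hypothetical corpus. Real weights come from billions of words of text.
corpus = [s.split() for s in [
    "please close the door",
    "the first time we close the window",
    "the world and the same door",
]]
model = bigram_model(corpus)
print(model["the"])  # the distribution over what follows "the"
```

Conditioning on one more word of history-- a trigram model over "close the" instead of just "the"-- is exactly what makes "door" jump from 0.06% to 1 in 20.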
And so the more context you have, the more history, the more correlations you capture, and the better your language model can be. But of course, you can see these counts are getting smaller and smaller, and this drops exponentially. And so one of the challenges in language modeling has been in dealing with having more context-- more distant context-- but still being able to make good predictions. Let's say you build all of that. Let's say you build the HMM and there's this big state space that has to do with how words in the dictionary are connected. And there are acoustic models that tell you which frequencies are appropriate for different sounds in the language. Then, in comes the input. And you do the processing. And you run posterior inference. What do you do? You basically run the forward algorithm. It's sort of a beam search, which looks a little more like particle filtering. The difference is, you're not trying to track it. You're not trying to be like, hey, everyone, we're in the middle of an "oh" right now. That's not interesting. You want to know the whole trajectory. You want to know, what sequence of words did I go through? And so you're not looking for tracking, which is what we did mostly in this course, where you're looking for the kind of marginal distribution over a variable given preceding evidence. Instead, you're looking for the most likely single trajectory or kind of sum of trajectories through words. And that's an arg max problem now. You want to find the best path given your evidence. That's a very similar algorithm, but with maxes instead of sums. Once you get that sequence, you say, oh, I was saying "artificial" for a while, and then I was saying "intelligence," and then I was saying "is," and then I was saying "fun." I must have said, "artificial intelligence is fun." And that's how you do the decoding. So that's it. You're all set to build speech recognizers. Except maybe you might remember-- how many of you remember, maybe like 8 to 10 years ago, speech recognition was [INAUDIBLE]? How many of you remember 8 to 10 years ago at all? That's a long time. All right, but speech recognizers were not great 8 to 10 years ago. And they got a lot better. Part of what made them get a lot better was a ton of data. We got a lot more data. Data is a big part of it. A lot of what made them get a lot better was actually a ton of compute. We have a lot more compute. And a big part of what made them get better was neural methods. So where are those neural methods? And what does that have to do with HMMs, right? Well, we talked about neural nets as well in this course. Major advances in ASR, automatic speech recognition, especially in the last five years, have been partly due to neural nets. How does that interact with the HMM? They were separate topics for us. Well, remember, one part of the HMM is an emission model, where we say, given some phone-- that is, this sound in this context-- here are the frequencies which are appropriate. Well, that's a big distribution over kind of high-dimensional, real-valued things. This is estimated now using neural nets. So these density functions are estimated using neural nets. And the other thing that's done using neural nets is that, given a word history, we predict the next word, not by just taking the longest history that we've seen often enough to collect word counts, but rather we take the words, we project them into some continuous space, and then a neural net, on the basis of a long history, makes a generalized prediction of what comes next.
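As a side note, the "maxes instead of sums" decoding described a moment ago is the Viterbi algorithm from the HMM lectures. Here's a minimal sketch with a made-up two-state toy problem; in a real recognizer there is a state for every pronunciation cursor of every word, and the emission probabilities in emit_p would be exactly the densities the neural nets now estimate.

```python
import math

def viterbi(obs, states, start_p, trans_p, emit_p):
    """Most likely hidden state sequence given the evidence.

    Same recurrence as the forward algorithm, but with max instead of sum,
    done in log space so long sequences don't underflow.
    """
    V = [{s: math.log(start_p[s] * emit_p[s][obs[0]]) for s in states}]
    back = [{}]
    for t in range(1, len(obs)):
        V.append({})
        back.append({})
        for s in states:
            prev = max(states, key=lambda p: V[t-1][p] + math.log(trans_p[p][s]))
            V[t][s] = V[t-1][prev] + math.log(trans_p[prev][s] * emit_p[s][obs[t]])
            back[t][s] = prev
    # Walk the back pointers from the best final state to recover the path.
    path = [max(states, key=lambda s: V[-1][s])]
    for t in range(len(obs) - 1, 0, -1):
        path.insert(0, back[t][path[0]])
    return path

# Toy problem: is each noisy time slice part of a silent or a voiced sound?
states = ["silent", "voiced"]
print(viterbi(["low", "low", "high"], states,
              start_p={"silent": 0.6, "voiced": 0.4},
              trans_p={"silent": {"silent": 0.7, "voiced": 0.3},
                       "voiced": {"silent": 0.3, "voiced": 0.7}},
              emit_p={"silent": {"low": 0.9, "high": 0.1},
                      "voiced": {"low": 0.2, "high": 0.8}}))
# -> ['silent', 'silent', 'voiced']: an arg max over whole trajectories.
```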
So these two things have kept the structure of the HMM-- what are called hybrid systems. They're still HMMs, but a lot of the big, heavy-lifting pieces in between are neural nets. That's it. That's it for speech. Go build your speech recognizers. And we're going to switch topics here and talk a little bit more about Pac-Man. If you're excited about learning these kinds of things, we'll talk a little bit more at the end about what you can do next in terms of follow-on courses. But for now, I'm going to give it back to Pieter. PIETER ABBEEL: So we've seen a lot of Pac-Man in this course. It's mostly been living in a game. It's been living in a cookie also, today. But have you ever thought about where Pac-Man might be in your actual lives? Well, if you look carefully around yourself, there are some Pac-Men around you. Pac-Man is essentially a Roomba. What does it do? It moves around, picks up things from the floor, and that's its job. Roomba doesn't have ghosts, typically. But other than that, we've got pretty much a match here. And so there are some projects out there-- Roomba Pac-Man-- where they set up Roombas, paint them as Pac-Man and ghosts, and play Pac-Man in real life. So let's take a look at that. So this is from Colorado. [VIDEO PLAYBACK] [BEEP] [WHOOSHING] PIETER ABBEEL: Oh, the sound works. [WHOOSHING] [WHOOSH] [WHOOSH] [MUSIC PLAYING] [END PLAYBACK] PIETER ABBEEL: So what are we watching here? This is a game of Pac-Man where there are cameras on top, tracking each of these Roombas-- Pac-Man and ghosts. The tracking leads to a game state. Given the game state, they have strategies for the ghosts. And Pac-Man runs minimax to try to collect food pellets or eat ghosts, whenever Pac-Man has eaten a power pellet. And whenever Pac-Man gets eaten, all the Roombas get sent back to their starting positions, and the game restarts. I think you're up next with the other one. DAN KLEIN: All right. I think Roomba Pac-Man is extremely cool. But I want to tell you about-- I'm not sure-- either the coolest or the second coolest Pac-Man thing I have ever seen in my life. It's called Bugman. I love Bugman. All right. So what is Bugman? This is a different kind of AI. This is Pac-Man driven by animal intelligence. This was done by Wim van Eck at Leiden University. And Pac-Man is controlled by a human. Or I guess it could be an AI. So everything that you expect-- there's sort of a joystick, you move Pac-Man around, Pac-Man goes around the digital environment. But underneath this digital environment is actually a real environment. It's just like The Matrix. OK, there's a real environment that looks like this. The ghosts are controlled not by AI, but by crickets-- obviously, right. So what does it mean for ghosts to be controlled by crickets? Because mostly crickets just sort of do their thing. How are they going to go after Pac-Man? Or how are they supposed to know to be scared and run away? This is where the genius comes in. These crickets are in a foam version of the maze with a camera above running computer vision that tracks the crickets. So wherever the crickets go, that's where the ghosts go in the digital environment. Vibrations underneath this maze cause the crickets to do things, because crickets basically run away from vibrations. So if you want crickets to be scared of Pac-Man, you vibrate where Pac-Man allegedly is in the virtual environment.
And if you want them to go towards Pac-Man, you kind of herd them with vibrations. So in the video I'm going to show, you'll be able to see where the vibrations are. And the crickets-- well, they mostly do what they're going to do. But they sort of listen to the vibrations. So here it is. Let's watch Bugman. So what you're going to see here, it's synchronized. On the left is the digital environment. A human is controlling Pac-Man. There's dots. There's pellets. There's walls. And on the right is the vision of what the crickets are doing. And this allows for some extremely surprising interactions with the ghosts. So I'm going to run it. So you can see the vibration-- the Pac-Man's right there. And the ghosts are mostly just sitting there. But they're on the move. They're on the move. And they're more or less running away. See, that one doesn't like that vibration. Pac-Man's doing his thing. And if you watch the ghosts, they are very hard to predict. But now the ghosts are scared. So the vibration reverses. And in fact, they actually don't like that there. They're out of here. OK. There. There, he's gone. They can move fast when they want to. Now, they don't actually eat Pac-Man. They just disable him virtually. But they do not like that vibration. And I want to point out right up here-- here, this ghost is so unhappy, it's going over the wall, right. These crickets play seriously-- they're basically in a relaxation of the state space, if you remember your A* days. The walls are there, but with enough effort they can just sort of climb over them. All kinds of cool stuff happens in Bugman. The crickets go over the walls. They decide to just, like, tune out. One day, they turned this thing on, and instead of four ghosts from the crickets, there were five. One of them had molted, and apparently the skin looks just like an actual cricket. It just doesn't move. So ghosts accumulate over time. I think it's brilliant. But there's one thing that might be more brilliant. And Pieter is going to tell you about it. PIETER ABBEEL: So a couple of years ago, there was a student who watched our lectures online and saw our crawler bot, and was very excited about project 3, reinforcement learning, getting this crawler to do things. And they decided that simulation is not sufficient. They built a physical crawler bot that matches up with our simulated crawler. So what's in here? We have a motor in the elbow. We have a motor over here in the shoulder. Those are the two actions available to the crawler bot. Then, to know whether it's making progress or not, because it needs to be rewarded for making progress, a computer mouse is attached to the back of the bot. And that mouse is measuring the progress made by this bot so it can get rewarded. Now, does this work? Does it not work? Reinforcement learning can require a lot of samples. [? Ashley ?] [? Yang ?] [? Bolivski ?] was just a remote student-- got this to work. So here we have Q-learning in action on a physical crawler bot, using a lot of the 188 interface here. We see some Q functions at the top. Initially, it doesn't really know what to do. As you remember, initially there's exploration. You don't know how to maximize reward. It's gotten zero reward so far. But now the mouse moved, which gives it non-zero reward. It can start populating non-zero values into that Q table. And sure enough, it finds its way across this table. So what you see here is the first and only-- that we know of-- physical version of a 188 bot.
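For reference, the update that crawler is running is the standard tabular Q-learning rule from the course. Here's a minimal sketch; the state and action encodings for the physical bot are my guesses, and in the real setup the reward would come from the mouse measuring forward progress.

```python
def q_update(Q, s, a, reward, s_next, actions, alpha=0.5, gamma=0.9):
    """One tabular Q-learning step: nudge Q(s, a) toward the sampled target."""
    sample = reward + gamma * max(Q.get((s_next, a2), 0.0) for a2 in actions)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * sample

# Hypothetical encoding: a state is (shoulder angle step, elbow angle step),
# and an action nudges one of the two motors up or down.
actions = ["shoulder+", "shoulder-", "elbow+", "elbow-"]
Q = {}
# One experienced transition: moving the elbow dragged the bot forward a bit,
# so the mouse on its back reports a positive reward.
q_update(Q, s=(2, 3), a="elbow+", reward=1.0, s_next=(2, 4), actions=actions)
print(Q[(2, 3), "elbow+"])  # 0.5: a non-zero value enters the Q table
```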
Here are the learned Q values after a good amount of exploration. And then here is the bot in action in exploitation mode. [CHUCKLES] So we covered a lot of material here. And maybe some of you are kind of sad that there's no more material coming. So we want to give you some suggestions on where you can go next if you want to learn more about artificial intelligence. And specifically, I'll tell you a little bit about all the opportunities at Berkeley here. So we've covered the basics of modern AI. You've done amazing work. These projects have gotten you very far. But there is a lot more. How can you continue? Here's a list of classes that you might want to consider in future semesters if you want to learn more. In machine learning, there is CS 189, which is general machine learning. There is CS 182, which is specifically deep learning, and Stat 154, which is also general machine learning. There's an intro to data science course, which is pretty new-- Data 100. There are probability courses-- EE 126 and Stat 134-- which teach you more about the foundations that you might need for some of the more advanced work in AI. Optimization-- we've seen in the last few lectures that optimization is at the core of a lot of machine learning, where you need to find a good parameter setting by gradient descent, for example. For learning more about the theory behind it and practical approaches, EE 127 is the class to go take. A lot of the work we did was very mathematically focused and algorithmically focused. But there's actually a whole other branch of work in AI that looks at, how do humans think? It's called cognitive science. And Berkeley has a cognitive science department, and probably the right starter course there after 188 would be Cog Sci 131. If you want to go more theoretical in machine learning-- once you've worked through, let's say, a bunch of these-- CS 281A and 281B are graduate-level courses that dig into learning theory. Looking at the different application domains-- computer vision is a big one. There's an entire course on computer vision-- CS 280. There's an entire course on robotics, CS 287. That one happens to be taught by me. There is a course on algorithmic human-robot interaction-- 294-115. This course studies, how can you make AI think about the humans that are around the AI and think about, what might this person be trying to achieve? Where might they be headed? How do I do the right thing accounting for this person's preferences? Reinforcement learning is a pretty new course that was started just three years ago. Instead of four lectures on MDPs and RL, it's 28 lectures on MDPs and RL. NLP-- Dan's graduate course, natural language processing. Instead of 15 minutes, [CHUCKLES] 28 lectures. And many more. So if you have specific interests that you might not see covered here, feel free to send us a note and we can give you more suggestions on other things you could study. Berkeley has really good coverage. You're in the right place to learn a lot about AI. Now, several of you have also wondered, aside from classes, can you get involved in research? Well, there's a lot of AI research happening at Berkeley. Here's a list of faculty who work in artificial intelligence. You might first of all wonder, well, why should you do research? There are a few reasons you might want to do research. One reason is, you just really like a topic. And so you just want to learn more about it and make progress in that field.
Another reason could be that you have some long-term goals that just cannot be achieved with current technology, and you think certain research directions could get you closer. And ideally, you have both motivations about some direction, and that will make you maximally motivated for your research. In practice, research is a lot about solving problems that haven't been solved before, which is very different from solving homework. So it teaches you a lot of skills in terms of how to understand whether you might have a bad idea, or you might just have a bug in your code, or you might be testing your great idea in the wrong environment where it doesn't shine and you need to go test it somewhere else. And that kind of interplay between all these is actually pretty tricky. It's one of the key skills you'll pick up that generalizes pretty well to many other activities you might have in the future where you do open-ended work. How do you get involved? Every professor has their own way of getting students involved. And you can never know ahead of time how they do it. So you should just try every possible way with every professor that you're interested in working with. What would be those ways? One is go to office hours. Another one is emailing them. Another one is maybe talking to their graduate students and seeing if any one of those paths might work out. You might have to try multiple times. If you email a professor, they might not read it the first time around. Maybe send another email, then go to office hours, talk to some students. And at some point, something might stick. Research is not necessarily for everyone. Research is about building tools, whereas engineering is about using tools. And some people prefer to be closer to putting things into practice-- the engineering side of things. Some people like to build the tools that the engineers then can go use to put things into practice. So different people, different preferences. Any questions about this or about the classes? Then maybe one thing we want to do right now is bring all the TAs up front. Can you guys come up front for a moment? Want to take it from here? DAN KLEIN: [INAUDIBLE] Well, it's your plan. PIETER ABBEEL: Yeah. So if you want to do it, there was a good number of projects, homeworks, a slightly lower number of exams than expected. [LAUGHS] A lot of office hours, a lot of discussion sections, and an enormous amount of work goes into all of that by our TA team. And we just want to highlight them and thank them together with you. [APPLAUSE] DAN KLEIN: Thank you. OK. So that's mostly it. There's only a couple of things left. One is, we would appreciate your help with some course evaluations. Are the HKN folks here? So we're going to clear out fairly soon. And then they will take over and help you guys get connected to course evals. These are really important. Please do take the time to fill them out. They're super important. They're kind of how the university and the department evaluate courses, how they evaluate us, how they evaluate plans for kind of what courses to offer and when to offer them. And that's how we kind of make courses better. So your feedback is really, really valued. So please take the time to do that. The HKN people will tell you in a couple minutes how to get going with that. STUDENT: Have a nice summer. PIETER ABBEEL: [LAUGHING] DAN KLEIN: We actually do want you to have a good summer.
[LAUGHTER] But between now and then, you should totally feel free to have an exceptional break and a great spring as well. And I guess from us to all of you, please always maximize your expected utilities. If this is the only thing you take away from CS 188, please take away that. Go off and maximize your expected utilities. Anything you want to-- PIETER ABBEEL: Thank you. DAN KLEIN: --add? Thank you, all. [CHEERS AND APPLAUSE] Oh, do we? PIETER ABBEEL: Do we have cookies left over? MAN: Maybe a couple. DAN KLEIN: OK. So we're going to go. HKN people are going to come up. The cookies are going to stay. So another way you can help us out is by eating just a few more cookies. Otherwise, we have to eat a lot-- a lot of cookies. Hold on, I think I'm being-- what am I doing? MAN: You're being photographed. DAN KLEIN: Photographed. HKN MEMBER: Hey, everyone. I'm [INAUDIBLE] from HKN. We'd really appreciate it if you stayed back and filled out the course evaluations for this class if you haven't done so already. We have about 15 minutes or so. So I'll project the link on the screen in a second. But you should have all received an email with the course evaluation link. So if you could take a couple of minutes now and fill that out, the department would really appreciate it. PIETER ABBEEL: Dan, I'm going to go hang out outside. And then, after people are done with the course evals, we're going to answer questions there? DAN KLEIN: Yeah. So we need to clear out while the course eval stuff happens. So in a minute here, Pieter and I are going to go out there. But we're not going to go far. So when you're done with the course evals, if you have questions, you can catch us out there. That way we don't sort of keep the HKN folks from doing their thing. HKN MEMBER: Cool. DAN KLEIN: All right. PIETER ABBEEL: We'll be right outside. DAN KLEIN: We're going to be right outside. PIETER ABBEEL: We're going to be there. DAN KLEIN: We're not going far. PIETER ABBEEL: We're not running away. STUDENT: [INAUDIBLE] DAN KLEIN: Yes. That's what we said. PIETER ABBEEL: The next 20 minutes or so-- so you can finish your evals. And then, we can talk once you're done. DAN KLEIN: I'm going to head out [INAUDIBLE]. Come out, and I'll try to give you a good answer. Thank you, HKN. HKN MEMBER: All right. So I've projected the course staff on the screen here. This isn't all the names. But for example, if you don't know the name of your TA, this would be a great place to look. I'll scroll down in a bit through all the names. So if you have any questions or if your TA is not listed in the form, please let me know. And we'll get that taken care of. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180828_Uninformed_Search.txt | [AUDIO OUT] PIETER ABBEEL: Hi. Hi. Welcome, everyone. This is the second lecture for 188. Let's start with a couple of announcements. The first one: Project Zero, the Python tutorial, was due yesterday. OK. If you didn't do it, it's not a disaster because it's zero points on it. But it's our way to check that you're in the class. So if you didn't do it, we might be worried that you're not in the class. So please do it. It's also a good way to get to know our submission system. There is also a homework zero. It's a math self-diagnostic. The motivation is that the first three to five weeks will be pretty light on math. And then things will change. And when things change, it will be too late for you to drop the class. And so we want you to be able to know ahead of time what you're getting yourself into and analyze your math skills now while you can still drop, rather than after it's too late. Homework one, on search, will go out this week. There will be two components-- electronic and written. We'll have an announcement about it later tonight. Typically, homeworks are due on Mondays. So kind of get used to that in some sense, but there's an exception next week. Because it's Labor Day on Monday, the homework is going to be due on Tuesday exceptionally. But keep in mind that generally homeworks are going to be due on Mondays. This is just an exception because of Labor Day. Project one is also going out this week and will be due sometime next week, Friday afternoon. It's longer than most projects. And it's the best way for you to test if, programming-wise, you are prepared for this class. If your programming goes OK for project one, you should be good for all the future projects. Project zero is not a good way to check if you are prepared for this class. It's just a way to check if you have Python installed and can submit into the system. Sections start this week. The first one is happening, I think, today at 4:00 PM. There are a bunch of them today, tomorrow, and then Thursday. You can go to any section you'd like. But you'll have priority in the one that you signed up for on Piazza. So just in case there are too many people in one room and you don't fit, the people who signed up for that time slot have priority, if you're one of the first 35 to sign up. Instructional accounts. There are instructions in a Piazza post, our welcome post, that tell you how to go find those accounts online. You don't need them, but some of you really want them. And if you really want them, the instructions are online. There are some pinned posts on Piazza. Those are the ones you maximally want to pay attention to. Those are our announcements that are currently relevant or relevant throughout the semester. So keep that in mind. And then the most frequently asked question is, can you add me to bCourses? But we don't use bCourses for 188. So none of you will be added to bCourses. Nobody is on it. Nobody will be on it. We do use other things, so make sure you know there is a website. There is Piazza and Gradescope. Any questions about logistics? Yes? STUDENT: [INAUDIBLE] PIETER ABBEEL: Yeah, so homework one will officially release later today. And there will be a post on Piazza that links to all the relevant things for you to work on. So just a little bit more patience for that. Any other logistical questions? Another frequently asked question is, how about AI research?
Wouldn't it be cool to do AI research? And I agree. I mean, that's half of my job is doing AI research. The other half is AI teaching. So obviously, I'm with you. AI research is good to do, a lot of fun. The way research tends to work is every professor effectively runs their own lab and has their own methodology on how and when to get new students involved. So if you're interested in AI research, check out the bair.berkeley.edu site. Check out the professors listed there. And then check their pages, see what they work on, see who might be good fits, and then individually contact them. Some of them don't read email, and then your email will not be read. But they might still have office hours, or there might possibly be other ways to catch them in person. Every professor has their own way of communicating or not communicating. It varies. But that would be your best starting point. And there is no unified entry into this. It's all on a per-professor basis. Question here. STUDENT: [INAUDIBLE] PIETER ABBEEL: Oh, the asterisks next to the names refer to professors who are kind of at their core working on AI. That's kind of like the main thing they work on. Whereas the ones without asterisks often work on very closely related topics, but they might not call themselves necessarily AI faculty. But they work on things that in practice are extremely close. In my particular case, the way I tend to recruit students is through email. So just email me your transcript, your resume, and I'll look at that and see from there. I've got to warn you, I get like more than 100 per semester. And I can only get three or four or five involved in a semester. But you should try. And the same with other faculty-- you should try if you're interested in research, and then see where you might be able to get involved. Any questions about that? OK. Let's get started on the technical topics then. Today's topic is search. What does that mean? We're going to discuss agents that plan ahead, rather than just react to a current situation. We'll formalize this into search problems. And this is going to be a recurring theme throughout the course: we will look at some high-level intuition, some notion of the type of problem we're interested in. We'll then show a formalization. For example, a search problem is a formalization of real-world settings into something mathematically workable. And then we'll have algorithms that can work with that mathematical interface to solve the problem. And the algorithms we'll see today are depth-first search, breadth-first search, and uniform cost search, and we'll expand on that in the next lecture. So, agents that plan. Before we dive into agents that plan, let's maybe contrast them with a more naive type of agent, the reflex agent, because that might be the simplest way to just write up some AI agent. So what are reflex agents? Reflex agents are agents that have a current percept, maybe some memory. And based on that, they make a decision, but without consideration of the consequences of their actions. So let's see. Who here is maybe like sometimes a reflex agent? Who thinks they're sometimes a reflex agent? OK. Why? STUDENT: [INAUDIBLE] PIETER ABBEEL: So the answer was because sometimes just instinctively you feel what to do, and you just do it. You don't reason through all the consequences. Sometimes good consequences result, sometimes bad, I guess. But a good example would be something where maybe an insect, a fly, is flying into your face. And you don't want to go, OK, well, if I keep my eyes open, what will happen?
If I close them, what will happen? You just want to close your eyes and be done with it. That would be a reflex, compared to planning, which is thinking through all the consequences. So reflexes make a lot of sense, especially when you need to react quickly. Can reflex agents be rational? Meaning, a rational agent is an agent that optimizes expected utility. I hear a yes. Why yes? Anybody? STUDENT: [INAUDIBLE] PIETER ABBEEL: So the answer is, by reacting quickly you might be doing the right thing. And if you're doing the right thing-- indeed, like your hand's in the fire, you pull it back right away instead of thinking through the consequences of, will this become burned? Will it become charcoal? What will the result be? Just pulling back is much more optimal than maybe reasoning through everything, which might be too late. So our definition of rational means optimal behavior. And how you reach those conclusions is decoupled from whether the agent is rational or not. And so that's a good example. Let's look at a Pac-Man example. So we'll run a demo here of Pac-Man in a very simple world. And the goal is to eat all the dots as efficiently as possible. And we're going to run a reflex agent, which just moves toward the nearest dot. So it looks at the nearest dot, tries to move in that direction. Here's what happens. There's nothing better available for you to do. So doing something as simple as moving in the direction of the closest dot does succeed. Now, let's look at another case. Same piece of code. Look at the nearest dot and try to move in that direction. So initially, it's going to go north, a lot of west, some more north. And then it's east, east, east, east, east, east, east, east, east, and nothing happens. Because by moving east, it bumps into the wall, encounters the same situation, has the same reflex, repeats over and over and over. And actually, it loses points here. So in the Pac-Man world, when you waste time, with every step that passes you lose a point, to encourage you to be more efficient at completing the game. So this is a reflex agent that's clearly not optimal. We'd really want it to go around that wall and get the dot, but it's not going to do that. So we've seen a reflex agent that is rational, one that's not rational. Oh, quick note here. In the slides that you'll find online, the PowerPoint version has the videos of the demos embedded. So if at home, you're kind of reworking through the slides, and you're like, oh, what happened again in lecture when this demo was run, you can click Play. And then you can watch the same demo at home as a video rather than a live demo. OK. So we've seen reflex agents. How about planning agents? Planning agents ask themselves the question, what if? So they hypothesize: if they were to take a sequence of actions, what would the result be? A requirement for this is that the agent has a model of how the world works. If you don't have a model of how the world works, you cannot reason through the possible consequences of your actions. You also have to have some kind of goal. Because the planning agent would have a sequence of actions it's hypothesizing about, and then evaluating them based on whether they achieve the goal or not. The goal could sometimes be a single thing that needs to be achieved, or it could be a test, a condition that needs to be met, that can be met in many ways. And you just need to meet the condition in some way.
There are questions you will be able to ask and hopefully answer about planning algorithms when we see them. An algorithm could be optimal or suboptimal. Optimal means that you achieve goals at minimum cost. Complete means that when there exists a solution, you find it. And then we'll look at planning versus re-planning in the next demo here. So we'll start with a showcase of a mastermind agent that is planning through everything. So, same maze again as we just saw. Now, it's going to be a planning agent. When I hit Start, we're going to see some planning happening down here. Let's see. It's actually pre-calculating to be ready for planning. It's done that. It's done 1,000 expansions, 2,000 expansions. This is reasoning through many possible consequences of actions. It's then found what it knows is the optimal sequence of actions to clear this board. And then it executes that sequence of actions. Now, the reason it took a while for this version to get going is because there are a lot of sequences of actions to consider. And before you can ensure that you've found the optimal one, you have to consider quite a few in this scenario. Sometimes it's not practical to wait this long. And you might want to do something slightly different, which you'll do in parts of project one also, which is re-planning. So what we're going to watch here is an agent that doesn't plan an entire sequence ahead of time, but it just plans to the nearest dot. So it checks, where is the nearest dot? What's my path to get there? Executes that plan. After it's executed that plan, it formulates a new planning problem where it will plan a path to the next nearest dot, and repeats over and over and over. So this one can start acting almost right away and is continuously re-planning throughout execution to find the shortest path to the next dot. OK. So we've seen a few examples here. Let's now start formalizing what it means to be a search problem. What are the key components? A state space. So whenever you are formulating a problem in the real world that you try to solve, a state space will be something you have to formulate for it. For example, for Pac-Man, the state space is the set of possible configurations of where Pac-Man is and where the dots are. Then a successor function. The successor function says, for any given state, what the actions are that are available, and what the consequence states will be. For example, from this particular state, the agent can take either the north action or the east action. And then the results are shown on the right. So the successor function encodes how the world works. Then we need a start state and a goal test. The start state could be wherever you currently are. And the goal test would be the condition that you want your agent to meet. The solution to your problem would be a sequence of actions, which we'll call a plan, which transforms the start state into a state that satisfies the goal condition. Now, the beauty of having this kind of interface up here is that once we agree to an interface like this, any real-world problem we can cast this way. If we then have an algorithm that can work with this interface, that algorithm can solve that real-world problem. And so the unifying theme in this lecture and the next lecture for casting real-world problems as things we'll solve with our AI algorithms is by casting them as search problems. OK. Search problems are always just models.
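In code, that interface can be as small as three methods. Here's a minimal sketch in Python, with hypothetical names loosely in the spirit of the course projects; any search algorithm written against it can solve any problem cast this way.

```python
class SearchProblem:
    """The abstract interface a search algorithm needs -- nothing more."""

    def get_start_state(self):
        """The state the agent starts in."""
        raise NotImplementedError

    def is_goal_state(self, state):
        """Goal test: does this state satisfy the goal condition?"""
        raise NotImplementedError

    def get_successors(self, state):
        """Successor function: a list of (next_state, action, step_cost)."""
        raise NotImplementedError
```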
And so a lot of the art will be in thinking through what it means to be a good model of the problem that you're trying to solve in the real world. So let's look at an example. Let's say you want to find a path. And this is a map-- well, a simplified map-- of Romania. And maybe you want to start in [INAUDIBLE], which would be your start state, and end up in Bucharest, where you have the airport. To model this as a search problem, we need to ask the question, well, what are the states? What's the state space? Any thoughts? Yeah. STUDENT: [INAUDIBLE] PIETER ABBEEL: So the suggestion is, all locations. And that's indeed also the choice we made here. Since we're interested in finding paths, that seems reasonable. But you might go back and say, hey, the way I'm going to model this is actually different. I don't just care about the cities. I care about more details of the path along the way. And then you break it down more, and so forth. But at some point, you have to make a decision. And this is one reasonable decision. How about the successor function? Any thoughts? For any city, whichever cities are neighboring it are the possible successors. So it is defined by the graph here. And the cost associated with the transition would be the distance on that edge. Again, you could make other decisions in practice. You might say, I don't just care about how many kilometers it is between two cities. I care about how much traffic there is. Or I care about the number of potholes in the road. And you can adjust your cost and increase it if there are more potholes, and so forth. But one choice would be distance. Yeah. STUDENT: [INAUDIBLE] PIETER ABBEEL: So the question is, if you choose a different state space, might your successor function be different? And the answer is yes. You need to choose them in a compatible way. And we'll see more examples soon where you'll see that. Indeed, once you start picking a different state space for the same type of problem, you have different successor functions. Start state. Well, if we need to go from [INAUDIBLE] to Bucharest, that's [INAUDIBLE]. Goal test: we need to end up in Bucharest. So "is our state equal to Bucharest?" is our goal test. What's the solution? Well, the solution would be some kind of path, maybe this path over here, or maybe you like this path. We'd have to calculate maybe which one is shorter to find the optimal path. There are even longer ones. But those are possible solutions to the problem. OK. Let's do a few more of these. Here's a Pac-Man environment. And keep in mind the world state encodes everything about the environment. So the world state would have everything about this board situation. But now, the problem we're trying to solve would determine what we want to put in our state space for our search problem. So let's say we want to solve pathing. What would we put into our state space? Any thoughts? Location. So location makes sense because if we have location we can track where we are. And we need to go from some start to some destination. That makes sense. Successor function. Well, it'd be something about north, east, south, west, take [INAUDIBLE] certain direction. And it would have something about where the walls are. Because if you run into a wall, you don't move. You stay put. So that would be encoded into the successor function. And the actions are part of your successor function. And then the goal test would be, am I at the end state I want to be at? Now, let's change it. What if our goal was to eat all the dots? What would our state space be now?
STUDENT: [INAUDIBLE] PIETER ABBEEL: So the suggestion was food locations and the location of Pac-Man. Then we need to think about what it means to encode food locations. For Pac-Man, we have xy coordinates; for food, different options. One possibility could be, because the map is fixed, we could say for every possible location, there is a 0/1 Boolean flag saying whether there is still food there or not. So we'd have a long list of 0s and 1s, depending on whether there was food or not, together with the coordinates of Pac-Man. How about actions? Actions stay the same, right? Same game-- north, south, east, west. How about the successor function? STUDENT: [INAUDIBLE] PIETER ABBEEL: So we now need to update both the location and the binary state of the food. That's where we now have a different successor function than before. Before, we were able to ignore that. Even though it might change in the real world, we just ignored it because for pathing it didn't matter. But here, we have to keep track of it. And then the goal test would be whether the dots have all taken on the value false. And we don't need to worry about where Pac-Man is because that's not part of what we need to achieve. It's just about eating all the dots. OK. So once we understand state spaces, the first question we can ask is, how big is our state space? So here is another maze. There's Pac-Man. There are some ghosts. There are 120 possible agent positions. There's a food count, 30 positions. Then 12 possible ghost positions for each ghost. And there's an agent. And the agent could be facing north, east, south, west. OK. So what's the size of the state space? Well, somehow we need to count this. The way you do this, you look at all the variables involved and see how many possibilities there are for each. So let's first look at the world state. So world state wise, how many do we have? Well, agent positions we have-- not sure about this vertical bar-- we have agent positions, 120 possibilities. Then to know how many total states there are, we need to check everything that varies and effectively multiply the counts together. So food count: 30 locations that could be food or no food. So that's 2 to the 30 possibilities, multiplied by 120. Each ghost could be in 12 locations. So that times 12 times 12 for both ghosts. And then the agent could be facing north, east, south, west. So the total number of world states is this product. And the thing that should make you worried here is this one over here. We're going to have something very large in the exponent. Usually the number becomes very large. And so you have something very big to deal with. OK. That's the world state. How about the search problem state? That depends on the type of problem we're trying to solve. Imagine we're trying to solve eat-all-dots. OK. Well, what do we need for eating all the dots? We went through that on the previous slide. We need the agent position. We need to know where the food is. Since the ghosts are blocked off, we don't need to know about where the ghosts are. And we also don't need to know about which way the agent is facing. So I end up with 120 times 2 to the 30. What if it was just pathing? Then all we need to know is the agent position. It would be 120. And so you can start seeing here why mastermind Pac-Man, who was trying to figure out the shortest path to eat all the dots, had to think for a long time, because actually eating all the dots means that you have a state space where you need to keep track of which dots are still present or not.
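A quick back-of-the-envelope check of those counts, using the numbers from the slide:

```python
agent_positions = 120
food_spots = 30       # each spot: food present or absent -> 2**30 combinations
ghost_positions = 12  # per ghost, and there are two ghosts
headings = 4          # north, east, south, west

world_states = agent_positions * 2**food_spots * ghost_positions**2 * headings
eat_all_dots = agent_positions * 2**food_spots  # ignore ghosts and heading
pathing = agent_positions                       # position is all you need

print(f"{world_states:.1e}")  # about 7.4e13
print(f"{eat_all_dots:.1e}")  # about 1.3e11
print(pathing)                # 120
```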
And this is going to be very large, especially for a bigger maze. Whereas the pathing problem can be solved a lot more quickly. Any questions about the counts and how this works? Yes. STUDENT: Do you have to account for restrictions or impossible [INAUDIBLE]? PIETER ABBEEL: OK. That's a really good question. So should we account for restrictions or impossible world states? For example, should we account maybe for whether ghosts can tunnel through the wall or not? Or do you have something else in mind? STUDENT: Actually, if Pac-Man [INAUDIBLE]. PIETER ABBEEL: So I think what you're asking is, what's the exact definition of how this world works? And the way it works is that once Pac-Man arrives on a position where there's a food dot, it instantly gets eaten. So there is no such notion as being on that food dot, but not yet having eaten it. The eating is automatic. STUDENT: [INAUDIBLE] PIETER ABBEEL: Oh, I see. You're saying in the count. So that's a very good point. When we do this count here, it's a slight overestimate of what actually could happen. The state where Pac-Man is in a particular location and there is food there is a state that doesn't happen. And in principle, we could take it out. And so we can actually start subtracting, in this case, a relatively small number of states from that count because they cannot occur. And so what we're doing here, you're absolutely right, is essentially getting a ballpark estimate of how big this problem space is when we estimate it this way. And we try to get it roughly right, but not up to the exact number. Yes? STUDENT: What's 2 to the 30 [INAUDIBLE]? PIETER ABBEEL: So 2 to the 30 corresponds to the fact that for each of the 30 food locations, the dot could be present or could be absent. And so every possible combination of presence/absence is 2 to the 30. OK. Now, let's do a little quiz. Well, let's first define the problem. The problem is defined as: we want Pac-Man to eat all the dots while keeping the ghosts scared at all times. So the way Pac-Man works is that if you eat one of those bigger dots, a power pellet, ghosts become scared for a while. And so what we want here is to find a sequence of actions, and a corresponding search problem formulation that'll allow us to find it, such that we keep the ghosts scared at all times and eat all the dots. OK. Why don't you talk to each other for a couple of minutes. And then we'll see what you come up with as the state space, successor function, and so forth. [CHATTER] OK. So let's see. Let's do this by just collecting, in some sense, what goes into the state space. OK. Who wants to put something in the state space? Raise your hand. Any thoughts? STUDENT: [INAUDIBLE] PIETER ABBEEL: OK. So let's say I'll write it here. Power pellets. STUDENT: And you can also add just the amount of time left [INAUDIBLE]. PIETER ABBEEL: Remaining locations. A timer for the ghosts. OK. Anybody else want to add something else? Over there. STUDENT: [INAUDIBLE] PIETER ABBEEL: So the regular food pellets, essentially dot Booleans, as we had before. Anything else? Pac-Man location. So Pac-Man xy. Anything else? Over there. STUDENT: [INAUDIBLE] PIETER ABBEEL: So that's a good question. It depends on how this world works what we might have to put into the state space. And I didn't really specify much about that. Anything else? What's a good way to check if this is enough? And then we can also later check if it's maybe too much. A good way to check is: what do we need from our state space?
We need to be able to do a goal check. Like, did we satisfy the goal? OK. The goal is to eat all food pellets. Well, we can definitely check for that. Then, what else is kind of part of our goal in some sense is to keep the ghosts scared at all times. The way we think of that is that the successor function will essentially say there's nothing left in the game here if the ghosts are not scared. That game is over. So it's like a game-over successor state. OK? And so to be able to have that game-over successor state, we need the timer on the ghosts. Otherwise, we don't know when that's going to happen. To be able to know when we reset that timer, we need to know the remaining power pellet locations. Of course, we need to know where Pac-Man is relative to them. Otherwise, we can't encode whether or not these things get eaten. So it seems like we need all of these. The food pellets we need for the goal test. And the other three we need for the successor function, encoding what's going to happen next when we take an action. And then ghost locations-- well, it's a good one. It kind of depends on how we specify the problem. In Pac-Man, if you run into a scared ghost, what often happens is that it respawns alive, unscared. In which case, you want that never to happen, because you're supposed to be always keeping the ghosts scared. And so then you need to keep track of the ghost locations. If you ignore them-- if you say, OK, we're just going to ignore that ghosts respawn unscared, we'll just use the timer as our reference, if we say that's how the world works-- then we don't need to keep track of it. And that will often be a question you have to ask yourself about the problem you're trying to solve. Exactly how does this work? And what's the right model? On the slides, we kind of assume that the ghosts don't respawn alive. But if you thought they were respawning alive, then the right model would be to also keep track of the ghost locations, because you'll need to simulate that process in your successor function. Now, one thing that might be worth highlighting here. When we think about-- let's say food pellets-- why don't we just keep track of the number of food pellets left rather than going into all the details of the locations? STUDENT: If you only kept track of the number, you wouldn't really be able to do planning. PIETER ABBEEL: Exactly. So it's important that we don't just pay attention to what the goal check asks us for, but also we need to do the planning, which is a successor function that reasons about the world. And so whenever you do any of those, think about: can I check for the goal with what's in this state? And then, can I encode my successor function using this representation? And then, check if there's anything that you didn't need. Anything here that we don't need? Not really. We needed everything that was listed, so we can't discard anything from here. OK. So at this point, what we looked at is how to formulate real-world problems into search problems. And what results from those formulations is essentially a state space graph, which is a mathematical representation of a search problem where the nodes are abstracted world configurations. It might look something like this. And the arcs represent the successor function. And the goal test would correspond to some of the nodes in this graph meeting the goal condition, and other ones wouldn't meet the goal condition. Now in the state space graph, every state will occur only once. Keep that in mind.
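As a minimal sketch, with made-up state names, a state space graph can be written as a successor map plus a goal test; note that each state appears exactly once as a key.

```python
# Hypothetical state space graph as a successor map: nodes are abstracted
# world configurations, arcs are the successor function, and each state
# appears exactly once as a key.
state_space_graph = {
    'S': ['A', 'B'],
    'A': ['G'],
    'B': ['G'],
    'G': [],  # no successors
}

def is_goal(state):
    # Some nodes in the graph meet the goal condition; others don't.
    return state == 'G'
```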
We'll soon see things where that's not the case. And we can rarely build this thing in practice; remember that we had something like 2 to the 30th possible food pellet configurations for even this small world. Well, it's very difficult to draw 2 to the 30th different states on a slide or even a bigger piece of paper. And so this is really just an abstract idea for us to think about. We know it mathematically exists. We're never actually going to lay this thing out, except on a slide to explain the principles behind some algorithms. Here's another one, and here's the running example we'll use. This is a really, really, really small state space graph for a search problem, and we'll use it as our running example. But keep in mind when we use this that this is just for illustrative purposes. In practice, you would never first draw a state space graph and then solve the problem. You would only ever implicitly deal with this state space graph. What we're actually going to build up when an algorithm runs is called a search tree. A search tree is something that starts with wherever the start state is and then calls the successor function to see what's possible from there. And it might call the successor function again, and so forth. So the entire search tree has all possible plans in it, all possible sequences of actions, and their possible consequences. In a search tree, when we think about a node in the search tree, we actually think of it as a sequence of states that have happened. So this node here in the search tree corresponds to starting in the start state, taking the action east, ending up in the state shown there. For most problems we'll look at, we can't build the search tree either. In fact, the search tree will typically be much larger even than the state space graph, because there might be multiple ways to get to the same state. And in that case, the search tree will have multiple occurrences of that same state. And so, often, it's much, much bigger than the state space graph, but it is the underlying abstraction we're going to work with. So here, for our very small example, on the left is the state space graph. On the right is a search tree. Assuming the start state is S, the search tree is rooted in the start state, and then from there, looks at all possible consequences of actions. Again, we're never going to construct any one of those when we actually write code and solve real problems, but on the slides, for illustrative purposes, we will have them. The key will be that our algorithms construct these on demand as needed, rather than ahead of time. OK, let's do a quick quiz. Here is a four-state state space graph. What would the search tree look like for this state space graph, and how big would it be? I'll give you 30 seconds to talk to your neighbor, and we'll see what you come up with. [SIDE CONVERSATIONS] What do we think? Any thoughts on how big the search tree is for this state space graph? Over there. STUDENT: Infinite. PIETER ABBEEL: Answer-- infinite. Anybody want to throw out something other than infinite? We're all going with infinite? Infinite is a good choice. Why is this infinite? Well, think about how you build a search tree. You start at S. You can go to A or B. From A, you could go to B or G. From B, you could go to A or G. And from A, again to B or G, and so on. Even just this first path in the search tree is already infinitely long, not to mention that there are even more of them here.
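A small sketch of that quiz graph (S to A or B, A to B or G, B to A or G) makes the point: extending all paths level by level, the set of paths never empties, because A and B keep feeding each other.

```python
# The four-state quiz graph. Paths that reach G die off, but the A <-> B
# cycle means there are always paths left to extend, so the search tree
# never ends.
graph = {'S': ['A', 'B'], 'A': ['B', 'G'], 'B': ['A', 'G'], 'G': []}

paths = [['S']]
for depth in range(1, 6):
    paths = [p + [nxt] for p in paths for nxt in graph[p[-1]]]
    print(depth, len(paths))  # the count never drops to zero
```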
So what we see here is that we end up with an infinite search tree, even though the state space graph is actually quite small. So let's look at an algorithm on how to build up this search tree in a very incremental way, just enough to find a solution and then stop and return the solution. Let's do an example. We'll do pathing for Romania. What does a search tree look like? Well, as you run search, you start with the start state. You then expand out to potential next states. These are potential plans. These are one-action plans. Then there are two-action plans living here, and so forth. And the hope is that somehow we want to explore as little as possible of this giant search tree, yet find the solution. So it would go tier by tier, and we hope that we find a solution relatively quickly before we've traversed everything in the search tree. Here's an algorithm to do this. It's called tree search. It will be the foundation of what we'll be doing this lecture and the next. How does it work? Initialize your search tree with just the initial state of the problem. That's the only thing you put in it, and then you loop. You check. If there are no candidates for expansion-- what's a candidate for expansion? You look at all the leaves of your search tree. Those are all candidates for expansion. If there are a non-zero number of leaves, that means you have candidates for expansion. If there are none of them, that means you only have dead ends left in your search tree, and you're done, and you didn't find a solution. But if there are candidates, leaf nodes, then we pick one according to some strategy. We still have to determine that strategy, but we'll have many options, and somehow we'll pick one. Then we'll check. If that node is a plan that ends up in one of the goal states, then you return the corresponding solution, and you're done. If not, then you call the successor function on the last state on that node, and expand from there, and go back around. What are the key ideas? There is the fringe, which is the set of leaf nodes that are waiting to be expanded. There is the process of expansion, where you pick one out of the fringe and expand it, but before you do, of course, you check if it might have already achieved the goal. And then there is the strategy. Which one of the elements in the fringe are you going to pick first to expand? And there are a lot of different strategies we'll look at. Let's run through an example. So how do we do this? We start with the start state, and I will have two things going on. On the left, I will expand the search tree, and on the right, this is what will be happening in code for your projects. So how does this start? There is just S, and you'll just have S on your fringe. So this is the fringe, S, stored in your code, and this is the search tree. Well, what can we do? We only have one option, so we pick S. What happens after S? We can end up in D, E, or P. The way we'll denote that here is that S got expanded, got taken out of the fringe, and instead we have S to D, S to E, S to P. Now, we again look at our fringe. Pick one. Which one do you want to pick? I'm hearing E. Let's go with E. We pick E from our fringe-- first step. So pick this one here, which corresponds to this one here. We say, OK, does this achieve the goal condition? No. Then we expand. What does it expand to? From E we can go to H or R. So we expand to H or R. In your code, this one would have disappeared, and you'd have S to E to H, and S to E to R. Now, our fringe has four members in it; the loop we've been running by hand is sketched in code below, and then we'll keep going.
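Here is the loop just described, sketched in code; the names are illustrative rather than the actual project API, and the strategy is left as a plug-in choice.

```python
# Sketch of tree search. A node is a path of states; `strategy` picks which
# fringe node to expand next -- the choice that will distinguish DFS, BFS,
# and uniform cost search.
def tree_search(start, successors, is_goal, strategy):
    fringe = [[start]]                   # initialize with just the start state
    while fringe:                        # no candidates left means failure
        node = strategy(fringe)          # pick a leaf node according to strategy
        fringe.remove(node)
        if is_goal(node[-1]):            # goal check happens at expansion time,
            return node                  # not when the node enters the fringe
        for nxt in successors(node[-1]):
            fringe.append(node + [nxt])  # expand via the successor function
    return None
```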
Which one do you want to pick? Any preference? People are choosing H. Interesting choice, because the goal is here. But that's a strategy. That's a thing, and we hope that in the future our computer programs will have good strategies. So H. From H we can end up in P or Q. And here, the way it would look is that this would disappear from the fringe, and instead we'd have S to E to H to P. S to E to H to Q. What do we pick next? Q. We're really going down a rabbit hole here. We're picking Q next. It is a strategy. It's this year's 188 strategy. So we pick Q. Is it at the goal? No. So then we expand. Does it have successors? Actually, it doesn't have any successors. So this kind of just dies off here. Nothing can happen from here. We pick this off, and the fringe has one less in it. We need to pick again. What do we pick next? STUDENT: R. STUDENT: R. PIETER ABBEEL: Let's pick R, because search trees are very big. If we build the entire search tree, even for this problem, it's going to take a long time. So let's try to be effective. R seems a pretty good choice. From R we can end up in F. Over here, it means that this guy disappears, and instead we have S to E to R to F. What do we pick next? I hear many. Let's do F, among the many choices. F allows us to get to G and to C. This one goes off. We have S to E to R to F to G. S to E to R to F to C. At this point in our algorithm, we do not declare success. You might say, why not? We found a path. It'll matter in the future that we don't, and it's one of the most frequently occurring bugs in your project one that you declare success at this point. It's too soon. That's not how the algorithm works. We wait. We go to our fringe again and look for candidates for expansion. What might we pick? Well, let's pick the one that ends in G. We pick it for expansion. We check, does it achieve the goal? The answer is, yes. Now, we declare success. It's not going to be obvious if you haven't seen search before why this sequencing is important, but it will start mattering, and you will see soon why. So at this point we expanded this one, which is this one here. We declare success, and we found S to E to R to F to G as our path. Great, we did it. Here is, on the slides, a typeset version of the same thing. A slightly faster version than what we chose. It also highlights the actual search tree. So the actual search tree is a lot bigger than the part that needs to be explored to find a solution. Let's take a two-minute break here. And after the break, let's explore different strategies to choose nodes for expansion. [SIDE CONVERSATIONS] Let's restart. Any questions about the first half? Let's look at our first choice of strategy, depth-first search. Who here has seen depth-first? Many of you. So it should be a good review that will ground everything else we're going to see in something you already know. If you haven't seen it, we're not going to leave anything out. This will be a full coverage of depth-first search. What would depth-first search mean for this pathing problem here? Well, what makes depth-first depth-first? It's the choice of which node to expand first from the fringe. Initially there's only one node, S. So no choice has to be made. Every search algorithm will do the same thing. Every strategy expands S. Now we have three choices. Which one to pick? Depth-first says, pick the deepest one first. They're all equally deep. So we need to break some ties, maybe break ties alphabetically and pick D first. Now on the fringe we have five candidates; the same deepest-first discipline is sketched in code below.
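That deepest-first discipline, with the alphabetical tie-breaking used in this walkthrough, could be phrased as a strategy for the tree search sketch above; the function name is mine.

```python
# Depth-first strategy sketch: take the deepest node on the fringe, breaking
# ties by the alphabetically first last-state, as in the walkthrough.
def dfs_strategy(fringe):
    deepest = max(len(node) for node in fringe)
    candidates = [node for node in fringe if len(node) == deepest]
    return min(candidates, key=lambda node: node[-1])
```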
The depth-first search says, pick the deepest one first. There are three deepest ones. We'll break ties, alphabetically again, and expand out one. Now there are, again, five on the fringe. Depth-first search, we'll pick the deepest one first. There's only one deepest one. The only choice is to pick this node over here, which encodes going from S to D to B to A. As always in tree search, you check, did it achieve the goal? It did not. Then call the successor function. It has no successors, and this has been explored, and nothing is left there. Four left on the fringe. Pick the deepest one. Again, there are ties. We pick one of them, we break ties alphabetically here, and this process kind of repeats. Streaking left to right, if we do alphabetical tie-breaking, through this search tree, until it decides to expand the goal state, at which point it's done. One possible strategy-- depth-first. Now let's think about the properties of this algorithm, and let's first take a step back. What are properties we might want to quantify about any algorithm, not just depth-first search? Well, one is completeness. Is an algorithm guaranteed to find the solution, if one exists? Another one is optimality. Is it guaranteed to find an optimal path, the least cost path, if one exists? Time complexity-- how long does the compute take to find a solution? And space complexity-- how much memory do you need in the process of that compute to get the computation done? So to do this, we'll use a cartoon of a search tree. Here's our search tree cartoon. We have a start state at the top, and from there we might expand through the successor function. And we're going to have a few variables here to quantify things. So B is the branching factor. It essentially says how many successors there are from any given node. And for simplicity in this cartoon, we'll assume every node has as many successors as any other node, which is B. So then after one choice of action, there are B possible next states, after which we have B options in each one of them, which gives us B squared possible states after two actions, and so forth. Actually, I should be more precise. These are not states that we're counting, but nodes in the search tree. We'll assume there's some maximum depth. You cannot go deeper than a certain depth. We'll call that M. So somehow, once you have taken M actions, there is nothing left. So that means that this entire search tree in the last layer will have B to the M nodes. Again, M in the exponent is a thing that should worry you. If M is very large, this could be a very large search tree, and you wouldn't want to explore all of it. There could be solutions at various depths. For example, there might be a solution all the way at the end, but also a solution somewhere that takes fewer actions to get to. And it could be multiple ones. So in this case, there are two nodes that satisfy the goal condition. The number of nodes in the entire tree tends to be dominated by the number of nodes in the last layer. It's a little more than that, but because of the exponential growth, the last layer dominates what you have. And so we'll say order B to the M. You can make it B to the M plus 1 if you want, but that's kind of the ballpark we're working with. So now let's look at properties of depth-first search. What nodes does DFS expand? Well, here's our cartoon tree. It streaks through left to right until it finds a solution. So based on that, we can start thinking about, what work does it do?
What's the time complexity of depth-first search if it has to traverse this thing? Worst case, it could be the entire tree. If that solution lives all the way at the end here, it's the entire tree, so that would be a lot of work. But let's say M is finite. Then that means that worst case you need to do order B to the M amount of work. How much space does it take while doing this search? Now we need to think about what the algorithm does. The algorithm maintains a fringe, a fringe of possible nodes for expansion. Let's say we're going depth-first and we currently have gone all the way here. What's on our fringe at that moment? Well, from the node above here there were a bunch of options, and the ones on the right we haven't done yet. So those are on the fringe. Actually, they will be living over here. So those are on the fringe. Then the node before that will also have had a bunch of options, and these are on the fringe. This line doesn't belong. Then same for the node before, and so forth. So how many is that? If we count our way to the top, it's M deep. So there are M such successor split points, and each of them can have B successors. So there's B times M on our fringe. So that's not too bad. The space complexity of depth-first search is actually very nice. Is it complete? Will it find a solution if one exists? Can a tree be infinite? So one answer is yes. The other one is, well, might it depend on whether the tree is infinite or not? It's a good question. So let's assume the tree cannot be infinite. What can happen? Well, depth-first search will streak through the entire tree, and then at some point it'll find a solution if it exists and return it. How could this tree be infinite? Well, we saw a state space graph with only four nodes that had an infinite tree. So definitely infinite trees exist even for small state spaces. So it's complete if the tree is not infinite, if we have some finiteness assumptions. Is it optimal? No. It just goes left to right, and it might find whatever happens to be leftmost, which could be pretty bad. How about breadth-first search? Breadth-first search is a different strategy, where instead of taking the deepest one, we'd take the shallowest one first. So it's like we're stripping off layer by layer what's in the search tree. So what would this look like? On the same problem, we can start with S. We have no choice. Only one to pick from the fringe, expand. Then shallowest first. That's breadth-first. Well, they're all equally shallow. Well, pick one, break ties alphabetically-- D. How about now? Now these two are more shallow than the other ones, so they're going to be called upon first. Arbitrary tie-breaking, let's say alphabetical, E comes first. And so we go through this search tree layer by layer by layer until at some point we reach a level where we find the goal, and then at some point we can declare success. So a very different way of traversing this search tree. What nodes does BFS expand if we look at our cartoon here? Well, essentially it would expand all the nodes until you explore this guy. So everything up here would be expanded, and then actually it would have expanded into a little bit here. Because we only declare success once we're about to expand the goal, that's the part of the tree it would have covered. So what's the time complexity then? Well, the time complexity of visiting all of those depends on the depth of the solution. If the solution is pretty shallow, it'll find it relatively quickly.
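For symmetry with the depth-first sketch above, the shallowest-first discipline looks like this; in practice you would use a FIFO queue rather than scanning the fringe, and the name is again mine.

```python
# Breadth-first strategy sketch: take the shallowest node on the fringe,
# again breaking ties alphabetically as in the walkthrough.
def bfs_strategy(fringe):
    shallowest = min(len(node) for node in fringe)
    candidates = [node for node in fringe if len(node) == shallowest]
    return min(candidates, key=lambda node: node[-1])
```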
Back to the complexity: let's call that depth of the solution S. Then time complexity would be roughly B to the S. How about space complexity? What do we store on the fringe in this process? Well, as we go deeper and deeper, this search tree grows. It's exponential expansion. So at the very end is when we're going to have the most in our fringe. What we'll have on the fringe is these nodes here that have not been expanded yet, and then these also that are waiting to be expanded next. How many of those are there? Well, we're at depth S, so it's going to be roughly order B to the S nodes that are sitting on the fringe when you expand that goal node and declare success. Is it complete? Does it always find a solution if one exists? Who thinks yes? Show of hands. Most think yes. Yeah, because it just works through the search tree, and once a solution is there, at some point it will find it. Is it optimal? Who thinks yes? Who thinks no? Most people think no. It could be somewhat debatable. In general, it's not optimal. Maybe it depends how you define optimality. If all action costs are the same-- every action costs one-- it is optimal, because it finds the sequence of actions that's shortest to achieve the goal. But if your costs are different for different actions, it's not guaranteed in any way to find the cheapest sequence of actions. Let's do a little quiz on DFS versus BFS. I'll do some fun animations here. Let's see-- first one. What I want to showcase here is either depth-first search or breadth-first in action. And what we're going to show is the state space, where every grid square is a possible state, and green is the start, red is the goal. Whenever we call the successor function on a particular state for the first time, we'll highlight it. Let's run one of the two algorithms. What's this one? STUDENT: Breadth-first. PIETER ABBEEL: Breadth-first, because nearby states get expanded before faraway states get expanded. How about this one? It did find the solution. It's not the shallowest one in the search tree-- depth-first search. Now, what if we do this in the context of some obstacles? So black squares are ones you cannot get through. They're walls. The blue squares are squares you can visit. Let's again see what's what. I'll run one of the two. You call out which one it is. Who says breadth-first? Everyone. Great. It was breadth-first search. And the way you see it is that it expands from the start state out. It's not radially symmetric now, because the walls are blocking some paths. And it's based on the length of those paths, not based on straight-line distance. But this is breadth-first search. How about this one? Who thinks breadth-first? Nobody. Depth-first search? It finds a solution again, but maybe not the one you'd hoped for. Might there be some trade-offs here though? When might BFS outperform DFS and the other way around? Any thoughts on when which one might be preferable? Here. STUDENT: If the-- if we're thinking in terms of a tree, if the goal is shallow and to the right, it would be [INAUDIBLE]. PIETER ABBEEL: So the suggestion was if the goal is shallow and to the right in the tree, BFS will drastically outperform DFS, which will sweep the entire tree before finally getting there. Yeah, definitely a big advantage there for BFS. Any other thoughts about trade-offs? Here. STUDENT: In all of these examples, we've only had one goal. But if you had many satisfiable outcomes, and they're all deep anyway, then you'd want to do a depth-first search.
PIETER ABBEEL: So the suggestion is that maybe when there are many, many goals in the tree, and they're all very deep-- so you need to go deep anyway-- then maybe depth-first search will find them first, because breadth-first will be so busy before it finally is willing to look at anything at the bottom. And actually, we'll see exactly that scenario in lecture-- let's see-- four next week, Tuesday. So that's a nice case where DFS has the advantage. Any other thoughts about trade-offs? Over there. STUDENT: If you have memory limitations, and you can't actually use BFS [INAUDIBLE], you'd have to use DFS. PIETER ABBEEL: Yeah, so the suggestion is, if you have memory limitations, DFS is so much better in terms of memory than BFS. So maybe you just have to use it, even when maybe you want to find a shortest path. You just have no choice, because you'd run out of memory using breadth-first. Any other thoughts? Yes. STUDENT: You have [INAUDIBLE] path problem. PIETER ABBEEL: So BFS has the advantage of finding the shortest path. Now let's see if we can combine some of those all in one. So we like BFS because it finds the shortest path based on counting the number of actions. If a short solution exists, it doesn't spend time exploring the entire tree. It just needs to find the short solution. But DFS has better memory properties. So can we bring them both together in one algorithm and get the benefits of all of these? It turns out we can. It's something called iterative deepening, and the idea is to get the space advantage of DFS built into a breadth-first search. Or you can think of it the other way around; you're essentially just bringing them both together. What's the idea here? You always run depth-first search, because that's the memory-efficient one and you're not willing to forego that. But you cap the depth to which you're willing to search. So on your first run, when you hit depth one, you stop. Your successor function is modified to say there's nothing beyond depth one. This is it. You have to stop. If you don't find a solution that way, then you make the cap two. If you find a solution, that's great. You found the solution with only two steps. But if you don't find a solution, you make the cap three, and you keep expanding your cap. You always run depth-first search, so you never run into memory issues. Yet, you are also not going down rabbit holes on the far left that might lead nowhere but be really, really big. Because remember, the bottom of the tree is exponentially large compared to the top, and if you spend a lot of time at the bottom, you're going to be spending a lot of time overall. So it's a very simple way to do search and get the best of both worlds. You might wonder, is this not wastefully redundant? Am I not redoing the work for depth one every time, depth two every time but the first, and depth three every time but the first two? Yes, there is some waste happening. But if you think about it, it's not bad. And the reason it's not that bad is because the last layer is so much bigger than the previous layers. In fact, the last layer tends to be as big as all previous layers combined. And so the redundant work you do is not that much compared to the last layer, which you need to expand anyway to find out new results. That's also where this cartoon is kind of misleading, by the way. This looks like something that just grows linearly effectively with depth and size.
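A minimal sketch of iterative deepening, assuming a successors function and goal test as in the tree search sketch earlier; the names and the cap limit are illustrative.

```python
# Depth-limited DFS: behave exactly like DFS, but pretend there are no
# successors once the depth cap is hit.
def depth_limited_dfs(path, successors, is_goal, cap):
    if is_goal(path[-1]):
        return path
    if len(path) - 1 >= cap:  # actions taken so far have reached the cap
        return None
    for nxt in successors(path[-1]):
        found = depth_limited_dfs(path + [nxt], successors, is_goal, cap)
        if found is not None:
            return found
    return None

# Iterative deepening: run depth-limited DFS with cap 1, then 2, then 3, ...
def iterative_deepening(start, successors, is_goal, max_cap=50):
    for cap in range(1, max_cap + 1):
        found = depth_limited_dfs([start], successors, is_goal, cap)
        if found is not None:
            return found  # found at the smallest cap, so fewest actions
    return None
```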
Coming back to that cartoon: its width grows linearly in the drawing, but in practice it's exponential, which we can't really draw in just two dimensions. Now let's switch gears to a different kind of problem formulation. What if we care about the cost? Not just cost one for each action, but different actions could have different costs. For example, this transition costs three. This one costs two, and so forth. How do you find the shortest path accounting for cost? Uniform cost search will do that for us. So how does this work? It's kind of inspired by breadth-first search, which expands shallowest first, but now we'll look at least cost first. So it starts out with expanding from S. We have three nodes in the fringe, and instead of picking shallowest or deepest, we pick based on lowest cost. This one has 1. This one, 9. This one, 3. We pick P, and we expand. Then we check again. Which is lowest, 3, 9, or 16? 3 is lowest; we expand D. And we repeat this process. Now what's lowest? 4 is lowest, and we expand that one. Now the lowest is 5. And keep in mind, this is actually cumulative cost. So when it says 5, that is a cost of 3 going from S to D, a cost of 2 going from D to E-- cumulatively, 5. And so our lowest cumulative cost on the fringe is 5. That's what we expand next. The next lowest is 6. We'll go with 6 next. And we'll keep going, popping things from the fringe based on lowest cost first. Right now, the lowest cost is 9 over here. We also see the goal sitting on the fringe. This is where it starts mattering what we do. We're always picking the lowest one. Even though the goal is there, you might say, why don't we call it quits? We see the goal. We call it done. We found the path. No. We still pick the one with lowest cost, 9, because you don't know. Maybe from that one with 9, there is one with cost only 0.5 that leads to the goal, and if we already declared success here with the goal, we would not have found that one. So we've still got to try this, because it's only 9 so far, and it could maybe achieve the goal at a cheaper cost than the one that we see there. Now the lowest cost on the fringe is 10, and it corresponds to the goal state, and we can declare success. And we're guaranteed it's optimal. Why? We have in the search tree explored every single path that you can traverse with a cost of 10 or less. There is nothing left you can do with a cost of 10 or less. Everything else that exists will cost you more than 10. And we found the goal with a cost of 10, so we know that's the optimal way to get to the goal. We, again, get a tiered expansion, but this time the tiering is not based just on layers, but on cost encountered so far. Let's look at the properties. What nodes does it expand? Well, it's based on cost. So to quantify this, we'll have to say something about cost. And so if, let's say, the optimal solution has a cost C star, and each individual step costs us at least epsilon, then we might have expanded plans that take C star over epsilon actions to get there. So if every action costs at least epsilon and the goal costs C star, then the longest path we could get below C star is C star over epsilon. So that's what we expand in the tree, anything that is less than that many actions. We will call this the "effective depth." And so the computational cost in principle could be branching factor B to the power of the effective depth. So it'll be exponential in the effective depth. How much space does the fringe take? Well, what's happening? When we're expanding, for example, when we're here, and we're about to expand, let's see.
So we're about to expand this one. Let's say this is our fringe right now here, and we're about to expand this one here. How much is on the fringe? Well, how many nodes we can have here depends on the depth in the search tree. Our effective depth is C star over epsilon. So the amount of space taken would be B to the effective depth, which is the same as our computational complexity. Is it complete? Will it always find a solution if a solution exists? Yes, because it will systematically keep working through the search tree until it finds that path. There are some subtleties there about costs being positive and not getting looped into some negative cost cycles. But assuming that's all satisfied, we're good. Is it optimal? Yes, because whenever we expand the goal state, we know there's nothing left that's cheaper than the path we just found to the goal state. Everything on the fringe has a higher cost, and so it can never-- assuming all costs are positive-- come below what we have right now as our path to the goal. We'll do a formal proof with A star search, which is an extension of this, in the next lecture. What are some issues? Well, it explores these increasing-cost sets of nodes. It's nice. It's complete and optimal. But the bad part is that it explores in every direction, which could be very expensive. It doesn't really think about where the goal might be, and that's where the next lecture will come in, to focus your search on things that are promising rather than things that have been cheap so far. So uniform cost search thinks of all these things as equally good. But if you are more informed, going towards the goal would be preferable. Let's look at the demo of UCS in action. So in this maze, black is still untraversable. There are shallow waters and there are deep waters. The dark blue is deep water, and it's more expensive to traverse, and the shallow water is the fainter blue, and it's cheaper to traverse. So if we run uniform cost search, what would we expect? We would hope that it would spend more time in the shallow waters, because it's cheaper to expand there, but as needed, also visit the deeper waters if that turns out to be the cheapest way to get to the goal. Let's see what happens. We see, indeed, that it kind of slows down on the deeper water while more quickly expanding in the shallow water. Let's play this again. And this is the behavior you hope for for uniform cost search. Remember, of course, if you run something like breadth-first search, it would completely ignore the shallow versus deep water. It's just based on the number of actions, and expansion is just as fast in the deep water versus shallow when it's doing the search. And of course, depth-first search completely ignores kind of everything, but still finds a solution. So we'll fix this soon to pay more attention to how far away you are from the goal and target your search. For this lecture, let's spend a little bit of time on unifying what we've seen so far and then show some of the limitations. So remember, when we defined a search algorithm, tree search, we had the same algorithm for DFS, BFS, and uniform cost search. The only thing that differed was the strategy of what to pick next from the fringe. And so one way, in your project one, for example, to implement everything with essentially one piece of code is to have a priority queue and just use different priorities for different types of searches. If it's depth-first, priority is based on how deep you are. Deeper is better.
If it's breadth-first, priority is based on how shallow you are. Shallow is better. And if it's uniform cost, priority is based on how much cost you've encountered so far on this path. So a single implementation can unify everything. Can search go wrong? Well, let's look at some examples drawn from the real world. Here is MapQuest, an old path planning thing for driving. Let's take a look at one. It was asked to find the path to a destination, and all goes pretty well until over here it decides to take this turn, which happens to be this turn over here for your car. It doesn't work. So what we see here is a mismatch between how the map was made, and what your search problem then could work on, compared to the real world. So it doesn't mean that A star failed, or uniform cost failed, or depth-first/breadth-first failed. It just means that your search problem formulation was not abstracting the world the right way. Here's another example of search in action. What's happening here? Is this the search algorithm having a bug? It's possible, but most likely that's not what it is. What's going on is probably that somewhere near the destination here, somehow, the map is incomplete, and it lacks a way to get to the destination from this side. The path is just not in the state space graph. And as a consequence, when you run search, well, it's just calling successor, successor, successor, and this happens to be what it finds, which involves taking your car on a boat trip, multiple boat trips, before you get there. So keep this in mind. Whenever you are doing search for real-world problems, building the right models is really critical to get good results. That's it for today. See you on Thursday. [SIDE CONVERSATION] |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181002_Probability.txt | PROFESSOR: OK. All right, let's get started. How is everybody? STUDENT: Great. PROFESSOR: I'm glad. Today, we're going to talk about probability. Probability is going to be the basis for how we model and compute under uncertainty in artificial intelligence. So if we take a step back-- if we take a step back and think about what we've done so far in this class-- we've thought a lot-- mostly; there have been some exceptions: [INAUDIBLE] there were probabilities, and in reinforcement learning, there was some learning-- but mostly, what we've done in this class is we've thought about symbolic ways things can chain together and how to do reasoning using computation along those kinds of models. What we're going to start to talk about today is something we've mostly ignored to date, which is uncertainty. There are a lot of things that happen in the real world where either you can't or you don't want to have a fully detailed model that explains every last thing to the point where what's left is deterministic. Often, your model is incomplete or incompletable. And that shows up as uncertainty about what's going to happen. Things can be out of your control, or they can simply be things that you can't compute. So in this middle part of the course, we're going to talk about probabilistic reasoning, which is our main formal mechanism for talking about and reasoning about uncertainty. And then, in the last part of the course, we're going to talk about machine learning, which is-- you can think about that as how we learn the parameters that go into the probabilistic models, or models in general, that we'll be talking about now. So probabilistic reasoning is used for a lot of things. If you remember back to the brief history of AI, in the first decades of AI, we tried to write down lots of rules and chain them together. And that gave rise to interesting kinds of computations, particularly for things like games that actually do work according to those kinds of logical rules. But then that didn't scale because of uncertainty, and we needed methods to cope with that uncertainty. That's what we're going to talk about now with probabilistic reasoning. And a lot of the recent advances in, say, the past decade in artificial intelligence have to do not just with reasoning about uncertainty, but with learning about how the world works through data, through experience. And that will be the last part of the course. So today, in some sense, is going to be a preview of all the things we're going to be seeing and working with and all of the techniques and ideas for the next several lectures. So on one hand, we're going to see all kinds of stuff today. On the other hand, pieces of it are going to seem familiar, because you've all seen some kinds of probability or statistics in other courses. And we're going to lay all that out today. So in some ways, today is going to be the most sweeping lecture. And in other ways, it's going to be the most nitty-gritty kind of computational low-level lecture there is in this class. And so please keep in mind that the stuff we're going to see today, we're going to use over and over again. So the goal for today is to really know all the stuff we talk about today cold, so that when you see these things you just intuitively get what they are.
And so it's worth making sure that by next lecture, anything that's new today, you've really gotten a chance to wrestle with and internalize. You're going to see this stuff a lot for the next few weeks. We're going to talk about random variables, which are our key method of modeling uncertain outcomes. We're going to talk about joint and marginal distributions. We're going to talk about how to compute conditional distributions, which are the quantities we actually care about-- uncertain values given evidence. And we're going to talk about a lot of the mechanics that form the basis of the graphical model infrastructure that we'll use over the next few lectures, both for models that don't have time, and models that do, for which you will have hidden Markov models later. All right. So here we go. We'll begin with Ghostbusters. So I'm going to show you a demo. This is, like, CS 188 Ghostbusters. The probability is sound. The spectral theory is totally made up. OK. Let's see. We're going to play Ghostbusters as a class. And this is going to illustrate both what's involved in accumulating and managing evidence and uncertainty, and also how frustrating it is to not have something to help us compute posterior probabilities of variables we care about given evidence. So the way this game works is, on this board somewhere, there is a ghost. OK. You'll see later versions of this in your projects for which there can be more ghosts. But, for now, there's one ghost on this board, and it's somewhere. Our job is to figure out where it is. We get one shot. We click bust, and we're either right or wrong, and we get an appropriate utility. We can also take sensing actions. So we can drop a little probe, and it will tell us about the density of spectral emanations or something like that. And we get a color back. What does the color mean? The color is going to range from red, meaning we're right on top of a ghost, all the way down to green, meaning the ghost seems to be distant. But there's going to be noise. Before I do anything, I want to show you the noise model. The noise here is given by a bunch of probabilities. You can see, on the bottom, the probabilities of color given distance. So, for example, if the ghost is three away, the probability that you get a red is 0.5. So everything is uncertain. And sometimes you're right on top of the ghost and you get a green, and sometimes it's clear across the board and you get a red. And that's super frustrating. So let's play. All right. I will sense here. All right. The ghost is sort of nearby. I'll sense here. You can see my score is dropping, because it costs 1 point every time I sense. I'm going to sense here. OK. Maybe we're getting close. Let's dig around here. All right. All right. OK. Is it there? Is it there? Well, who knows, right? Maybe. All right. Apparently-- you know what? I bet uncertainty is off. No. There we go. All right. So we got a bunch of sensor readings here. And at some point-- OK, there's this trade-off, right? At what point should I stop paying for sensing moves and just bust? That's a value of information computation. It's a super cool concept, and we're not going to do it today. Today, we're just going to stop. OK. So I've decided to stop. I'm going to bust. What should I do? Well, where do you think I should bust? CS 188, a guided search here. Should I bust here? No. How about here? No. I'm just trolling now. How about here? Yeah, OK. Let's do it. Ready? We got him. OK. Good. All right. So that is Ghostbusters.
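A toy sketch of that noise model; the only probability taken from the lecture is P(red | distance 3) = 0.5, and the rest of the table is made up for illustration.

```python
import random

# Hypothetical P(color | distance) table; each row sums to 1.
P_color_given_distance = {
    0: {'red': 0.85, 'orange': 0.10, 'yellow': 0.04, 'green': 0.01},
    3: {'red': 0.50, 'orange': 0.30, 'yellow': 0.15, 'green': 0.05},
    8: {'red': 0.02, 'orange': 0.08, 'yellow': 0.30, 'green': 0.60},
}

def sense(true_distance):
    # Sample one noisy color reading given the true distance to the ghost.
    row = P_color_given_distance[true_distance]
    return random.choices(list(row), weights=list(row.values()))[0]
```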
Back to the game: we got 228 points, because we got him, but we sensed a whole bunch of times. All right. So let's look at that one more time. I just can't get enough. OK. All right, let's look at this one more time. Right now, if I asked you, hey, where's that ghost, what would you say? Well, first of all, you don't know. You can't know. You don't have enough evidence to know. So you would say something like, I don't know where that ghost is. It's somewhere. Each location might be equally likely if that's how the game is set up. OK. So your prior distribution here says uniform probability. But then once I start getting some sensor readings here-- at this point, where do you think the ghost is? You know, maybe it is more likely to be kind of left or kind of top. That sort of heat map in your mind, how likely each square is given the evidence you have, that's the posterior distribution. That's what we'd like to be able to compute on the basis of the evidence. We don't have the tools to do it yet. We'll get a lot of those tools today and even more of them in the next couple of classes, so that we can say precisely, exactly what probability is in each square. And then, if I get enough of them, maybe-- OK, I still have no idea. All right. So I'm going to bust. I'm going to go for it. Miss. Close. OK. There we go. You don't always win at Ghostbusters. All right. So that's the basic idea. That's going to be a running example through a lot of this-- is that there is some quantity you don't know. In this case, that quantity is the position of the ghost. And that will be represented by a random variable in your model. There are other quantities, like the results from the sensor probes, which you do know. In addition, you know how the things that you know are connected up probabilistically to the things you're curious about. And then you do some computation in order to compute the probabilities you want. That's what probabilistic inference is about, and we're going to lay the groundwork for that now. So here's a general situation. You're going to have a model. What is a model here? A model is going to be a collection of variables. These are going to be random variables, because sometimes you know their value, but sometimes you don't. When you know the value of a random variable, that's evidence, right? It's not uncertain anymore. So there are going to be the observed variables. And you're going to know certain things about the state of your world. Here, that's sensor readings. But if we were doing medical diagnosis, the random variables that you observed as evidence might be symptoms or lab tests or something like that. There are going to be the unobserved variables. And these are all the things you don't know the value of-- for example, in this case, where the ghost is. But there are other unobserved variables, like what reading you would get at a different location. In a medical diagnosis case, there might be unobserved variables like other tests you didn't perform but still could. So there's the evidence. There are the query variables that you actually really care about right now. And then there are going to be other variables that may also be unobserved. And you need to figure out how to work with them as you do your probabilistic reasoning. And then, finally, you've got this model. And what the model is here-- that's a word we use a lot in AI. And what the model means here is that it's a description of how the known variables relate to the unknown ones.
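As a preview of the computation being described, ahead of its formal treatment later in the lecture, here is a sketch of turning a prior plus one sensor reading into a posterior; the two-square board and the likelihood numbers are hypothetical.

```python
# Posterior is proportional to prior times likelihood; normalize at the end.
def posterior(prior, likelihood):
    unnormalized = {x: prior[x] * likelihood[x] for x in prior}
    z = sum(unnormalized.values())
    return {x: p / z for x, p in unnormalized.items()}

# Uniform prior over two squares; the reading is twice as likely if the
# ghost is in square A than in square B.
print(posterior({'A': 0.5, 'B': 0.5}, {'A': 0.8, 'B': 0.4}))
# {'A': 0.666..., 'B': 0.333...}
```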
Today, the model is going to take the form of a giant joint distribution, which is a big look-up table of probabilities. And as we go further into this unit, we'll get more and more sophisticated ways of specifying compact models that let us do very complicated and powerful queries. Probabilistic reasoning is going to be a framework for managing our beliefs and knowledge. That means taking all those relationships you know about between the variables, taking the evidence you have, and then doing computations to produce quantities that you can use to make decisions. For example, the decision of where is the ghost most likely to be, or what is the expected utility of doing a sensing action at a certain location. Those are going to be things for which we need to know probabilities conditioned on our evidence in order to make well-founded decisions. All right. Any questions before we start? Because we're about to do a tour of probability and probabilistic concepts that's going to serve you well for the rest of this course. Yep? STUDENT: Will we also consider uncertainty on sensors and stuff like that? PROFESSOR: Are we also going to consider uncertainty over sensors and things like that? So in the formulation here, that uncertainty will show up in the random variables that represent those sensors. So if I had a deterministic ghost sensor, the probability of green, given the ghost is 6 squares away, would be, maybe, 1. It would be deterministic. Probability 1 means deterministic. A noisy sensor might say, well, at that distance, you're 90% likely to get a green but 3% likely to get an orange. That's noise in the sensor. Now, a lot of sensors are real-valued kinds of things. Like, I take a laser measurement or something, and I get a distance back, and it might be slightly off. So the form of the noise in a sensor is going to vary based on what kind of distribution it is. And that's something we'll talk about a little bit more as we go. Yes? STUDENT: [INAUDIBLE] PROFESSOR: Will this week's material be on the exam? I think so. We'll see. We'll tell you clearly where it's stopping. The exam doesn't exist yet. [LAUGHTER] OK. All right. Starting with probability and probabilistic concepts. First concept, random variables. These are actually a lot like the variables from CSPs. A random variable is an aspect of the world which you care enough about to put in your model. You give it a name. Like, for example, the variable R might represent whether or not it's raining. T-- in this running example-- might be whether it's hot or cold. D might be how long it takes to drive to work. L might be where's the ghost. And so we're going to put variables in our model. And these are things either that we can observe, or that we might want to infer about, or other variables that help us write down how the world works in a probabilistic way. We write these down with capital letters. They're just like variables in a CSP in the sense that they have domains. So, for example, R-- is it raining-- might have the domain true or false. All right. You guys have been seeing true and false forever, but there's an important shorthand we have in CS 188, which is that, when possible, when names aren't ambiguous, we have a shorthand for writing something like R equals true. Instead of writing, R equals true, which takes a lot of space-- and I can't spell-- we're just going to write plus r. OK. So plus means true. Minus means false.
And the lowercase r means the variable we're talking about here is the capital version of that letter, so capital R. So you can think of this as true and false with typing, so that we can write things more compactly and more clearly. So we might have a variable like T, hot or cold, that takes on values hot and cold. Now, it's binary but not Boolean. We might have variables that are continuous-valued. So something like, how long will it take to drive to work, could be a real value, a time. You could have things that are discrete-valued but not binary, maybe even structured in their value. So the ghost might be some enumeration of positions on the board-- 0-1, 0-2, and so on-- x-y positions. OK. So we can give these variables big domains, small domains, continuous, discrete. In this class, we will almost always be talking about discrete-domain variables. All these same concepts we talk about work in the continuous case, but the mechanics are more complicated. So all your sums turn into integrals, and you've got to think a lot more about some kinds of corner cases where things get degenerate. OK. So it's just like CSPs. We've got variables that talk about the important properties in the model. And they have values in a domain. What's new is we now have probabilities that are associated with the values. So, for example, for temperature here, we might be hot or cold. We might have a probability distribution over that random variable. And here, the probability distribution assigns a number to each outcome. And those numbers have certain properties, like they're positive and they sum to 1. OK. So here, 50-50, that's a uniform probability over the outcomes hot and cold. Here is weather, right? This is not-- it's still discrete, but it's not binary anymore. So this one says sun has probability 0.6. Rain has probability 0.1. Fog has probability 0.3. And there is an outcome meteor that has probability 0. OK. That gives you a sense of what a probability distribution over four values looks like. This is a well-formed probability distribution, but it's actually a big no-no to put 0s in your probability distribution. So I probably wouldn't do this if I was building a real robot weather crisis model or something. I would probably replace that with something like some appropriate small number. Because once you put a 0 in your probabilities, all kinds of weird stuff can happen. And what you generally want to say is, well, that's very unlikely, but should evidence come into my observations that is consistent with that outcome and not with the others, I eventually want to conclude it anyway. So you only put 0 in for things that can never, never, never, never, under any kind of evidence, happen. For the slides, it's nice to have 0s. But that's a big no-no in real models. All right. So unobserved random variables have distributions. So they look like this. A distribution is a table of these values. So when we talk about a distribution, that's a table. When I write a distribution, I like to write something like P of capital W. When I see a capital, that means it's like an array. I have a value for every value in the domain of that random variable. When I write something like P of W equals rain, that's not an array anymore. That's now a single scalar value. So a probability is a single number. So, in this case, P of W is this whole table.
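In code, the table-versus-scalar distinction reads like this, using the weather numbers from the slide:

```python
# A distribution is a table: a number for every value in the domain.
P_W = {'sun': 0.6, 'rain': 0.1, 'fog': 0.3, 'meteor': 0.0}

assert all(p >= 0 for p in P_W.values())    # entries are non-negative
assert abs(sum(P_W.values()) - 1.0) < 1e-9  # and they sum to 1

p_rain = P_W['rain']  # P(W = rain) is one entry: a single scalar, 0.1
```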
P of W equals rain is this one entry in that table, and therefore is associated with a scalar rather than a table, or vector, or matrix, or whatever, of numbers. All right. The properties: in order to be a legal, licensed probability distribution, all the entries have to be positive, and they have to sum to 1. 0 is allowed but usually a bad idea in practice. And we'll talk about why later. All right-- oops-- let's erase this first. We have a shorthand notation-- don't let it confuse you-- which is that we often drop the variable-equals when it's unambiguous. So if T is the only thing that can be hot, we'll sometimes write something like P of hot, or P of cold if T is the only thing that can be cold. But if there are multiple variables that can be hot or cold or true or false, we have to write the variable names as well. That's why we have that other notation from before, where we might write P of plus r to indicate variable R has the value true. This only works if domain entries are unique, which we'll try to do for examples. All right. Joint distributions-- joint distributions are the heart of probabilistic models. They have all the information about the domain, about all the variables, about all the interactions between the variables. But they're very big. So we avoid actually computing them in their full glory whenever we can. They're sort of like search spaces. They're too big to actually write down. But if you use them in the right way, you can sort of do anything you need to in a probabilistic mode. So a joint distribution over a set of random variables-- it could be 1, or it could be 2, or it could be N-- specifies a real number-- the probability-- for every assignment. An assignment is just like it was in CSPs: it is an outcome, with a value assigned to each variable. So we might write this the long way: variable X1 has value lowercase x1, variable X2 has value lowercase x2. But much more often, we use the shorthand. So this probability over these N variables is one number. If I put capital letters there, it's a big N-dimensional array that has a number for each outcome. So here is the joint probability over the variable T for temperature-- which is binary here in this model-- and W for weather, which is also binary. And so their joint probability has the outcomes hot sun, hot rain, cold sun, cold rain. This should feel like a truth table. It's the same thing, except now there's a probability assigned for each of those. From this, I can now answer any question you want about temperature and weather in this model. I can answer questions like, how often is it sunny? Well, it's hot and sunny 0.4 of the time, and it's cold and sunny 0.2 of the time. So how often is it sunny? 0.6. I can go to this table and compute any derived quantity I want. This joint distribution has to obey some rules, but they're pretty lax. The rules are just positive numbers that sum to 1. The actual numbers-- which numbers are bigger than others-- specify what's likely, what correlations exist between the variables, what the probabilistic dependencies are, and they will become much more important when we start breaking these tables up into graphical models. The problem with joint distributions-- even though, on one hand, they're your key to answering any question you can ask about a domain, probability of this given that, and so on-- is their size. So let's say you have variables like up here. And each one has a domain of size D. How big will this table be? Any guesses? STUDENT: ND. PROFESSOR: ND. We have ND as a guess. Anything else?
I heard D to the N. Anything else? Well, let's think about it. What does this table do? So I've got each of these variables-- one variable, X1, X2, X3, X4. And for every assignment to that sequence, I get a number. So as that sequence gets one more element-- let's say it gets another true-false element at the end-- how many more numbers do I need? Twice as many, because I have all the values where it's true. And I have all the values where it's false. So this is going to be exponential. You have one number for each assignment to the variables. And that means that it's going to be exponential in the number of variables. That's usually your sign that something is going to get very expensive and very bad. So for any real model-- real graphical models can easily have thousands of variables-- you can never write this joint distribution, in the same way that you can never write out the whole search space. But that didn't matter to us back in Search, because we only needed, sort of, the part of the search space we found during our search process. We started at the top, and we were selective about creating and instantiating the search. It's going to work a little differently here, but as we get more tools over the next several lectures, we're going to be very careful to not build the whole joint distribution. We'll only build the corners we need in a, sort of, on-the-fly, on-demand way. But today, we're going to do the gory, expensive, brute force thing so that we can make sure we understand all the concepts there. So joint distributions get very big very quickly. OK. Probabilistic models-- what is a probabilistic model? Well, it's just a joint distribution. That's it. A probabilistic model is a joint distribution over some random variables. They're going to have random variables which have their domains. The assignments here are called outcomes, and the joint distribution says not whether they're possible like in a CSP, but whether they're likely. So this probabilistic model over temperature and weather says that hot and sun together is reasonably common, but hot and rain together isn't very common. These probability models are normalized. You're going to see this word a lot. Normalized means when you add everything up, you get 1. It's one of the properties you need to be a legal distribution. And ideally, only certain variables directly interact. The thing that's going to be the foundation for our ability to do efficient inference over very big models is that the interactions are limited-- not every variable connects to every other variable. For today, that won't matter. But that will be very important starting next lecture. This should look a lot like a CSP. So what's the difference between something like a distribution over T and W here with hot and sun and cold and rain versus a CSP over those same variables? They're actually very, very similar. The difference: a CSP can be thought of as specifying a giant truth table. For each assignment to all of the variables, it's either true, meaning it passes the CSP's constraints. Or it's false, meaning it's illegal according to the constraints. Now, we never wrote CSPs out like this. You would take those 300 countries and their colors, and you would write this giant exponential sized assignment of all of the colors to the map. And then you would write at the end, T or F. We didn't do that. We broke it into little pieces that say, these two can't be the same, and these two can't be the same. And we're going to do the same thing with probabilistic models.
We're going to say the probability of this variable depends on this variable in the following way. But, again, not today. A couple more basic definitions, and then we're going to start to derive interesting computations from these things. Events-- an event is a set of outcomes. Remember, an outcome is a complete assignment to all the variables. OK. So if I say the probability of some event, and there are multiple assignments that match that event, I have to add them up. In general, when you collapse probabilities together, you take the sum of their probabilities, because what you want is the probability of anything in that bundle happening. So from the joint probability, we can calculate the probability of any event, many of which we don't care about. But there is a certain class of events that we're going to care about all the time that we'll get into in a second here. So let's look at the probability that it's hot and sunny. Well, that's this row. So computing that event-- it's a single outcome. I can look it up in the table. But the probability that it's hot is not directly in this table. It has to be derived from this table. So I can take this table, and I can say, well, that row is a case where it's hot. But so is this row. And how do I get the total probability of hot under this model? I add them up. In this case, I would get 0.5. OK. This particular event, where one of the columns mattered and the others didn't, this is a marginal. We'll talk about these in a second. But this is a common kind of event we think about-- a case where we care about some variables' values but not others. But we can do all kinds of stuff. Like, instead of hot and sunny, which is a single outcome, we could do hot. That is a collection of outcomes that ignores one of the variables. Or we can talk about hot or sunny. That's the top three rows of this table. And if I add up all their probabilities, I get 0.7. OK. So as my events get bigger, their probability is going to grow. OK. Any questions on that? Almost all the time in this class, the events you care about are partial assignments. So something like T equals hot, or W equals sun, or T equals hot and W equals rain, and not these kind of "or" events, a bunch of arbitrary things or'd together. We tend to not actually use those. All right, let's do a quiz. We're not going to do, like, a stop and discuss quiz. We're just going to do a really quick eyeball the thing and give yourself a couple seconds. So according to this probabilistic model, which is a joint distribution of X and Y, what's the probability of X and Y both being true, or plus X plus Y? 0.2. We looked it up in the table. All right. What's the probability of plus X? 0.5. You get that by adding up these two rows. Another way I can talk about this is I can say, those two rows are consistent with plus X. We're going to use this term, consistent, with probabilities a lot. And you'll notice we talked about consistency in Search. That had to do with heuristics and a certain kind of triangle inequality. We talked about consistency in CSPs, and that means something entirely different. And it's going to mean something entirely different here. When we talk about consistent with probabilities, we're talking about matching variable values that are known. So the rows here in red are the ones that are consistent with plus X, meaning they match that partial assignment. All right. So I could also talk about the event minus Y or plus X. What would that be? It's also this.
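As an illustration-- a sketch, not the lecture's own code-- here is the event computation on the hot/cold, sun/rain joint table from above, summing the probabilities of the consistent outcomes:

# Joint distribution over T (temperature) and W (weather) from the example.
P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def prob_event(joint, matches):
    # Probability of an event = sum of the probabilities of its outcomes.
    return sum(p for outcome, p in joint.items() if matches(outcome))

print(prob_event(P_TW, lambda o: o[0] == "hot"))                   # 0.5
print(prob_event(P_TW, lambda o: o[0] == "hot" or o[1] == "sun"))  # 0.7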
That's 0.6. OK, great. Marginal distributions-- a marginal distribution is one of the harder names to remember. They're called marginal distributions because people used to publish certain kinds of distributions that have two variables as a two-dimensional table. And they would sum up for you the rows and the columns. And when you have a two-dimensional table and you sum up all of the rows, those sums would go at the end, in the margins. That's where that derives from. So a marginal distribution is a subtable. If I take a joint distribution, it says for every combination of multiple variables what their probability is. A marginal distribution is a smaller table in which some of the variables have been eliminated. When you eliminate variables, rows collapse together. Like if I eliminate W here, hot sun and hot rain collide. They're both P of T equals hot. When things collide, you add them together. That process is called marginalization, or summing out. So I can take P of T. I can derive it from P of T comma S by summing out all values of S for the given value of T I care about. So if I want probability of T equals hot, I add up probability of hot sun and hot rain. That is this equation. So connect this equation with that idea that you've already done intuitively. It's summing out the dimension S, which here is the variable W. Of course, there's also another marginal of this table. The same distribution also has a marginal probability over W, which gives the probability of sun and rain with the temperature marginalized out. OK. So from the joint distribution, you can derive the marginals. You think it works the other way? It will be a puzzle for next time. It's certainly the case that if I give you the thing on the left, you can derive the thing on the right. But it's actually trickier to go the other direction. There's a problem with that. We'll talk about that next lecture. OK. All right. Let's do a quiz on these marginal distributions. Here is a joint distribution over X and Y, which both take on the plus and minus values. So for their joint values, we have the probabilities. I'm going to compute a marginal from it. So let's compute the marginal over X, P of X. Well, it's got entries for every value of X, but ignoring the distinction in Y. So for P of plus X, I look at the consistent rows in the joint distribution, which here are the top two. I add them together. This is called summing out for that reason. And what do I get? I get 0.5. This row and this row are consistent with plus X. I add them together. I get 0.5. Now, I could do minus X. I could do it the same way algorithmically. I could take all of the rows in the joint which are consistent with-- meaning matching-- minus X. That's the bottom two. And I could add them together. And what would I get? 0.5. Or I could do it the sneaky way, and I could just take 1 minus what I've already gotten, because I know they add to 1. All right. What about P of Y? Stare at it for a second. Think about it in your head. What's P of plus Y? 0.6. And P of minus Y? 0.4. OK, great. That is creating a marginal distribution from a joint distribution. Now, in this case, the joint distribution had two variables-- it's a two-dimensional object. And I created one-dimensional objects. The joint distribution might have 100 variables, and I create a marginal distribution over just 1 of them. I have to do a whole lot of summing.
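Summing out as code-- again a sketch rather than lecture material; the axis argument picks which variable to keep, and rows that agree on it collapse together:

# Same joint table over T and W as before.
P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}

def marginal(joint, axis):
    # Collapse rows that agree on the kept axis by adding their probabilities.
    result = {}
    for outcome, p in joint.items():
        result[outcome[axis]] = result.get(outcome[axis], 0.0) + p
    return result

print(marginal(P_TW, 0))  # {'hot': 0.5, 'cold': 0.5}  (P of T, weather summed out)
print(marginal(P_TW, 1))  # {'sun': 0.6, 'rain': 0.4}  (P of W, temperature summed out)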
Or I could create a marginal distribution over 99 of those 100 things. And the table is still big, but less big. So any subtable, whether it's the smallest, or nearly the entire thing, or even, technically, the whole thing, can be called a marginal distribution. OK. Questions on that? Now, we have the core concept in probabilistic reasoning, which is the computation of conditional probabilities. In and of themselves, conditional probabilities are pretty simple to state: the conditional probability of some event A given some other event B. So, in general, you've got some universe of outcomes. You've got the regions here-- you can see them on this Venn diagram. What do we know? We know that B has happened. That means we are in this circle here. And what we'd like to know is, how likely is it that A also happens given that B happened? So, graphically, that's how likely is it that we're in this purple part in the middle given that we are in the red. OK. Mathematically, how do we compute that? The definition of a conditional probability is a ratio: the joint probability of A and B divided by the probability of B. And you can think of this as, what proportion of the B outcomes are also A outcomes? OK. That's the definition of conditional probability. So that's one of the rules we're going to have when we start having to compute things-- this definition that we can compute conditional probabilities, A given B, from joint and marginal probabilities: P of A comma B, and just P of B. OK. Let's do some. So here is T and W again. We can compute things like, what is the probability of sunny given that it is cold? Well, we remember the definition first. We can do this the long way first. So first, it will be the probability of sunny and cold divided by the total probability of the conditioning event, which is C, which is cold. And the numerator, P of S comma C, is sitting in this table, probability of cold and sun. So that's 0.2. And the denominator is not sitting in this table. But I can derive it. What's the total probability of cold here? It's 0.5. But notice it's also 0.2 plus 0.3. OK, what does that mean? Notice the numerator is part of the denominator. The denominator consists of cold and sun but also cold and rain. And this is going to be something that's important for some of the algorithms that come up. I'll show you more examples of it as we go. OK. So there is the computation there. OK. Quick quiz on conditional probabilities. Let's do probability of plus X given plus Y. And for this one, the first thing we'll do is remember the definition. That's the probability of plus X and plus Y divided by the total probability of plus Y. So what's the numerator, plus X and plus Y? It's 0.2. The total probability of plus Y is 0.6. And that's going to be 1/3. All right. I'll let you do the next one yourselves. Probability of minus X given plus Y? All right. Who's got an answer in their head? OK. I'm going to do it. I'm going to do the whole thing. This is probability of minus X and plus Y divided by the total probability of plus Y. Minus X plus Y is this row here. So that's 0.4. The total probability of plus Y is that row along with that row. Those are the two rows which are consistent with plus Y. And so together, they're 0.6. That means 2/3. And you could have also gotten that sneakily by noticing that this is everything other than the 1/3 we just computed. All right. I'll let you guys do the last one yourselves. OK. So this is the point where you start getting like, yes, I understood all the pieces.
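A tiny worked sketch of the definition, reusing the same numbers (assumed here, matching the lecture's T and W table): P of sun given cold is the joint entry divided by the marginal of the conditioning event.

# P(W = sun | T = cold) = P(cold, sun) / P(cold).
P_TW = {("hot", "sun"): 0.4, ("hot", "rain"): 0.1,
        ("cold", "sun"): 0.2, ("cold", "rain"): 0.3}
numerator = P_TW[("cold", "sun")]                                  # 0.2, sitting in the table
denominator = sum(p for (t, w), p in P_TW.items() if t == "cold")  # 0.2 + 0.3 = 0.5, derived
print(numerator / denominator)                                     # 0.4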
But, actually, I'm having to access them all in all kinds of combinations really fast. It is really good to practice this stuff, because it does have to get mechanical. All right. In addition to conditional probabilities, which so far have been numbers, there are conditional distributions. Like any other distribution, they are a vector of numbers that describe the probability of various outcomes and that sum to 1. Except these outcomes are conditioned on an event. OK. So here's a joint distribution. This is that same distribution over the hot and cold variable T and the sun and rain variable W. And we can compute various things. For example, we can compute the probability distribution over W given the evidence T equals hot. Where did we get this? Well, first of all, what does this mean? Because there's a capital letter here, this will be a distribution over W. And when you see a capital letter, you think, at a data structure level, this thing is an array in that dimension. So when I see P of capital W, this is an array over all of the values W can take. So there's going to be an entry for sun and an entry for rain. They're going to be P of sun given hot and P of rain given hot. OK. Let's compute one of them. Let's compute P of sun given hot. That's equal to P of sun and hot divided by the total probability of hot. P of sun and hot is 0.4. Total probability of hot, 0.5. And you can see that 0.8 is sitting there in that slot. You can compute the other one as 0.2. But there are other conditional distributions. There's also a distribution P of W given T equals cold. And there's at least one other thing you could talk about here, which is the marginal probability P of W. So let's compute that. P of W, this is a distribution. Remember it's a vector for W. It has a value for sun and a value for rain. And what's the total probability of sun here? 0.6. Total probability of rain, 0.4. And so I can look at this and I can say, all right, before I had any evidence-- if you had said, hey, what's the probability of sun today? I would have said 0.6. But if you tell me it's hot, that belief, that posterior probability conditioned on the evidence, goes up to 0.8. That sort of makes intuitive sense. So maybe this joint model isn't totally crazy. But if I tell you it's cold now, the probability of sun drops from 0.6 to 0.4. So what I'm starting to get is qualitative changes in variables of interest-- like, hey, what's the weather?-- that change as a function of the evidence I get. And that changing probability is a function of the joint model, which specifies sort of everything, and the evidence I get. And this is a lot of what we do with probabilistic models. We build a giant model. And then we ask various questions. What does this evidence and this evidence tell me about this? I've got more evidence. How should my beliefs change? And that updating of beliefs in a well-founded way is a big part of what we're doing here. Any questions about conditional distributions? A conditional distribution is a distribution. So as a data structure, it looks like P of W. But semantically, it is conditioned on evidence. So it's not the same as the marginal. Let me stop and ask for questions before we discover the normalization trick, which will serve you well. OK. So one thing you do a lot in probabilistic reasoning is you compute conditional probabilities. And in particular, you usually want the whole distribution.
Like you don't really care how likely the ghost is to be right here. Oh, 0.004. I don't know what to do with that. But if you give me the whole distribution, I could do something like take the maximum and bust there. Or I could do a calculation, a utility calculation. Usually you want a conditional distribution over your query variable given the specific settings of the evidence that you have. So a lot of your queries look like this. You have some joint distribution that has the variables you care about, and also the other variables, including your evidence. And then you want something like-- I want to know the probability of the various weather outcomes given that I've observed that the temperature is cold. All right. Well, how do we compute that? Well, OK, we could just actually compute it. So, first of all, what's the thing going to look like? It's a distribution over W. So there's going to be sun and then some number, and then rain and then some number. So as a data structure, it's going to be a distribution over W. How am I going to get each entry? Well, we did it before. You use the definition of conditional probability. And you say, all right, well, first I have to compute the top one, probability of sun given cold. What's that? I go back to my definition. Oh, that's the probability of sun and cold in proportion to the total probability of cold. And you say, well, the numerator was easy. It's sitting in the table. And for the denominator, I went and I added some things up. OK. So where is this in the table, sun and cold? It's right there. OK. So what's that total probability of cold? OK. Don't go into symbol shock here. Look at what these entries actually are. This is sun and cold in the numerator. And the denominator is that and the other thing-- rain and cold. So then I go to the entry at the other end of the table and I say, OK, what's the probability entry for rain given cold? Well, I pull up my definition of conditional probability, and I say that's the probability of rain and cold in proportion to the total probability of cold. This is sort of like what I did up above when I expanded out. I see now it is a different entry of the joint distribution. So now, this is rain and cold. OK. Rain and cold-- that's this one-- divided by the same thing. So that denominator is the same and still the total probability of cold. And, importantly, that denominator is just the sum of the two numerators. So that's a key thing whenever we do these conditional probability computations. We're always dividing by the total probability corresponding to the sum of all the different numerators. And that means we don't actually have to compute it separately. So the hard part about this computation is the denominators. You don't have to compute them. So the normalization trick looks like this. So ignore the other computation for now. Here's the algorithm for computing a conditional probability. You say, all right, I have a joint probability distribution. Great. Someone has asked me for a probability. Like, somebody has asked me, hey, I would like you to compute for me the probability distribution over the variable W given T equals C. So what you do, rather than trying to think about each cell of that independently, is we're going to think about this like a database or a table operation.
And this intuition will really help you when we get into building and doing inference over Bayes nets and graphical models. So you're going to take your joint distribution. You're going to think of it as a big multi-dimensional array, or a database table. And what we're going to do is we're going to do various operations. The first operation we do is a select. We select the rows that are consistent with-- meaning they match-- our evidence. So what's our evidence? Our evidence is that the temperature is cold. So I eject those rows. They are not consistent with my evidence. So I'm left with only the rows that match my evidence. Now, they still vary on W, but that's good. But they all match my evidence on T. So that's select. You select the joint probabilities matching your evidence. And now I have a smaller table. What did this original table add up to? It adds up to 1. It is a well-formed normalized probability distribution. When I selected some of the rows, what does that add up to now? Not 1. Right. Because I just took some of the rows out. And the other stuff that just got ejected was part of the summing up to 1. So I look at this and I say, all right, well, it sort of looks like what I'm looking for. It's sort of like a distribution over W for my evidence, except it does not add up to 1 anymore. So you can write this thing as P of lowercase c-- because it's no longer an array over that dimension-- comma W. But then I normalize. I make the selection sum to 1. So the steps are: select the probabilities matching your evidence, and then normalize. So when I normalize, what does that mean? How do I make it sum to 1? I don't go in and just change the numbers around sort of arbitrarily. I keep their proportions. But I divide them by their sum. So in this case, I would divide them by their sum, which is that same quantity that kept showing up in the denominators. So mechanically, I've got a probability distribution popping out, because it sums to 1. And there's a value for sun and a value for rain. How do I know it's the conditional probability over W given T equals C? Well, selecting got me the right numerators. And normalizing is going to get me the right denominators. Because if I think of these values-- if I add them up-- cold and sun plus cold and rain-- well, that's just the total probability of cold. And so the total there is going to be P of C, which is the denominator for the conditional probability I want. So to compute a conditional probability, select and normalize. Any questions? Think about this like operations on a big table. You select rows. You normalize what's left. The algorithms for Bayes nets are going to work like that. We just talked about that. OK. Now, we will compute the conditional probability over X-- that means plus X and also minus X-- given the evidence Y equals minus y. So what do I do? First, I select some rows. What rows do I cross out? OK. Here's the first one. That one's toast. It doesn't match my evidence minus Y. The third one doesn't match my evidence either. That one's toast. So we select-- oops, I have to write them out. So you select, and you get plus X minus Y at 0.3. And you get minus X minus Y at 0.1. Hey, look. Everything that's left matches my evidence. Of course it does. I ejected everything else. All right. So I've got the rows that match my evidence. And now I normalize. What do they add up to now? At present, they add up to 0.4, which actually turns out to be the probability of minus Y.
So I'm going to normalize. And for plus X, I get 0.75. And for minus X, I get 0.25. And then if I draw some lines and write P of X over that-- I said W at first, but sorry, it's X-- it's a distribution over X. OK. But it's not the marginal distribution over X. It's the distribution over X given the evidence minus Y. There is also a marginal distribution over X. It's a different distribution, and I'd find it in a different way. I'd do that one by collapsing all the rows that match on the X coordinate. It would be a different computation. Let's do it. Let's do that one. Let's compute P of X. That is also going to be a distribution over X, so it's going to be a number for plus X and a number for minus X, except this time I still extract it from the same starting position. Pretend nothing's crossed out with red. OK. So start thinking in your head how you're going to do this. Well, for plus X, I have to add together-- marginalize-- all the values that match plus X. That's 0.2 plus 0.3. That's 0.5. And that means the other one is going to be 0.5 too. So this is a distribution over X. This is also a distribution over X. The one on the left is the marginal distribution of X, meaning no evidence. The one on the right is the conditional distribution of X given the evidence minus Y. There is also a conditional distribution of X given plus Y. You can find that one for yourself if you like practice. Any questions? All right. We're getting pretty close here. We talked about how to normalize. You might wonder where this name comes from. You can look it up in the dictionary. It says, restore to normal condition. What is the happy, normal condition we are restoring when we normalize a row of numbers? It's that they sum to 1. We selected some rows. They don't sum to 1 anymore. They're freaking out. We normalize them. They're all happy again. They sum to 1. They're a good probability distribution again. OK. So we talked about the procedure there. OK. Let's take a two minute break, and then we'll talk about probabilistic inference in general. We'll talk about what happens when you have much bigger things that have other variables in there that you don't want and also don't know. And then that's going to give us, basically, the overview of what we're going to then redo in the next few lectures in an efficient way. So, a two minute break. And then we'll continue with generalized probabilistic inference. All right. Let's get started. We only have a couple more pieces, and then-- aside from this minor issue of exponential costs in time, space, and sample complexity for learning later-- you will have all the tools for probabilistic inference. That minor thing is actually a major thing and will drive a lot of what we do for the next few lectures. But as a basic idea, we've been talking about probabilistic inference. Probabilistic inference is when you compute a probability you want. Like, hey, where's the ghost given my evidence? Or what's the probability of this heart condition given these evidence values? Or how likely is this to be spam given the words in it? Whatever that is, you're computing a desired probability that you don't know until you compute it from other probabilities that you do know. So, for example, you have the joint probability, but you'd like a conditional. So you do some work, and you add these things, and you normalize, and so on.
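Select-and-normalize as two table operations, written as a sketch (not lecture code); the evidence is given as a mapping from axis index to required value, and the example reproduces the 0.75 and 0.25 computed above:

def select(joint, evidence):
    # Keep only the rows consistent with (matching) the evidence.
    # evidence maps an axis index to a required value, e.g. {1: "-y"}.
    return {o: p for o, p in joint.items()
            if all(o[ax] == v for ax, v in evidence.items())}

def normalize(table):
    # Divide each entry by the total so the table sums to 1 again.
    total = sum(table.values())
    return {o: p / total for o, p in table.items()}

P_XY = {("+x", "+y"): 0.2, ("+x", "-y"): 0.3,
        ("-x", "+y"): 0.4, ("-x", "-y"): 0.1}
print(normalize(select(P_XY, {1: "-y"})))
# {('+x', '-y'): 0.75, ('-x', '-y'): 0.25}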
Or you have one conditional, and you want to compute the other. That will be through Bayes' rule. We'll talk about that in a minute here. But, in general, probabilistic inference is going from a set of known probabilities to the desired probability. The desired probabilities are usually conditional probabilities. So I might want to know something in my model of driving-- driving to work for a lecture, in that model. How likely is it that I will be on time given that there are no reported accidents? And maybe the answer is some computations on my joint distribution. And I compute 0.9. These will represent the agent's beliefs given the evidence. So a rational agent in this sense will have beliefs which correspond to the conditional probabilities over the variables of interest given their evidence. Now, as you get new evidence, your beliefs can change. So the probability that I'm on time given no accidents might be 90%. But if I'm also driving in at 5:00 AM, it's even more likely I'll be on time. But if it's also raining, it may be less likely again. So observing new evidence variables causes your beliefs to update. OK. That's an important part of rational intelligence. All right. The general case for computing a quantity from a joint distribution is an algorithm called inference by enumeration. It is an exponentially slow thing operating on an exponentially large table. So this is, in general, not going to be tractable. But the algorithms that we are going to use, which are often going to be more tractable-- you will be able to see that they perform the same computation but in some kind of interleaved or efficiently structured way. In the same way that enumerating outcomes for a CSP until you find one that works does actually tell you what the CSP does, but it's not particularly efficient. So we're going to do the inefficient thing to build intuition. And then we'll start doing more efficient computations next time. So the general case is you have a whole bunch of variables, X1 through XN. And you have a joint probability distribution over them. So it's big. It's exponential in the number of variables N. It's very big. OK. From this very large table, you're going to have some evidence, maybe variables E1 through EK. That might be: X1, X3, and X7 have values. I know that it's raining. I know that the lights are on. Whatever it is that you know, those are your evidence variables. You then have a query variable-- sometimes variables, but often it's only one. And that's the variable you want to know about. Given all this information, do I need my umbrella? Given all this information, where's the ghost? Give me a distribution over that thing I don't know given what I do. And so what you want to compute here is the probability of the query given all that evidence that you have. Now, there's another class of variables. These are variables that you don't know, so they're not evidence. You also don't care about them, so they're not query variables. So, for example, in that Ghostbusters case, the sensor value at a position you hadn't sensed-- you don't want it as part of your answer. But you don't know it either. Those are going to be hidden variables. In a medical diagnosis thing, that might be sort of intermediate variables that describe the functioning of various biological systems. It's going to be different in every application.
But you can take your variables and divide them into evidence, which is things you know; query, things you want to know; and hidden, things you don't know and also don't care about. OK. So how are we going to compute this? This is going to be a three-step process, inference by enumeration. You have this giant, exponentially large table. Step one, select the entries that are consistent with the evidence. You've already done this in a small case. So we cross out almost all of the entries. All that matters in this giant table are the entries that match your evidence. So if you know it's raining and it's hot, you cross off everything where it's not raining and hot. OK. If you know that square 7 has a green reading, you cross off everything else. So you select the consistent entries. Then you sum out H. You take all those variables that you don't want, and you get rid of those dimensions of this multi-dimensional array. How does that work? You get rid of them one by one. And for each one, you sum together all of the rows that now appear to be the same because they only varied on that axis. We'll talk more about this metaphor of squeezing these distributions going forward. But you sum that out. That means that what you're computing is: first, when you select on the evidence, you get the probability of Q and all the values of H and the specific values of E. You then sum over all the values of H, and everything that collapses together gets added together. And then at the last step, you normalize. That's it. Take your whole distribution, select what's consistent with your evidence, collapse out variables you don't care about. You will be left with only one dimension, which is your query. And you normalize-- that's it-- any quantity you want. Now, what makes this hard is, often, you don't actually have the joint distribution to begin with. Once you get it, getting down to your query is easy, modulo the exponential size of everything. All right. So let's do some inference by enumeration. OK. And when we do this, think about it as we're computing a distribution over W. That's a distribution over weather. Maybe this is an umbrella-bringing bot, or something like that, and it wants to know what the weather is today. So given no evidence, P of W-- this is a query that also happens to be a marginal distribution, because there's no evidence. So what can I do? In this case, there's nothing to kick out. There's no evidence. The whole table is consistent with my evidence, but I need to sum out-- that means collapse-- S and T, because they're not observed, and they're not part of the query. So a bunch of things are going to collapse together. So I'm going to compute-- well, W is either going to be sun or rain. And for sun, what are all of the rows that are going to collapse together? Well, it's every row that says sun. So it's this 0.3, plus this 0.1-- that's 0.4-- then 0.5, then 0.65. Did I do that right? OK. And then rain, what's that going to be? 0.35. Did anybody actually check my work? Because if you think I can't make an arithmetic mistake up here, you are wrong. OK. All right. So we have a distribution over W, which says, given no evidence, here's what I believe, assuming the joint distribution is the ground truth probabilities. There's this question: where do we get these probabilities from? We don't worry about that yet. That comes later in the course. For now, there's the probability fairy, ex machina, that comes in and tells you what all the probabilities are. The probability fairy gave you this. This is just truth.
Later on in the course, we're going to figure out: where did that 0.65 come from? How did I know that that row was 0.3? All right. So that's P of W. Now, let's say we have some evidence. We have evidence that it is winter. So we're going to cross out everything inconsistent with our evidence. Gone, summer, gone, gone, gone. Get out of my table. You're inconsistent. OK. So I'm left with a smaller table that's consistent with my evidence. This is the subtable for the evidence winter. Now, I still only care about P of W. So I still need to collapse out T, that hot cold distinction. So now I'm going to get another distribution over W, sun and rain. All right. So what's sun? Well, first, I just collapse stuff. So sun occurs here and here. So that's 0.25. And then rain occurs here and here, and that's 0.25. Is that a probability distribution? Has it been normalized? It still requires normalization. So we're going to normalize. And what do we get? We divide by the sum of the entries. And so we're going to get 0.5, 0.5. OK. Is this plausible? I thought it was a 65% chance of sunny. But somebody said, hey, it's winter. My beliefs update. I'm a good rational agent. Next, probability of W given winter and hot. So, all right, we need to kick stuff out that's not consistent. Consistent, consistent, inconsistent, inconsistent. Those have to do with cold. So I'm left with an even smaller distribution, because I have more evidence. And I don't have to marginalize out anything. I don't have to sum anything out. There are no hidden variables left. So I'm left with W: sun, rain. And for sun, I get 0.1, and for rain, I get 0.05. That's definitely not normalized. So what do I get when I normalize it? OK. 2/3, 1/3. No-- yes. OK. And so now, suddenly, the probability of sun goes up. So it was high, and then I found out it was winter, and it went low. And then I found out that it's actually hot, so it went back up. Now, the exact values of how much it goes up or down, or whether it goes up or down at all, have to do with the specific numbers in this table. But the computations that lead to them-- you can see the structural form they take. And if the table has numbers that sort of correspond to reality, then we hope that these computations will give rise to derived quantities which are also useful. Any questions? Yes? STUDENT: [INAUDIBLE] PROFESSOR: Yes. Yes. So in the first one, P of W, when I selected what was consistent with my evidence, I selected the whole table because there is no evidence. The query variable, Q, in all cases was W. So in the first one, the hidden variables are T and S. So on the axes of T and S, I needed to remove that distinction, which collapsed together a bunch of entries that needed to be summed. Once I saw that it was winter, W remains the query variable, but now the season variable S becomes evidence. But T remains a hidden variable that needs to be collapsed. And then, in the end, there are no hidden variables. There were just T and S as evidence. And so, as I observe evidence, things that were hidden-- which I just had to sort of sum out over-- become evidence. They become known. And there's less summing I have to do and more selecting I have to do. So what do you actually do with the joint distribution? Nobody's going to walk up to you and say, hey, what's the probability of winter and hot and sun, and you're going to say, 0.1. That's not actually useful.
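Here is the whole three-step algorithm as a sketch. One caveat: the individual entries of P_STW below are guesses at the slide's table-- they were chosen only so that they reproduce the totals actually worked out in the lecture (0.65/0.35 with no evidence, 0.5/0.5 given winter, 2/3 and 1/3 given winter and hot).

def infer_by_enumeration(joint, query_axis, evidence):
    # 1. Select the entries consistent with the evidence.
    rows = {o: p for o, p in joint.items()
            if all(o[ax] == v for ax, v in evidence.items())}
    # 2. Sum out the hidden variables (everything but the query axis).
    summed = {}
    for o, p in rows.items():
        summed[o[query_axis]] = summed.get(o[query_axis], 0.0) + p
    # 3. Normalize.
    total = sum(summed.values())
    return {v: p / total for v, p in summed.items()}

# Axes: 0 = season S, 1 = temperature T, 2 = weather W.
P_STW = {("summer", "hot", "sun"): 0.30, ("summer", "hot", "rain"): 0.05,
         ("summer", "cold", "sun"): 0.10, ("summer", "cold", "rain"): 0.05,
         ("winter", "hot", "sun"): 0.10, ("winter", "hot", "rain"): 0.05,
         ("winter", "cold", "sun"): 0.15, ("winter", "cold", "rain"): 0.20}

print(infer_by_enumeration(P_STW, 2, {}))                       # sun 0.65, rain 0.35
print(infer_by_enumeration(P_STW, 2, {0: "winter"}))            # sun 0.5, rain 0.5
print(infer_by_enumeration(P_STW, 2, {0: "winter", 1: "hot"}))  # sun 2/3, rain 1/3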
What's useful are these derived conditional probabilities that tell you about variables given evidence. OK. Some obvious problems with this algorithm. You are doing an exponentially large number of summations. That's bad. To store this table in the first place was exponentially large. That's also bad. And there's a third, slightly more subtle thing, which is, to learn the probabilities, you need an exponential amount of evidence. Because each entry has to get learned, and, to a first approximation, you're not going to learn much about an entry if you don't see it. So somehow, this whole giant table of everything is not going to work. And so what we need to do is build this big table out of little pieces that we can actually estimate and compute and store and then compute with efficiently. So what we're going to talk about now is-- up to this point, we've talked about the tools for taking a big table and shrinking it. Now we need tools that let you turn one thing into another or let you merge things. So we're going to quickly go through those tools. And then we'll come back next lecture, and we'll talk about taking it all the way the other way: to very, very small pieces, and then building up large distributions. So the tools you don't have yet-- though it's possible you've seen them before in other classes-- have to do with how you produce joint distributions. Because we can't use inference by enumeration until we have a joint distribution. Sometimes, you have conditional distributions, but you want the joint distribution. The canonical case is the product rule. This says: I would like the joint distribution over X and Y. But I don't have it. I have a marginal distribution over just Y itself. And I have a distribution over how X behaves conditioned on Y. Well, if you write out the distributions here and you use the definition of conditional probability, you see that if you multiply P of Y with P of X given Y-- well, P of X given Y is just P of X and Y over P of Y. I can cross those out. And so it makes sense that the thing on the right is, in fact, the thing on the left. Because this is basically the hidden denominator down there. When I look at this P of X given Y, I actually think of P of X and Y with P of Y kind of hanging there as a denominator that can cancel with the P of Y that's sitting there to the left, if that's helpful. So that's the product rule. How does that work? Well, let's say somebody comes up to you and says, well, I have this distribution. I know it's sunny 80% of the time. And you say, all right, well, how likely is it that it's wet and sunny? And you say, I don't know that. I don't have the distribution. But I do know how likely it is to be dry given sun and also how likely it is to be dry given rain. So I might have P of D, this other variable, given W. So right here, here is a distribution over D given that W is sun. It sums to 1. Uh-oh. It looks like it's been stapled to another distribution over D that also sums to 1. The bottom two rows are the distribution over D, wet or dry, given rain. The top two rows are the distribution over D, wet or dry, given sun. That is actually two distributions which have been stapled together. This is a convenient data structure. It is not a distribution. It is a family of distributions stapled together. When I sum it up, I don't get 1. Each distribution will be 1, so I'll get the number of distributions. So I'll get the size of the domain of W if I add all these things together. But this lets me do the product rule.
So what I could do is I could say, all right, let's do an entry. How likely is it that it's wet and sun? Well, first, it would have to be sunny. That's 80%. And then given that it was sunny, the probability of wet is 0.1. So I get 0.8 times 0.1 is 0.08. And then I could fill in the rest. Hopefully they're all there. So this is a case of going from, in this case, a marginal over W and a conditional P of D given W to the joint. Any questions about that? In general, there is the chain rule, which you may have also seen. The chain rule says that if you ever want a joint distribution over any number of variables, you can build it as follows. You can take, simply, the marginal distribution of the first variable, P of X1. And then you can multiply in X2 given X1 to get X1 comma X2. And then you can multiply in X3 given X1 and X2. And you can keep doing this until you have all of them. It's each variable conditioned on everything before it in the variable ordering. And you say, well, that looks like a lot of symbols, and I don't believe you. So let's write the thing out. Why is this true? Well, what's P of X1? OK. If you only want P of X1, you have P of X1, great. OK. But let's say you want P of X1 and X2. Well, I can multiply that by P of X2 given X1, which, I know, is the same as P of X1 times P of X2 and X1 divided by P of X1, because that's the definition of a conditional probability. And, look. They cancel. What if I multiply in P of X3 given X2 and X1? Well, down here, the definition of that conditional probability is P of X3 comma X1 comma X2 divided by P of X1 comma X2. I re-ordered them. And, again, they cross out. And so each conditional probability basically crosses off the thing you had before in numerator and denominator. And now you have just a joint probability. So that's all you need in order to get a joint probability: this correct sequence of conditional probabilities. The first variable, the second given the first, the third given the second and the first, the fourth given the third and the second and the first. And that's the chain rule. And you can see just symbolically that it has to be true. Because the most recent conditional probability that you airdrop in contains the joint divided by what you have so far. All right. And so you say, OK, well, that's fine. But what if somebody had P of X1, and they had P of X2, but not conditioned on X1? And then they have P of X3 but only conditioned on X1. What happens if I multiply them together? Like, in general? Like, I don't know. You'd just get some number. There are conditions under which you can know that something less than the chain rule suffices. And we'll get into that in upcoming lectures. OK. But for now, if you want the joint distribution, your recipe is the product rule, which is how you turn a conditional and a marginal into a joint, and the chain rule, which is how you let those things telescope up to a large number of variables. OK. One more thing, which is, in fact, not something new. This is, I think, a funny thing. This is Bayes' rule. It's a famous rule of probabilistic inference. But, in fact, it's just a consequence of what you already know, even though it has a name. There are two ways to factor a distribution over two variables. So if I have P of X comma Y, that's a joint distribution of X and Y. I can write that as P of X given Y times P of Y. That's just the product rule. Or I can write that as P of Y given X times P of X. That's also just the product rule. OK.
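A sketch of the product rule with the wet/sun numbers. Note the lecture only states P of sun equals 0.8 and P of wet given sun equals 0.1; the other conditional entries below are made-up values, included just so each stapled-together block sums to 1:

# Product rule: P(D, W) = P(D | W) * P(W).
P_W = {"sun": 0.8, "rain": 0.2}                            # P(rain) taken as 1 - 0.8
P_D_given_W = {("wet", "sun"): 0.1, ("dry", "sun"): 0.9,   # each given-W block sums to 1
               ("wet", "rain"): 0.7, ("dry", "rain"): 0.3}

P_DW = {(d, w): P_D_given_W[(d, w)] * P_W[w] for (d, w) in P_D_given_W}
print(P_DW[("wet", "sun")])                   # 0.8 * 0.1 = 0.08, as in the lecture
print(abs(sum(P_DW.values()) - 1.0) < 1e-9)   # True: the product is a joint distribution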
Does everybody believe that? All right. But I can divide both sides, sending things to the denominator on the other side. And what I get is: P of X given Y is P of Y given X times P of X, divided by P of Y. Multiplying by P of X gives you the joint value back. Dividing by P of Y gives you the other conditional. So I like to think about this as: I have the conditional Y given X. I stick on P of X, and it inflates to a joint distribution. And then I pull out P of Y, and it deflates back to a conditional distribution, here X given Y. OK. That's Thomas Bayes, and that's his rule. But it's really just the definition of conditional probability written two ways, or the product rule. That's it. So what does Bayes' rule actually do? Why is this helpful? I wanted a conditional probability, X given Y. And so what did I do? I started with Y given X, which looks basically the same, and I did a bunch of math. Why is that even helpful? The reason that's helpful is, often, you have Y given X and you don't have X given Y, and you want to turn one into the other. Bayes' rule is a device by which you can flip the conditional around. You can build a conditional out of the reverse conditional and the appropriate marginals. And the reason that's useful is sometimes you just know one but not the other. But more generally, usually one is easy to model, and the other is hard. Or at least one is easier, and the other is harder. So for things like speech recognition, it is much easier to model the dependency between sounds and language in one direction than the other. And, of course, the direction that's easy to model is the opposite of what you want to compute. And this happens very often: the easy thing to model is causal, going from the cause to the effects you observe. That's the easy way to write a model. But the thing you actually want to do with the model is start with the things you observe and infer backward up to the underlying cause. And so you're always flipping things around. Probabilistic inference is almost always about taking facts and then propagating them sort of upward in the model to the underlying cause that you care about. OK. So that's one of the more important equations. Like, that's really founding most probabilistic inference at some level. And so I'm not going to run through the example now. But it's in the slides for you to look at. An example is building a diagnostic probability from a causal probability. So, in the generic case, you want cause given effect. But you have effect given cause. You might also know the marginal probabilities of cause and effect. So, for example, here's a simple caricature where the cause is meningitis. And you'd like to know whether you have meningitis. That's the random variable M. The effect is you have a stiff neck. That's the variable S. And so you'd really like to know, what is the probability of meningitis given stiff neck? If you happened to have that written in a medical textbook, you go. You're done. You don't need to compute. But if you have a bunch of fragments of probability, you might need to do some computation. So if somebody tells you, all right, well, yeah, I don't know what M given S is. So I can't tell you how likely you are to have meningitis. But here's some things I know. I know in the population, the probability of meningitis is 0.001. OK.
That's a reasonable fact you could know. I also know, from studies of people who have meningitis, how often a stiff neck presents-- maybe 80% of the time. That's a reasonable thing to know. And I also know how prevalent stiff necks are in the general non-meningitis population. From this, we can then take those equations that we had before. We can churn them, multiply things, and so on. And we can end up computing the posterior probability of meningitis, which will be some small number, but it will be bigger than 0.001. Your belief that you have meningitis given the absence of symptoms is, probably not. It's like that meteor. No, that doesn't usually happen. But suddenly, there's that stiff neck. You update, and you get a bigger number. So that's an example of using Bayes' rule to go from causal probabilities that you have to diagnostic probabilities you want. Slightly but not entirely off topic: if you have a stiff neck, you might want to get checked out even though the probability of meningitis is still small. Why is that? Is that because, deep down, you know the probability is high and all of this Bayes' rule stuff is wrong? Is that why you get checked out? No. The probability is actually small. Why do you go get checked out? It's not about probability. What is it about? It's about utility. If you have it, and you don't get it checked out, that's really bad. And so even though it's unlikely, when you compute your maximum utility-- maximum utility is not the same as the most likely outcome. And when we get to decision diagrams, we'll disentangle the difference between how likely something is, how much utility there is, and how that enters into this computation. So far, there's been no utilities. This is only talking about the likelihood of unobserved variables given observed variables. We'll do that next time. I'll show you one more thing. Let's say we have some distributions. Like, we know where the ghost is a priori-- a uniform distribution. And we know the sensor model. We know, given each possible distance, how likely the different colors are. Given that, and Bayes' rule or a generalization of it, we can compute the posterior probability over ghost location given readings. So I'm going to do that for you in the app. And suddenly, Ghostbusters goes from frustrating to fun. OK. So here is the current belief distribution for the ghost. I'm going to take a reading. Boom. Green. So there's this sweep of low probabilities. I'll take a reading up here. Yellow. Green. Going to take some more readings. See how it's still jumping around? There's still uncertainty. All right. Should I sense or bust? I'm doing it. I'm going to bust right here. Right here? This is the next big video. Let me tell you. It's possible I oversold how much fun Ghostbusters is with probabilities. But you can see how useful it is to be able to compute, in a precise and well-founded way, this synthesis of your uncertain evidence in order to give you a belief function over your query variable. So we'll pick this up next time with talking about how to do these things efficiently over large distributions. Thank you.
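A worked sketch of the meningitis computation. The 0.001 prior and the 0.8 likelihood are the lecture's numbers; the stiff-neck rate in the non-meningitis population is not stated in the transcript, so 0.01 below is an assumed value:

# Bayes' rule: P(cause | effect) = P(effect | cause) * P(cause) / P(effect).
p_m = 0.001              # prior P(meningitis), from the lecture
p_s_given_m = 0.8        # P(stiff neck | meningitis), from the lecture
p_s_given_not_m = 0.01   # assumed background rate of stiff necks

p_s = p_s_given_m * p_m + p_s_given_not_m * (1 - p_m)  # total probability of the effect
p_m_given_s = p_s_given_m * p_m / p_s
print(round(p_m_given_s, 4))   # about 0.074: much bigger than 0.001, but still small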
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180925_Reinforcement_Learning_Part_12.txt | [SIDE CONVERSATIONS] PROFESSOR: Hi, everyone. Welcome to the tenth lecture of 188. Today, we'll look at reinforcement learning. A couple of quick logistical announcements. Your Project 2 was due last week. The mini contest extends till Sunday. In that contest, which is optional, you get to program an AI agent for Pac-Man-- a team of Pac-Man playing another team of Pac-Man. On your own side, you're actually ghosts now. And you can defend your pellets on your side. And then when you go to the other side, you're Pac-Man. You're supposed to eat as many pellets as possible and bring them back to your own camp. If you're carrying pellets and get caught, you explode into lots of pellets and get reset to your own side. So this is optional. There's a little bit of extra credit associated with it, based on beating staff agents. If you beat the baseline agent, you get a half point. You beat staff agent 1, you get a half point; staff agent 2, a half point; staff agent 3, a half point, on top of your Project 2 scores. And then in addition, we have a leaderboard, which also has some extra credit, but mostly it's for glory and to show that you have the strongest AI for this Pac-Man game. This runs till Sunday. At this point, there are nine participants in the competition. So a lot of opportunity to land in the top 10 still, especially if you're fast. First team right now is team No Bug. Is team No Bug here? Over there? Congratulations. That's great. [APPLAUSE] Second team right now is [? Uchen ?] [? Woo. ?] Is [? Uchen ?] here? Congratulations. And third team, right now, is Run Pac-Man Run. Is Run Pac-Man Run here? Over there? Congratulations. [APPLAUSE] But this is not the final ranking. You're not guaranteed to be third just because you're third right now. You might all be first come Sunday. There's still a lot of time. I encourage you to try it out. The final contest we'll have will also be a little bit related to this contest, where you get to play against each other on a board where, on one side, you're ghosts, and on the other side, you're Pac-Man. Any questions about the contest? Project 3, reinforcement learning, will be released very soon, meaning probably tomorrow. It will be on the two lectures from last week on Markov decision processes and the two lectures from this week on reinforcement learning. And then it'll be due next week Friday at 4:00 PM. Your Homework 5-- which, again, will have three components, as all homeworks have: an electronic, a written, and a self-assessment of the previous written-- will go out soon, probably today, maybe tomorrow, and will be due on Monday. Any logistical questions? [SNEEZE] Bless you. OK, let's dive in. Reinforcement learning-- in reinforcement learning, we're going to be studying how to learn behaviors. And this is a long-standing discipline that is studied not just in AI, but also in psychology, in cognitive science, and so forth, to try to understand how people and animals learn behaviors. In fact, a good example would be if you had a dog. And it's a little puppy, and it doesn't really listen to you yet. And then sometimes, you yell at it when you're not happy. And sometimes, you give it a treat when you are happy. That's you giving it rewards. And the hope is that, somehow, as a consequence of you either yelling, or giving treats, or saying nice words, the dog will somehow become a well-behaved dog.
That dog is running reinforcement learning, in some sense. It's somehow trying to optimize behavior for rewards. And these rewards, in this case, you are providing them and, in that way, kind of guiding what the behavior is that you want from the dog-- maybe as a function of you calling its name, or whistling or something. You want it to do something. That's effectively reinforcement learning in action, something that probably many of you have already seen in real life. What does it mean formally? There's an agent. The agent gets to choose actions. After it chooses an action, the environment will change. There will be a new state of the environment, and there will also be a reward associated with what just happened. So reinforcement learning is essentially like Markov decision processes, but it's different in the sense that, right now, we're not going to know ahead of time what the models are for the environment and the rewards. We just get to interact with the world, see what reward we get, interact again, repeat, repeat, repeat, and hope that over time, from the observed rewards and the observed next states that we experience, we can figure out how to optimize reward. So this is going to be learning, rather than direct planning. So people apply this with robot dogs, also. These are some results I'm going to showcase from Peter Stone's group at the University of Texas at Austin. There's something called RoboCup. In RoboCup, robots play World Cup soccer among each other. And one of the leagues is the dog league. These are the dog league robots. They are Sony AIBO robots, which just came back into production. Now, if you want to do well in soccer, it's actually important to run fast. If you run much slower than your opponent, it's hard to beat them. And so one of the things that they used reinforcement learning for is to train their robot dogs to run as fast as possible. So you can imagine you run reinforcement learning in the lab, and these dogs learn how to run. But then you put them on the actual field, and the field is a little different from your lab field. And they might not do as well, because there's different friction, different properties of the terrain. So what now? Or maybe the dog has played a lot, and a little bit of wear and tear makes it different from how it behaved in the lab. Then what you want to do, and what they did, is say, well, we have an initial policy. Before we play a game, we can run our lab-trained control policy. It's OK. In the lab, it worked really, really well. But out here on the actual scene of the World Cup for robot dogs, it is kind of slow. Then they say, let's run a bunch of trials and run reinforcement learning in the background. And we'll understand better what that means, but this is what it would do before a game starts. It just starts running on that terrain and practicing, understanding how that terrain interacts with its legs. And then after full training, before the game, here is what you get. You get maximum speed and also a very stable head-- to see what's around you-- during locomotion. And these dogs beat pretty much every other team all the time. Here's another example of a result attained with reinforcement learning. What we have here is a snake robot. How do you build a snake robot? Essentially, just a bunch of motors sequenced together. And that allows you to build a snake, with a bunch of motors sequenced together here. The training has already happened.
So it's been trained to control itself, to climb on top of the step and sidewind. Let's see how it does. This is a project from Andrew Ng's lab at Stanford, from maybe 10 years ago now, a time I was still there. And you see, indeed, the snake is able to get onto that step, and it actually wants to get to the other side. So once it's on, it'll start sidewinding and make its way over. And you might start seeing a pattern here. The pattern is that, for solving these problems, it's very difficult to build reliable simulators. So far in 188, we've looked at scenarios where we can have a model, either a deterministic or a stochastic model of how the world works. And we could plan in it. But for things like this, how do you build a reliable snake simulator? Very hard to do. It's not clear anybody can do that at this point in time in a way that would match this particular snake. And so if you build a simulator and it's not good, well, then whatever you plan in that simulator might not be a good plan for the real snake. And it might not work in the real world. That's why reinforcement learning comes to the rescue here. You can let the thing learn on its own, on the task it's supposed to do well at. Here's another example. This is from Russ Tedrake's lab-- Russ Tedrake's PhD thesis at MIT. So what are we looking at here? This is a toddler robot. Making a two-legged robot walk is actually pretty tricky, because it only has two legs. By default, it's going to fall over. And once it falls over, it's hard to get up. This robot design, especially, wouldn't know how to get up once it's fallen over. And also, you might damage the robot. So Russ used some cleverness on both the design side of the robot and the reinforcement learning side. On the design side, if you look at this robot-- it's actually hard to see from here-- the feet are curved. And so what it's naturally really good at is rocking like this. And if you know how to rock like this, then you have a lifted leg at any given time; you can swing it forward, take a step, and repeat. And it turns out that if you put this robot on, I believe, a roughly 8 degree downhill slope, and you start it off rocking to the side-- so it's wobbling because of gravity and the downhill slope-- it'll swing a leg forward when it's off the ground. And it will actually gradually walk down that slope it's designed for. So Russ designed it to passively walk down hills without any control needed, if the hill has the right slope. That means your design is in the right space-- that it, in principle, should work. If you now kick a little bit of energy into it, maybe it can walk on flat surfaces too. And that's exactly what the reinforcement learning took care of. So initially, when the reinforcement learning starts, it doesn't know how to control the system. And it's just kind of wobbling back and forth. That's what happens when you just randomly put energy into this system-- it'll start wobbling left and right. But it gets rewarded for making forward progress. And so over time, it starts figuring out how to make some forward progress. Right now, it's kind of just circling around, not exactly straight-line forward progress. But over time, it becomes better and better at making consistent progress. And after a good amount of training, it just walks off. It's gone. [LAUGHTER] In your Project 3, you'll get to do something very similar for this robot here. This is about the simplest locomotion robot you can imagine. You might wonder, why did we keep it so simple?
Why don't we have, like, a full humanoid robot? Wouldn't that be awesome? The running time for reinforcement learning-- the amount of experience it needs to control a full humanoid-- would be much higher than for a lower dimensional robot. And then it would take maybe a day or so to run your project code. And then if you had a bug and it didn't work, you'd have to go again. And you'd only have a few trials before the deadline. Whereas this one, you can run relatively quickly. If it doesn't work right, you can debug it and repeat. So how do you make this thing move? Well, you can control two angles. There is an angle over here, and an angle over here. That's where your motors are. You can control those angles. And if you're smart about how you do it, this thing can move. So let's take a look at the video of this in action. This is the robot kind of just-- [VIDEO PLAYBACK] - So here, what do we see? PROFESSOR: Oops, let's mute this a little bit. So what are we watching here? The robot is just randomly moving its arm around. And sometimes, that means it moves forward; sometimes, it means it moves backward. But we give it reward-- negative for moving backward, positive for moving forward. And the hope is that reinforcement learning, based on that signal, can figure out what is a good policy to control the robot. Now, this takes a while. So in this video that we recorded here, we'll skip forward over a lot of attempts. We're not watching all the learning in action. You might wonder, why is this hard? Why isn't it easy to just paddle yourself forward? Well, the robot has a lot of options in terms of what to do. And it doesn't know ahead of time how it works, and it has to figure that out from its own experience. And in fact, when you reach forward, you don't move. It's only once you put your arm down and start pulling that you move forward. So first, you need to go through a phase where nothing happens for you. You don't get rewarded. And so it's hard to learn to do that, because you don't get any signal that it's the right thing to do. That's like giving your dog no feedback for a whole day, let's say. And then at the end of the day, saying, good dog, that was a great day today, or, bad dog. It's not going to learn. The more your reward is spaced out over time, the harder it is to learn. And so what makes this hard is that you do need that period of getting no reward, and you must somehow discover that that's the way to achieve reward in the long run. So if you successfully complete your Project 3, you will have implemented a reinforcement learning algorithm that learns to control the crawler bot. OK, let's formalize a little bit what we're thinking about today-- reinforcement learning. We'll still assume that we're working with a Markov decision process, which we've looked at for the past two lectures already. As a reminder, what does that mean? There's a set of states, the configurations the world can be in. There's a set of actions available to the agent to take. And then there's a model. And this model is a probabilistic model that says, what is the probability of landing in a state s prime if you start in state s and take action a? And then there's a reward function that says, how good was that transition you just experienced? And that's what we want to optimize. So we'll still be looking to optimize behavior in an MDP, but the twist is that we now don't know T or R. Remember, in the previous two lectures, you saw things like value iteration and policy iteration. If you look at those equations, what appears in there is T and R.
And because you have T and R, you can run through those equations and get a value function and a policy that might be good. But now, in this lecture and the next lecture, we're not going to know T or R, yet we still want to solve this problem. So the problem is the same, but what we have available in terms of information about the problem has changed. What that means in practice is that you need to try things. Because as an agent, you don't know how the world functions. You need to experiment. So in reinforcement learning, the agent experiments in the world and, from that, figures out how the world works-- where the rewards are, how it dynamically evolves-- and then from there is able to, hopefully, achieve high rewards. So if we look at this picture here from last time, now the picture becomes like this. We don't have access anymore to how the world works or where the rewards are, unless we experiment and experience them. So offline would be value iteration, policy iteration, which we can do when we have the full MDP model available. There, our agent-- when it's supposed to navigate a maze-- would think about the consequences of its actions. Say, well, if I did this, then that. If I did this, then that. Let me do this thing, because that gives me high reward. In RL, the agent has to go try it out. This can get pretty painful at times. So often, people prefer to do RL in simulation when possible. Because in the real world, if you need to experience things, there's often a high cost associated with experiencing the negative rewards. But this is what effectively needs to happen. OK. And actually, one thing to remark here is that this agent, when it hasn't fallen into a fire pit yet, doesn't know that a fire pit is bad. In reinforcement learning, you only know something is bad once you've experienced it. And that's why you'll experience some of these things before you know what the right thing to do is. OK. There are a few ways of instantiating reinforcement learning. There's model-based learning, and there's model-free learning. We'll start with model-based, and then we'll transition to model-free. Both are totally good approaches. Not one is better than the other. One is a little simpler than the other to understand, which is model-based. So we'll do that one first. So what is the model-based idea? You learn an approximate model based on experiences, OK? So you act in the world. You see how the world works. You have an estimate of how the world works. And based on that, you build a model. Then once you have that model, you just solve against that model. You can use the techniques from last week to solve against that model. So step one would be something like: you're acting, acting, acting, and, if it's a discrete world, you keep track of counts. How often, when I was in state s and took action a, did I land in state s prime? How often did I land in s double prime? How often in s triple prime? And then you can, from those statistics, build a model-- say, well, it looks like one-third of the time I land in this state, two-thirds of the time in that state, and zero in other states. That might be your model for that particular starting state and that action. And you'd have that for every starting state and every action that you're willing to consider. Same for the reward. You could keep track of, for each transition, what reward you got, and then, in this case let's say, build a table of what reward happens for a specific (s, a, s') triple.
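To make that counting procedure concrete, here's a minimal sketch; the transition tuples and dictionary layout are illustrative assumptions, not the project's actual code:

```python
from collections import Counter, defaultdict

# Observed (s, a, s', r) transitions -- illustrative data, roughly matching
# the four episodes discussed below, not the exact slide contents.
transitions = [
    ("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "end", +10),
    ("B", "east", "C", -1), ("C", "east", "D", -1), ("D", "exit", "end", +10),
    ("E", "north", "C", -1), ("C", "east", "D", -1), ("D", "exit", "end", +10),
    ("E", "north", "C", -1), ("C", "east", "A", -1), ("A", "exit", "end", -10),
]

counts = defaultdict(Counter)   # counts[(s, a)][s'] = times that outcome was observed
rewards = {}                    # rewards[(s, a, s')], assuming rewards are deterministic

for s, a, s2, r in transitions:
    counts[(s, a)][s2] += 1
    rewards[(s, a, s2)] = r

# Normalize the counts into empirical transition probabilities T_hat.
T_hat = {
    (s, a): {s2: n / sum(c.values()) for s2, n in c.items()}
    for (s, a), c in counts.items()
}

print(T_hat[("C", "east")])   # e.g. {'D': 0.75, 'A': 0.25}
```

Running last week's value iteration on T_hat and the reward table then yields the policy, exactly as described next.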
Once you've done that, then you can solve the learned MDP. Learned is important here. It won't be the true MDP, most likely. Because if you have to estimate how the world works from experience, usually you're not super precise. It's going to be approximate. But you're going to solve this approximation of the real world by, let's say, running value iteration. That gives you a policy. You can use the policy and act. Let's look at an example. So initially, we don't know anything about the world. Now, on the slides, it's a little hard to signal that we know nothing about the world, so bear with me for a moment here. But even though we can see that this is a world with five states, and that there's some pattern to how you can transition, and that you probably can go from B to D, or from E to D, those things are actually not available to the agent when the agent starts out. The agent will just know: I'm in a state right now. I have some actions to choose from. Let me see what happens. So maybe after four episodes in this world, this is what has happened to the agent. And maybe this is the strategy it used for acting-- from B, go east. From C, go east. And from E, go north. And from A and D, you can only exit, so there are no options other than that one. So we've done that. We've collected experience. The next step in model-based reinforcement learning is to turn that experience into a model-- an MDP, a Markov decision process, that models the world based on this experience. What would that mean? Well, we can look at: when we did east in B-- that happened twice-- we always landed in C. So our model would be that the probability of landing in C from B after taking action east is 1, if we just look at the frequencies here. In the middle third of 188, we'll look a lot at probabilities, and at how you might want to estimate this slightly differently and not be so deterministic from a small amount of data. But for now, let's just use the frequencies that are present as our estimates of the probabilities. So we saw it was always C, so we set the probability equal to 1. How about when we go east from C? What happens? It looks like three out of four times we end up in D, and one out of four times we end up in A. So we'd have T(C, east, D) = 3/4 and T(C, east, A) = 1/4. OK, so that's how we build a model of the transitions. For the rewards, we can also just read them off. Spelled out on the slides in typeset font, rather than hand scribbled, this is what we get. Once we have the model, we can run value iteration or policy iteration, whichever you prefer, and get out a policy that will only be as good as the model's capture of how the real world works. Any questions about how model-based RL works? Because that's it for model-based RL. Yeah? STUDENT: So all of the states and actions are already known? The only unknown [INAUDIBLE] is the T and R? PROFESSOR: It's a good question. So the question is, what is known and what is not known? I've made very clear that T and R are unknown ahead of time. Whether you know the state space and the action space-- that's more debatable, whether you consider that given or not. In principle, it doesn't matter too much in this setting because, even if you don't know the state space, you still know the current state. And you'll probably only build models around states you've experienced, rather than some external states.
But different people will consider that differently when they see model-based RL. For 188, you can assume that the state space is given and the action space is given. We just don't know T and R. Yes? STUDENT: Do you ever update the policy? PROFESSOR: OK, good question. So model-based learning-- STUDENT: [YAWN] [LAUGHTER] PROFESSOR: So in model-based learning, you first collect data and build a model. Once you have your model, you find a value function, V star, against this model. It will not be the real V star; it will be against this particular model that you just learned. After you have that, often you go back. You might execute that policy, or a variation of that policy, to see how well it actually works in the real world, collect more data, improve your dynamics model with the new data, and repeat. And so you'd go around in a cycle through this. This step of collecting more data is something we'll see more about in the next lecture. What's important there is the notion of exploration. You need to try things you haven't tried before. So two things happen at the same time. Because value iteration has given you a policy against a learned model, you can go check how well that policy actually works in the real world compared to in your simulator. If it works equally well, then probably your simulator was good, and you might be all set. If it doesn't work as well, that means you're getting new data that can inform you about how the world works differently from what your simulator thought. And so you learn something new. So there's this notion of: either it works just as well, or you learn something new. If you learn something new, improve your model and repeat. Now, to learn something new more quickly, you often don't want to just directly execute this policy; you might want to do something called exploration. More about that next lecture. Yes? STUDENT: In order for model-based learning to work well, do we need to have [INAUDIBLE]? PROFESSOR: That's a question about exploration. Let's revisit that once we're covering exploration. Yes? STUDENT: So this requires from the agent preprogramming, [INAUDIBLE]. PROFESSOR: So the question here is, where does the reward function come from? That's your question, effectively, right? STUDENT: Yes. PROFESSOR: So it varies. For most AI agents, the reward function would come from a human designer who decides what they want. Now, there are a lot of tricky issues with that. You need to design it correctly. If you're naive about how you design it, things won't work very well. Let's say you have a vacuum cleaner robot. You say, oh, any time you pick up dirt, you get positive reward. Then it might start emptying trash cans, so it can pick up a lot of dirt. And that's actually the optimal behavior against what you specified, but it's not what you intended. So definitely, there are a lot of challenges in specifying it. But specifying the reward is also our control over the agent. It's how we tell the agent what we want. And there is a good body of research specifically about how to go about specifying this in a clever way. Because if we don't, then this thing will just optimize against whatever we asked for. And what we asked for might just not really be what we wanted. Yes? STUDENT: Do we have [INAUDIBLE]? PROFESSOR: So for this setting here, the way we calculated T was by just looking at the frequencies.
So the reason we got 1 here is because, in the four episodes we looked at, whenever we were in state B and took action east, every time we landed in C. So that's why we gave it probability 1. It's just one procedure-- a relatively simple one that we're using for now. Then, when we were in C and took the action east, three times we ended up in D, and one time in A. So that 0.75, three out of four, is the empirical estimate of how often you end up in D. And the 0.25 is the empirical estimate of how often you end up in A. As I said, in the middle third of the class, we'll go into a lot of depth about estimating probabilities and reasoning with probability distributions. And you will then see ideas that might make you want to do this a little differently. But for now, let's just assume we use the frequencies we observe and are happy with it. OK, that's model-based RL. That wasn't too hard, I hope. Now, to contrast model-based RL with model-free RL, we'll look at an example of model-based estimation versus model-free estimation in an extremely simple setting. It's not going to be reinforcement learning. It's going to be really, really simple, but just to highlight the difference. And then from there, we'll go to the more complicated setting again. OK. Let's say we want to compute the expected age of CS 188 students. What does that mean? Well, that means we need to somehow have a distribution over ages and take a weighted sum: for each age, multiply it by the probability of that age, and that gives us the expected value of age in 188. We've seen that before. That's how you compute expected values. What if we don't have this probability distribution? That's the simplified counterpart of not having the model T in the MDP slash reinforcement learning scenario. What if we don't have this P? Well, what can we do? We can go collect samples-- ask a bunch of students, what's your age, what's your age, what's your age? After we've done that, we could build a model of this probability distribution. We could say, how often was a certain age mentioned-- let's say, how often did somebody say 20-- divided by the total number of samples we collected? And that's our estimate of the probability of being 20 years old. Once we have those estimates, that's our learned model. We can use it to estimate the expectation, using the exact same equation. But this little hat here denotes that this is an estimate of the probability. We don't know the true probability, but this is our estimate. So this is a way we can compute what we wanted, even though we didn't initially have access to a model-- we end up with an estimate of the model. There's another way you can do this. And you probably would have done it a different way, actually. Probably, instead of building this model and then computing an expectation, you would have just taken a bunch of measurements and averaged them, because the expected value is the average. So why does this work? Let's look at these two equations here. We have this equation here, this equation here. What's different about them? Well, a very striking difference is that, on the left, there is a weighting by the probability. And on the right, there is not. How come? How come, on the left, we have to weight by the probability, and on the right we don't? Well, it's because, on the right, we drew samples-- randomly picked people from the class and asked them what their age is. And the way these samples work is that they already obey the distribution.
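To make the two routes concrete, here's a tiny sketch; the pool of ages is made up for illustration:

```python
import random

# Hypothetical sampling of 50 students' ages; in reality you'd poll actual students.
samples = [random.choice([19, 20, 20, 21, 22]) for _ in range(50)]

# Model-based route: first estimate P_hat(a) from the samples, then compute the
# probability-weighted sum, E[A] = sum_a P_hat(a) * a.
p_hat = {a: samples.count(a) / len(samples) for a in set(samples)}
model_based = sum(p * a for a, p in p_hat.items())

# Model-free route: skip the model and just average the samples directly.
model_free = sum(samples) / len(samples)

print(model_based, model_free)  # the same number, up to floating-point rounding
```

Both routes give the same answer on the same samples; the only difference is whether an explicit model P_hat is built along the way.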
If there are a lot of people of age 20 in our samples, we get a lot of samples with age 20. And so the probabilities are reflected in how often a certain age appears in this average that we compute over n samples here. And so they compute the same thing, actually, but in a different way. Any questions about this? Because this is going to be fundamental to understanding model-free RL. Yes? STUDENT: [INAUDIBLE]? PROFESSOR: Sorry, say that again. STUDENT: Just what is the i? PROFESSOR: Oh, what is the i? So i here is indexing over students. We might have, let's say, 800 students, but the capital N might only be, let's say, 50, because we're only going to sample 50. And then i is indexing from 1 through 50. We randomly pick a student each time, and then average their ages. STUDENT: And on the other one, a is the age? PROFESSOR: Here, a is indexing over ages. Correct. So you would go-- I mean, I don't know, maybe not from 0, but from some reasonable age where the probability is non-zero, probably, and average it out. OK, yeah? STUDENT: [INAUDIBLE] how the probability [INAUDIBLE]? PROFESSOR: So yeah, that's really the fundamental thing we need to understand to go from model-based to model-free. The question was, how come the probability appears here, but we don't have a P(a_i) appearing over here? We don't have this. Why not? Intuitively, the reason we don't is because of the way these a_i's appear here-- the i corresponds to a random student. So we randomly pick a student in the room and ask, what's your age? Randomly pick another student, ask their age. And so if there's a high probability of, let's say, age 20, then a lot of these randomly picked students will say 20. And so 20 will appear many times in this summation here. Whereas if, let's say, age 15 is unlikely and rarely appears, then when I randomly sample people and ask their age, it might only appear once out of 800 or something. And so automatically, it's downweighted, because it doesn't show up much in the samples. If you were to add this probability here nevertheless, you'd somehow be double counting the probabilities. And that's not good. You don't want to double count them. You want to count them exactly the right amount. And here, the counting happens through the sampling process, rather than through explicitly multiplying with the probabilities. STUDENT: Is this a strong assumption to make, though? PROFESSOR: Is it a strong assumption to make? Well, a couple of things. I'm not claiming this is exact-- I mean, there's an approximation here. If you don't sample everyone, it's not going to be super precise. But if you wonder about these two, they actually compute the exact same thing. If you collect a set of samples, a_1 through a_N, from N students, and you either go this route or this route, you'll end up with the same number. It's a different way of computing what ends up being the same number. So it's not that on one side you make a bigger assumption than on the other side. So let's then transition to model-free-- not age estimation in 188, but model-free reinforcement learning. We'll do this in two phases. We'll look at passive reinforcement learning and active reinforcement learning. Passive will mean that we're just trying to estimate quantities we care about-- let's say, values-- but we don't worry about acting in the world. We somehow just watch things in action and try to estimate the values of states for this agent.
In active reinforcement learning, we'll also worry about how we collect the data to estimate these values from. Let's start with passive, because then we don't have to worry about taking actions. We'll just observe them. Since we don't get to choose actions, we observe some policy in action. There'll be some fixed policy, which we get to observe in action. We don't know the transitions. We don't know the rewards. But we see a sequence: state, action, next state, the reward associated with the transition, then again an action, again a state, then the reward associated with that transition, over and over, coming from this policy pi of s. The learner here is just along for the ride, watching this in action. You don't get to choose what actions are taken. The policy is just executed as is, and you try to evaluate the quality of the policy. What does that mean? The quality of the policy is the value it achieves, right? The value is the expected reward you get over time. And high value is good-- it means high expected reward. So we want to evaluate the policy: how good is this policy? The goal is computing values for each state under policy pi. Now, where does direct evaluation sit? We have model-based versus model-free. Under model-free, we have passive and active. We're in model-free passive now. And under model-free passive, there will be direct and indirect. We're doing direct now. So we're slightly deep in, but that's the simplest thing to start from for model-free. Direct evaluation means that we just average the observed sample values. So we observe the agent acting according to pi. And every time you visit a state, you just ask, OK, how much reward did I get from then onwards? That gives you a sample measurement of how good that state is under your current policy. And then you average over many experiences. So let's do this for a small example. Here's the input policy. Then again, a bunch of episodes get observed. And now we can ask the question, what are the values of each of the states, based not on knowing exactly how the MDP works, but based only on having observed this here? OK. Well, let's think about this, and let's try to draw this into this grid. There are five states. What are the values of the states? Well, let's see for A. Where did we visit A? We visited A only over here. And then what happened is we got negative 10. So A, we've only experienced once, and the result was negative 10. Episode over. So our estimate of the value for A is negative 10. It also happens to be the exact value in this case, but this is our estimate based on this experience. How about another simple one? D. D, we experienced three times. And every time we experienced D, we got a plus 10. So we average the three plus 10s together. That gives you plus 10. How about B? We've been in B here and here. When we were in B, we got a negative 1, a negative 1, and a 10. So this is plus 8. When we were in B here, we got negative 1, negative 1, 10. That's plus 8. We assume our discount factor gamma is 1, to keep the arithmetic simple. So we saw B only two times. The average of 8 and 8 is 8, so the value for B is 8. OK, let's take a look at C. We experienced C over here, over here, over here, over here. Here, we got negative 1 and 10. That's 9. Here, we got negative 1 and 10. That's 9. Here, we got negative 1 and negative 10. That's negative 11. Here, we got negative 1 and 10. That's 9. So we have three 9s and one negative 11. We average that. So we sum it all together and divide by 4.
Let's see. 27 minus 11 is 16. Divided by 4, that is plus 4. So this is the first more interesting one, where we have different types of experiences. We averaged what we got from them, and that's our estimate. How about E? E, we experience here and here. Here, we end up with negative 1, negative 1, 10. That's 8. And here, we have negative 1, negative 1, 10. That's also 8. So for E, we'll also have 8. Is that right? No, there's a negative 10 here. This is bad. So it's negative 12, rather than 8. So we have an 8 for E over here and a negative 12 here. The average of that is negative 2. That's what direct evaluation does for us. That's the procedure-- fairly simple. So what's good about this? It's very simple. You just look at, for every state, whenever you were there, what happened afterwards. Compute the discounted sum of rewards, average it, and that's your estimate. You don't need to do anything with T and R; you just average these sums of rewards. What's not so good about it? It actually wastes a lot of information about how the world works, because it never considers the correlations between states. Look at this here. I mean, this is a crazy way to assign values, because when you're in E, you always go through C. And when you're in B, you go through C. But somehow you say, for B, my value is plus 8, and for E, my value is negative 2. Really, they should be the same, because when we go from B or E into C-- that's the only thing we've done from there-- we get negative 1 for the transition. And after that, we're in C, and we should get whatever C is worth, not something different depending on where we came from. So the consistency between consecutive states is lost here. And even between these two states: how is it possible that from E you expect negative 2, but from C you expect plus 4? That's not compatible. It must remind you a little bit of inconsistent heuristics from one of the early lectures. You can't say it's going to be negative 2 from E, and then all of a sudden it's actually going to be plus 4 from C. That's not compatible. But that's just what this method gives us, because we don't look at a lot of the detail in the data. We just look at summaries-- how much did you get from each state-- and average them. And that's why this is not super precise. But if you keep collecting data, and you collect data infinitely long, it will average out to the right thing. You just need to collect more data than we collected in this case. I want to say, for infinite data, all of this will work out nicely. But often in learning, it's not about what you learn after infinite data. You want to learn more quickly than after infinite time. So what else can we do? Maybe we can do something closer to policy evaluation. We saw policy evaluation last lecture. What did it mean? Well, you were in a state. Your policy chooses an action. And then you might be in some kind of Q-state, where you are committed to an action for that state. And from there, you randomly transition into whatever the next state is going to be. And with this diagram, there was a set of backup equations, called Bellman equations, that told you how to do dynamic programming to efficiently find the values of states. The way it was done was, well, if you have 0 time steps left-- so, notation: the bottom index means how many time steps are left-- 0 time steps left. The top is the policy you're using, policy pi. And s is the state you're in.
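For reference, the policy-evaluation backup being described here, reconstructed in symbols from the spoken description:

```latex
V_0^{\pi}(s) = 0, \qquad
V_{k+1}^{\pi}(s) = \sum_{s'} T\big(s, \pi(s), s'\big)\,
  \Big[ R\big(s, \pi(s), s'\big) + \gamma\, V_k^{\pi}(s') \Big]
```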
If you have 0 time steps left from state s using policy pi, you'll get reward 0, because there's nothing left for you to do. Then there's a recursion that says the value when you have k plus 1 steps left in your agent's life can be computed as a function of what happens in the first step of that agent's life: some action gets chosen, and then there's some distribution over possible next states based on that action. That's this transition here. And there's a reward associated with that transition. And then there's the rest of the agent's life, which, at that point, is k steps long-- because you start with k plus 1, you took one step of your life, so you have k steps left. And assuming we know the value with k steps left, we can use that to compute this weighted value for k plus 1 steps left. If we use an equation like this, we are definitely using the connections between the states, and exploiting that, if states are next to each other and one comes after the other, their values will be very related, thanks to this equation. But how do we do this, given we don't have T and R? So what we're going to answer in the remainder of this lecture is all centered around this question: how do you solve a Bellman equation like this one-- and we'll also see the value iteration equation later-- without having access to T or R? Well, let's give it a shot. This is the equation we want to work with. What we want is, essentially, to start in state s, use our policy a few times from there, and see what happens. That gives us sample experiences. And then we want to average them. Because we don't have the model, but we can act in the world to see what happens, see where we land, see how much reward is associated with it. And let's for now assume we have the value function for the smaller k time steps left, because we'll start at V_0 anyway, right? At V_0, we have all zeros. That's easy. And so we can get to V_1, if we can make this work, then V_2, and so forth. So this is the way to compute this quantity over here: by averaging experience in the real world, rather than computing an expectation with the probabilities in there. Remember the age averaging? The age averaging on the left of the slide is model-based: you use the probabilities explicitly. The age averaging equation on the right corresponds to averaging these sample values. We average them, and we get a value for k plus 1 steps to go. What's the tricky part here? Well, the tricky part is that, typically, you cannot just put your agent anywhere and say, OK, let's act from there-- and then, let's reset you; let's go again. The agent is acting in some kind of environment, and the dynamics of the environment might not allow you to reset it every time to wherever you want it to be. So we might not be able to actually collect this data the way it's described here. In practice, we might need something a little different from what's described here. But if we could collect the data from a specific state multiple times, this is what we could do. And we could effectively run policy evaluation with an averaging version from samples, rather than a weighted expectation using the equation up there. OK, but we can't rewind time, unless we have some kind of helpful daemon doing it for us. So what can we do if we don't have this daemon? Well, the main idea is that we want to learn from every experience that happens.
So any time we get an experience-- which is a state, action, next state, reward-- we want to learn something from it that ties into this Bellman equation. How can we do this? OK, let's think about this. Our policy will still be fixed, so we're still doing evaluation. But the values will hopefully become more and more precise about what the value of each state is under that policy. So let's say we get an experience. What it means is we get a sample of the value of the policy in state s, expressed as the immediate reward plus gamma times the future expected reward from the next state onwards. That's our sample. Just like we had samples here-- same type of samples. But now we consider only one, instead of having many. Now, how do you average one sample in a meaningful way? Well, maybe this is what we can do. We assume we already have some running average. Maybe we initialize it to 0, but we assume we already have a running average, V pi of s. And we're going to bootstrap off that. We're going to mostly keep it. We want to mostly keep what we already have, so 1 minus alpha will be, let's say, 0.9, and alpha will be 0.1. Let's say alpha, to make it concrete, equals 0.1. That means 0.9 times what we already have-- which will not be precise; it's just something we hope might be somewhat precise-- and we correct it with the sample value. This is not the normal way of computing an average. It's a running average calculation. But what it allows us to do is, as new experiences come in, go look at our table, ask what our current value estimate for this state is, and fold in the current experience to improve that estimate. Another way to write it is as follows: we keep the current estimate, but add an alpha-scaled correction to it, based on the difference between the current experience-- which is noisy, because one experience doesn't tell you the whole story-- and our best estimate so far. Look at the difference, and add it on, times a small scaling factor. Let's look a little bit at this running average. What does it mean to compute such a running average? Let's abstract to something that's not necessarily values. We have some x. We get an x_1, an x_2, an x_3, and so forth. We want to compute the average of all the x's. And the running average is this x-bar thing. After we've seen n minus 1 samples, x-bar n minus 1 is the running average. Then we see the nth sample. And we're going to correct the running average by folding in the nth sample-- by taking a weighted average between the new sample and the running average we have so far. It makes recent samples more important. It's not the same as computing the actual average. It's a little different. So here's what it'll do. If you expand this, this is the kind of weighting you get. And remember, 1 minus alpha is a number between 0 and 1. So the higher the exponent, the smaller this becomes. And so this is the biggest contributor, this is the second biggest contributor, this is the third biggest contributor, and so forth. So it's kind of a skewed average, where later experiences count for more than earlier experiences. You might think of it as a bug-- don't I want the real average? But soon enough, you'll see why this is actually even better. It's better to skew towards the later ones. OK, so you forget about the past.
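As a tiny sketch of that running average (the choice alpha = 0.1 here is arbitrary):

```python
def running_average(samples, alpha=0.1):
    """Exponential running average: recent samples count for more than old ones."""
    x_bar = 0.0                               # assumed initialization
    for x in samples:
        x_bar = (1 - alpha) * x_bar + alpha * x
    return x_bar

print(running_average([10.0] * 100))          # approaches 10.0
```

Expanding the recursion, the most recent sample gets weight alpha, the one before it alpha(1 - alpha), then alpha(1 - alpha)^2, and so on-- exactly the skew toward later experiences just described.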
And intuitively, what's happening is that, when I showed you this equation here-- this is the previous slide-- I said, let's assume we already have some estimate. And we're going to average with it, and we're also going to use it here. So that estimate, initially, is not correct, and we're just using it. But after we do an update, it becomes more correct. And so the further we are into this process, the more precise these V pi's are, and so the more precise are these sample values that we calculate here and average in. And that's why the later samples deserve to be weighted more highly than the early ones, which are pretty random. If you make your alpha go down to 0 with an appropriate scheme-- don't worry too much about the specifics here for now-- then this will converge in the limit. OK, let's look at this in action. Same environment, but now we're going to learn from every experience on the fly. We start in state B. We have some current estimates of our values, and we're going to try to improve them from our experience. The first thing we do is move east, land in C, and get a reward of negative 2. What does that mean? It means we experienced a sample value of-- what is the sample value? The sample value will be the reward, which is negative 2, plus gamma, which is 1 in this case, times the value of the next state, which is C. So let me write this as the reward we experienced plus gamma times the value. The reward is negative 2. Gamma is 1. And the value of the next state, C, that we have as our estimate right now is 0. So our sample value is negative 2. Now we can do an update to our value for B: V pi of B becomes 1 minus alpha times the V pi of B we had before, plus alpha times the sample value, which is negative 2. Alpha is 1/2. Our previous V pi of B is 0. So the result is that this becomes negative 1. So after one experience, we've updated our value estimates. This became negative 1. The other ones didn't change. The only one that changed is the state you left, because the state you left is the one for which you get a new estimate of the value. About the other states, you didn't get any new information. Then we can repeat this process. Now we take, again, action east, and land in D, with a reward of negative 2 associated with that. We can go through the same process. The reward plus gamma times V of the next state, which is D, is our sample value up here. A reward of negative 2, plus gamma, which is 1, times the value of D, which is 8. So this is equal to 6. So our sample value will be 6-- and this is for C now. We experienced a transition out of C, so we're going to update the value for C, by adjusting it towards the sample value of 6. The learning rate is 1/2. Our estimate for C so far was 0. So, to make it explicit: 1 minus 1/2, times 0, plus 1/2 times 6, which is 3. So our new estimate for C's value is going to be 3. And none of the other values changed, because we didn't learn anything new about the other states. Actually, let's take a break here. And then we'll start looking at some of the remaining issues we need to resolve to get this working more fully. [SIDE CONVERSATIONS] PROFESSOR: All right, let's restart. Any questions about anything we covered so far? Yes? STUDENT: So when [INAUDIBLE] reward of negative 2-- is the reason that the value of C went up because of the 8 in V, even though we got a negative reward? Why [INAUDIBLE]?
PROFESSOR: Yeah, that's a really good observation, and it's worth emphasizing. So the question was, why did the value of C go up, even though you got a negative reward when exiting C to go into D? And the reason is that, when we compute values, we want to estimate the total expected discounted sum of rewards over all future time left in the life of the agent. And it's exactly what you're pointing at. There is still time left in the life of the agent-- it's in D from then onwards. And from D, based on our estimate, it's going to get 8. So even though there was a negative 2 associated with the transition itself, there was a plus 8 associated with the estimate of everything that's going to be accumulated over future time. And that's captured by this term over here. So the instantaneous reward is this term, and then all future time is summarized in that term. And that's the 8, discounted-- in this case, the discount factor is 1, so it doesn't really get discounted. And that's how we get to minus 2 plus 8, which is 6, as the new estimate based on the current sample. And then, the old estimate was 0. Our learning rate alpha is 1/2. So that means we average half of what we had and half of the new thing. So 1/2 of 0 plus 1/2 of 6 makes for 3. So what we can do now is have an agent act in an environment. We can decide to never build a model of how the world works-- we never build a T, we never build an R-- yet we can recover the V values, the values of the policy for each state. And it turns out, if you run this long enough and the learning rate goes down over time, you get accurate values for each of your states. Now, last week, you saw an algorithm called policy iteration. And one of the two components in policy iteration was policy evaluation. Once you know how to evaluate your policy, you still need to do something else, which is improve your policy-- that was the policy update step. But to improve your policy, you actually need access to a model, because you're looking at a max over actions of expected reward plus next-state value, and you need the model to see which action actually maximizes that. And so we're still kind of stuck with what we've seen so far. Because we don't have the model, we don't know how to update our policy. So we know how to get values of our policy for each state, just not how to update the policy. Now, the key idea for the remainder of this lecture is that maybe we've been doing it all wrong, in some sense. Instead of learning the values of the regular states, the V states, why don't we learn the values of the Q states? Because if we learn the values of the Q states, then this all becomes very easy. Once you have the Q values for every state-- the value of being in state s and taking action a in state s, followed by whatever you do after-- then you can see which action achieves the highest Q value. And that would be the one you want to take. So maybe we just need to swap it around and start learning Q values. You might wonder, why didn't we just learn Q values from the beginning? It's just that the math is a little simpler when learning V values. And in some other methods, it actually is quite relevant to learn V values. But now we'll switch to learning Q values, because that will give us the extra power to be able to improve our policy as we learn the values. And so what we can then do is, as we update our Q values, we can also, if we want to, update how we act in the world. And repeat. And become better, and better, and better over time. OK.
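Before moving on to Q values, here is a minimal sketch of the passive TD update just traced through for states B and C; gamma = 1 and alpha = 1/2 match the example, while the plain-dict value table is an implementation assumption:

```python
gamma, alpha = 1.0, 0.5
V = {s: 0.0 for s in "ABCDE"}        # current value estimates, initialized to 0

def td_update(s, r, s_next):
    """One temporal-difference update from a single observed transition."""
    sample = r + gamma * V.get(s_next, 0.0)      # reward + discounted future estimate
    V[s] = (1 - alpha) * V[s] + alpha * sample   # running average toward the sample

V["D"] = 8.0             # pretend D's value was already partly learned, as on the slide
td_update("B", -2, "C")  # B -> C with reward -2:  V[B] = 0.5*0 + 0.5*(-2) = -1
td_update("C", -2, "D")  # C -> D with reward -2:  V[C] = 0.5*0 + 0.5*(-2 + 8) = 3
print(V["B"], V["C"])    # -1.0 3.0
```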
So this now will be active reinforcement learning, because now, when we learn Q values, the Q values can prescribe to us what we want to do. And we can actively collect our data while learning the Q values. Now, how do we learn the Q values? It'll be very similar to how we learned the V values. We need to do a little bit of trickery still. We still don't know the transitions. We still don't know the rewards. But we get to choose the actions. And we'll choose them based on the Q values we have so far, which will be approximate, but might give us some guidance in terms of what's promising and what's not. The goal is that, ultimately, we end up with the optimal Q values that tell us: this is the optimal action to take, and it will give you this much value. So the learner now makes choices. There will be a fundamental trade-off. We're not going to go into it in this lecture, but we'll see a lot of it next lecture. Because it turns out, you don't necessarily always want to act based on what you've learned so far. Sometimes, you want to just try something fundamentally new that you've never tried before, because that might accelerate your learning, rather than just keep doing what already looked good in the past. That's the exploration/exploitation trade-off. More on that next lecture. But keep this in mind, because it's an important concept that I want you to be aware of from the beginning. Keep in mind also, Q learning is not some offline planning. We're not building a model and planning in it. We're actually collecting data and improving our Q values, just like we did with V values-- but we haven't seen the math for the Q values yet. So in value iteration, we start with V_0 equal to 0 for each state. And that's intuitively meaningful because, when there's 0 time left, you'll get 0 reward for the remainder of your life as an agent. And from there, you can recurse to compute values for more and more time steps left. From V_0, you can get V_1. From V_1, you can get V_2, and so forth. Can we rewrite this in terms of Q values? Because if we can, then maybe we can do the same thing as we did with V values and learn them from experience, rather than computing them from a model. Well, Q_0, with 0 time steps left, we can also set equal to 0. That's correct. Nothing different there. How about Q_k? Can we compute it from a Q with fewer time steps left? OK. Well, here's what we can do-- I'm going to step through this in detail. Q_{k+1} is the value of being in state s, taking action a, and then continuing from then onwards; it computes how much value you're going to collect from s onwards over k plus 1 time steps. You can decompose that into what happens in the first step and what happens in the remaining k. The first step: it's whatever reward you get. And the remaining k: well, the recursion tells you that you already computed that for the smaller k, so you can just plug it in. What value do you get from state s prime? It's the max over all actions available to you of Q_k(s', a'). This is the recursion we want to work with. And then this is all weighted by the probability of landing in a state s prime. These are essentially the same as before, just reorganized a bit to be written in terms of Q values rather than V values. But it is the same dynamic programming principle: the value with k plus 1 time steps left is what you get in the first step, plus what you get in the remaining k steps.
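Written out, the Q-value iteration recursion just described (a reconstruction from the spoken description):

```latex
Q_0(s, a) = 0, \qquad
Q_{k+1}(s, a) = \sum_{s'} T(s, a, s')\,
  \Big[ R(s, a, s') + \gamma \max_{a'} Q_k(s', a') \Big]
```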
And we assume we already know the values for k steps left, so we can bootstrap off all of that to get the ones for k plus 1. Now, if we look at these equations, the difference between the two is that this one starts with a max, and this one starts with an average. The one that starts with the max-- we cannot just take samples and make it work. A max based on samples doesn't work out: the sampling is happening behind the max, and it's not clear how to do this. Nobody's figured this out. But here, we have the sampling up front. If we have the sampling up front-- an expectation being computed-- we can do that based on just the samples we draw. Without having access to a model, we can just average the actual samples we experience. So, to contextualize what we've seen so far: we've seen policy evaluation. That's what we've seen so far. And there, we have an average up front, too. Then we have value iteration, which is this one here. But because of the max up front, we can't do the sample-based estimation. So we're stuck. But by reorganizing it in terms of Q's, we have the samples up front again. And this is Q-value iteration, which also, if you iterate it, computes the optimal values, just with a slightly different iteration scheme. But the beauty is that we can use this iteration scheme to start taking averages, rather than using the model itself. So we have Q-value iteration. We're going to make it sample-based. What does that mean? Well, as you get a new experience-- let's say, in some kind of grid world, and I'll show some examples soon-- you receive a sample (s, a, s', r). You have a current, old estimate of your Q value for s and a. You're going to update it. You're going to say, I have my sample right now, which tells me, hm, I got this much reward, plus, in the future, I expect to get this amount-- which is one of these terms. It's only one of the terms, so it's not an accurate estimate. It's not the precise Q value. It's just one of the terms. But we can use it in our running average to update our current estimate. And as we update often enough from enough new experience, we will get the correct average in here-- assuming, again, alpha goes down, so we don't keep hopping around. Let's look at some demos of this in action. So what are we looking at here? This is a standard grid world. We know how this works, but we're going to watch what happens when Q learning takes in experience and tries to learn the values for each of the state-action pairs. So each state now has four values, since there are four actions. Most of these states have four actions, so there are four values. There are two exit states, which only have one action, so they only have one value. We initialize everything to 0. Now, I'm going to do a first experience, moving up. What do you think is going to happen to the Q values? Well, we need to know about rewards, of course. Here, the rewards are 0, unless you take these exits. So if you move up, nothing's going to happen. All values stay the same, because your previous estimate was 0 for the Q value of the bottom left state for going up. You experience 0 reward, plus gamma times the max over all Q values in the new state. But that max over all Q values is also 0, so the sample value is 0. All the values are 0; nothing changes. When we move up again, what's going to happen? Same thing. There will be 0 reward, and the max over all the values there is 0, so the sample value will be 0. Averaging the sample value of 0 with the existing value of 0 will remain 0.
Same moving to the right. Same moving to the right. How about here? Still the same thing, because the value of the next state is 0. But now, when we take the exit action, a non-zero sample comes in. A reward of, in this case, I believe, plus 1 was received. The learning rate is 1/2. So we had 0 before, a reward of 1, and then it finishes, so there's no future value. The sample value is 1. Averaging the 1 with the 0 gives us 1/2. Now we go the same way again. Everything stays 0 here, because the sample values are all 0. But now, from this state here, if we go right, we will experience a sample value that's non-zero. Why? Not because the reward is non-zero-- the reward is still 0-- but because the sample value is the reward plus the max of the Q values in the next state. The max of the Q values in the next state is 0.5. And we'll average that 0.5 with the zero that we have, making it 0.25. And the last one averages the 0.5 with 1 to get 0.75. Now a 0.12 appears. And as we keep running that same trajectory, we see the values propagate from that termination state all the way back to the start state, gradually, through the averaging of values from future states. Let's do it a few more times. Well, let's go back here. Oh, what happened here? I went right. I came back, and I got 0.13 at the bottom. Why do we get 0.13 here? I've never experienced a reward from this state-- but you don't need to. From this state, we transition into this state. The transition had 0 reward, but the value here is 0.25. That means this sample estimate is 0 reward plus 0.25 future value-- a sample estimate of 0.25. Half of that, rounded, is 0.13. So as we keep going through this, these values get closer and closer to the correct values. Now, let's see what happens if I go up here. And now I go down, instead of sideways. What do you think is going to happen? Well, moving down, it's going to stay 0, because there's no reward on the transition, and there's no value yet in any of the actions in that state. So with a sample value of 0, things just stay 0. What if I now go here? Still 0. Now exit here, and I get something in here. Let's do this again. Now I go to the side, and the negative values are propagating, because the sample value was the instantaneous reward plus the negative 0.5. We average it with the 0 we had before, and now this thing gets a negative value. Now, interesting things are happening here. I keep going to the negative exit. And actually, the values of all the states in the beginning keep going up. Look at the values here as I'm moving up-- they keep increasing, even though I'm always experiencing pretty negative rewards. Why is that? It's because the way these values are computed is based on what the value is in the next state-- not what I experienced particularly, but the best thing I have in that next state, which is actually based on going to the plus 1. And even if I go now to the negative 1, that does not affect these other values. In fact, going down from here still has 0 value, even though I've always gotten negative 1 that way. Why is that? Because I still have some zeros in that state. And in fact, I can even make going down positive. If I go down now and come back up, now I get a positive value for going up. Now I exit. Now I go again. I go down here, and now the value of that state at the top for going down has become positive, because it knows that the best thing to do after going down is come back up and then go right. And that's encoded in these Q values. And so it knows that there's positive value even in going down.
It's better to go right than to go down, but there's still positive value in going down. What if we follow the other path here? Well, we see something positive here-- 0.37. Why is that? Because from this current state, if we were to go up, we have high value. And that's what propagated. Even though there's also a negative there, it doesn't take the negative. It takes the max of all those values, which is the 0.75 used [INAUDIBLE] in the sample estimate. And even if I go now-- even if-- well, I did it for so long, it exited. But you can go to the negative 1 for as long as you want. It will still keep grabbing the most positive Q values to propagate. Let's look at another example, a more extreme kind of maze of this type. This is the bridge world-- bridge or cliff world. Essentially, the way it works is that the middle is safe, and the ends have rewards. But if you go off to the side, you die, with negative reward. So let's see what happens. We move to the right, to the right, to the right, to the right, and we get positive reward there. The Q value becomes positive: the plus 10, with a learning rate of 1/2, becomes 5. Let's go again. And now, when we transition from here, we expect a non-zero value, because the sample will be 0 reward plus a value of 5. Half of that, with a learning rate of 1/2, puts 2.5 in there. Now, we can also jump off the cliff from here. Negative 50-- not great. We can jump off the cliff here. Negative 50-- not great. We can jump off the cliff here. Negative 50-- not great. We can go jump off the cliff over here. Not great. But actually, what happened is, even though we ran this trajectory where we jumped off the cliff, the value here went up. And now, if I again go jump off a cliff at this state, the value will nevertheless go up, because it knows that there is something good available. That's what the max Q value is. And that's what's propagating. Yes? STUDENT: So the [INAUDIBLE] jumped off the cliff first, so that it wouldn't exit. Would it always be [INAUDIBLE]? PROFESSOR: The question is, what happens if we jump off the cliff first? Let's just do it. So we need to reset this thing, of course. Otherwise, it's not first. So let's do this again. Let's just jump off the cliff right away. And in fact, it's not that bad a decision, per se, because the agent doesn't know how the world works. It's not, oh, I'm going to jump off a cliff. It's more like, I've never been here; let me go check it out. Oh, it turns out I jumped off a cliff. That was not good. That's the reinforcement learning world. And that's why, typically, reinforcement learning is more easily experimented with in simulation than in the real world. So let's jump off the middle cliff here. Boom. We jumped off. OK, now we go again. What do you expect to happen? Well, as we go here, if we now move up again, it should know that that's a bad thing. Let's see if that happens. Yes, it knows. What if we're here again, and now we actually move somewhere else? Let's say we move right. It did not do anything. We move left-- nothing. Nothing. Nothing. No values get updated here. Why is that? Because if you look at the max value, it's still 0. Even if you're neighboring this negative 25, the max of that square is still 0. And so the max stays the same, and the negative 25 can't make it out. Even if we were to jump off this one, there are still some zeros available in that square. Let's see what else we can do-- we cannot get negative values in there, just because the 0 is always better. That's the problem. Well, not the problem-- it's a feature.
It's a good thing. What it also showcases is the notion of off-policy learning. You're not learning the value of the policy that I'm executing. This thing is learning the optimal Q values-- the values of the optimal policy, not my dumb policy that's just jumping off cliffs. No, it's learning. And now I'm maybe not so dumb, so I can get some positive values every now and then. And then these values are the ones that are going to propagate, even if at some point I start jumping off the cliff again. So now, we get these positives to come through. And now, after they have kind of settled in place a bit, I can go through that again, but at the end try another cliff. And still, positive values will have propagated. Now, let's take a look at how well this works for our crawler bot. There'll be a video. So what we're going to watch here is the crawler in action running Q-learning. That's what you're going to do in your project. So what are you seeing at the bottom left here? Bottom left is values. There is a two-dimensional state space. Really, the state space is continuous, but we discretized it. There are buckets. Like, if you're between certain angles, you're in a certain state. If you look at both angles, you fall into a bucket. And that's what's shown on the bottom left here-- so based on your first angle, based on your second angle. We see the values are going up if you're down here. So these are good states to be in. Here, we have Q values, which are then compartmentalized. In any state, we have four actions available. For each joint, we can either increase or decrease the value of the angle of that joint. And we do that for both, so 2 times 2-- 4 total actions. And what we see here is that after it's been training for a while, it gets into a cycle where it goes through the same states over, and over, and over, which allows it to move fast to the right. Let's watch this one more time. So initially, these values are all initialized, in this case, to 0, effectively. And it's exploring. But of course, we're accelerating the exploration a little bit in this demo, where we are letting it run many, many steps behind the scenes. Boom-- one million steps, I believe, we just let it learn. And the values got a lot more precise after an extra one million steps. And you see it learns to locomote. The beauty here is that it doesn't know anything about how this world works. It just has been experiencing states, actions, and rewards, and figures out from that what are the optimal Q values to maximize reward. So this is actually a pretty amazing result. Q-learning converges to the optimal policy, which is encoded in these optimal Q values, even if you're acting suboptimally. That's just the way the propagation equations work. The suboptimal stuff doesn't propagate; only the good stuff propagates, somehow, through that max. And this works out. That's important, because you don't know the optimal policy. If all you can do is learn the value of an existing policy, then it's very hard to find the value of the optimal policy, because you don't know what the optimal policy is. Chicken and egg problem. But with Q-learning, you can use any policy, as long as it visits every state sufficiently often, to learn the value of the optimal policy. This is called off-policy learning. Caveats-- you need to explore enough. I said, if you visit every state often enough, that's exploration. You need to go see all the states sufficiently often to understand what their values are.
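Putting those pieces together, here is a hedged sketch of the full off-policy loop with an epsilon-greedy behavior policy. The environment interface (reset, step) and the fixed parameter values are illustrative assumptions, not the project's actual API.

    # Hedged sketch of off-policy tabular Q-learning with epsilon-greedy
    # exploration. The env interface here is assumed for illustration.
    import random

    def q_learning(env, actions, episodes=1000, alpha=0.5, gamma=1.0, eps=0.1):
        Q = {}
        for _ in range(episodes):
            s = env.reset()
            done = False
            while not done:
                # The behavior policy can be almost anything (even jumping
                # off cliffs); with enough exploration, Q still converges
                # to the optimal values. That is off-policy learning.
                if random.random() < eps:
                    a = random.choice(actions)
                else:
                    a = max(actions, key=lambda a2: Q.get((s, a2), 0.0))
                s_next, r, done = env.step(a)
                future = 0.0 if done else max(Q.get((s_next, a2), 0.0)
                                              for a2 in actions)
                sample = r + gamma * future
                Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * sample
                s = s_next
        return Q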
You have to make the learning rate decay over time-- we'll see a scheme for that next lecture-- and also not decay too quickly because, otherwise, your later experience cannot contribute enough to correct maybe some noisy past experience. But the beauty, again, is that, in a limit, it does not matter how you select actions, as long as you satisfy those properties. OK, that's it for today. Next time, we'll look at these issues. See you on Thursday. [SIDE CONVERSATION] |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181004_Bayes_Nets_Representation.txt | PROFESSOR: Let's get started. All right. So today we're going to talk about Bayes' Nets, also known as graphical models, which are a technique for building probabilistic models over large numbers of random variables in a way that is efficient to specify and efficient to reason over. I think this is a really exciting topic, because this is one of the main tools that we just didn't have in the sort of first go-around of AI in, say, the '80s. And it's not like people weren't aware of probability. It's not like people weren't aware that there was ambiguity and uncertainty. We just didn't have the tools we needed to manage and reason over uncertainty at scale. And one of the main tools that was developed was what we'll be talking about today. So in general, what we're talking about now is probabilistic modeling. In the first part of the course, we mostly talked about actions, selecting actions, sequences of actions, chaining reasoning along actions. And what we're talking about now is not so much about actions, but about beliefs. So you want to be able to describe how some portion of the world, some variables that you care about, how they work. And in a probabilistic setting, how variables work means how they interact in a noisy way with each other. So this is modeling. So there's going to be a whole bunch of math. We're going to talk about the machinery and the formalisms today, but this is modeling. And whenever you do modeling, you're in the game of making simplifications. So models are always going to be simplifications. Our probabilistic models are going to be simplifications. So we'll pick some random variables, and then there will be some variables we don't model. So there are going to be some variables that just don't appear, either because we don't want to take the time to model them or we don't know how to model them. And it may be that even amongst the variables we have, there are interactions that are too minor or too expensive to capture. So whenever we talk about models, there's sort of the formulation of the model-- which variables do I include? Which interactions will I model? Where do I get those probabilities and how do I learn them? And that's tricky. Often there you're making judgment calls and trade-offs. And then once you have your model, there are the algorithms and queries you run against it, the formal questions. In this model, what is the probability of x given y? And that's what we'll be starting to talk about here. And as we go, we'll try to get sensitized a little bit to the questions that occur in terms of thinking about trade-offs. I love this quote from George Box. "All models are wrong, but some models are useful." And our job is to come up with models that are useful, not really models that are exactly right, because only in rare cases can we model a domain exactly. And we'll see examples of that today. In the back of your head, think, what's the price of not modeling a domain exactly? The price is uncertainty. All the variables you leave out, all the interactions you leave out, they show up as noise on the other variables. And we'll see examples of that. So what are we going to do with probabilistic models? Remember, this is in the framework of rational AI, which means making decisions that maximize your expected utility.
Part of maximizing your expected utility is inferring what's going to happen, or inferring something about underlying causes given evidence. And so we, or really our agents, need to reason about unknown variables. And that usually takes the form of: given some things I do know, my evidence, and some things I don't know but I'm curious about, the query variables, and a model that connects them, what can I conclude? And that's what we'll get into today. One example of this is explanation. This is diagnostic reasoning. I see some symptoms and I want to know the underlying cause. Another example of this is prediction. This is causal reasoning. I have a model. I think, what will happen if I make this change? And then I play it forward. And that seems like simulation. And in a way, it sort of is noisy simulation. Another kind of thing that we'll see, probably in about two weeks-- another use we can put these probabilistic models to is value of information queries. And that's for things like-- remember in Ghostbusters? I could probe and gather information, or I could act, and there was a trade-off between gathering information and making a decision. And so you have questions like, how much-- whether it's in dollars or utilities-- how much value is there in finding out this piece of evidence? Those computations, where knowledge is associated with gains in utility, are really a fundamental thing in rational decision making. And those are called value of information computations. And we'll take a look at that once we have all the machinery. And that's going to be, I think, a really cool lecture where we connect the action, decision making, up to the probabilistic inference. Any questions before we get started? So we're going to, in some ways, continue where we left off in the past lecture, talking about probabilistic models, which are formally joint distributions over our collection of random variables, and talking about properties of those models and computations we can do over those models. One major property that's going to be really important today, and that we didn't get into last time, is the notion of independence between random variables, and then, following onto that, the more important notion of conditional independence. So we say two variables in a joint distribution are independent if the following holds. So informally, two variables are independent if there's no interaction between them. That's an informal notion. What does that mean mathematically? It means that if I take a distribution p of x,y-- and remember, that's a big table that tells you, for every value of x and y, how likely that outcome is. And that's going to be the probability of that outcome. If for every x and y, the probability that x and y happen together is simply the probability that x happens-- nothing said about y-- times the probability that y happens. Nothing said about x. Now this, in general, doesn't hold. What do we know in general? We know in general that p of x,y, the probability of x and y, is the probability of x times the probability of y given x. That's always true. That's the product rule. We saw that last time. But you might have a very special distribution in which y, in fact, doesn't depend on x at all, and the joint probability is equal to the product of the marginal probabilities. What this does is that this joint distribution, which is a big two-dimensional table, factors into the product of two simpler distributions. Here, two one-dimensional tables. And that means it's a special kind of simple distribution.
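As a quick aside, that factorization can be checked mechanically. A hedged sketch, assuming the joint is handed to us as an explicit table keyed by (x, y) outcome pairs; the encoding is illustrative, not course code.

    # Hedged sketch: does a joint table P(X, Y) factor as P(X) * P(Y)?
    def marginal(joint, axis):
        # Sum out the other variable to project onto one axis.
        m = {}
        for outcome, p in joint.items():
            m[outcome[axis]] = m.get(outcome[axis], 0.0) + p
        return m

    def independent(joint, tol=1e-9):
        # Independence means every joint entry equals the product of marginals.
        px, py = marginal(joint, 0), marginal(joint, 1)
        return all(abs(p - px[x] * py[y]) <= tol
                   for (x, y), p in joint.items())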
Here's another way you can write that definition-- just what we showed up here: if it's true that the joint distribution factors as the product of the marginals, it's going to be true that for all x and y, the conditional probability of x given y-- that is, how likely x is to take a certain value given a value of y-- is the same as its marginal probability. That is, learning something about y doesn't change my belief in x at all. And if that's true, then we say x and y are independent. We write that this way. That little thing between x and y, you can think of that as a sort of perpendicular sign. It says that, in some deep sense, they're independent. They're doing their own thing. So it looks like a perpendicular sign. Now independence is our first example of a modeling assumption. I can take two random variables, like coin flip one and coin flip two, and I can ask what is their joint distribution, or I can make an assumption. I can say, you know what? I know enough about how coin flips work that I'm going to assume they have nothing to do with each other, and I can think of them as being governed by independent distributions with no interaction terms, which means if I want to know their joint probability, I multiply their independent probabilities. There's a couple of reasons why independence might not hold. One is that there are often very subtle interactions or weaker interactions. You may choose not to model them, but they're often these sort of lower-order interactions. We'll see some examples of that today. And of course, if you actually look at a joint distribution that comes from data, often there's going to be imperfect independence. So if I flip coins a couple of times, they might not exactly fall independently. That would be something that would only be true in the limit. So let's take an example. And this we'll do intuitively. Let's say we have a domain which has the following random variables-- weather, so what kind of weather is there; traffic, how much traffic is there. Maybe that's been discretized into light traffic and heavy traffic. Cavity-- this represents, do I have a cavity in my mouth? And tooth ache. Do I have a tooth ache? So here are four random variables that I may care about. And if I build a probabilistic model over them, we might be able to simplify that model. Instead of being a big four-dimensional giant table, I might be able to split it into two tables where parts of my domain don't interact. So what's a good split? Or maybe all these variables are independent. Are there any variables there that probably aren't independent? What's an example of things that are not independent? One is weather and traffic. There's probably some correlation there. Weather gets bad, traffic gets bad. Those aren't independent. How about cavity and tooth ache? Independent? No. Go see your dentist. Dependent. All right. But these two subsets of variables maybe don't have any major interactions. And so I might say that the variables weather and traffic, although they interact together, may be governed by one distribution. Cavity and tooth ache are governed by another distribution. And there maybe don't need to be any interaction terms. So if I want to have the probability over all four, I can write that as the product of the probabilities of the two pairs. That's the idea behind independence. Let's do an example. Here is a joint probability distribution over the variables t and w from last time-- remember, t was either hot or cold.
It was the temperature. And w was either sun or rain. It was the weather. And here are some probabilities. There are many probability distributions over these variables. They all look just like that, but with different numbers. And of course, they all add up to 1. They're all positive. That kind of thing. Now what I can do is compute the marginal over t. So I can say, well, t is either hot or cold. And if I want to know what's the total probability of t equals hot, I look at my joint table, I look at all the consistent rows, and I add those together. And so I get 0.5. And so I can compute this. We did this last time. This is projecting a joint distribution over the two variables onto a marginal distribution over just one. And of course, I can do that for p of w, too. And I look in there and I do my computations and I see that the total probability of sun is 0.6, for example. So p1 here on the left is a joint distribution over these variables. pt and pw are marginal distributions of that joint distribution. So these are derived. Now I can ask a question, which is, what is the probability of hot,sun? Well, I can look it up in my table in p1. And I see, oh, it's 0.4. So in p1, that's 0.4. In a distribution in which these distributions pt and pw were the marginals and where independence held, I would know that p of hot,sun is equal to p of hot, the marginal, times p of sun. And so I can look those up. Here's p of hot. Here's p of sun. And that's 0.3. What do we know? We know that p1 is not a distribution in which t and w are independent. And in general, things aren't independent. If I grab an arbitrary probability distribution over some variables, things won't be independent. Only a simple subset of them are. For example, here's p2. In this one, p of hot,sun is 0.3. And if you check all the other entries, you'll see that that definition of independence holds. The product of the marginals is equal to the joint for all of those different outcomes. So p2 and p1-- just to sort of exercise some brain muscles for next time-- p2 and p1 are two different joint distributions over the same variables. p2 has the property that t and w are independent. p1 does not have that property. So in some sense, p2 is simpler. One way in which it's simpler is that I can write it by just supplying you p of t and p of w and giving you the information that I'm thinking about the independent distribution. So it's more compact. Any questions about that? Yep? STUDENT: P1, the probability is [INAUDIBLE] based on some measurement, and we deduce that it's not independent [INAUDIBLE]. If we re-measured p2, we could simply just get these probabilities-- 0.3, 0.2, 0.3, 0.2. How do we suddenly say they're independent, just because they [INAUDIBLE]? PROFESSOR: So there's a couple of important things here. And let me kind of split them apart. One, I'm giving you probability distributions. I'm not telling you what data you learned them from or what sampling error there is or any of that. We will get to that later. Right now, these are simply two probability distributions. They have different parameters. One of them is independent, and I can verify that-- that's p2-- by checking these equalities. And p1 is not independent. And what that means is t and w interact in p1. So for example, as we can see, if hot and sun don't interact, the probability of hot and sun is 0.3. Hot and sun do interact in p1, and the probability is 0.4. And so there's a correlation there.
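(In code, with the numbers just worked through: p1's remaining entries follow from the stated marginals p of hot = 0.5 and p of sun = 0.6, so treat this as a small illustrative sketch rather than the exact slide.)

    # p1: the joint from the example; the other entries are forced by the marginals.
    p1 = {('hot', 'sun'): 0.4, ('hot', 'rain'): 0.1,
          ('cold', 'sun'): 0.2, ('cold', 'rain'): 0.3}

    pt = {'hot': 0.5, 'cold': 0.5}   # marginal over temperature
    pw = {'sun': 0.6, 'rain': 0.4}   # marginal over weather

    # p2: built as the product of the marginals, so independent by construction.
    p2 = {(t, w): pt[t] * pw[w] for t in pt for w in pw}

    print(p1[('hot', 'sun')])   # 0.4 -- does not equal pt['hot'] * pw['sun']
    print(p2[('hot', 'sun')])   # 0.3 -- equals the product, as in the example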
And you might informally say that correlation is, it's more likely to be hot if it's sunny. The sampling question's a good one, and we'll get to it later. All right. Here's an extreme example of independence. Let's say you are a robot and you're flipping a whole bunch of coins-- you're flipping n coins. And you want to describe this kind of mathematically imaginable giant joint distribution over these n variables. It's of size 2 to the n, because for every sequence of heads and tails, there at the end is the probability. But of course, we know that when you flip a coin and you flip another coin, those things are independent. And so there's no need to actually ever write down this giant exponentially large distribution. I can instead just write down each of the n marginal distributions and arm you with the knowledge that in my model, these random variables are independent. And then if you ever ask me a question from the full joint distribution, we can just multiply together the pieces and get the answer. In fact, it could be even more compact than this, because in addition to being independent, these coins could also be identically distributed, which means I only have to give you one probability distribution, and then you can imagine replicating it. But this already has an exponential speedup. This thing on the bottom is of size 2 to the n, and this thing on the top is just of size 2n. So it's a lot better. (There's a small code sketch of this below.) All right. So independence is great. And let's just take a multiplicatively large table and break it into smaller pieces. This should maybe remind you of something in CSPs: how much easier CSPs were if I could take my big CSP, break it into smaller CSPs, and solve them independently. But-- we'll bring back the mouth in a second-- but just like in CSPs, where we talked about some problems that completely don't interact, it's also the case that independence probabilistically is very rare. And it's very rare for a couple of reasons. One, variables do tend to have interactions; but in particular, when you choose to build a model of something, you typically throw in all the variables that you care about. And those are the variables that interact. Yes, it's true that weather and traffic don't interact with cavity and tooth ache. But they generally don't show up in the same model either. So in general, all of your variables do have some degree of interaction. And independence is too strong a notion. But it's the building block of something that we can use efficiently, which is called conditional independence. So in conditional independence, we don't go so far as to say that two random variables have no interaction. What we say instead is that their interaction is sort of mediated by another variable. And the way we write that formally is, rather than saying that the joint distribution is a product of marginals, we have some other simplification of the chain rule that is less radical. So for example, if we take the variables-- so imagine you are building a dentist robot, and it's going with its little probe and it's poking at your teeth. And the random variables are: do I have a tooth ache or not? Do I have a cavity or not? And will that little probe catch on a hole in your tooth? So these are three random variables. And for three variables, I could imagine just having a full distribution. It wouldn't be that big.
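Here is that savings in miniature: n independent, identically distributed coins need one two-entry table (or n of them) rather than a 2 to the n entry joint. A hedged sketch; the dictionary encoding is illustrative.

    # n independent coins: n small marginals instead of a 2**n joint table.
    def coin_marginals(n, p_heads=0.5):
        # Identically distributed here, so this is really one table, copied.
        return [{'H': p_heads, 'T': 1 - p_heads} for _ in range(n)]

    def joint_prob(assignment, marginals):
        # Under independence, P(x1, ..., xn) is just the product of the P(xi).
        prob = 1.0
        for outcome, table in zip(assignment, marginals):
            prob *= table[outcome]
        return prob

    print(joint_prob('HHTHTTHHTH', coin_marginals(10)))   # 0.5**10 = 0.0009765625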
But for those three dental variables, I might be able to do better than the full table, because somehow it seems like if I have a cavity, my tooth might hurt or it might not, but given that cavity, whether or not the probe physically catches on my tooth maybe doesn't have much to do with whether or not I'm feeling pain. And that's the kind of simplifying assumption you can make via conditional independence. So I could make an argument in this domain that goes something like this-- if I have a cavity, the probability that the probe catches in it doesn't depend on whether or not I am experiencing pain. What is that formally? Formally, that's saying the probability of plus catch, given the tooth ache and the cavity, is just the same as the probability of the catch given the cavity. Somehow the tooth ache doesn't add any information. So the conditional probability of catch given tooth ache and cavity is the same as the conditional probability of catch given just cavity. Now in an actual joint distribution, this could be true or not. You could go check. You could go churn some numbers and check. As a modeling assumption, I could say, before I even get a probability distribution, I want to assume this to simplify my life and make my distribution more compact. Now that same independence would have to hold even if you don't have a cavity. So in this case, well, if I don't have a cavity, there is some probability that the probe will catch anyway. Now the question is, what's the probability that the probe will catch if I don't have a cavity but for some reason I'm experiencing tooth pain? Well, you don't have a cavity, so who knows why you're experiencing tooth pain? And you might make the assumption that those conditional probabilities are the same. What does this mean? This basically means that once you know about cavity, you know all you need to about catch, and the tooth ache doesn't matter anymore. Is that independence? Well, it's not independence, because it's not the case that the catch and the tooth ache are independent. If that were true, if I learned that I had a tooth ache, it wouldn't change my belief of whether or not the dentist would find a catch. But it should change my belief, right? Because the tooth ache maybe means I have a cavity, which in turn maybe means that there's going to be a catch; but there's not a direct connection. That's formalized by these equations. What we say here is that the random variable catch is conditionally independent of tooth ache, given cavity. That's much weaker than being completely, or marginally, or absolutely independent of tooth ache. It's only once you know cavity that there's no sort of correlation left between them, other than what was mediated by that variable. So if these properties hold, we say catch is conditionally independent of tooth ache, and there are a bunch of ways to write that statement, which are all mathematically equivalent. Conditional independence is an assumption. It may or may not hold in a given distribution. And when you assume it, you simplify your distribution. But that same assumption can be written in a lot of ways. One is this way, what's above: catch given cavity is the same as catch given cavity and tooth ache. But you can write it the other way. You can say the probability of tooth ache given cavity is the same as tooth ache given cavity and also knowing about the catch. You can also write it like this. I actually really like writing it this way.
This way says that once I know the value of the cavity random variable, whether it's plus or minus, the remaining distribution over tooth ache and catch-- that distribution is independent. So tooth ache and catch, you can think of that as its own little distribution over two variables, conditioned on cavity, and if that little distribution itself is independent for each conditioning environment, then you have conditional independence. Each of these can be derived from the others. You can go apply your definition of conditional probability and cross stuff out and show that. Yep? STUDENT: [INAUDIBLE]. PROFESSOR: That's a great question. So does this-- does the knowledge that-- let's write this somewhere. So I could write: tooth ache is independent of catch given cavity. Does that imply that tooth ache and catch are not independent? Not necessarily. I could go back to my coin flips, and I could say coin flip 2 and coin flip 4 are independent. Yep. Are they also conditionally independent given coin flip 3? Yes, but you know something stronger. So in general, we want to use the strongest assumption we can make. But just because you have one assumption doesn't mean that other assumptions are not simultaneously true. So we already talked about this. Unconditional, or what's called absolute, independence is very rare. On the other hand, conditional independence is one of our most basic ways of saying something about a probabilistic model that simplifies it in a way that allows us to tractably compute important quantities on large distributions. So independence: rare and theoretical. Conditional independence: very important. And we'll see conditional independence and its consequences a lot today. So mathematically, if I have a distribution over three variables, x, y, and z, and I say x is conditionally independent of y given z, what that means, what that has to bottom out in, is this statement-- for all values of x, y, and z. So you would have to check the whole distribution. You go down and check, check, check, check, check, check. Or prove it to yourself mathematically, if you wanted to verify this property holds on a distribution. So for all x, y, and z, the probability distribution over x and y-- or the probability of x and y given z-- is equal to x given z times y given z. So if you ignore z there, that's independence. But since that independence only holds when z is known, it's conditional independence. This is another way to write it. This says for all x, y, and z, the probability-- your belief over x given your knowledge of z and y-- depends only on what you know of z. This says that if you already know z and you have some belief over x and you learn something about y, your beliefs should not change. All right. Let's do some practice. We're going to come back to these toy domains I think three times today, depending on how far we get. So let's think about a domain with the following random variables. Traffic, t, that's how much traffic there is. Umbrella, whether or not I have an umbrella, or whether or not I observe an umbrella on my robot. And then raining, whether or not it's raining. So are any of these things independent? You look and you say, oh well, the umbrella and the traffic don't have any kind of correlation. Well, that doesn't sound right, because if I see the umbrella, I might think, huh, maybe there's going to be traffic too, because maybe it's raining. So nothing's actually going to be-- it's probably too strong an assumption, and maybe not a useful assumption, to assume independence.
But what can we assume is conditionally independent? Two variables that may interact, but whose interaction is completely mediated by a third. Any votes? Yep? STUDENT: [INAUDIBLE]. PROFESSOR: I guess I'm looking for a statement of the form: something is conditionally independent of something else given some third thing. STUDENT: I was thinking that the probability of seeing an umbrella, given that it's raining and there's traffic, is the same as the probability of seeing an umbrella [INAUDIBLE]. PROFESSOR: So that's: the presence of umbrellas and traffic on the freeway are conditionally independent once you know whether or not it's raining. So let's think about what that means. That means if I know it's raining, well, traffic is likely and so are umbrellas, but not in a correlated way. It's not like when it's raining and the umbrella comes out, suddenly everybody's turning their head to see the umbrella and that causes more traffic. There's not an additional correlation. And you say, well, wait-- but you said it's a little weird, but maybe possible. Yeah, there are always conceivable additional interactions that you can declare too minor to be modeled. And of course, we could quantify what too minor to be modeled means in terms of sort of information gain and things like that, but we're not going to do that today. So for today, it seems reasonable to say that traffic and umbrella are not marginally independent, but they seem to be conditionally independent given rain, and that is an assumption we can make to simplify our model. And you say, well, how exactly does that simplify my model? I thought you were just handing me a giant probability distribution. Today, we'll see how it simplifies your model by allowing you to specify it in a more compact way. Let's do another one. Here's this model. f is whether or not there's fire. s is whether or not there's smoke in the room. And a is whether or not the alarm goes off. Now in this one, maybe it's tempting to start thinking about things causally, which can be misleading, but often helpful anyway. So let's think of-- are there two variables that maybe have some interaction, but only mediated by another one? Anyone want to make an assertion that something should be conditionally independent of something else given a third variable? So let's first think about what caricature I have of this robot fire alarm situation. It's that the standard way these things work is there is fire or not. And if there is fire, that fire has smoke. And that smoke gets into the alarm. And the alarm has a smoke sensor and goes off. And maybe that's all a noisy process. The alarm doesn't always go off. Sometimes it goes off anyway. So now what can I say about these variables? Like, let's-- yep? STUDENT: The fire and the alarm are conditionally independent, given smoke. PROFESSOR: Yeah, so that seems like a reasonable statement. I could say, well, the fire and the alarm are certainly not marginally independent. That's the whole point of a fire alarm: when the alarm goes off, it makes you worry that there's a fire. That's the point of the alarm. But if the mechanism is via the smoke, then it may be that if I put smoke into the room and I know I've put smoke into the room, now suddenly there may be no additional correlation. And you say, but what if the fire alarm also had a temperature sensor? Well then, this is a poor model now. So here's an assumption that may make sense to make or not, based on what is actually going on in the world and how closely I want to model it.
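Claims like these can be verified mechanically on a full three-variable table. A hedged sketch of the check, assuming the joint P(X, Y, Z) is given as a dict keyed by (x, y, z) tuples; the encoding is illustrative.

    # Hedged sketch: check whether X is conditionally independent of Y
    # given Z on an explicit joint table keyed by (x, y, z).
    def cond_independent(joint, tol=1e-9):
        # Project out P(z), P(x, z), and P(y, z) by summing.
        pz, pxz, pyz = {}, {}, {}
        for (x, y, z), p in joint.items():
            pz[z] = pz.get(z, 0.0) + p
            pxz[(x, z)] = pxz.get((x, z), 0.0) + p
            pyz[(y, z)] = pyz.get((y, z), 0.0) + p
        # For every outcome: P(x, y | z) must equal P(x | z) * P(y | z).
        return all(abs(p / pz[z] - (pxz[(x, z)] / pz[z]) * (pyz[(y, z)] / pz[z])) <= tol
                   for (x, y, z), p in joint.items() if pz[z] > 0)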
Let's think about how conditional independence, which so far has been a relatively intuitive concept, anchors into expressions we can write about joint probability distributions and ways we can write them using the chain rule. So remember, the chain rule says that if I have n variables and I want to talk about the probability of some assignment to those n variables, one way I can write that as a product of other probabilities is: I can say, well, it's the probability of x1 taking on whatever value it takes on, times not the probability of x2 by itself, but the conditional probability of x2 given x1. So given what I already have at x1, I extend that to x2, and similarly times the probability of x3 given everything that precedes it in that ordering of variables. This is always true. And remember, this was true because you could think: here's the probability of x1, but the probability of x2 given x1 was just p of x2,x1 divided by p of x1. And then the p of x1 cancels, and this whole thing sort of telescopes. So that's no assumption. That's just the chain rule. That's always true. And by the time you get to the end, to xn, it's xn conditioned on all variables preceding it. So not super helpful. So the trivial decomposition here of traffic, rain, and umbrella is that that joint probability can be rewritten as the product of the probability of rain, times the probability of traffic given rain, times the probability of umbrella given traffic and rain. This is always true over any distribution over three variables. But we decided something. We decided that for the purposes of our model, it seemed reasonable to assume that the probability of umbrella given rain and traffic was the same as the probability of umbrella given just what? Given just rain. Because umbrella and traffic were conditionally independent given rain. And if we make that assumption, now that joint probability is equal to the product of these simpler conditional probabilities. And now we're starting to get somewhere, because now I have a way of talking about a large, potentially exponentially sized joint distribution as a product of little pieces that don't just keep growing and growing and growing. In the plain chain rule, by the time you get to the end, those conditional families are just as big as the joint distribution. But not anymore, because they're not getting more complicated as I go. So now I've finally accomplished something with the simplification, which is that I can specify entries of a joint distribution using products of simple probabilities. And how simple they are depends on what conditional independence assumptions I can make. So what we're going to do now is get into Bayes' Nets, which are also called graphical models, which are a tool to help us express conditional independence assumptions and think about them in a graphical way, which can both help us design algorithms and also help us sort of think about these probability distributions in new ways, which has been helpful. All right. Let's think about Ghostbusters. So remember, this is the tiniest Ghostbusters board. It's got two squares. That means the ghost is in either the top square or the bottom square. So that's it. So one random variable is, where is the ghost? And I could phrase that as a ghost location, or as true/false, the ghost is in the top. That's going to be the random variable g. Let's say it's uniform marginally. Here are two other random variables. Remember, we took sensor readings.
And let's imagine the sensors just say red or not red, where red means close. So I could have random variables b and t, which are the reading in the top square-- could be red or not; it's noisy-- and the reading in the bottom square-- could be red or not; it's noisy. And my sort of verbal description of this model is: each sensor reading depends only on where the ghost is. What that means is the two sensors are certainly not marginally independent, because if I read red up top, I'm probably not going to read red on the bottom. If I read red on the bottom, I'm probably not going to read red up top. So they're certainly not marginally independent. But it's reasonable to say that they're conditionally independent given the actual position of the ghost. That is, if I know where the ghost is, now these are just noisy sensor readings that aren't correlated. And you say, well, but they could be correlated, because what if your sensor is sometimes busted and then it reads red everywhere, or what if the first time you read red, it blows out the red light and now you can't read red any more somewhere else? So you could imagine correlations, but the basic idea that there's an underlying variable with independent measurements seems appropriate for this domain. So what does that mean? That means we're given the following things. So I can tell you, hey, the probability that the ghost is in the top versus the bottom is 50-50. These are givens. I could also tell you the probability of a red reading if the ghost is in the top is 80%, but the probability of a red reading given the ghost is in the bottom is only 40%. So this does not fully specify a joint distribution over all these variables in general. But it's enough to specify the full joint distribution if I have conditional independence. Why is that? That's because if you come along and say, hey, I'm curious about this entry of the joint distribution-- I have some value of t, b, and g-- well, in general, I know that I could look that up by telling you, how about I find p of g. I have that here. And how about t given g. I have that here. So far, so good. The chain rule demands b given g and t. That's what the chain rule requires. I don't have that. So I can't compute this entry you asked for in the joint distribution until-- conditional independence to the rescue-- if I assume that b and t are conditionally independent given g, meaning the sensor readings are conditionally independent given the ghost's position, then I know that's the same as that without the t. And suddenly now I have the right variable-- sorry, the right parameter-- to plug in, which means with those givens on the bottom and the assumption of conditional independence of the sensors, now I can fill up this whole table, and now I can do all of the computations that I did last class, like compute how likely is it that the ghost is at the top given that they both read red, or something like that. So Bayes' Nets. What's the big picture? Yep? STUDENT: [INAUDIBLE].
Like, we'll give you some word problem or appeal to some sort of fairly clear causal structure where you can kind of tell what is directly dependent on what and translating that into conditional independence. So what I would have to tell you is that I have to tell you I'm imagining a case where I have sensor readings, and those sensor readings are conditionally independent given the underlying variable. Because like I said, if I could come up with some story that makes that conditional independence no longer reasonable to assume. So another way to think about it is, the givens are various collections of conditional probabilities along with conditional independence assumptions. Sometimes we state them in words. Sometimes we'll just lay them out in symbols. And the point of this slide is that when you have the right givens along with the right conditional independence assumptions, you've unlocked the joint distribution. You can build the whole thing, when you wouldn't have been able to simply according to the chain rule. And that's really important because it means that under the right circumstances and with the right conditional independence assumptions, tiny little pieces of probability distributions can imply an entire joint distribution, which will unlock the ability to take lots of little pieces, assemble them and reassemble them and then produce some other little piece without ever having to construct the whole large object. Yep? STUDENT: [INAUDIBLE]. PROFESSOR: So the question is, why do we have so much emphasis about finding the whole joint distribution? We won't forever. But right now the only algorithm we have for answering an arbitrary query in a domain is by starting with a joint distribution and doing computation on it to answer our query. And so we're like, tiptoeing towards the ability to start with small pieces, build the whole joint distribution, and then collapse it again into a different small piece. Where we'll eventually get is algorithms that help you avoid inflating the whole thing if you don't have to. And so you sort of manage assembly and projection in an interleaved way. And that's going to be, for example, the variable elimination algorithm. Great questions. More great questions. STUDENT: [INAUDIBLE]. PROFESSOR: Yes. So conditional independence is not an ordered notion. All right. So what's the big picture of Bayes' Nets. Bayes' Nets are a device for describing a complex distribution over a large number of variables where that large distribution is built up of small pieces, meaning local interactions, and the assumptions necessary to conclude that the product of those local interactions describes the whole domain. So it's going to be our way of describing big complicated domains using tiny pieces. And there's really sort of a deep analogy here to CSPs where we described the interaction amongst many, many variables in terms of lots of little local constraints. That didn't mean that there weren't correlations between what was assignable at various places. that were more distant, but what we described was the local constraints and then there were global consequences. Bayes' Nets are the same. We're going to describe lots of little local interactions and there's going to be global consequences that will allow us to do much more complicated kinds of inferences. So up till now, as was just pointed out, we've been talking about a probabilistic model being a full joint distribution. Mathematically, that's great. 
That full joint distribution is exponential in size, but it describes all the quantities we care about. But there are some major problems with using joint distribution tables as models. One is that it's too big to write down. And secondarily, even if you could somehow write it down, it's really slow to do computations over exponentially large things. That's sort of a computational space and time cost. There's another cost, which we don't always think quite as much about in computer science, but it's very important in AI, and that is the statistical cost. The bigger a thing I'm trying to learn, the more parameters I'm trying to learn-- you can think of the parameters, the entries of those probability tables, as being facts about the world that I learn through observation. The more parameters I'm trying to learn, the more data it will take to learn them. And so if I'm trying to learn this really, really large thing, where every row is sort of its own thing that must be learned, that's going to be very expensive. Whereas if I can say that whole large object is really described by a couple of simple interactions that chain together, now suddenly I just have to learn those pieces, and things become much more-- it's called sample efficient. And actually, you'll see this in your projects already with reinforcement learning, where with just naive Q-learning, you have to learn about every possible state of the world, but with function approximation, you only have to learn about the aspects of those states that generalize in a much better way. It's the same kind of thing. So what are Bayes' Nets? Bayes' Nets are a technique for describing complex joint distributions, which are models of a domain, using simple local distributions. In this case, those will be conditional probabilities. These are also called graphical models. And what we do is describe local interactions between variables, and we do it in a very careful way so that those local interactions imply a distribution over the entire set of variables. We'll be a little vague about those interactions for a little bit, and then we'll get concrete. So let's do some Bayes' Nets. I'm going to show you some examples, and then we're going to think up some examples. And then I'll actually pull out a Bayes' Net and we'll look kind of under the hood and see what's really going on. So here's an example of a Bayes' Net that describes an insurance domain. And so there are a lot of variables in the insurance domain. And of course, one of those things at the bottom is going to be various costs that you might want to predict if you're an insurance company. This one's for vehicle insurance. And then there are going to be various other things, like things you know, like the age of the driver and the mileage on the car. And there are going to be things you don't know, like the skill of the driver, or whether or not there's an anti-theft device, or the socioeconomic class, or something like that. This Bayes' Net is a description of that domain, which includes a decent number of variables. There's something like 30 variables there. And if each of those variables had 10 values, like maybe 10 age buckets and 10 mileage buckets or something like that, a full joint distribution over all of those 30 variables with 10 values each-- how many entries in that joint distribution? Each variable has a choice of 10 things. So it's going to be 10 to the 30th. 10 to the 30th numbers is bad news.
But it's even worse if you had to somehow examine 10 to the 30th case studies in order to figure out what those numbers even are. So the joint distribution is conceptually really, really big. But what this Bayes' Net also shows is what the direct interactions are. So for example, there may be a correlation-- let's find something interesting here. Let's think. If you're an insurance company-- let me change colors. If you're an insurance company, you might care a lot about this variable: accident. That seems like something you want to predict. That's probably correlated with the age of the driver. And in some way, it's probably correlated with the year of the vehicle and whether they have an extra car and what their home zip code is and all of those things. But what this network does is it says, actually, you know what? The accident is really only directly determined by the quality of the driving and some factors about the car for accident prevention, like anti-lock brakes and so on. And you say, of course-- well, how does that connect to everything else? Well, that driving quality is determined by the skill of the driver, but also their risk aversion-- like, if you're a really good driver but you take lots of risks, that's not the same quality of driving as if you're a really good driver and you don't take risks. And so in specifying these little pieces, that can finally connect up to things like, for example, the age or the mileage on the car and so on. So this isn't the whole story for this domain. There's also the question of what exactly is the dependence between age and driving skill, which isn't on this slide, and I have no idea. Hopefully some insurance company knows. But what this does is it lets us do things like observe quantities like age and the vehicle year and the zip code and whether it has airbags, and conclude things like, how likely is there to be an accident? That's one use for a network like this. And because all we have are little local connections between variables, we don't need anything so expensive as 10 to the 30th to specify it. This might be a network that you use for simulation. I know this. I know this. Is there going to be an accident? And you're sort of computing on the basis of evidence up here, or initial state up there. You're computing what might happen and with what probabilities. That's the sort of computation going down in the network, where you might query something low in the network. Here's an example where it might go the other way. You want to have a robot mechanic. And so there's going to be evidence down at the bottom of the network here, and some underlying causes of why the car might not start, like the battery could be dead or the alternator could be broken. And so you might do things like diagnose: is the oil light on, what does the gas gauge read, what does the battery meter read, will the car start, and so on. And so what you can do in this network here is observe some of these things at the bottom, and you can do some inference over what causes there might be. Now rather than exactly knowing the probability of any mapping from causes to effects, we only know little things like, well, when there's no oil, the oil light turns on, except when there's no power coming from the battery, or something like that. And so a network like this can help us do diagnosis. And also importantly, it can encapsulate a lot of knowledge about cars. This network probably knows more about cars than I do. That's maybe more a comment on me.
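For a rough sense of the savings just described, here is a back-of-the-envelope count, assuming 30 variables with 10 values each and, hypothetically, at most 3 parents per node (the real network's parent counts aren't stated here).

    # Hypothetical size comparison for the insurance-style network.
    n_vars, domain, max_parents = 30, 10, 3

    full_joint_entries = domain ** n_vars            # 10**30 numbers
    # Each node: one distribution over 10 values per setting of its parents.
    per_node = (domain ** max_parents) * domain      # 10,000 numbers
    bayes_net_entries = n_vars * per_node            # 300,000 numbers

    print(full_joint_entries, bayes_net_entries)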
So, graphical model notation. What are graphical models? This is going to look a lot like CSPs. What are graphical models? Well, they have nodes in them. They're going to be graphs, so they're going to have nodes in them. Those nodes represent variables, and those variables have domains. And they can be unassigned, which is called unobserved, or they can be assigned, which is called observed-- just like in CSPs. There are arcs. And the arcs represent interactions between variables-- again, just like in CSPs. In CSPs, we would say something like, hey, these two variables, this one has to be greater than that one. I'll put a constraint between them. It may have other effects when it chains together with other constraints, but that's my local interaction. There are going to be local interactions here. And we'll see in a bit what those local interactions are. The arcs indicate something like direct influence between variables. Formally, they're encoding conditional independence. But it can be very convenient, to build up your intuition, to think about the arcs as encoding causality. We'll see that that's not actually true. They don't actually need to, and they're not guaranteed to, encode causality. But often that's what ends up happening. So here's an example of a Bayes' Net for a model we already know. For cavity, tooth ache, and catch, we see there's a direct influence between cavity and tooth ache and another direct influence between cavity and catch, but no direct influence between tooth ache and catch. The influence between tooth ache and catch is mediated by the path that goes through cavity. And paths in this graph are going to be very closely related to conditional independence. So nodes and arcs. Nodes are variables. Arcs are interactions. For now, think of the arrows as causes. Cavities cause tooth aches and they also cause catches, but tooth aches don't cause catches, and so I don't have an arrow there. I think what we're going to do is take a two-minute break now, and then we will come back and see examples of first simple and then increasingly complicated Bayes' Nets. And then we'll peek underneath the covers and see what those Bayes' Nets are defined by in addition to the graph. Two minutes. Let's see some examples of Bayes' Nets. First one: back to our independent coin flips. With no interactions between the variables, we're going to have absolute independence. And the Bayes' Net that corresponds to that will be a bunch of variables with no arrows between them. And as we'll see later when we dig a little deeper into this, disconnected things in the network, just like they did in CSPs, correspond to independent subproblems. And here that means absolute independence. Here's a simplified traffic domain. The variables are r, is it raining, and t, is there traffic? So what might we do? Well, here's one graphical model. Here's one Bayes' Net over those variables. This is the independent one. This one says, in my very simple model, these two variables don't interact. So they're going to each, it turns out, have a marginal distribution. And that's all you know. Here's another model. In this model, I have an interaction. This model says there's r and there's t. And the behavior of t is now a function of the behavior of r. So we are now capturing, in this model-- and I'll be precise in a bit about how that's captured-- we're now capturing a direct interaction between r and t. They're both valid graphical models, or Bayes' Nets, over this domain. But model 2 is probably better.
The reason why model 2 is better is because an agent that's armed with model 2 can do things like see traffic and conclude something about rain, or see rain and conclude something about traffic. Once you start taking too many arrows out, then as you gather evidence, you don't actually have the ability to update your belief over variables that you care about through connections to the evidence. And that's sort of the whole point. You want to see evidence and then infer something about other variables. All right. Let's do a bigger traffic case. This is going to have more random variables. And we will build a graphical model as we go. As I draw this model, I'm not being precise yet about the mathematical semantics of the model. We'll get that in a couple of slides. For now, just think about the arrows as being causal or direct influence. All right. So let's add some variables to our traffic model. First variable: traffic. I'll stick that there. There's either traffic or there's not. Second variable: rain. Where should that go? Should it be separate? Should it point to traffic? Should traffic point to rain? Think causal for now. Does traffic cause rain, or does rain cause traffic? To the best of my knowledge, the rain causes the traffic. Low pressure. The barometer has a low pressure reading. What do you think? You can put l up here pointing to rain. Well, they're certainly correlated. Now you can be like, wait a minute. A reading on my machine doesn't cause it to rain. So there's actually the actual low pressure, and then there's the reading of the low pressure, with l pointing to that thing. So we can get more and more fancy. But imagine this is the actual low pressure in the sky. All right. My roof is dripping. How does that fit into all of this? So it seems like rain causes that. All right. Does anything else cause that? Does the low pressure cause my roof to drip? Kind of. I mean, unless really low pressure, like, rips the membrane off my roof or something, that's mediated by the rain. Similarly, does traffic cause my roof to drip? Probably not. So this is a reasonable model. There's a ballgame. How does that interact with any of this? All right. So ballgame causes traffic. Any other interactions? Certainly doesn't cause my roof to drip. Anything else? This is reasonable. Yeah? STUDENT: [INAUDIBLE]. PROFESSOR: Yeah, that seems reasonable too. So here's an arrow that I could include or not. That would be a modeling assumption. It seems reasonable that rain affects whether or not the ballgame happens. If I omitted that, I would have a slightly simpler model. And there's always a trade-off between how powerful your model is and how hard it is to work with. Cavity. How about cavity? That was probably just out there somewhere, right? That's a separate process. That's the dental process over there. All right. Are there other graphical models I could draw? Yes. As I move arrows around and delete them, the set of conditional independence assumptions changes. And the set of probabilities I need to specify the behavior of the model will change too. Before I had this-- let me change its color. Before I had the green arrow here, I didn't have the burden of specifying how ballgame relates to rain, and now I do. On the other hand, now my model is a little bit higher resolution. Alarm network. Here's the deal. The variables are: there may or may not be a burglary. Let's stick that in here somewhere. My burglar alarm goes off. Where does that go with respect to b?
Well, they seem correlated. And if we're going to go in the causal direction, maybe the burglary causes the alarm. What about my neighbor Mary calling? Let's put Mary over here. Is that connected to all of this? Well, it depends, right? We're now making assumptions about the model, but what are some reasonable ones? One is: Mary is completely-- Mary's in France. She just calls sometimes. This is appropriate now. She may or may not call. It's disconnected. It might be that she calls if she hears the alarm. It might be that she's, like, staring at my house through her window and she'll see the actual burglary, and she might call even if there's no alarm, if there's a burglary. Then I'd need an extra arrow. Here's John calls. So that's another variable. That call may also happen because of the alarm itself. But if John's calling because Mary calls John first before she calls me, then I'd need another arrow from m to j. Earthquake. Where does earthquake go? We would have to say what we're assuming about this. One answer is the earthquake can set off the alarm, and only because of the alarm do my neighbors call me. Another could be that the earthquake directly causes everybody I know to call me, and then there would be direct arrows from e to j and m. So you see, these are modeling choices. In this class, we often give you a model, or clearly state the assumptions and say, draw the model. And then we have to compute with it. All right. So now we're at the key bit. We're at the semantics of a Bayes' Net. So what's this about? We just saw sort of an intuitive example of building at least the graph part of a Bayes' Net and thinking about arrows as capturing direct influence. And now what we're going to talk about is, for a given Bayes' Net, what does it mean? What joint probability distribution does it encode, and how do we know that? So here we go. The full definition. A Bayes' Net is a set of nodes, one per variable x. So every variable in the network, everywhere in the graph, is a variable in the joint distribution. I have a directed acyclic graph over those nodes. So there are arrows between them. And there are no cycles. And if you think about that intuitively for causality, that makes a lot of sense. Like, you don't have the rain cause the traffic, and the traffic cause the ballgame, and the ballgame cause the rain. That would be very confused. But we'll see mathematically that this acyclicity-- meaning there are no directed cycles in the graph-- bottoms out in the chain rule. It's because in the chain rule, you introduce variables one at a time in some order. The fact that your graph has no cycles means there is an order that corresponds to the network. So: a set of nodes. A set of arrows between the nodes that have no cycles in them. And hidden inside each node is a little conditional distribution family. So if you peek inside a node, there are a bunch of conditional distributions that describe the probability of that variable as a function of its parents-- meaning for every setting of the parent values, there's a distribution. So if I look at this node x, I see it has these parents a1 through an. That means if I peek inside x, there's a probability distribution over x, but not just one-- there's one for every setting of the parents a. So if you have a lot of parents, there's a lot buried in this node. So the more parents you have, the amount that you have to stuff in that node to describe what x is going to do as a function of the parents grows exponentially.
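That growth is easy to quantify: the table inside a node stores one distribution over the child's values per joint setting of its parents. A small sketch with illustrative domain sizes.

    # Table size at a node: (product of parent domain sizes) * child domain size.
    def node_table_size(child_domain, parent_domains):
        settings = 1
        for d in parent_domains:
            settings *= d                 # one conditioning environment per setting
        return settings * child_domain

    print(node_table_size(2, []))         # no parents: just 2 numbers
    print(node_table_size(2, [2, 2, 2]))  # three binary parents: 16 numbers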
We often see this abbreviation, CPT, for conditional probability table. That's what lives in the nodes. You can think of this as a description of a noisy causal process. It says, what is x going to do given a1 through an? So you might think, what is a causal process? If it rains, then there's traffic. We could do that with logical rules. But here it's more like, if it rains, then there's a 90% probability of traffic. If it doesn't rain, there's a 30% probability of traffic. And because we need to specify the response of the random variable for each conditioning environment of the parents, we can think of this as a noisy causal process. So a Bayes' Net is a topology, meaning a graph, plus all the little local conditional probabilities that live inside the nodes. We'll see some examples. And then we'll build one of our own. Any questions? Yep? STUDENT: So using this, we can also express that variables make other variables less likely, right? PROFESSOR: It's a good question. So the question was, couldn't you also express that one variable makes another less likely? You can express any conditional relationship. So you could say that rain makes traffic more likely. You could say brushing teeth makes cavities less likely. But in general, things aren't Boolean. So you could also say that the driver being of age 20 to 30 changes the driving quality distribution in the following way. It's an arbitrary relationship between the parents and the distribution over the child. And that's actually important. It's important because all these arrows tell you is sort of that there is a conditional probability table for each of these values. It doesn't tell you which table. So when I draw an arrow between rain and traffic in a network, that tells you that this node can encode a different distribution over t given each value of r. It might encode that rain makes traffic more likely. It may encode that rain makes traffic less likely. In fact, there's even a setting of the values for which it encodes that they're actually independent, and you just can't tell from the graph, because for every value of r, in fact, it's the same distribution over t. Yep? STUDENT: Can you have an XOR relationship between two parents and a child? PROFESSOR: Can you have an XOR relationship between the parents and the children? Absolutely. You can have a noisy exclusive or. So if it's a1 through an and I want to say x is basically-- is exactly one of them on, I would have a whole bunch of settings of a1 through an. And for a bunch of them, x would be on with maybe probability 0.99. And for all the rest, x would probably be off. And whether it was 0.01 or 0.001 or 0.3 or whatever-- those are the details of the noise in that exclusive or process. But yeah. And you can put deterministic things in there. If I want to say rain causes traffic, I peek in here and, well, what is the probability of traffic given that there is rain? Well, plus t could be 1.0. Minus t could be 0. Or I could say, no, it's noisy: 0.9, 0.1. Or it's noisy: 0.5, 0.5. Or whatever. So there's the topology, which tells you what relationships can exist, and then there's the actual numbers you stuff in, which are very important too. They give you things like the qualitative interactions-- positive interaction, negative interaction. Yes? STUDENT: What do you do if you have variables that kind of require a cycle?
[INAUDIBLE] PROFESSOR: The question is, what do you do if you have two variables-- so you've got a and b here. And you would like to describe a correlation between them, but there's no obvious causal interaction between them. Or if there is, maybe it feels cyclic. Well, it's going to turn out that either the arrow in this direction or the arrow in that direction gives you the ability to specify any joint distribution over those two variables. So the direction of the arrow is only going to matter when it starts interacting with other arrows. In this case, that's a very good example of why it is that these arrows are not causality. Because there are plenty of cases where you have two variables that have a non-causal correlation. Graphical models are perfectly adequate to express that. But the arrows stop sort of having this intuitive meaning, and they become a little harder to think about. That was a great question. Anything else? All right. So probabilities in a Bayes' Net. A Bayes' Net implicitly encodes a joint distribution. So I just told you what it was. It's a collection of variables. They have parents. And there are little conditional distributions hiding under each node. They implicitly encode a joint distribution according to the following definition-- if you multiply together all of the little conditional distributions, one for each node, you're going to get the following expression. It's going to be a product over all the nodes of the probability of that node given its parents. By definition, that product of local probabilities in a Bayes' Net is what defines the joint probability over all of those variables. And if you squint at that right-hand side, it looks a lot like the chain rule. Probability of each thing given some other things. But it's not the full chain rule. And all of the variables that are missing represent conditional independence assumptions. So here's an example. Cavity causes toothache, cavity causes catch. So what is the probability of plus cavity, plus catch, but minus toothache? Well, living under the cavity node is probability of plus cavity. Living in the catch node is probability of plus catch given plus cavity. And living under the toothache node is probability of minus toothache given plus cavity. So I have all these pieces. And what I know is that this entry, by definition of this Bayes' Net, is the probability of plus cavity, times probability of plus catch given plus cavity, times probability of minus toothache given plus cavity. Each of those terms lives in the Bayes' Net. When I multiply them together, in general, that's not the chain rule. Who knows what you're going to get? You can multiply those together at your own risk. The Bayes' Net comes with a guarantee that when you multiply them together, you'll get the joint. And that's because it also comes with conditional independence assumptions that we'll find out about in the next lecture. Why are we guaranteed that just defining an entry of the joint, p of x1 through xn, as the product of all of the bits I have in my Bayes' Net nodes given their parents results in a proper joint distribution?
Well, the chain rule says that would be OK if each variable was conditioned not on its parents, but on everything before it in an ordering. Its parents are a subset of the things that are before it in any topological ordering of that graph. And so we have the additional assumption that p of each variable given the ones preceding it is equal to p of that variable just given its parents. So this is where that assumption comes in. This means that not every Bayes' Net can represent every joint distribution. So if I give you a Bayes' Net that looks like a and b, I can represent joint distributions over the variables a and b. But not all of them. Just which ones? Just the ones in which a and b are independent. If I want to represent ones in which they're not, I need to throw some arrows in. All right. More Bayes' Nets. Here's the coin flip Bayes' Net. But in addition to having each variable be independent here, I also know that living under each variable is a probability distribution governing that variable's response for each setting of the parents. No parents, so marginal. That means probability of heads, heads, tails, heads is the probability of heads, times the probability of heads, times the probability of tails, times the probability of heads. So to get an entry of the joint distribution, I multiply all of the appropriate pieces of the conditional probabilities. Each variable will have one term in this product. And what's that going to be? It's going to be 1/2 to the 4th. This Bayes' Net here, I can fiddle around with these 0.5s. I can be like, oh, x1 always comes up heads and x2 comes up heads 0.2 of the time. I can fiddle with these probabilities all day. I can change the joint distribution. But try though I may, I can never represent a joint distribution in which the variables are not independent. I can do one where they are not identically distributed, but not one in which they are not independent. Here's our traffic Bayes' Net. It's got a node for r and a node for t. And we know that living under r-- like if you say, what are you dreaming of, node r? It's dreaming of this distribution here, which is some distribution over r given its parents. It doesn't have any parents. And in this case, it rains a quarter of the time. What's living under node t? It is conditional probabilities of t for every setting of the parent-- in this case, r. And so it might be these conditional probabilities. t is often on when r is on, and it's 50/50 when r is off. So armed with this Bayes' Net here, if I say, well, in the joint distribution that Bayes' Net represents, what is the probability of plus r and minus t? I go through and I say, well, plus r, that belongs to the r node in the network. And then for plus r, minus t is right here. So I will multiply together those things and I will get 1/4 times 1/4. Or put another way, I will multiply together probability of plus r times probability of minus t given plus r, which will be 1/4 times 1/4. Here's my alarm network. Now suddenly we have a network that's not just one or two variables. And I can compute, under this assumption that I have this topology-- the burglary causes the alarm and the alarm causes John and Mary to call, but independently-- if you just tell me the right probabilities living in each node, I can compute the probability of any event. So for example, you might tell me, well, the probability of burglary is 0.001 and the probability of earthquake is 0.002. And here is the response of the alarm for all the different scenarios.
You're like, man, I have to say what the alarm does if there is a burglary, and what the alarm does with an earthquake, and also a burglary and an earthquake, and also no burglary or earthquake-- well, when you start giving nodes lots of parents, suddenly you have to describe what that node does in a bunch of different conditioning environments. So here it is. And you can see these are a bunch of distributions over a. Here's one distribution over a. If there's a burglary and an earthquake, 95% of the time, the alarm goes off. Here is the probability of the alarm going off given just the burglary, just the earthquake, neither burglary nor earthquake. And of course, Mary and John have their own distributions, which say how likely they are to call-- both in the alarm scenario and in the non-alarm scenario. These will either have to be learned from data or just supplied as givens. Right now I'm giving them to you as givens. And if I wanted to know, for example, the probability that there was a burglary and an earthquake, but no alarm, but John and Mary called anyway, I would go through and start multiplying together the appropriate conditional probabilities. And that would define the entry of the joint distribution. Any questions on that? And then we'll do a couple quick extensions and take a look at a demo. All right. So this gets to the question before of causality. What if causality is not what your model encodes? So right now, this is the rain and traffic. This is the causal direction. Rain causes traffic. There's an arrow from r to t. That means living under the node r is a distribution over r-- not conditioned on anything, because r doesn't have any parents. And living under t is a distribution over t. But not just one-- I have to give you a distribution over t for raining and another one for not raining. And then I can use that formula which says, multiply all the entries together-- that's like the Bayes' Net reconstitution formula-- to take these pieces and define this whole joint distribution. In this case, the joint distribution is actually just the same size as the conditionals. There's only savings when these networks get big. So here is the joint distribution over t and r that is implied by the Bayes' Net on the left. Great. So for example, if I want this entry, I multiply the probability of plus r times the probability of plus t given plus r. And I get 3/16. All right? Let's flip it around. Here's another Bayes' Net. In this one, there's a variable t and there's a variable r. There's an arrow from t to r. You're like, wait a minute. Traffic doesn't cause rain. The Bayes' Net does not care whether traffic causes rain. It is a graph. You put an arrow. It's happy. But when you put that arrow, you have to supply conditional probabilities. So what lives under t? Well, it used to be sort of t as a function of r. But now it's not. Now it's just t. There's some probability of traffic. And now under r, I have to specify how likely rain is given traffic and how likely rain is given not traffic. Those numbers exist. Let's imagine these are those numbers. And for these particular numbers, if I use the Bayes' Net reconstitution formula and I compute every entry of-- that doesn't look like an arrow-- every entry of the joint distribution, it will be this one. And if you have a really good memory, you'll remember this is exactly the same joint distribution as before.
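To tie the reconstitution formula to actual numbers, here is a minimal Python sketch using the traffic network's tables as quoted in the lecture: P(+r) = 1/4, P(+t | +r) = 3/4, P(+t | -r) = 1/2. The reversed tables at the end are derived from that same joint as a check; they are not read off the slides.

    # Two-node traffic network, R -> T, with the lecture's numbers.
    p_r = {'+r': 1/4, '-r': 3/4}
    p_t_given_r = {'+r': {'+t': 3/4, '-t': 1/4},
                   '-r': {'+t': 1/2, '-t': 1/2}}

    def joint(r, t):
        # Reconstitution formula: P(r, t) = P(r) * P(t | r),
        # one local factor per node, each given its parents.
        return p_r[r] * p_t_given_r[r][t]

    print(joint('+r', '+t'))   # 3/16 = 0.1875, the entry from the slide
    print(joint('+r', '-t'))   # 1/16 = 0.0625

    # Reversing the arrow (T -> R) with the matching derived tables
    # reproduces exactly the same joint distribution.
    p_t = {'+t': 9/16, '-t': 7/16}
    p_r_given_t = {'+t': {'+r': 1/3, '-r': 2/3},
                   '-t': {'+r': 1/7, '-r': 6/7}}
    assert abs(p_t['+t'] * p_r_given_t['+t']['+r'] - joint('+r', '+t')) < 1e-12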
So this network here, which does not match the causal process, encodes the exact same joint distribution over those variables as the previous one. Now you might like the previous one better. And there are a lot of advantages to drawing these things causally. But mathematically, it is just an expansion of the chain rule. And p of t times p of r given t, and p of r times p of t given r are both, in general, for any distribution, just going to be the joint. Until you start getting three or more variables, or there are no arrows, you don't actually have any claim about the underlying distribution. We've talked about causality a bunch of times now. When your Bayes' Net reflects the true causal patterns, it's usually simpler. That's just a fact about the world and how we build these models. They're usually easier to think about. And they're a lot easier to elicit from experts. Like, it's way easier to go to a doctor and say, what fraction of people who have strep throat have a fever, than it is to say, what fraction of people who have a fever have strep throat? They're like, I don't know. There are a lot of reasons you could have a fever. And it's a lot harder to get things in the diagnostic direction, usually. If you're getting them from data, this may not be true anymore. Bayes' Nets don't have to be causal. Sometimes there is no causal net. So like the example you gave, or when variables are missing, often the remaining variables are kind of complicated from a causal standpoint. So for example, traffic and drips in our traffic model, they're sort of correlated by the underlying variable rain, but neither really causes the other. So either direction of the arrow is fine mathematically. And then you end up with arrows that reflect correlation but not causation. So what do these arrows mean? They might happen to encode causal structure. It's great when they do. It's intuitive. But what they really encode is conditional independence. They encode that when you write out the chain rule, in general, you get p of x given all the variables that precede it, but now we have p of x given only its parents. So it's a statement that once you know the parents, you don't need to know the other things that precede it. And we'll unpack that a little bit in the next lecture. So let's peek inside a Bayes' Net. Let's build one. All right. Let's build a Bayes' Net. Let's create some nodes. Let's do traffic. All right. Let's build a node. Rain. I'll let it be true or false. There's rain. There it is. All right. Let's make more of a Bayes' Net. We decided rain causes traffic. There's rain, there's traffic. Let's create an arc from rain to traffic. Let's make some more. What else was in this thing? Rain also causes drip. My roof drips. Rain causes drip. What else? We had low pressure, right? Low pressure. And that causes rain. I've drawn a graph. In order to specify a Bayes' Net, and thereby a distribution over these variables, I need to do more than that. I need to actually give probabilities. For example, how likely is it that there's low pressure? I don't know. Maybe 0.1. Now I need to say how likely is it that there is rain? Well, it depends. It's got a parent. It's got low pressure. And when there's low pressure, it rains 90% of the time, and when there isn't low pressure, it rains 10% of the time. I could put in other numbers. I'm just making stuff up. What lives under drip? This is the noisy specification of how rain leads to drip-- or how rain causes drip.
Maybe 80% of the time when it rains, it drips. And it almost never drips otherwise. I didn't put a 0 there because that's just a no-no when building Bayes' Nets. All right. How about traffic? I need to describe the dependence of traffic on its parent. Well, when there's rain, there's 80% traffic. And when there's no rain, there's 30% traffic. I have a Bayes' Net. What can I do with this Bayes' Net? Well, I'll ask it a couple of quick questions, like, hey, what's the probability that there's traffic? 39%. All right. Let's get an observation. What's the probability that there's traffic given that my roof is dripping? Should it be higher or lower than 39%? Higher? Who says higher? Who says lower? Wow, the suspense. Let's find out. It's higher. And that's because drip means it's probably raining, which means more likely to get traffic. Let me make another observation. Let's actually observe rain. Let's do a query. Is traffic going to go up or down now? It went to 80. It went up a little bit. All right. Here's the important one. I know it's dripping. I know it's raining. There's an 80% chance of traffic. I discover that there is a low pressure system. What will happen to my 80% chance of traffic? It stays the same. And that's because low pressure only informs whether or not it rains. And I already know it's raining. I can't know it any more than I already do. And so they're disconnected in some way. And that must have something to do with conditional independence. We'll find out what next week. So let's just stop there. And we'll see you next week for inference in Bayes' Nets. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181023_Decision_Networks_and_Value_of_Perfect_Information_VPI.txt | [INTERPOSING VOICES] PROFESSOR: All right. OK, so hi, everyone. I'm really excited about this lecture today, because today the two pieces of what we've been doing in this course come together. So in the first part of the course, we basically talked about, if you remember, all the way back, search and planning and all of that. We basically talked about actions and utilities. What is the sequence of things I should do, the actions I should take, to maximize my expected utility? It was all about the maximization and the utility and all that. For this recent third or so of the course, we've been talking about reasoning over uncertain variables. And today, those things are going to come together. Because often, when we have to decide what action to take-- remember, we're maximizing our expected utility, which means there are expectations-- we need to reason about what's going to happen and with what probability, so that we can make the rational or optimal decision. So today, this whole machinery of actions and utilities connects to the whole machinery of Bayes nets and graphical models and reasoning under uncertainty. So the first topic we're going to talk about today is what's called decision networks, which are a way of graphically connecting Bayes nets, which represent variables as nodes, with actions and utilities. And what this will open up, one step further, is to start to think about information-- learning information, observing evidence in a network-- as something that itself has a utility. And we can finally formalize this very powerful and important notion of the value of information. Because when you get information-- you know the saying knowledge is power? Well, information is utility. And today we're going to figure out how information and utility connect up in a formal way that's going to pull all of these threads together. So I'm excited. Let's talk about decision networks. OK? These are basically going to take these Bayes nets that we build and augment them with new kinds of nodes that represent utility and represent action. So here's an example of a decision network. OK? In this decision network, we have the part at the bottom, which looks sort of like a Bayes net. So let's zoom in there. Let's look at this part at the bottom that looks like a Bayes net. This is a Bayes net. It has variables like, what is the weather? And here that might be sunny or rainy. We're going to have super simple examples here so we can fit them on the slide. The weather could be sunny or rainy. And then there's going to be a forecast, which is another variable. And the forecast could say, we expect sun or we expect rain. And of course, the forecast isn't the weather, but it's a noisy indicator. When the true weather is going to rain, the forecast has some probability of actually forecasting it correctly. And there's another noisy forecast if it's sun. And that's all specified in the Bayes net. The Bayes net here is going to specify how likely each weather outcome is, and how likely each forecast outcome is given the weather. All right. But this stuff up here is new. And in our running problem, there's going to be a choice. In addition to having weather, sunny or rainy, and a forecast, forecast sun or forecast rain, there's going to be a choice about whether or not to take your umbrella.
And so this is a robot that has to make a choice. The choice is going to be take the umbrella or leave the umbrella at home. And then there's going to be an outcome. Right? And over here, you can see all the different ways the world can unfold. And so there's outcomes where it's sunny and you have the umbrella. And where it's rainy and you have the umbrella. And sunny and rainy when you don't have the umbrella. And we'll get into the exact details of our model of this robot, which is just going to be sort of a toy example of how the actions you take and the probabilities that you predict can interact. OK? So let's go back and figure out what a decision network is all about. Remember, we've been talking about the principle of MEU, maximum expected utility. That means you should choose the action which maximizes your expected utility given your evidence. Right? And the more evidence you have, that's going to change the distribution of what you expect to happen. And so as you get evidence, the best action to take might change. All right? So let's operationalize this with these decision networks. The nodes that look like circles, ovals, those are going to be just like in Bayes nets. And the new nodes are going to have new semantics. So we have three kinds of node types. Actually, only the last two are actually new. These chance nodes, they have the exact same semantics as in a Bayes net. A node represents a variable, and the name on the node says what variable it is. And it can be observed or it can be unobserved. If it's observed, that means we know which element of the domain of that variable is happening. And if it's unobserved, it's a random variable that we can compute distributions over. When you see these round nodes, you can think about them like chance nodes. Remember, in expectimax, we drew the nodes where the probabilities come in as circles? You can use that metaphor here too. One new kind of node is rectangles. So like here, the umbrella. Rectangles represent actions. So they're like random variables in the sense that they have a domain. Right? And in this case, the actions for umbrella are take or leave. They're unlike chance nodes or random variables in your Bayes net in that there's not a probability distribution over them. You control them. Remember, you control your action, but you don't control the expectation over the world's response. That's the whole point of expectimax. You maximize over the actions and then you live with the consequences probabilistically. So these rectangular nodes represent actions. You have to assign them. That's good for you as an agent, because you get to pick the best one. OK. And the last one that pulls everything together are these diamonds, these utility nodes. And what a utility node is-- it has parents. In general, its parents are going to include action nodes. Right? Because your utility is going to depend on the action you take. And the parents are going to include other random variables. Because usually, the utility depends on not only the action you take, but on the outcome of some variable that's in general out of your complete control, and so we'll construct distributions over it. So in this case, the utility you get as a going-outdoors-and-having-fun robot depends on whether or not you take your umbrella, which you have control over, and whether or not it rains, which you don't. You don't have control over it, but of course you can reason over it. All right. So how can we use these networks?
Well, first of all, I'm going to speak in general, because to know exactly what the right thing to do is in a given network, we need to know all the details. Right? We need to know what is the probability of rain. And if it's sunny, what's the probability that the forecast is going to say rain anyway? We also need to know details like what is the utility of taking your umbrella and then getting rained on. So let's jump ahead for a second and talk about the details, just so that we have this in our heads. So remember, the new thing in this network is going to be the utility node. And the utility node, just like any other node in a Bayes net, has parents. And it specifies something for each combination of values for the parents. So the utility node is going to specify something for each combination of take or leave umbrella and sun or rain weather. And the thing it's going to specify is a utility. So if we peek inside this decision network here and we actually look at the actual values-- we've gotten rid of the forecast here-- we might say, well, the weather has probability of being sun of 0.7 and probability of rain of 0.3. OK. Now I know not only that there is a random variable called weather, I understand the exact probabilities that I'm assuming govern it. OK? Umbrella we know is take or leave. And in this diamond node, I have to have a list, not of probabilities anymore, but of utilities. So I have to know what is the utility of leaving the umbrella at home and having a sunny day. And here that is 100 points, 100 utiles. That's the best possible outcome. You go and you play robot ball all day and have a great time. Now we have to look at the rest, just so that we know this recurring example. So if you leave the umbrella at home and it's sunny, that's your best possible day of robot sports. If you leave the umbrella at home and it rains, that's your worst possible day. You get rained on, you rust, whatever it is. It's not good. That's 0 utiles. What happens if you take your umbrella? Well, if you take your umbrella and it rains, you get 70 utiles. Imagine, it's not as good as that sunny day of playing ball, but you get to stand there happily dry. If you take your umbrella and it's sun, what's that? That means you see all these other robots happily playing robot soccer, but you are stuck dragging your umbrella around for no reason at all. That's pretty bad. It's not as bad as rusting, but it's 20. And so, these have an order to them. The best possible case is leave the umbrella on a sunny day. The worst possible case is leave the umbrella on a rainy day. And the others are in between. Now you might say, well, that's sort of a weird set of utilities to give the robot. But remember, that's what we do. We give the robot utilities. And with respect to those utilities, we compute optimal actions. If you have a robot that wants to rust as fast as possible, this would be a whole different table. OK? So it's just like when we come up with probabilities-- we'd like them to reflect something reasonable. But for this example, this is what they are. OK? So let's go back here. What do we do with a decision network? Well, the main point of a Bayes net was we observe some nodes and we reason about probability distributions over others. Remember, we do a query of what's the probability of weather given this forecast. So in Bayes nets, the whole point was to compute a posterior distribution over a node of interest given our evidence. OK? That's what's going on down here.
In a decision network, the whole point is to decide which action is best. So it's all really about picking an action. All right? So how does action selection work? How do you actually do reasoning in a Bayes net-- or sorry, in a decision network? Well, first of all, you have to instantiate your evidence. Like maybe, you know, the forecast calls for rain. Secondly, you instantiate your actions. Because you control them. OK? So you're going to have to say what action you're interested in. But unlike the evidence-- where, hey, it forecast rain, and you can't change that-- the actions you can change. So you're going to instantiate the evidence the way it actually is, and you're going to instantiate the action nodes in every single way. That means the more actions available to you, the more times you're going to have to do this computation within the network. So you instantiate all your evidence the way it actually is. You instantiate your actions every single way. You compute the posterior distribution over all of the nodes that are relevant to the decision. Which ones are relevant to the decision? The ones that are relevant to the decision are the ones that are parents of the utility node. So if I want to know what the best decision is, I need to consider all the different actions, and I need a distribution over weather so I can compute the expected utility of those actions. I don't know the expectation over weather until I do some computation in the Bayes net. All right, so we're going to have to compute an appropriate conditional probability. We'll see some examples. Once we have a conditional probability for all of the parents of the utility node, and we have all the different action instantiations, we can take each action and compute its expected utility given the evidence. And then once we've done that for each action, we can choose the maximizing one. So we'll see this in an example, but the basic flow here is you come to this network, and this network both tells you, here's my evidence, here's what that means for the variables that determine my utility, and, on the other hand, here are my actions and here's what they mean for the utility. And now we can do expected utility computations and figure out which action is the maximizing one. OK? Any questions before we do an example? Yep? STUDENT: Do we need to calculate the posterior-- PROFESSOR: How do we calculate the posterior-- the question was, can I explain that? I can. I'll show you some examples. How about that? That's a great question. That's, in fact, the perfect question. So thank you for leading into this next slide. So let's first do an example here, a very small example of a decision network. In this one, there won't be much calculating involved. So remember, weather, in this example, is 70% sun, 30% rain. We have the utilities where the best thing is you leave the umbrella at home and it's sunny. And the worst thing is you leave the umbrella at home and you rust in the rain. And everything else is somewhere in between. So now, we can start to do computations in this network. For now, there's not actually going to be any Bayes net inference. OK? And that's because we're not going to have any evidence, and there's only one actual random variable here. But we do have an action here, which is take or leave the umbrella. And remember, we get to maximize over the action. So we're going to set it-- but we're going to set it each way and then we're going to pick one.
So first let's imagine that we set the umbrella node, which is an action node, to leave the umbrella at home. What are we going to do? We're going to compute our utility. How many points are we going to get if we leave the umbrella at home? Well, I leave it at home. That means we're either going to end up in this outcome or this outcome. I've left my umbrella at home. We're either going to get 100 or 0. Which one? Depends on something out of our control-- it depends on the weather. So I can't compute my actual guaranteed utility. I can only compute an expected utility. I can say how many points am I going to get with the leave action, averaged over the variables I don't control, which here is weather. So I do a computation, an expected utility computation, for the action leave. Notice the notation here is important. It's the expected utility of an action. OK? Actions have expected utilities. Well, that's just the utility of the action for the outcome for weather, averaged over the likelihood of that outcome. So we can do the math. And so we'd say, well, sometimes we get 100 points. That's when it's sunny. 70% of the time, that happens. OK? So here is the utility for leave comma sun, times the probability that outcome actually occurs. Now sometimes we get 0 points. And when we get 0 points, that's because we've left the umbrella at home, but it's rained. And now we're here rusting in the rain. How often does this happen? 0.3. OK? So we take this average. That gives us an expected utility of 70. Is that good? I don't know. Right? It's all with respect to the other actions and the other expected utilities we have available to us. But the expected utility of the action leave involves averaging over the other parent of U, which is the weather. So we can do the same expected utility computation for the other action. We can say, you know what? Let's imagine instead that I take my umbrella. All right, well, if I take my umbrella, I can compute the expected utility of take. This is again an average of actual utilities, except now we're averaging these utilities on the bottom. It's either going to be take my umbrella and I get rained on, or take my umbrella and it's sunny. So I can pull the numbers out of the network and I can compute the average. It's still a 70% chance of sun, except now we only get 20 points when that happens. 30% chance of rain, we get 70 points when that happens. And now the expected utility we will get under this weather distribution for taking my umbrella is only 35. So there's an expected utility for leave and there's an expected utility for take. I get to pick amongst them. What should I do? I should leave my umbrella and score my 70 points instead of taking it and being stuck with an average of 35. OK? So the optimal decision here in this network is leave. OK? And this computation we used to get to that didn't really have much of anything to do with a Bayes net. In fact, it was just really a tiny little expectimax computation. But hopefully, by now, this kind of computation feels very familiar from the early part of the course. OK? Does everybody follow the flow here? We check what's my expected utility for leave, what's my expected utility for take, and then I pick an action. That is action selection in a decision network. OK? Now, this decision network was trivial. It was trivial in the sense that the probability I needed to compute my expected utility, remember, was the distribution over weather. And it was sitting right there in the network.
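Here is a minimal Python sketch of that action-selection computation, using this example's numbers (P(sun) = 0.7 and the four utilities from the table above); it's just the expectimax-style average written out in code.

    p_weather = {'sun': 0.7, 'rain': 0.3}
    utility = {('leave', 'sun'): 100, ('leave', 'rain'): 0,
               ('take', 'sun'): 20, ('take', 'rain'): 70}

    def expected_utility(action, p_w):
        # EU(a) = sum over weather outcomes w of P(w) * U(a, w).
        return sum(p_w[w] * utility[(action, w)] for w in p_w)

    eus = {a: expected_utility(a, p_weather) for a in ('leave', 'take')}
    print(eus)              # {'leave': 70.0, 'take': 35.0} (up to float rounding)
    best = max(eus, key=eus.get)
    print(best, eus[best])  # MEU with no evidence: 'leave', 70

Once there is evidence, the only thing that changes is which p_w you plug in: it becomes the posterior over weather given the evidence, as in the forecast example that follows.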
Because this Bayes net is small and trivial. OK? All right. So the optimal decision is to choose the action leave. That corresponds to an expected utility of 70 from this computation up here. And we write the following. OK? So this is important, because this is maybe a little bit different than the notation you're used to. We write the maximum expected utility, MEU-- the maximum expected utility is the maximum over all the actions of the expected utility of that action. So think expectimax. It's a maximum over actions, and then it's an expectation over outcomes. But here's the new part. The argument to an expected utility was an action. It's the expected utility of an action. The argument to the MEU here is my knowledge, it's my evidence. Because the right action is going to depend on the evidence I have. And as I get new evidence, I might choose different actions. Right? Maybe, if there was a forecast for rain, maybe my actions would be different, my expected utilities would be different, my MEU would be different. So I write the MEU given no evidence-- that's that empty set there-- is 70. OK? All right. We can take that computation, and in a second, we're going to see that connect up to a Bayes net style query in a graphical model. For now, let's unfold what I said in words and wrote in equations into a tree, and see if we recognize the tree. OK? So there's the tree. OK. How does this work? We say, even though somebody has drawn me this funny decision diagram, really I have an expectimax problem. My problem is I'm an agent with a choice, so I have a max node. I'm going to choose either the take or leave action. So remember, in expectimax, we have a max over actions. I can choose take, I can choose leave. Remember also that max nodes represent states. What is the state here? Well, here the state is my knowledge, and here my knowledge is nothing. I don't know anything about the weather, I don't know anything about a forecast. So the set of evidence I have here is the empty set. So from the state where I have the empty set of evidence, I have two actions. Right? So these are the actions. I could think of this as the state if I wanted to use my old terminology. The state is I have no evidence. The action is I can take or leave the umbrella. The chance nodes here represent what the weather will do, and the probabilities that govern them are the conditional probability of the random variable weather given my evidence, which here is empty. OK? And so here would be all of my outcomes. These are governed by the probabilities from the Bayes net. And then the game ends and I get a utility. Right? So the outcomes are either sun or rain, and then there's a utility. And the utilities are given by that utility node's table. This is just like an expectimax tree or a tiny little MDP. There's one thing that's very important that's different. The thing that's different is, when we did expectimax on some game or something like that earlier in the course, remember, we assumed that the probability fairy came and said, this node, you're rolling dice, even chance. Or, this node, here's the probability that governs the outcomes. Here, when we say, what is the probability of weather given the forecast, we actually have to do computation to figure out the probabilities from that expectation node. How do we do that? We're going to do that by running Bayes net inference. OK?
But that's the main thing that's changed-- now we have to do computation at each node to compute the probability of the outcomes given our evidence. There's also been a little bit of a shift in perspective, which is that once we get into these decision-making problems, the state is usually the set of things we know. And we'll see more of that today. All right. We're going to do an example where some Bayes net inference is required. Any questions before we do that? OK. So here again, we're going to make the same decision. I'm a robot, I wake up in the morning, I want to know whether to take my umbrella or not. I have basically the same utilities. I have the same action node where I can choose to leave the umbrella or take the umbrella. The only thing that's different is now I'm actually going to listen to the weather forecast before making my decision. That's intuitively probably a good thing, right? Because if you listen to the weather forecast, you've got more information. Information is power, and here that means information is utility. OK, so we do the same computations now. We say, well, I can leave the umbrella or I can take it. That's that top branch of the expectimax, where I have a max node with different actions underneath it. So I take it or I leave it. Let's do the computation where I leave the umbrella. It's going to unfold the same way. If I leave the umbrella, I'm either going to get 100 points or I'm going to get 0 points. What's different is the probability of 100 versus 0 is different now, because I've listened to the forecast, and let's imagine I now have the evidence forecast equals bad. Now I also need to tell you what's the probability of the forecast being good or bad given the weather. But here we've shortcut it a little bit, and I'm just going to spot you the information that in my Bayes net, conditioned on the evidence that the forecast is bad, rain is now 2/3 and sun is 1/3. OK? Stare at this really closely. This thing I've given you for this slide, the probability of the weather given the forecast, is not the probability from the Bayes net. The Bayes net tells you p of w and p of f given w. This is p of w given f. I can compute that-- I can compute that with variable elimination, or in this case just Bayes rule would do it. OK? But let's imagine that I have pre-computed that quantity for you. So I know the forecast is bad. And now instead of a 70% chance of sun, there's a 34% chance of sun. So I can do that same computation. What is the expected utility of leave? Well, now I'm computing the expected utility of leave given information, given that the forecast is bad. It's those same utilities, 100 and 0, but now the probability of sun versus rain has changed. So when I unroll that expectation, it's got the same terms but the weights are different, because I have new evidence. And that means my distribution over weather given my evidence has changed. So now the expected utility of the action leave given the evidence forecast equals bad is 34. I can do the same thing for take. What's the expected utility of the action take given the forecast is bad? Well again, it's either going to be 20 or 70. So I have to average those together. This utility is the 20 and 70. These probabilities are the likelihood of sun versus rain given my evidence. So I unroll that computation again. It's the same 20 and 70. And now I compute my average and I get 53. OK. Same actions, same outcomes, different weights. Now which one is the maximizing action? I look at this.
In general, the numbers are lower, because my evidence is bad. This is a bad day. The days where the weather forecast is bad are, on average, worse. So notice my utilities dropped. But which action is the maximizing one has changed. Now the maximizing action is the take action. I should take my umbrella. OK? So the optimal decision is take, and I write the following. Make sure this notation is clear. I write that the maximum expected utility-- that is, the expected utility under optimal action selection-- when the forecast is bad is 53. The action which leads to that MEU is the action take. OK? This MEU is the expectimax value. And the argument to the MEU is my knowledge state. My knowledge state here is I know there's a bad forecast. Before, my knowledge state was empty. I didn't know any variables. OK? All right. So we ran through this decision network once when we didn't have any evidence, once for forecast equals bad. We could go through it again, and I invite you to try. We could go through it again for forecast equals good. In order to do that, we'd actually have to look and see the exact distribution of forecast given weather. OK? All right, let's look at that expectimax tree again. It was the same form of computation as before I had evidence. So I had a state. Right? I had a state. And here my state is: I know that the forecast says bad. So my state, which is my list of observed variables, is different. I have the same actions, so I have the same actions, take or leave. I have the same outcomes, sun or rain, for my chance nodes. Except now the probability that governs sun versus rain is the probability of that variable weather given my evidence. So now the probabilities, instead of 0.7, 0.3, are 0.34, 0.66. OK? But the structure of the computation is very similar. Different knowledge state. Any questions before we see an example of this in Ghostbusters? All right, so let's first just see Ghostbusters happening. All right. Let's remember the Ghostbusters game. So in this version of Ghostbusters, so far you've only seen the case where there's one ghost. We'll see some cases with more ghosts starting next week. OK? So there's one ghost in there somewhere. Right now, we have no evidence of where the ghost is. And so you're seeing a conditional probability represented as numbers in a grid. OK? So each of those squares has probability 0.02 of being where the ghost is. Now, I can take sensing actions. When I take sensing actions, I move from my current knowledge state, which is, I don't know anything-- I know the model, but I don't know any random variables' values. But I can figure out what is the value of the sensor reading in the upper left corner. Boom. I'm now in the knowledge state sensor reading in upper left is green. In that state, I have a different posterior distribution over the variable of interest, which is, where is that ghost. And I can gather more evidence, stab at my-- OK. So given all of these sensor readings, what you can see here is the posterior distribution over the variable, where is the ghost-- the ghost position variable. And here the most likely single spot has probability 0.77. Now, I can bust. I'm going to bust at that 0.77 square. Am I going to hit the ghost? Maybe. What's the chance I'm going to hit the ghost? 0.77. Let's roll the dice. We got it. OK. So if I give this lecture 100 times, 77% of the time we end up here. The rest of the time, the ghost is somewhere else. OK. So that's Ghostbusters. We're going to do Ghostbusters a whole bunch over the next few lectures.
Let's see that as a decision network. Well, first there's sort of the top part. The top part says I can bust. And the bust action has some large number of options. You can actually take a step back and say, wait a minute. Instead of busting, couldn't you sense? Couldn't you do another probe? Yes, we'll get to that. For now, let's just imagine it's time to bust. Bust now is an action where there is a value of that action for each square on the board. What's my utility? Well, the utility depends on whether or not the ghost is actually in the location I bust. So in order to tell you the utility, I need to know where you busted and where that ghost was. So the top part represents the utilities, the actions you select, and the expectation that governs the outcomes for each action. What about all the rest of this stuff? What about all the sensors and so on? Well, you can imagine there being the whole rest of the Bayes net sort of from here down, which has a variable for each sensor reading. Most of which I don't have, so they're unobserved. And a couple of which, remember-- I did a reading on the upper left. And then I did a reading over here and then I did a reading over here. And there's a couple of these that are filled in. Conditioned on that evidence, I can compute the distribution over the ghost's location given the evidence. And then I can go through the computation at the top, which is actually pretty simple. It boils down to: bust in the most likely location. So that's Ghostbusters as a decision network. We'll come back to the case where one of the actions you can take is to get more evidence by revealing a sensor. Right? And that's actually an important case, because that's the case where what you get for your action isn't a utility, it's more knowledge. Which in turn has expected utility associated with it. So we need a little more machinery before we can do that. All right. We're not going to do that yet. Any questions before we start getting into exactly that issue of value of information, which is the question of how many utility points is it worth if I reveal this variable to you? All right. Value of information. I also think this is really cool, because this is the kind of concept that shows up-- once you're sort of aware of it, it shows up every day as you're thinking about the world. Right? I can take actions, of course I can take actions. Many of the actions I take are gathering information. I gather information so I can make other decisions well. And so that information itself has utility. Today we're going to talk about the value of information, which is in terms of utility. Right? Because as I change my evidence, my MEUs go up and down. And so I can talk in a meaningful way about changing utility. So here's the idea. We're going to take a decision network, which is going to have a Bayes net part, which connects all kinds of variables together, including ones that we don't yet know. And that connects them up to utility variables. So we've already seen that in the first part of this lecture. What we're going to do is we're going to use these networks to compute the value of acquiring evidence. And we're going to do that directly in the decision network. What's that mean? Well remember, if I put a decision network in front of you and say, all right, decision time. Are you taking the umbrella or not? You can say, OK, hold on a second, calculate, calculate, calculate. I'm going to take the umbrella, and my MEU is going to be 34. OK?
We can then talk hypothetically about what would happen if I showed you a variable. Right? You'd make better decisions. When you make better decisions, you get higher utilities. And so we can compute, on average, does your utility go up when I show you a variable? How much does it go up? And there's a whole bunch of different quantities we have to reason over in order to make that precise. But we can do that directly from the decision network. So here is a simple example. This example, I think, is really good at one thing, which is it's really good at focusing in on how value of information works and how evidence variables are sort of connected in a deep way to utility. What's not good about this example is it's such a small example, and there are symmetries in it. So you're going to look at this and think, wait, what does this have to do with Bayes nets? Because the numbers are all small enough you can sort of do them in your head from first principles. So we'll see other examples that are a little less small and symmetric as well. All right. So we're going to compute the value of acquiring evidence. And let's imagine that we're going to drill for oil. Or substitute any isomorphic problem here. Here is a toy decision network, which has three nodes in it. The variables are the oil location. So that's your random variable. The oil location is either going to be in lot A or lot B. So we can imagine-- let's put A over here. Here's B. Right? We can either drill in lot A or lot B. And let's imagine that exactly one of them has oil. So oil location either takes on value A or B. But I don't know which. And it's 50-50. Drill location also is A or B. Right? I can control which. I can pick to drill A or to drill B. And then there's a utility node. OK? So there are two blocks, A and B. Exactly one has oil, and that oil is worth K in terms of utilities. So utility says that if you drill where the oil is, you're going to get K. If you drill where the oil is not, you're going to get 0. OK? You can drill in one location. The prior probabilities are 50-50, and they're mutually exclusive. And here are all the numbers. Oil, A or B, 50-50. Drill location is A or B, but you get to choose, so there are no probabilities on that. And then if you drill in A and the oil is in A, or you drill in B and the oil is in B, you get K. Otherwise you get those 0's. OK? That's the setting. Now let's talk about how well you're going to do. Well, I can drill in A or I can drill in B-- it's a symmetric problem. And I get to pick. But if I drill in A, what's the expected utility of drilling in A? Well, half the time, I get the prize, and it's K. And half the time, I don't get the prize, and it's 0. So my expected utility of picking A is K over 2. My expected utility of picking B is also K over 2. OK? So the MEU here, the maximum expected utility for the state where I don't know any of the variables' values, is K over 2. And it's achieved by either drilling in A or drilling in B. OK? That's the baseline. The question is this. What is the value of information of this variable O, the oil location variable? OK, so we're posing a question about a random variable that is not currently observed. And I'm saying, what is the value to you if I reveal this variable to you? So in this case, informally, that's the value of knowing where the oil is. OK? Well, how should we think about the value? Well, this is a utility computation. So all values are in terms of utilities, and in particular, expected utilities.
And so we can look at the gain in MEU from the new information that's revealed to me. It's going to turn out to be an expected gain. This is the part that's a little bit confusing. I'll work through this a couple of times for a couple of different examples. But we're going to compute how much better we do in an MEU sense if I reveal this variable to you. So it goes something like this. Let's say I can do a survey. The survey is not another-- for right now, the survey isn't going to be another node in my network. This survey is just sort of language I'm using for revealing the value of oil location to you. So imagine it's a perfect survey here. OK. The survey can say the oil's in A or the oil's in B. Now, I don't actually know what's going to happen. So when I talk about revealing oil location to you, I'm not talking about revealing to you that it's in A. I'm talking about revealing to you where it is, wherever it is. So when I reveal a variable, it's all wrapped up-- think about value of information like this. If I give you a variable, it's a little present and you unwrap it. And what's inside? You don't know. Right? It could be any value of that variable. So you need to think about the uncertainty over that as well. All right. So if I do this survey and it tells me where the oil is, now suddenly I'm in a different decision network. I'm in this decision network. I'm in the decision network where oil location is revealed to me. Well, what am I going to do? Well, I can drill where the oil is or I can drill in the other place. What should I do? I should drill where the oil is. How well am I going to do on average? K. So before I reveal this variable to you, you were going to get K over 2. After I reveal the variable to you, regardless of which way it falls, you're going to get K. So your gain in MEU is K over 2. We write this as the value of information, VPI-- you say, why the P? I'll come back to that in a second. The VPI, or the value of information for the variable oil location, given no other evidence, is K over 2. That is the difference between the MEU of taking your best action with the variable and the MEU of taking your best action without the variable. OK? Another way to say this is the fair price of information about that variable should be K over 2, assuming you're paying in utilities. Let's take our break now, and then we'll do another example of value of information in a case where it's not sort of small and symmetric and every outcome does the same thing. So take a couple-minute break now. And then we'll do more value of information. OK. All right, let's get started again. So we're talking about value of information computations. And we're going to go back to the weather and umbrella example, where I'm a robot and I have to decide whether to take my umbrella with me or leave it at home. And the utility of that depends on how the weather actually turns out. Now, we already computed earlier in this lecture how well I'm going to do if I act optimally in the absence of any evidence about the weather. And the answer there was there was a 70% chance of sun. And if you work out the various utilities and their various expectations, the best thing to do was to leave your umbrella at home. And the expected utility of that action was 70. So the MEU-- that is, your utility under optimal decision-making-- when you have no evidence of any variables was 70. The MEU was 70 and it corresponded to leaving the umbrella at home.
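Before going further with the umbrella, here is a minimal Python sketch of the oil-lot computation from just before the break. K is set to 100 here purely to have a concrete number; any K gives VPI = K/2.

    K = 100.0
    p_oil = {'A': 0.5, 'B': 0.5}

    def utility(drill, oil):
        # You get K only if you drill where the oil actually is.
        return K if drill == oil else 0.0

    # MEU with no information: pick the drill site with the best average payoff.
    meu_blind = max(sum(p * utility(d, o) for o, p in p_oil.items())
                    for d in ('A', 'B'))                     # K / 2

    # MEU with the perfect survey: whichever location is revealed, drill there;
    # average over how likely each revelation is.
    meu_surveyed = sum(p * max(utility(d, o) for d in ('A', 'B'))
                       for o, p in p_oil.items())            # K

    print(meu_surveyed - meu_blind)   # VPI(OilLoc) = K / 2 = 50.0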
Now, the question is, if I am a robot and I would like to have a good day and not rust in the rain, and I don't want to needlessly take my umbrella with me, I have the option of listening to the weather forecast. And so I can ask the question, how much value does that have? That means how much, if at all, will my utility go up if I reveal the forecast variable, which I can do by, say, turning on the radio. Well, I don't know. I know my MEU is 70 if I take my best action, which was to leave the umbrella at home, in the absence of any evidence. Now, I can do another computation, which we also already did, which is what's my MEU if the forecast says bad weather. And remember, we computed that all out. It turned out the MEU was lower, it was only 53, because this is a bad day. The MEU was 53 and it corresponded to-- it was backed by-- the action of taking my umbrella. So remember, when I listen to the forecast, it's possible it'll say, sorry, bad weather. But it's possible it'll say, weather looks good today. So I also need to do a computation to see what my MEU is if the forecast is good. And we didn't do that, but if you kind of write that down and you kind of plug and chug through, you end up with another MEU, which in this case is 95. This is a good day. So this one is better than 70. And here the MEU is 95. It's backed by the decision to leave my umbrella at home. And usually, it's sunny when you see a good forecast. And so the MEU ends up as 95. So now we have to talk about how much is it worth to you if I reveal, from this initial state where you have no evidence, if I reveal the forecast to you? And so you look at it, and you're like, well, that sort of depends on what the forecast is. If the forecast is bad weather, your MEU is going to drop. If the forecast is good weather, your MEU is going to rise. But the thing is, when I offer to tell you the forecast, I'm not offering to make the forecast good or to make it bad. I'm offering to unwrap the present, and you get to see what's inside. And so to figure out how much on average my MEU is going to go up from 70, I need to reason about how likely it is that I'm going to have a good forecast versus a bad forecast. So what's the quantity I need? In order to compute the gain here, I need to compute the probability of a good forecast versus a bad forecast given my other evidence, which here is nothing. OK? So what is that? That's the forecast distribution. That's the probability of this forecast variable given my evidence with no other evidence. And so I could compute that. It's not sitting in the Bayes net. p of w is sitting in the Bayes net. Right? In the weather node is p of w, and in the forecast node is p of f given w. In this case, I want p of f, because that will tell me, when I get the forecast revealed to me, how likely is it to be good versus bad? Well, if I plug and chug in my Bayes net, it'll turn out that the marginal probability of a good forecast is 0.59 and a bad forecast is 0.41. You need to know the probabilities in the Bayes net to compute this. All right, so now what can I do? I know my MEU with no evidence. I know my MEU with the bad forecast, I know my MEU with a good forecast. So when I come to you and I say, how would you like me to reveal the forecast, you can say, well, let's figure out how much my MEU goes up. Well, 59% of the time, you're going to tell me the forecast is good and I'm going to get 95 points, because when I make my optimal decision, I'm going to leave the umbrella at home.
But 41% of the time, you're going to tell me the forecast is bad, I'm going to take my umbrella, and I'm going to get my 53 points. This part here, the sort of weighted outcome of what will happen when you reveal the variable to me and then I make the best possible decisions afterwards, that's that expectation. This term here, 70, that is what will happen if you don't reveal the variable to me and I just make my best decision about an action in the absence of evidence. So that difference is the value of information. So I can plug and chug on that, and I get 7.8. What does that mean? That means that from an MEU of 70 with no evidence, if you say, hey, I'll reveal the forecast to you, you say, great, that's worth 7.8 utiles to me. Does that make sense? I've got my MEU without the evidence, I've got my MEU with the evidence. But my MEU with the evidence has to be an average of the different values which will be revealed to me when I open up the package. We can write this as follows. And at this point, remember there is this point with Bellman equations where there are just like a bunch of variations of the same kinds of equations. This is that point with the VPI computations. Here-- let me change colors-- here we have the maximum expected utility given my current evidence, which in this example is empty. OK? So that's the maximum expected utility given my current evidence. And I want to compute what is the value of learning information about e-prime. Meaning I'm going to reveal e-prime, it's a variable. I'm going to reveal it to you on top of your existing evidence e. Well, right now you can achieve an MEU of whatever it is given evidence e. You do your computations, you pick your best action. I'm offering on top of that to reveal e-prime to you. When I do that, you will get new MEUs. Your new MEUs will be optimal scores when you have evidence e and the new evidence e-prime that I've just revealed to you. Sometimes they'll be higher, because you like e-prime, and sometimes they'll be lower, because e-prime is unfortunate. Now, you don't know which outcome you're going to get, but you can compute a distribution over what's inside the package e-prime. That is the probability of the particular values e-prime given the evidence e you already know. So this is a predictive probability. What is the probability of this unknown variable that I'm threatening to reveal to you, given all the evidence that you already have? When I take that expectation, this is an expected maximum expected utility that represents, on average, after I reveal this unknown quantity to you, how many points will you get acting optimally. We can compare that to what I would do now. So this is now versus then. Now, I would get an MEU of little e. Then, I will get an average of MEU of e, e-prime. That difference is the value of information of e-prime given e. All right. Any questions before we do some properties of value of information? Yep? STUDENT: Will the value of information always be 0 or greater? PROFESSOR: Question was, will the value always be non-negative. Yes. It will be. And that's coming very, very soon. It's a great question. Now, the actual outcomes, remember, can be positive or negative. Right? Because if I reveal to you that the forecast is bad, you will actually compute a lower MEU than if I hadn't revealed the variable to you. But that variable might have been good. And it's balanced off by a case where the MEU goes up. And that expectation is always going to be non-negative. And we'll see why in a second. All right.
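To make the arithmetic concrete, here is the same computation as a tiny script, plugging in the numbers from the example above (MEUs of 70, 53, and 95, and the forecast marginal of 0.59/0.41); the variable names are just illustrative:

```python
# VPI of the forecast, using the lecture's umbrella numbers.
meu_no_evidence = 70.0
meu_given = {"good": 95.0, "bad": 53.0}      # MEU after each possible reveal
p_forecast = {"good": 0.59, "bad": 0.41}     # predictive P(F), from the Bayes net

expected_meu_after_reveal = sum(p_forecast[f] * meu_given[f] for f in p_forecast)
vpi_forecast = expected_meu_after_reveal - meu_no_evidence
print(round(vpi_forecast, 1))  # 7.8
```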
So here's the value of information. Let me walk through the definition in equations first and then we'll talk about its properties. So we assume we have evidence that capital E, which may be multiple variables, takes on value lowercase e. If we act now, we achieve MEU of little e. That is, we maximize over actions A, we average over outcomes S, and we take a distribution over utilities. Standard expectimax. Let's imagine that we discover that when we unwrap E-prime, it takes on value little e-prime. If we act then, we will achieve MEU not of just e, but of e comma e-prime. That means it's still going to be a max over actions, it's still going to be an average over outcomes. Except now, instead of making our decision based on evidence e alone, the probabilities will be informed by e-prime as well. So that distribution over states s will be different. So we might make a different decision. When we max over A here, and when we max over A here, we might make a different decision. We might suddenly take our umbrella with us because we heard a bad forecast. All right. If we act now, we get MEU of e. If we act then, we get MEU of e comma e-prime. But E-prime is a random variable whose value is unknown. I am not offering to make the forecast good or the forecast bad. I'm offering to reveal it to you. So we need to have a prediction about what e-prime will be in order to compute how valuable that information is to us. So the expected value, if E-prime is revealed and then we act regardless of what it is-- we're going to do a different computation. That is the MEU. That's the utility we will achieve on average acting optimally when we know e and have E-prime revealed to us. That's why it's still a capital. That's going to be this distribution over possible lowercase e-primes that we'll see, of the MEU of what we would do upon seeing lowercase e-prime. The value of information is the change. The value of information is how much the MEU goes up by revealing E-prime and then acting, as opposed to acting now. OK. Let's talk about some mathematical properties, which I think are easier to see maybe thinking about this from a sort of expectimax standpoint. So first let's look at the expectimax tree for MEU of just acting right now. We're in some state. The state we're in here-- bad choice. So this is now. Let's just call this e. OK. We're in this state where we know e. We have actions A available to us. And then the outcome we predict is underneath our-- we do inference over S, which is whatever the parent of the utility is, given our evidence e. That's our computation. We think about augmenting our state, and therefore our evidence, and therefore these probability computations, with another piece of evidence. We get a computation that looks the same. But of course, this here is this. And this. Because what I'm really offering to do is not to give you E-prime equals plus or E-prime equals minus. I'm offering to put you into a chance state. I'm offering to put you into a state where E-prime will be revealed to you. And the first thing that's going to happen, the first branch here is, well, what is it? You told me you'd tell me the forecast. Is it good, is it bad? That's the first branch. And it has a probability associated with it. And that's this predictive probability we talked about. OK. This piece here, what the heck is this chance node on the top? Why is it there? That represents the revealing of the information. And I think that's the part that's hardest to sort of get the details on. All right.
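For reference, here is a sketch of those quantities in standard notation, where a ranges over actions, s over the parents of the utility node, and U is the utility function:

```latex
% MEU under current evidence e, and after also observing E' = e':
\mathrm{MEU}(e) = \max_a \sum_s P(s \mid e)\, U(s, a)
\qquad
\mathrm{MEU}(e, e') = \max_a \sum_s P(s \mid e, e')\, U(s, a)

% VPI: expected gain in MEU from revealing E', weighted by the
% predictive distribution P(e' | e):
\mathrm{VPI}(E' \mid e) = \Bigl(\sum_{e'} P(e' \mid e)\, \mathrm{MEU}(e, e')\Bigr) - \mathrm{MEU}(e)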
So here are some properties. All these properties can be derived either from the equations or from kind of thinking about expectimax quantities on the different trees that they correspond to. So here are some properties. First property is non-negativity. So whatever evidence you have, if I walk up to you and say, here's a random variable you don't know, what's the value to you? Well, that value might be 0. But it's not going to be negative. Next property. It's not additive. So if I walk up to you and say, OK, you've got evidence e, but here I have two variables that I could reveal to you. And I say, well, what's the value of revealing this variable? So you do some computation, you're like, 7.3 utiles. How about this variable? You do some computation, you're like, that one's 4.2 utiles. If I observe both variables, you don't get to add the VPIs. So if I offer to reveal A and B, the value of that to you is not the value of A plus the value of B. Why? An easy way to see that is, let's say actually in your probabilistic model, A and B are basically the same variable. What if I offer to reveal the weather forecast and the weather? Well, they're both very useful to you. But they're basically the same thing. And so once you've observed one, the other one's a whole lot less useful. So VPI is non-negative, but it's also not additive. It is, however, order independent. So if I have A and B and I offer to reveal them to you, if I reveal A and then B, or B and then A, you end up in the same situation. So it shouldn't be too surprising that if you compute the value of information sequentially-- what's the value of revealing A, and then, once I've revealed A, add to that the value of revealing B-- you get the same answer regardless of which order you do it in. All right. Let's ask some questions about VPI. Here's a quiz. You're in the cafeteria. There's a soup. The soup of the day is unknown, it's a mystery soup. They haven't opened the lid yet. It's either clam chowder or split pea. OK? So it's a random variable here. But you're not going to order either one. What is the value of being told which soup it is? Big value, little value, 0 value? Who says big value? Who says little value? Who says 0 value? OK, 0 wins. Why is this? Right? Why is it that finding out this variable-- all right, you have knowledge, right? But it doesn't change your utility, because it's not going to change your action. And any evidence that does not change your action, regardless of the outcome of that random variable's reveal, is going to have value of information 0. Information only has value if there is some outcome that can be revealed to you that will change your action. OK? All right, so in this case, the value of information is 0 because it doesn't affect your decision. You're at a picnic. There are two kinds of plastic forks, slightly different. One of them is slightly sturdier. But we're not sure exactly which one. I offer to reveal to you which is the slightly sturdier fork, so that you can choose a fork in a more informed way. Is this information highly valuable, slightly valuable, or 0? Who says highly valuable? Slightly valuable? 0? OK. In this case, let's first check: is this information going to change my decision? It might, right? I might as well take the sturdier fork because, you know, why not? It's slightly better to have a sturdier fork. So here's a case where the value of information is probably not 0, because having that evidence will change my decision. However, the utilities involved are all pretty much tied.
So it's probably not going to make a big difference to my utility-- not because it won't change my decision. Right? This evidence may be extremely likely to change my decision. But the utilities involved are all very close. So the value of information will be low. OK. We're playing the lottery. The prize will be 0 or $100 and you can play any number between 1 and 100. They're all equally likely to win. So the random variable is what will be the winning number. Your action is which number would you like to play, what ticket would you like to buy? What is the value of information of knowing the winning number? So I'm going to reveal to you the number before you act rather than after. High value, low value, 0 value. Who says high value? OK. All right. So high value. Well, let's think about what it is. Right now if I play the lottery, what am I going to get? I'm going to get an expected utility of $1. So let's pretend my dollars are my utility here. Which, for small numbers, might be reasonable. OK. So my expected outcome here is going to be $1. If you reveal this information to me, regardless of what you reveal to me, I'm going to become incredibly good at this lottery and I'm going to get $100. So on average, I'm going to get $100 if you reveal the evidence to me. And if you don't reveal the evidence to me and I just have to play without that evidence, I'm going to get $1. So the gain here should be 99-- that is, $100 minus $1. You say, what about the cost of the ticket in the first place? That's separate. That's just a cost against the utility regardless of what happens. OK. I've been saying value of information but I've been writing VPI. That's because classically this thing is called value of perfect information. What if somebody offers you some imperfect, slightly decayed information? OK. People used to worry about this, but it's really simple for us. Which is that in our formulation, there is no imperfect information. All information is perfect. What is information? Information is revealing the value of a random variable in your network. That's it. You either know the value of the random variable or you don't. It's evidence or it's unobserved. What was the point of imperfect information? People used to think about things like, oh, I'm not going to reveal to you the weather. I'm going to reveal to you a forecast of the weather, which is sort of a noisy form of that variable. The way we formulated things, revealing the weather forecast is not an imperfect reveal of the weather, it is a perfect reveal of the weather forecast, which is itself connected to weather in some probabilistic way. And so what we've done is we've taken any noisy variables that you might have, put them in as new variables in the network, and offered to completely reveal this variable to you, which may or may not actually be useful in your computation. But that variable itself is either revealed or unrevealed. So that's where the imperfect information went. It's gone. Let's look at another example. Back to drilling. Remember, this is the case where you can have oil in A or B. You can drill in A or B. And you get K if you find the oil and you get 0 if you don't find the oil. In this version, your MEU without evidence was K over 2, because half the time you get the oil. And your MEU with the evidence is always K. And so the value of information here was K over 2 for revealing the location of the oil. OK? So that was the VPI of revealing oil location. It was K over 2.
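As a hedged, generic sketch, the recipe can be written as one small function over a tiny enumerable model. Nothing here is course project code; `p_state_given`, `p_evidence`, and friends are hypothetical names for whatever inference routine supplies those quantities:

```python
def meu(actions, states, p_state, utility):
    # Max over actions of the expected utility under the given distribution.
    return max(sum(p_state(s) * utility(s, a) for s in states)
               for a in actions)

def vpi(actions, states, evidence_values, p_evidence, p_state_given, utility):
    # Expected gain in MEU from revealing one more evidence variable:
    # sum over e' of P(e') * MEU(e, e'), minus MEU(e).
    baseline = meu(actions, states, lambda s: p_state_given(s, None), utility)
    expected = sum(p_evidence(ev) *
                   meu(actions, states,
                       (lambda s, ev=ev: p_state_given(s, ev)), utility)
                   for ev in evidence_values)
    return expected - baseline

# Sanity check on the drilling example: revealing OilLocation itself.
K = 100
lots = ("A", "B")
p_oil = lambda s, ev: 0.5 if ev is None else (1.0 if s == ev else 0.0)
print(vpi(lots, lots, lots, lambda ev: 0.5, p_oil,
          lambda s, a: K if s == a else 0))   # -> 50.0, i.e. K/2
```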
We did that before. Let's add to our network. Let's add a scouting report. What's a scouting report? A scouting report is, you send this red robot out to do a probe in a location. And it says, I found this kind of rock in the core sample, and from that you can compute a probability for the oil. So it doesn't actually tell you whether the oil is there, but it gives you some strong evidence of whether or not the location that you scouted has oil. OK? So what is the value of information? So you don't know oil location. I offer to give you the scouting report. Should that have value to you? One check is, will it change where you drill if the scout says, I think I found oil in A? Yeah, it'll change where you drill. You could be wrong. You probably would want to know the details of the probability here. In fact, you probably want to make sure it's not like a reverse scout. But even that has positive value. So this has value of information that is going to be non-negative. However, this one was K over 2. The value of this scouting report-- is it 0, between 0 and K over 2, K over 2, or more than K over 2? So who thinks it's 0? Value of [INAUDIBLE] 0? Who thinks it is still K over 2? More than K over 2? Less than K over 2, but still greater than 0? All right. That seems like the majority one. That's right. So here, this is still going to have value. And intuitively, that makes sense, because it doesn't tell you where the oil is, but it gives you some information toward that. And you could do the computation and figure out, based on the actual numbers, exactly what the gain in utility here is on average. OK. So it's probably sort of between 0 and K over 2. It's sort of somewhere in the middle. All right. Let's add another variable. So let's say the scout report of, you know, I found signs of oil in lot A-- let's say there are different scouts. Right? There's the good scout that does a really thorough core sample, and-- I know actually nothing about the geology here. But there's a really good scout and a really bad scout. And this random variable is which scout got sent out to give that report. What is the value of knowing-- we don't know oil location and we don't know scouting report. I'm only offering to tell you whether or not I sent the good scout or the bad scout out. Value of information: between 0 and K over 2, K over 2 or more? Who says 0? OK, 0. It's not valuable. But why? I just told you it was a really good scout. Why is it not valuable? Well, intuitively it's not valuable because, who cares how good the scout is? You didn't tell me what it found. Right? But let's think about this in terms of the graphical model. I've offered to reveal to you the scout variable. What are you going to do with that information? All you do in order to make a decision about drill location is compute the probability of oil location given your evidence. So without any evidence, you would have said, all right, I'm going to compute p of oil location. And then I'm going to make my decision. And now you say, all right, well, you're going to tell me scout, and what am I going to do with that information? I'm going to compute p of oil location given, I don't know, that it was scout one. What is that? Well, let's eyeball this Bayes net and run D-separation in our head. One thing I can do looking at this network is I can conclude that the oil location variable and the scout variable are conditionally independent in the absence of other evidence. So they're marginally independent here.
And that means the probability of oil location is the same regardless of whether or not I tell you what the scout is. And if I have given you evidence, I can plug and chug in this network and actually compute this. But if I give you evidence that's conditionally independent of the variable of interest given your other evidence, it's not going to change the computation that goes into the MEU. And therefore it's not going to change your actions, and therefore it's not going to change the actual MEUs, and so that's going to have value of 0. However, let's say you already know the scouting report. So I've told you the scouting report says, you know, oil likely in A. And now I offer, hey, that report you have-- I'll tell you whether or not it was the terrible scout that gave it to you. Does this have value now? This feels like it should have value. So this shows up in exactly the same way under a D-separation computation or a conditional independence analysis here. Oil location and the scout quality are marginally independent. So discovering scout doesn't help me. But if I know scout report-- conditioned on scout report, scout and oil location are no longer conditionally independent. And therefore there may be some value associated to that. The general property here is that if the parents of your utility node are conditionally independent of the variable I'm offering to show you, given your current evidence, then it has a value of information of 0. All right. And that should make intuitive sense and it should make sense from a D-separation analysis as well. Questions? OK. Let's really talk about POMDPs. What are POMDPs? They're partially observable Markov decision processes. What are these? Remember, Markov decision processes or Markov decision problems were like non-deterministic search. I was in a state. I knew what state I was in, I took an action, and I knew what actions were available to me. And the thing I didn't know is what state I would end up in. So the thing that was non-deterministic or uncertain was the outcome of taking action A in state S. In a POMDP, I take that one step further. And I say, not only am I not sure what my actions will do, because they have multiple outcomes, I'm also not sure what state I'm actually in. What could that mean? Life is a POMDP, right? You're a robot and you're moving around and you have actions like drive left, drive right. But you're not really even entirely sure where you are. You may have a belief distribution over where you are. But in general, you don't even know the state of the world for sure. The only thing you really know is your observations over that state. So POMDPs are designed for this. And we'll see them a few times in this course. But for now, let's take a first cut at them. A Markov decision process was, remember, states-- you're in a state, you have actions available to you. There's a transition function which encapsulates the uncertainty over what will happen. Now, if you stare at this, you think, huh, actually, this probability distribution over the outcome given some evidence-- we have better tools for computing that now. So you can imagine MDPs that have little Bayes nets living in their transition functions. But let's set that aside for now. A POMDP adds something new. It adds observations O, and an observation function. So now, instead of actually knowing what state S you're in, all you're going to know is what states are possible and what observations you've made.
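Since the tracking details come next lecture, here is only a minimal, hedged sketch of the standard POMDP belief update-- b'(s') proportional to O(o given s') times the sum over s of T(s' given s, a) b(s)-- with plain dicts standing in for whatever representation the projects actually use, and with the common convention of conditioning the observation on the landed-in state only:

```python
def belief_update(belief, action, obs, T, O):
    # belief: dict state -> probability
    # T: dict s -> action -> dict s' -> P(s' | s, action)
    # O: dict s' -> dict obs -> P(obs | s')
    new_belief = {}
    for s2 in belief:
        predicted = sum(belief[s] * T[s][action].get(s2, 0.0) for s in belief)
        new_belief[s2] = O[s2].get(obs, 0.0) * predicted
    z = sum(new_belief.values())              # normalize
    return {s: p / z for s, p in new_belief.items()} if z > 0 else new_belief
```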
So instead of a robot knowing it's in a square in a grid world, you might just have laser readings from all the different angles around you. And that lets you compute or track where you believe you are. We'll get into this with tracking problems with hidden Markov models starting next lecture. OK? So what's a POMDP? Well, one way of looking at POMDPs is exactly kind of along the lines of what we talked about with belief states corresponding to evidence. And that is, I'm not actually in state S-- I don't know what state I'm in. But I do know what evidence I have. In other words, I know what belief state I'm in. And so I can take actions, but then after I take an action, instead of being told, hey, we're in state five now, what comes back to me is not a state but an observation. OK, I went left, I'm sensing a wall. I don't know what state I'm in, but I'm sensing a wall. And so now it's up to us to track that information about the uncertainty over the states. POMDPs are actually just MDPs over belief states, which are distributions over states. But in practice for us, what that's going to mean is, you can start to think about the state not being what's actually happening in the world, but being what you know. And when I take an action and an observation comes in, I know a little more and I can do computation over the state given what I've observed. We're not going to get fully into this today. This is just our first slice of this. And the reason I'm doing this first slice is to conclude this lecture with the case I promised at the beginning, which is how we think about the Ghostbusters game. Where my actions include bust here, bust here, bust here, bust here. Or also, take a sensing action. Right? A sensing action doesn't end the game. What it does is it gathers an observation and puts me in a new state, where again, I can choose to sense more. Or I can choose to take a busting action. So in static Ghostbusters, we have a belief state, which is a distribution over actual states, which is determined by the evidence to date. What's my belief state? It's my distribution over ghost location. That is a function of all the sensor readings. So I can either talk about, oh, my state is this vector of probabilities. I can do that. Or I can say my state is these six sensor readings, from which I can compute the distribution over ghost locations if I want. OK. So I have a belief state which is determined by my evidence to date. I can think of there being a tree over evidence sets. I have a certain amount of evidence. Initially none. I take some actions. Some of those actions are sensing actions. When I take a sensing action, what comes back is more evidence. Right? In sensing, I pick which variable is going to be revealed. And then what comes back is the evidence, the actual value of that variable. Now, when I compute expectimax in these trees, I'm going to need to do Bayes net style computation to say, OK, given these seven sensors I've already done, if I go to the upper right corner and reveal that, what's going to happen? And the answer is red, yellow, green. But what are the probabilities given what I already know? I'm going to have to do computation in that Bayes net that we showed. OK. So how are we going to solve POMDPs? There's a whole general issue of solving POMDPs. They're hard. POMDPs are really hard. Remember I always say this is AI, everything here is NP hard? POMDPs aren't. They're PSPACE-hard. It's a lot worse, it's a lot worse.
Instead of corresponding to, like, logical formulas, this is formulas with quantifiers. OK? It's a lot worse. However, one way to solve POMDPs is to use expectimax over these evidence-specified belief states. So I can think, all right, I'm in a state where I know nothing. And I can do the following things-- I can bust, great. Or I can sense here, sense here, sense here. If I sense here, what's going to happen? Red, yellow, green. With what probability? I can compute it with Bayes nets. OK, let's say it's green. I'll be in this state. I have all the evidence I had before, plus seven comma three is green. I can bust or I can sense more. And I could build out this whole tree. This is exponential. Eventually, I'm going to have to truncate this thing. But I can solve that running lots and lots of Bayes net inference over this tree. And that will let me solve this POMDP, or at least approximate a solution to it. So let's do that. Let's go to here. All right. What are you seeing now? It's the exact same Ghostbusters. We are in the belief state shown here. Flat distribution. You can think of it as a vector. Our state is a big vector of real numbers. Or you can think of it as we are in the empty set. The evidence we have is the empty set of evidence. And so what's going to happen is, I'm not going to click a square anymore, I'm just going to hit Go. And a computation is going to happen. It's going to be a value of information computation, and it's also going to be a search in this POMDP tree. It's the same thing. And what's going to happen is the system is going to compute which variable it should reveal. Or alternately, I can just bust. And it's going to compute what has the highest value of information versus the actual utilities. Let's see what it does. OK, so it decided to sense right there. And it got some value. And I'm in a new state now. I'm in the state where I know something. I know that one comma one is yellow, which also corresponds to this vector of probabilities shown here. I'm going to click again and it's going to either decide to bust, or it's going to decide to sense again. What is that computation? That computation is, well, here's my MEU if I bust, here's my MEU if I sense and then bust. That's a value of information computation. The difference between acting now, or revealing a variable and then acting. OK, that's value of information. Let's see what it does. OK, it decided to sense. Let's go again. It's going to sense. Let's go again. I think it's going to sense again. It's going to sense. Now we get to this point where I could bust or I could take the cost of another sensor reading and sense again. What's best? Well, that's a VPI computation. It's going to do it. And it senses. It senses again. Sense. It's very cheap to sense here, so it's being extra cautious here. All right. It decided to bust and it hit. So what you can see here is that this action selection is not just a computation of where the ghost is, but how much uncertainty do I have? How much will that uncertainty on average be reduced? Notice it actually went up sometimes when I got confusing evidence. On average, how much will it be reduced by a sensing action? How much more valuable is that in terms of ultimate utility? What should I do? This here is a POMDP agent running expectimax over belief states in order to do Ghostbusters. Starting next time, we're going to look at an even more fancy version of Ghostbusters where the ghosts actually move around.
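Here is a hedged sketch of the one-step decision that demo keeps making-- compare the MEU of busting now against the expected MEU after one more reading, minus the sensing cost. The helpers `posterior` and `sense_cost` are hypothetical stand-ins for the Bayes net inference and cost model behind the demo:

```python
# Assumes utility 1 for busting the ghost's true square and 0 otherwise,
# so the MEU of busting now is just the largest belief probability.

def best_bust_value(belief):
    return max(belief.values())

def sense_gain(belief, readings, p_reading, posterior, sense_cost):
    # A VPI computation: expected MEU after the reading is revealed,
    # minus the cost of sensing, minus the MEU of acting right now.
    expected = sum(p_reading[r] * best_bust_value(posterior(belief, r))
                   for r in readings)
    return expected - sense_cost - best_bust_value(belief)

# Policy sketch: sense whenever sense_gain(...) > 0; otherwise bust the
# most likely square.
```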
That moving-ghost version is going to be the basis of your next project. OK? So we'll talk about tracking invisible ghosts instead of just sensing them and trying to catch them. We'll do that next time. [INTERPOSING VOICES] |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180904_Constraint_Satisfaction_Problems_CSPs_Part_12.txt | PROFESSOR: All right. OK. Giant slides-- check. Students-- check. OK. How is everybody? STUDENT: Good. PROFESSOR: Good. It's the post-lunch lecture slot. OK. Today we're going to talk about constraint satisfaction problems or CSPs, as we lovingly call them for short. So far, we've been talking about search, and we can ask, what's search for? Search is for a very particular class of problems. When you have a problem that can be formulated in that way, search is an algorithm you can use to solve your problem. But in order to apply search, it's going to make certain assumptions about the world. It's going to assume that there's just one agent. That's you. You're making your plan, but there's no other agents whose actions can be uncertain or adversarial. It means the world is deterministic. You have to be able to assume that the plan you've come up with-- your master plan-- when executed, will go exactly the way it worked in your model. That means your model has to be correct and the world has to be deterministic. It also assumes that you're in a fully observed state. So you know the way the world is now, and so you can predict in simulation how it will evolve when you model it, and then you have your plan, and you can go execute it. It also assumes that your state space is discrete. So these are all simplifications, and during this semester, we're going to unpack and relax some of these, and show how we can come up with new algorithms when the world isn't quite so single-agent, deterministic, fully observed, and discrete as we may like. Search in general, so far, has been for a class of search problems that are called planning problems. And in a planning problem, what you're interested in is the sequence of actions. When you find your goal state, what you really want to know is what is that master plan that gets me from the start state to the goal? And so the path to the goal is the important thing, and this gets us into discussions of things like paths having costs and depths. You either want a cheap path or maybe a short path, and the way that you inject information about a particular search problem, as opposed to just a black box uninformed search procedure, is through heuristics, which sort of give a little bit of a certain kind of a hint to the problem. It says, hey, the goals may be over this way. You're headed in the wrong direction. And that's a way to take this fully generic notion of a search problem and inject a little bit of bias that helps you solve your problem better because you know something about it, maybe because it's embedded in space or something like that. And here in the illustration, you can imagine this ninja robot who's trying to steal the gem. And what the ninja robot really wants to do is figure out what is the complex, precise sequence of actions which will get me through the maze of security into that gem. But there's another class of search problems, as well, and these are the identification problems. These problems usually take the form of an assignment of values to variables. And in identification problems, you're not really so concerned with how long the solution is. In general, they're all the same length because all the variables get an assignment. And you're not concerned with the path. You're not concerned with how you came up with this assignment. You just want to know the assignment itself.
And CSPs are a special class of identification problems. And because they're a special class of identification problems, we have new algorithms that let us exploit the fact that instead of a black box search procedure, where all you really get to know is a successor function and a goal test and maybe some costs, now, suddenly we're going to have a little bit of visibility into the CSP, and that little bit of visibility is going to let us tailor algorithms that are more efficient because we can make more assumptions about the problem. So in this case, you might have the detective robot who comes to the scene and needs to figure out what's going on, but it's not so important which rug he looks under first. He just wants to know what is the explanation for everything I'm seeing, and that might be a case of an identification problem. It's also very common in identification problems that paths are all at the same depth. And that means it's going to totally change our outlook on things like breadth first versus depth first search. So let's talk about CSPs. What we're going to do today is we're going to define them. We're going to see some examples that sort of stretch different sides of CSPs, and then we're going to talk about algorithms for solving them. And just like in search, there are different algorithms for solving them that are going to have trade-offs between various kinds of computation and efficiency. One of the running problems we're going to have is map coloring. So how many people have seen map coloring in some class somewhere? OK. It's pretty intuitive. You're going to have a map. There's no colors. You want to put colors on the countries, and it's really, really bad if two adjacent countries have the same color, right? It's like crossing the streams. You don't do it. And so we're going to have constraints that say, don't do that, and we're going to work through that as an example. It's certainly not the only CSP, so don't go thinking that all the CSPs in the world involve colors and inequality constraints. OK. So let's talk about how CSPs are different from a standard search problem. Remember in a search problem, as I said, a state is a black box. It's some data structure. Who knows what it is? You'll see this in your projects. You can't look into the state-- oh, there's a hash table and I can look up the values. You don't get any of that. The only thing you can do on a state is what? You can call get successor and you can call is goal, and that's it. That's your whole API, and whatever is behind that search abstraction-- no idea. And that abstraction is powerful, but it's limiting because the successor function can be anything and the goal test can be any function over states. So you can think of this like a judge. These search states come along the conveyor belt and the judge is like, not a goal, not a goal, not a goal, goal, right? And that's all you have. You can present a complete plan to the judge, and you either get told, yup, that's a goal, or no, that's not. In a CSP, we've got some structure, and the structure is the hook that lets us have better algorithms. And in that structure, we assume it's not just a black box in our state. In a CSP, you have a set of variables, often called x sub i. And each variable has values that come from a domain. Sometimes the domain varies by i, and that's a domain, d. And so each of these variables takes on a value, and you can talk about an assignment of values in the domain to the variables. That's your state. 
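For concreteness, here is a minimal sketch of that structure in code, using the map-coloring example we're about to see (the dict names are illustrative, not course project code):

```python
# Variables, domains, and a state that is a partial assignment.
variables = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
domains = {v: ["red", "green", "blue"] for v in variables}

assignment = {"WA": "red", "NT": "green"}   # everything else still unassigned
```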
So successor functions now are things like assign a new variable, and that's it, because you know what your state looks like. You also know what your goal test is. Your goal test is a set of constraints which specify which combinations of values and variables are legal. OK, so instead of a judge just telling you, legal, illegal, illegal, legal-- instead, you've got a list of rules. You've got the laws, and you can look at these and you can see, OK, I'm good. I'm good. Oh, I'm breaking this law right now. OK, I violated a constraint here. And because your goal function has sort of been decomposed into multiple rules, you can do things like detect errors early. This is also a simple example-- it's our first, but not our last, example of a formal representation language. So like in the real world, there are CSP solvers. They're super complicated. They do all the things they do inside. And when you write a CSP, you write, OK, well, here are my variables. Here are their values. Here are the constraints between these. Here's an explicit constraint. Here's an implicit constraint. And in doing that, you're writing down a model of the world in CSP speak. It's not the only way to write a model. It's not the only one we'll see here, but it's a representation language, and then the problem solver on the other side has special purpose algorithms to find you a solution. As we already talked about, because we can peek inside the state, we can have better algorithms than standard search. OK. So we're going to do some examples. Hopefully, you all know your geography of Australia, because we're going to talk about it a lot. This is all you need to know. Australia is divided into-- I believe they're called states, and we're going to color them colors. So our first example is map coloring, as I said. So we're going to see Australia a lot. It's also a running example in the book. And in this case, we want to color the map and we want to have adjacent countries not have the same color, so we need to formalize this as a CSP. So step 1-- we need variables. Here, the variables will be the different states. So WA is the variable for Western Australia. That was unexpected. That was also unexpected. All right. In the worst case, you can all come huddle around my laptop. OK. So Western Australia-- that's a variable. The Northern Territory in the north. South Australia in the south. Queensland, New South Wales, and Victoria on the east, and Tasmania is a sort of disconnected island. You can color it whatever you want because it's disconnected from the rest. That's actually going to be really important later. So those are the variables. There are going to be domains. Now I'm paranoid. I'm looking back. If it goes out, catch my attention. Otherwise, I'll just-- I can see it. OK, so what are the domains? Well, we need values to assign, and the domains here might be red, green, and blue in this map coloring here. Now of course, if that was it-- there were just variables and domains-- a solution would be an assignment of variables, giving each variable a value in the domain. So we just color it all red and we say we're done. So we need some constraints. We need something that tells us coloring everything red is not OK. And so the constraints here, in English, say adjacent regions must have different colors. How are we going to write that down? Well, we could write them down as what are called implicit constraints. Like we might write WA not equal to NT.
Effectively, an implicit constraint is a little snippet of code that you can execute and will tell you whether or not everything's OK or whether or not it's broken. So here's a little snippet of code that looks up the value of WA, looks up the value of NT, and if they're not the same, it says thumbs up OK. So an implicit constraint is a piece of code you execute to see if you violated the constraint. But of course, any quality like this-- we could write it out explicitly. An explicit constraint is something like, the pair, WA, NT has the following elements as legal elements of their joint cross-domain. So WA, comma, NT can be one of the following pairs-- red, green, red, blue, blue, green, whatever. So on and so on. But not red, red. OK. Implicit versus explicit constraints. It's going to be an important distinction that's going to keep coming up. A solution is an assignment where every variable takes on a value in the domain that satisfies all the constraints. So here's one. This is the one that's in the picture, as well. Western Australia's red, Northern Territories are green, Queensland's red, New South Wales is green, and so on. There are other solutions. This isn't the only one. This is a solution. In general in CSPs, we're asking the question, find me a solution. There may be a lot. In general, if there's one, there's going to be a lot. There's often exponentially many. Here's another example. N-Queens. How many of you know the N-Queens problem? How many of you would say you're frequent N-Queens players? OK. So what's N-Queens? N-Queens is a puzzle. You have a chess board of some usually square size, and so this is four by four. And on that square chessboard, you're going to place Queens. And they follow the normal queen attack or threat configurations. So for example, this one here attacks to here, to here, and on these diagonals. And so what you want in N-Queens is to be able to place N of these Queens on the board, such that none of them are threatening. OK, so we know the rules that make Queens fight, and we would like to have a peaceful kingdom. So let's formulate this as a CSP. You need a CSP, you need variables. So here's our first formulation. Our first formulation-- well, what are the variables? Let's make the variables the squares. So how many variables will we have? We'll have kind of four by four of them. You'll have the two dimensional grid of variables. So we'll have a bunch of x, i, js. So in N, there are now quadratically many variables. And what is on each square? Well, it's either Queened or un-Queened, right? It's either full or empty. So we could say the domains are maybe 0 or 1, or we could write it as empty, comma, queen, or something like that. So we've got variables and we've got domains. And if this were the end of the CSP, we'd just leave it blank or whatever, or we'd put Queens wherever we want. It would all be fine. So we need some constraints. So what are the constraints? The constraints need to be the things that say certain configurations are OK and certain ones aren't. So we could write them down. So we could say, for example, for any pair-- so here is i, j, and k. So x, i, j, and x, i, k. So that's like this one and this one because they share a coordinate. We could say, explicitly, the legal elements of their cross-domain that are allowed are 0, 0, 0, 1, or 1, 0, but not 1, 1 because these two variables-- if you're x, i, j and x, i, k, and you're both 1, that's a threat. 
And so somewhere in here is buried the information that you're not allowed to threaten vertically. And you can have all the rest-- the thing that says you can't threaten vertically, you can't threaten horizontally, and you can't threaten along a diagonal. And you could write those out, and there would be a bunch of these things that say, for these two squares, here's the joint constraint on those two variables. This is almost right. So if I wrote out all these constraints and these variables, there's a really easy solution. What's the easy solution under all these? Yeah, set everything to 0. It doesn't violate any of these constraints, and there's nothing here that says you have to have anything about N-Queens in your N-Queens problem. So we would need one more constraint, and we could write that out in various ways. Here is an implicit way of writing that out-- that if you sum up all of the x, i, js, you'll get N, right? In order to figure out whether or not you're OK, you're going to have to run some code that loops-- maybe two for loops and a sum-- something like that. This one is backed by code. In general, when we do the analysis, we'll always think about their explicit versions. OK. So that's N-Queens. This isn't a very good formulation of N-Queens. Just like in search problems, you can sometimes formulate things a different way that builds more of the solution structure into the problem-- more constraint-- and therefore makes it easier to solve. Often, this gives you a smaller space that's easier to solve, and this is true for both search and for CSPs. So what else could we do? How else could we set up our variables here? We could use the information that we know that there's going to be one Queen in each, let's say, row. If we did that, we could say the variables-- each row has a q, sub k. And now, the domains are different because now, the variables represent-- in row 3, where is the queen? We know there has to be one, and so we don't have to worry about that sum-to-N constraint. And now the domains just say, which column is that queen in? Now again, we have to write down the constraints, and here, the constraints are actually a little bit more complicated. So we could write them down implicitly, saying, for all i and j-- that means for all pairs of Queens-- they don't threaten each other. Or we could just start writing this down, like Queen 1 and Queen 2 together can't take on any values other than 1, 3. That's OK. 1, 4. That's OK. But like, 1, 2 would not be OK because then, they would be diagonally threatening. OK? That's N-Queens. Any questions? Because we have variables and there are constraints between them, we have another tool that's going to be at the core of a lot of the algorithms that we're going to develop for CSPs, and actually, this is all going to show up again later when we talk about Bayes nets, sort of in an isomorphic way. And that is, we can talk about the constraint graph. This is the graph that is formed by having a node for each variable, and then some representation for constraints. If all the constraints are binary-- that is, they connect two variables-- we can have arcs represent constraints. There's another notation that we'll see in a second. So here, this is Australia decomposed into a node for each of the variables and an arc for each of the constraints. So because Western Australia and South Australia share a border, there's an arc here, which tells you right here, there is a constraint between them.
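As a sketch of the two styles of constraint described above-- an implicit constraint is a snippet of code you run, an explicit one enumerates the legal pairs-- both of these say that WA and NT must differ:

```python
colors = ["red", "green", "blue"]

# Implicit: a predicate over the two variables' values.
implicit_wa_nt = lambda wa, nt: wa != nt

# Explicit: the allowed elements of the joint cross-domain.
explicit_wa_nt = {(a, b) for a in colors for b in colors if a != b}

assert implicit_wa_nt("red", "green")
assert ("red", "red") not in explicit_wa_nt
```

Either form carries the same information; the constraint graph itself records only that some constraint links WA and NT, whichever form it takes.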
But you'd have to, like, peek inside to see that that constraint is actually an inequality constraint. The constraint graph doesn't tell you what the constraints are, but it tells you where they are, and that gives you powerful pieces of information to let you decompose and use different algorithms that are appropriate for different shapes and topologies of these graphs. OK. So a binary CSP is one in which each constraint relates, at most, two variables. So often, you have what are called unary constraints, which are really just domain reductions. This state has to be green, for whatever reason. A constraint graph: nodes are variables, arcs show constraints. And the structure will speed up the search. So let's take a look and see what this looks like for N-Queens. All right. So what's N-Queens look like? The way we've been drawing it, let's do five Queens. All right. And we could call these a, b, c, d, and e, and then we could call the columns 1, 2, 3, 4, 5, and then we could place a queen right here. And that would be the assignment that a equals-- it's gone. a equals 5. a equals 2. All right. Let's look at this as a graph. This is going to be in an applet that we'll see a couple of times. It's a nice applet by the folks at AI Space. And in here, each variable is a circle, and you can see something other than just the names of the variables here. You can see their domains, and right now, their domains are all 1 through 5 because we haven't picked anything yet for our CSP. You can also see that between two things, like between a and b, there is this constraint that is helpfully called Queens 1. What the heck is Queens 1? It's a constraint between a and b. So what is it going to say under the hood there? It's going to be things like, 1, 1 is not OK, because then they'd be vertically threatening. 1, 2 is not OK, because then they'd be diagonally threatening. 1, 3 is OK, and that's what lives inside Queens 1. So from this graph here, you can see the structure of which nodes are connected to each other by constraints. You just can't see what the constraints are. We'll come back and see this again later when we talk about constraint propagation. Here's another example. Maybe some of you have done these cryptarithmetic problems. The idea is you're supposed to make each letter be a digit, such that the math works out, and in this case, it'll work out both in words and in numbers. And so the variables would be the ts, the ws, the os-- all of those letters. But there's also some other variables you need. OK, in this case, you need three extra variables. Does anybody know what those variables represent? Because you're going to want to write something like, o plus o equals r, but that's not quite right. There's something else going on here. Those are the carry bits. So we need to introduce some variables here for the carry bits. The letters' domains are going to be the digits, and the carry bits will have limited domains. What about the constraints? Well, one constraint that goes with these problems is that they'll all be different. Otherwise, you'd just assign everything to 0, and it would be fine. OK, so an all-diff constraint is an interesting constraint because it's a constraint that touches a whole bunch of variables. In this case, that's not so bad because you can actually break it up into a bunch of pairwise all-different constraints. And you would have other constraints, like o plus o equals r, plus 10 times the carry bit, whatever it is-- 0 or 1.
And you could write all of these down. If you drew out the constraint graph, though, you'd have a problem, because what about this all-different constraint? How do you draw an arc that touches more than two things? And the answer is, in general, that people draw squares to represent the constraints, and then a bunch of lines going to all of the participating variables. OK, and you can see this is sort of a special case of the other one when you just put a square in the middle of every arc. OK. How many people have played Sudoku? OK. Sudoku is fun. The variables are the open squares. The domains are 1 through 9. The constraints here say that for each column, they all have to be different. For each row, all the digits have to be different. And also, for each little region, they all have to be different. There's actually one more kind of constraint in a Sudoku problem, which is, typically, there are a lot of squares that are already filled in with values, and you can think of those as unary constraints. In your solution, this square right here must be the number 1. And you can imagine a unary constraint where this one can be 3 or 7. As far as I know, that's not a thing you do in Sudoku, but you could. You could call it CS 188 Sudoku with unary constraints. Interestingly, you may know that some Sudoku problems are really easy. You pick something, and then there's another one that's pinned down, and then you pick another one that's pinned down, and another one, and then you're solved. In other cases, you scratch your head and you try things and you backtrack and you try a couple of things and you detect a problem and you backtrack. And actually, that difficulty in computation that you have doing harder Sudoku problems compared to easier ones has a really deep connection to the algorithms we're going to learn for CSPs and their complexity. OK, one of our last examples here-- the Waltz algorithm. This is not how people do computer vision today, but it's an early computer vision algorithm for interpreting line drawings. So if I look at these three dimensional-ish drawings here, you might notice things like, well, this sort of corner here-- these three lines-- they're like an outie, right? And these ones here, at least in the easy interpretation-- that's an innie. And so you look at these things and you kind of can't tell. Your brain does it, but what's sticking out? What's in? What's occluding what? And these kinds of questions-- that is an interpretation of these line drawings, and you can pose this as a CSP, where you say, well, each intersection-- so here and here and here and here and here and here-- each intersection is going to be a variable. And the values are sort of going to be things like, it's an outie versus it's an innie. And the constraints are going to be things like, if two things are connected, you can't have one that's kind of convex and the other is concave. So like if you're connected to something, then you're both sort of outies in the appropriate way or innies. And so the solutions correspond to physically realizable 3D interpretations of these drawings, and this is an early example of a case where computation gives rise to intelligence or reasoning in an AI domain that was kind of very different from search. Any questions on that before we get into varieties and solutions? OK. All right. There are a bunch of kinds of CSPs in the world-- those that we'll talk about and those we won't.
What we're going to talk about most in this class is CSPs that have discrete variables. In general, we're going to talk about ones with finite domains, and that means that if each domain has d values in that domain, then the complete assignments are something like order d to the n. That's already bad news, right? That's already exponential in the number of variables in the CSP. This includes things like Boolean CSPs, including satisfiability, where each of the variables has two things-- true and false-- and the constraints are things like conjunctions of clauses and things like that. And we already know that this is NP hard. This is NP complete, which you should have gotten from other classes. But there are other CSPs. For example, cases where any integer is a valid member of the domain, or string-valued variables, which are even harder. When things have linear constraints-- things like job scheduling, where this job has to end before this job starts and things like that-- these things are solvable, though they're still very hard to solve. Once you have non-linear constraints, things can sort of get undecidable very quickly. And of course, there's also continuous variables, like we're going to have a telescope, and it's going to have non-integer, sort of abstract times. It's going to have real-valued times, or any number of cases, and in the case of linear constraints-- these are linear programs, and you saw, I think, a little bit in CS 70 and certainly in 170, how to solve these. There are good polynomial time approaches to these, but in general, continuous variables are also very hard. Varieties of constraints. We talked about unary constraints. Those are-- you should think of them as restrictions on a domain. That's important because we're going to have algorithms that also restrict domains, and you're going to need to know how those relate to unary constraints. Binary constraints-- that doesn't mean the variables are binary; binary variables are ones with binary domains. Binary constraints involve two variables at a time, like SA and WA are not equal. And higher order constraints are like what we saw in cryptarithmetic, where you can have three or more. There's also preferences, or what people call soft constraints, like color this map, but I like red. Use red if you can. Red is better than green. And these are often represented by having variable assignments that have costs-- either costs on the domains or costs sort of on soft constraints. This gives a kind of constrained optimization problem that we're going to say nothing about here, but it's going to be very relevant when we get to Bayes nets, because Bayes nets are a form of reasoning over these kinds of graph structures with real-valued costs. OK. But not today. CSPs are all over in the real world. They're actually really common technology. You probably run into CSPs all the time. Probably the most common one is, hey, I have a bunch of friends. When can we all meet? Or got to meet for some work related thing. Here's when I'm available. I'm available here, and try and find intersections of those things. There's also timetabling, like that schedule that campus has to do to figure out which classes go where so that we can fit all the students with kind of minimum overflow. Versions of this are CSPs, and they're really all over the place. So this is the kind of technology that crops up all over the place in the real world.
In the real world, of course, a lot of these things have real valued variables, and not just discrete ones. Let's solve some CSPs. All right. So we're going to be CSP detectives here. So we'll start with the standard search formulation for CSPs. We'll talk about another formulation next lecture. In the standard search formulation for CSPs, states are defined by the values that are assigned so far. So you can call these partial assignments. So the initial state at the root of the search tree is going to be an empty assignment where no variables have any values, and the successor function is going to assign a value to an unassigned variable. The goal test is going to not just be is it a complete full assignment, but it also has to satisfy all of the constraints. OK, this would be the simplest way of mapping CSPs into search. Your successor function assigns another variable, and the goal test is all constraints are satisfied and I've assigned all the variables. So we'll start with the dumbest thing that could possibly work. We will see that it doesn't work, and we'll improve it. So let's think about this graph here. This is the map coloring problem for the Australia problem. And we could think, what's breadth-first search going to do? Well, there's going to be a search tree and there's going to be a root, right? Remember breadth-first search from before? What's going to be at the root is the empty assignment. And that's got successors, and what do the successors look like? They look like things like Western Australia equals red. And that's going to have some successors, like Western Australia equals red and South Australia equals green. And that's going to have successors, which will have successors. And you've got this big exponential tree that encodes all of the combinatorially many ways of assigning these values to these variables. What's breadth-first search going to do? It's going to take the one off the top. It's going to say, are you a goal? Nope. So it's going to stripe through the second level of the tree. Is it going to find any solutions? No, because on this level, there's only one value assigned to each variable. So it'll go to the third level of the tree, accumulating an enormous queue as it does so. Stripe through the third level. Any solutions? Nope. Nope. Nope. Nope. Nope. OK. And then where are the solutions? They're at the bottom. They're all at the bottom. This is like the nightmare scenario for breadth-first search. OK, breadth-first search stays up at night worrying about what's going to happen if all the solutions are at the bottom, because that means it has to explore the entire tree. This is the worst possible case because you have to do everything at the other levels before you can even get to the bottom. So before, we sort of made fun of depth-first search, right? We love you, depth-first search, but you find weird solutions. Suddenly, depth-first search maybe doesn't seem so crazy because our search tree looks like this, except it's sort of exponentially big on the bottom. All the solutions are down here, and depth-first search is at least going to make a decent effort at getting to where the solutions live, whereas breadth-first search is going to look everywhere where they aren't first. So all of the methods we're going to talk about are actually based on depth-first search, even though we spent all of last week making fun of depth-first search. So let's take a look. So here is a search graph. 
This is for a map coloring problem-- three colors-- adjacent things, which are connected by lines here-- can't be the same. And what we're going to do is we're going to run naive search. This is actually even already a bit of an improvement, but this is basically just the naive thing. We're at the root, OK. So we're going to do depth-first search. We're going to add an assignment and recurse. So we're going to assign the first thing to blue. Great. We're going to go to the next level in the tree because depth-first search follows the deepest thing on the queue. So we're going to do another assignment. What are we going to assign? It's going to be this circle here to the right of the blue one. What are we going to assign to it? Remember, it's blue, green, and red. Blue! And now the next one. Blue. Blue. Blue. Blue. Blue. Blue. Blue. Blue. We have no successors and we're also not a goal, so we'll pull something off the queue. And we'll sort of keep this up for a while. In the interest of doing anything else in the lecture, I won't actually run this whole thing, but this is what depth-first search would do with a naive formulation. So this is kind of crazy, right? Why would you solve a CSP like this, where you keep doing assignments, and then only at the end discover that it doesn't work out? So we should be able to do better than this. How are we going to do better? Any ideas? Ideas. Yeah, so we should be checking these constraints as we go. The answer was, here, maybe the successor function could have the constraints built in, or at least we could look at the constraints as we go. Basically, we want to apply that goal test incrementally as we go. And the simplest version of this is what's called backtracking search. It's not a very helpful name. It's just a classic name for this. It's the basic, uninformed algorithm for CSPs. First idea-- we do one variable at a time. Strictly speaking, the naive thing could assign any variable at any time, and it would still be a valid successor. But since these variable assignments are commutative, it doesn't matter what order you got to them. Remember, the path doesn't matter here. We're just going to fix an ordering, then we consider one variable at a time. And we're going to check constraints as we go. So as soon as we violate a constraint, we're done. We don't have to keep going and checking whether somehow, deeper in this tree, this is going to look better. Remember, we couldn't do that with search. This is only because in CSPs, once you violate a constraint, there's nothing you can do to un-violate it later. That's important. It's not true for search problems in general. It's true here, so we can stop early, as soon as we have violated a constraint. Think of it as an incremental goal test. Keep looking. Have I messed up yet? Have I messed up yet? Depth-first search with these two improvements is called backtracking search. And in general, just a sort of a sanity check here, you can solve N-Queen problems for about N equals 25, and I'll track this number as we go up. Of course, you can always throw compute and get a little bit further, but these are exponential situations. All right. All right. Here's an example. This is the root, and so we have three successors, which take the first variable and the ordering here, WA. We assign it in three different ways. We'll expand one of them. Now we'll have two things assigned. And then right here, we're going to look, and as soon as we see that we've made a mistake, we're going to cross things off. 
So the third successor here that has Western Australia and the Northern Territories both assigned to red-- it's crossed off. So the whole tree below it, which is all doomed to failure, we don't even explore, and that's a big improvement. We can keep going. Here's the pseudo-code for this, which I won't go through in sort of gory detail, except to note a couple of things. One-- it's often implemented recursively. This is a very bad idea with search because we would grow this giant queue that we wanted detailed control over. But here, we pretty much just want to recurse and backtrack. So how does it work? If the assignment is complete, you're done because if you're going to break a constraint, we would have seen it along the way. If it's not complete, then we have to select a variable that isn't assigned, and that's going to be a choice point. And then we have to go through the values in some order. That's going to be a choice point. For now, let's just assume there's a fixed order for those things. Then we're going to check a constraint. So what is this? This is depth-first search plus variable ordering plus fail-on-violation. Let's take a look and see how that does in the applet. All right. So let's reset this. Let's go to backtracking. None of the other fancy things. First thing we'll get assigned-- blue. The next thing we'll get assigned-- red, because there is no successor with blue because that successor violates a constraint. So at least we don't make a mistake on our second step, which we will not discover-- we will not sort of fix until we do an exponential amount of work. So that's good. All right. What's next? Well, blue. And then what? Red. Now green because blue would be a violation. And if we do this, it doesn't mean we don't backtrack. So green didn't actually work out because-- let's go back-- because even though this green was OK, when you get to this-- can you see my cursor-- when you get to this one here, suddenly there's nothing left. There's nothing legal, and so we have to go back to the queue. But we can sort of keep going. I'll just let it play. Faster. And so there's a little bit of backtracking, but now we can get through the small graph because at least we're only doing things that are legal so far, and there's a limited amount of backtracking when we make something whose problem is a little further downstream. OK. All right. One nice thing about CSP speedups is, often, they are general purpose ideas that aren't like-- they're not like A star heuristics that are custom to your search problem. They're general purpose, and they often give huge gains in speed across a wide range of problems. There's basically three classes of ideas, and we'll have time to do one to two of them today. The first one is ordering. Which variable should I actually work on next, and what order should I try the values? Maybe there are some variables that are better to work on now. Maybe there are some values that are more likely to work out. So that ordering of how you structure the choice of variable and value exploration is a big deal. The next thing is filtering. Is there a way to detect inevitable failure early, as opposed to just sort of waiting until you hit a dead end in your search? And the last thing is-- the last class of approaches for improving CSPs is structure based approaches. Can we look at the graph, detect something efficient about its structure, and use some algorithm that exploits that structure?
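[Editor's note: before those speedups, here is a minimal recursive sketch of the backtracking pseudo-code just described. The data layout, domains as a dict of value lists, neighbors as adjacency sets, and an assumed predicate ok(x, xv, y, yv) for binary constraints, is an illustrative choice, not the course's project API.]

def backtracking_search(domains, neighbors, ok):
    # domains: {var: list of values}; neighbors: {var: set of vars};
    # ok(x, xv, y, yv): assumed binary-constraint predicate.
    return backtrack({}, domains, neighbors, ok)

def backtrack(assignment, domains, neighbors, ok):
    if len(assignment) == len(domains):
        return assignment  # complete, and constraints were checked as we went
    # Choice point 1: which variable next (fixed ordering for now).
    var = next(v for v in domains if v not in assignment)
    # Choice point 2: what order to try the values (fixed for now).
    for value in domains[var]:
        # Incremental goal test: does this clash with any assigned neighbor?
        if all(ok(var, value, n, assignment[n])
               for n in neighbors[var] if n in assignment):
            assignment[var] = value
            result = backtrack(assignment, domains, neighbors, ok)
            if result is not None:
                return result
            del assignment[var]  # undo, then backtrack
    return None  # every value violated something; fail back up the tree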
Those structure-based approaches are going to work for some kinds of graphs and not others, and we'll talk about those algorithms probably on Thursday. So for today, let's talk about filtering. So what's filtering about? Filtering is about ruling out candidates. Unlike assignment, where you say, all right, I'm going to try this variable, filtering is about ruling out candidates for the variables you haven't yet considered in your assignment, OK. So the simplest kind of filtering is something called forward checking. I'll illustrate what it is, and then we'll talk about the general idea behind this and how it extends to other filtering algorithms. So in filtering, what we do is we keep track of domains for the unassigned variables. This is different. Before, it was just like there are variables, and as we go, we assign them. There's the ones that have a value and the ones that don't. Now there's a new idea here. Even before you assign a variable a value, you have a sort of domain cloud sitting there, which says, as far as I know, these are the legal candidates. And so here, what we can see is all of the Australia nodes-- none of them assigned to a color, but all of them with sort of a cloud, saying all three values, as far as I know, are still in play. We're going to keep track of domains for these unassigned variables, and under certain circumstances, which will vary by the filtering algorithm, we're going to cross things off. It doesn't mean assigning the variable. It just means ruling things out so we can detect future failures. In what's called forward checking, we cross off values that would violate a constraint if they were added to the existing assignment. So let's look at how this would work. So we look at the initial assignment and we say, everything's OK so far. We have to assign something, and so maybe we assign red to the WA node, Western Australia. So what do things look like now? Let's-- before I do it, I'll just do it out here. So let's assign red here. So WA has gone from unassigned to assigned. So it's red now. This would be it for backtracking search. We'd assign it and we'd look for successors. But what forward checking does is forward checking says, all right, well, I've made an assignment to WA. So I'm going to go and I'm going to visit all of the other variables and check to see whether they have any values in their domains, which would trigger a violation-- a constraint violation when combined with my current assignment. And so I would go and I would say, all right, Queensland. All right. All of those still work. How about the Northern Territories? I would look at that and I would say, well, green and blue are still fine, but red is not going to work anymore because it would violate the inequality constraint against WA. So I would go and I would-- see if this will work. Magic. OK. It's gone. So we would go through and we would visit all of the other variables-- not assign them, but just cross things off their domain if it triggers a constraint violation. So the neighbors to WA would lose red, and then we could keep going. We could assign green over here to Queensland, which would assign green to q, and then everything that neighbors q would lose its green value. And so what you can see is even in the variables that I haven't assigned yet, I'm starting to see variables having their domains shrink. And if a domain ever shrinks to 0, that means there's not going to be any possible way to do an assignment there, and that means we might as well stop now.
So whenever you do filtering and a domain goes empty, you backtrack. All right. So, sorry. Keep going. As soon as we assign blue to V, even though we haven't tried to assign anything to South Australia, it's lost all of its values from its domain, and we know that there's no way to solve this. Now, if you think, looking really closely, you might already realize that we are in trouble. Let me get a color. You might realize we were already in trouble up here, right, because we knew NT was blue and we also knew SA was going to be blue. And if you look closely, they're next to each other. But this is not forward checking's problem. Forward checking does not think this hard. All it does is check for immediate violations between unassigned variables and the assignment to date. Anything further is thinking too hard for forward checking. OK. Any questions on that? Let's do a demo on that. We'll do one more idea, and then we'll take a break. Let's do a demo on that one. So let's go to forward checking. So even before I assign anything, the domains have appeared inside the variables, and they all have complete domains because nothing's happened yet. But as soon as I assign to the first variable, which, in this ordering, is the bottom left. And the order it tries here is blue first, then red, then green. If I assign blue, think about what's going to happen. We're going to look at all the other variables that are connected by a constraint, which are just its neighbors, and we're going to cross blue off. So there it is. We've propagated the constraint out, but not very far. And so when we go to the next one, it's either going to be red or green because blue has already been crossed off. We're going to assign red, and a bunch of its neighbors lose a value. Blue. Now look at this. Do you have a bad feeling about this? I have a bad feeling about this. Do you know who doesn't have a bad feeling about this? Forward checking. Forward checking is going, all right, so we assign here, we assign here. Now we detect the problem. We don't detect the problem until we actually trigger the constraint check between those two variables with an assignment conflicting with an adjacent unassigned value. So of course, we're going to have to backtrack. So forward checking helps us avoid some future mistakes, but it's still going to make mistakes. You're still going to have to backtrack. And when you backtrack, domains repopulate, and then we keep going through. And maybe this time-- no, not quite. This time, we get through. OK. So that's filtering. But in the back of our minds, we realize we could be working harder. We could be thinking further into the future to detect these violations before they happen. OK. In general, forward checking is going to propagate information from assigned to unassigned variables, but does not provide early detection for all failures. In particular, it doesn't detect looming conflicts between unassigned and other unassigned variables. That would require checking lots of pairs of variables all throughout the graph to see if anything's broken anywhere. We're going to do that. We're going to do that right after the break, but let's look at the reason why we want this. We want this here because if you look at this, as before, NT and SA can't both be blue, but that involves looking at those two variables at the same time. And we should be able to detect that.
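[Editor's note: here is a sketch of what forward checking adds to the backtracking loop, using the same dict-of-domains representation and assumed ok(x, xv, y, yv) predicate as the earlier sketch. On each assignment it prunes the domains of unassigned neighbors and bails out if any domain empties.]

import copy

def forward_check(var, value, domains, assignment, neighbors, ok):
    # Returns a pruned copy of the domains after assigning var=value,
    # or None if some unassigned neighbor's domain goes empty.
    new_domains = copy.deepcopy(domains)
    new_domains[var] = [value]
    for n in neighbors[var]:
        if n not in assignment:
            new_domains[n] = [w for w in new_domains[n]
                              if ok(var, value, n, w)]
            if not new_domains[n]:
                return None  # empty domain: inevitable failure, stop now
    return new_domains

Inside backtrack, you would call this after each tentative assignment and recurse with the pruned domains; because each level keeps its own copy, undoing on backtrack comes for free.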
So we're going to talk about algorithms for richer constraint propagation, which reason from constraint to constraint and propagate these vanishing domains throughout the graph. When we come back from break, I will show you an algorithm for constraint propagation involving a concept called arc consistency, which is our core conceptual piece for this. Then we'll talk about ordering based methods, and then we'll be done for the day. So let's take a couple minute break now, and when we get back, arc consistency. All right. All right. The moment we've all been waiting for-- arc consistency. Until today, we didn't even know about arc consistency. All right. So we were talking about forward checking like it was checking some constraints, but then there were other constraints that it didn't check, and if only it would check them. What is this checking a constraint? What is this? So we need to formalize the notion of what it means to have checked a constraint. And the key concept that is going to show up in these algorithms is almost your intuitive notion of checking a constraint, with a little twist. And it has to do with the notion of a consistency of an arc. So an arc-- remember there's a graph, right? There's a graph where we have nodes for the variables and we have edges connecting them if there's a constraint between those two variables. So an arc-- if this is a, b, c-- an arc from a to b is going to be almost like the edge to a to b, but it's going to be directed. And that means for every edge on the graph, there are actually two arcs you could check-- a to b and b to a. So it's directional, even though the underlying constraint graph isn't. There's just a constraint on a and b, but we can check whether it's satisfied in two different directions, which I'll get into in a minute. The other thing is conceptually, you can check an arc between two things that aren't connected by a constraint. So I could check whether or not a to c was OK, even though there isn't a constraint there. So it is almost the same as a constraint arc, but it's not quite. It's a directed arc between any two nodes. And we say that arc is consistent. Intuitively, it's consistent if there's no constraint violation along that arc. But formally, it's sort of half of that. It's consistent if, for every x in the tail of the arc, there is some y in the head, which could be assigned without violating a constraint. So this means sort of-- if there was actually an assignment to a and b or to x and y, we could check whether the assignment satisfies a constraint. But in general, an arc is going to have a couple of things in the tail and a couple of things in the head. And the notion of consistent is for everything in the tail, there is at least one OK option in the head. So let's do some examples. We could look here and we could say, all right, Western Australia here is assigned. It's assigned to red. The Northern Territories are not assigned, but I can still look at the two of them and check if this arc is consistent, all right? So let's do it. We have to ask the question. We check everything in the tail. So we look at the Northern Territories and we say, is there anything in your domain, even though it's unassigned-- is there anything in your remaining domain which would have no continuation into the head? So we say, well, if I assigned you blue, would it be OK? Yeah, that would be OK. If I assigned you green, would it be OK? Yeah, that would be OK. What if I assigned you red to NT? Not OK. 
So red is something in the tail for which there is no assignment in the head that avoids a constraint violation. OK. So this arc is not consistent. We can, however, make it consistent. What can we do? We can remove things from the tail. What can I do? Well, I can do just what forward checking did. I can wipe out the value that caused a conflict. OK. So now we can check other consistencies, but let's try this one. q to WA. You'll notice these two don't actually-- they are not connected by a constraint, so it should be easy to check. But I can ask the question-- if I assign red to q, is there an assignment which does not yet violate a constraint for WA? Yes. Red. How about green? Yes, red, blue. Yes, red. So this arc is already consistent. I do not need to do anything to it in order to make it consistent. So in general, there's a question of how do you remember this? Right? There's heads and there's tails. It's very easy to get the order mixed up. Here's how I remember it. Remember, CSPs-- the constraints are like rules, and these algorithms are like police. They're going to go and they're going to enforce the rules. And you can imagine this arc is going to get pulled over by your algorithm, which is the CSP police. So here, they pull it over. And what do they do when they pull the arc over? Right, they pop open the trunk and they look for anything that's illegal. So they're going to fish around in the trunk and if they find anything bad, they're going to take it out. So this is the algorithm. You pull over the arcs one by one. The algorithms differ in terms of which arcs you pull over first, how many you pull over-- all that stuff. But all these algorithms have the same shape. You pull over an arc, you fish around in its trunk, and if there's anything in that trunk-- any assignment to the tail, which is guaranteed to fail, given what's left in the head-- you cross it off. OK. That's it. That's enforcing the consistency of a single arc. Now we're going to have to do a whole bunch of arcs in order to get a filtering algorithm. OK. So remember-- delete from the tail. Delete from the tail. OK. Forward checking. Forward checking was enforcing the consistency of the arcs that point to each new assignment. So when I had that, if I assigned red here, I would say, all right, let's go to NT and delete anything that causes a conflict. Let's go to q. New South Wales, Victoria, South Australia. And if all you do is, upon every assignment, look at all the arcs pointing to that assignment and enforce consistency, which means delete things from their tail if they cause a constraint violation, you would recover the forward checking algorithm. Now we can start talking about enforcing the consistency of lots of arcs-- even ones that don't point to our assignment. And that's going to give us richer filtering algorithms. So we can talk about arc consistency of an entire CSP. So let's look at this. This is a CSP that's sort of in like a-- we're in the middle of the movie here, OK. WA and q have been assigned-- red and green. And that's shown on the map, as well. NT, NSW, and SA have had their domains reduced by some previous pruning. So they haven't been assigned yet, even though some of them only have one value left. They haven't been assigned, but they're in various kind of stages of having been filtered. And so what we can do is we can look at this and say, all right, here's a partially assigned CSP. I can go visit arcs. It can be like-- go be the arc police.
We're going to visit all the arcs and we're going to check them. So first, we check this arc. We checked V to NSW. First, we look at our graph and we notice that they are neighbors. All right. This is the first time we're checking the consistency of an arc that doesn't point to an assignment. So I go through and I check-- the tail is V. So I'm going to check all the values. I'm going to get my police pen out here in white. I'm going to check the values and I'm going to say, what if I assigned V blue? Is there a choice at NSW that will avoid a constraint violation? Yes. I could assign it red. You're like, but there's one that creates a problem. That's fine. There only has to be some way that the head can be assigned in order to license things staying in the tail. So blue is fine. Check. OK. How about green? That's fine because if I assign green at V, then NSW could be either red or blue. What if I assign red? Well now, at NSW, I can't use red, but I can use blue. So this arc checks out. We declare it to be consistent and we let it get on with its day. All right. Let's look at this one. SA to NSW. Right. Keep staring at the map. SA and NSW are adjacent, so I'm going to look at SA and I'm going to say, well, what's in the tail? Blue. If I pick blue, do I have a choice at NSW that will be OK? Survey says yes. So I don't have to cross anything out. So far, this is pretty boring because I've checked a bunch of consistent arcs. But let's check the arc in the other direction. OK, so it's the same constraint that NSW and SA can't be the same, but head and tail have been flipped. It's important for understanding these algorithms. And so we need to check it again. So now I look at NSW and I say, is red OK? Well, if NSW is red, SA could be blue. Is blue OK? Well, if NSW was blue, we're toast. So we have to fish out of its trunk. Got to erase that. Got to erase that. Now it's consistent. It was not consistent before, but now that I've deleted blue from the domain of NSW, this arc has become consistent. OK. There's a tricky case. Anybody see the tricky case? We were just here. We just checked V pointing to NSW. We just declared it consistent, but that was on the basis of having blue and red available in the head at NSW. And one of those is gone. So the consistency may no longer hold. And so I have to go back to V and I have to check-- are there any assignments to V which cannot be extended to assignments at NSW without violating a constraint? Blue-- OK. Green-- OK. Red-- no longer OK. It used to be OK because blue was supporting red's OK-ness. But blue is gone, so we have to check this one again. And in order to make it consistent, I have to make a further modification. Now, it's consistent again. OK. So any time you delete a value from a domain, every arc pointing into it that was declared consistent is now questionable again because that value that you just deleted may have been supporting their consistency. OK. So I could keep doing this. The whole reason to do this is actually a completely different arc. It's the one between SA and NT, neither of which is assigned. But if you just look at them for a second-- if you look at that arc in either direction, you will see that, in fact, you have to delete blue from the tail, which results in an empty domain, and an empty domain means a detected failure, which means backtracking. So if we went around and we just kind of checked all the different arcs, we would know as soon as we assigned WA to red and Q to green.
We would be able to detect this sort of secondary violation between unassigned variables. This is a kind of constraint propagation, and if you enforce the arc consistency of all arcs at once-- that's called making the graph arc consistent-- that is a powerful kind of filtering. OK. What do we need to know? If x loses a value, its neighbors need to be rechecked, which might mean their neighbors need to be rechecked, which might mean their neighbors need to be rechecked. You could worry, is this whole thing going to even converge? OK. Arc consistency is going to detect failure earlier than forward checking because this was the case where we were like, forward checking. Just think a little harder. Arc consistency will expose this if you make the graph arc consistent. You can run this either as a preprocessor, or more commonly, you run this after every assignment. You're still in a backtracking search, but after every assignment, you go around, you think really hard, you do a bunch of consistency checks, and you filter a bunch of stuff so that you don't have to backtrack quite so much. So this is great. What's the downside of enforcing arc consistency on a graph after every assignment in a backtracking search? Your hand was up really quick. Yeah. Yeah, runtime could be bad. I didn't even tell you if this thing converges. Spoiler alert-- it converges. But it's a lot of compute. So you won't have to do as many assignments, in general, but for each assignment, you're going to have to do all of this work and bookkeeping in order to enforce arc consistency. This should remind you of something from search. Does this remind anybody of anything you learned last week? Like really loosely. This is like-- yeah. Uniform cost search? OK. It reminds me of A star because in A star, you do a lot of work on each node to figure out which nodes to not explore, right? This is doing a lot of work on each assignment so you don't have to do as many assignments. Are you going to come out ahead? Maybe. And so there's a trade-off between doing more filtering and just making the core search run faster. In general, this is a very powerful method, and it usually pays for itself. But it varies, problem by problem. Also remember to delete from the tail. I'll try to tell you this every four slides or something. Here's the algorithm. This is an algorithm called AC3. Arc consistency is a property. A graph is either arc consistent or not. It's arc consistent if all arcs in the graph are consistent. You can have graphs in which some arcs are consistent and some are not. Arc consistency means all arcs are consistent. The algorithm that gets you there is called AC3, or rather, this particular algorithm is called AC3, and it gets you there. How does it work? Here's the highlights. So first of all, so far, we're only talking about binary CSPs, because otherwise, you're talking about arcs and how they interact with three way constraints, and we haven't talked about that yet. So they're only for binary CSPs or this algorithm is, and what do you do? You're going to have a queue, right? So inside this search, which has a queue, you're going to have a filtering algorithm, which has a queue. Like Inception. OK. So what we're going to do with our queue is the following. We're going to pop an arc off the queue. The queue is our to-do list of arcs to check for consistency-- to pull over and pull things out of the trunk. So we're going to remove something from the queue and we're going to take out all the inconsistent values. 
And if you found an inconsistent value in the process of making this arc consistent, you're going to throw a whole bunch of its neighbors back on the queue because you just invalidated their consistency. That's it. That's the flow. You take things off the queue, and whenever you actually change something, you throw some neighbors back on the queue. What do you do when you pull over an arc? You go through every value in the tail domain. I should write a t. Every value in the tail domain. Every value in the head domain. Did I get those in the right order? You go through every value in the tail domain and then you go through every value in the head domain, and you check, hey, constraint? Are these two OK? And then if there's values in the tail for which no values in the head are OK, you delete them. So what's the runtime here? We got two for loops, right? We've got to check all the values in the tail domain, and then we've got to do a loop over everything in the head. So the stuff down here is going to be d squared, right? We've got to check everything in the cross-product, at worst. How much work are you going to do up here? Well, how many arcs are you going to have to look at? Well, if you didn't re-enqueue things and there's n nodes-- how many arcs? n squared. You look at all of them in both directions. But here's the problem. Whenever you process an arc, you might delete something from the tail. And if you do, you might have to throw a whole bunch of arcs back on. So maybe this won't even converge, but it will. Anybody see why this might converge? Exactly. Say that again. It can only go back as many times as there are things in the domain because you only get sent back into the queue if something was deleted from the domain you point towards, and that can only happen d times. So worst case here, you get a factor of d. This looks like it's going to be d cubed, n squared. You can do some tricks to get a factor of d out of that. But you know, we're not-- this is going to be polynomial. It's not that bad. But we know that detecting all future problems is going to be NP-hard. And why do we know that? What if I told you satisfiability was a CSP? Now maybe it seems like bad news, right? So we know that CSPs, in general, are NP-hard. Filtering-- this algorithm is not going to solve them. It's just going to detect a certain class of violations. Let's watch this algorithm in motion, and then we'll finish up our filtering and talk really quickly about ordering. All right. I don't want to do this. I want to do this. OK. So here is the graph for N-Queens. Remember, each letter is the location of one of the Queens, and the values are which column that Queen is in. And there's a bunch of constraints. Now I can check by clicking here. I can check-- when I click here, this is going to check the arc from a to b, and it's consistent. And now I'm going to check b to a, and it's consistent. And I'm going to check e to something. It's consistent. So I'm going to just let the applet go wild. And if I do this, it'll go around. It's got a queue. You can't see the queue, but it's got a queue. It's checking everything. And you'll notice nothing's actually happening. And that's because at the beginning, the arcs are all consistent because there aren't any values that sort of immediately cause a conflict, right? Until we start assigning things, we're not going to really get any value out of arc consistency in this problem.
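[Editor's note: pulling the pieces of AC-3 together, here is a compact sketch of the flow just described: a queue of directed arcs, a revise step that deletes unsupported tail values, and re-enqueueing of incoming arcs whenever a deletion happens. The data layout, domains as sets and an assumed predicate ok(x, xv, y, yv), is an illustrative choice, not the course's project API.]

from collections import deque

def ac3(domains, neighbors, ok):
    # domains: {var: set of values}; neighbors: {var: set of vars}.
    queue = deque((x, y) for x in neighbors for y in neighbors[x])
    while queue:
        x, y = queue.popleft()          # pull over the arc x -> y
        if revise(domains, x, y, ok):   # fished something out of the trunk?
            if not domains[x]:
                return False            # empty domain: detected failure
            # Every arc pointing into x may have lost its support.
            queue.extend((z, x) for z in neighbors[x] if z != y)
    return True

def revise(domains, x, y, ok):
    # Delete tail values with no supporting head value; return True if
    # anything was deleted. These are the O(d^2) inner loops from the slide.
    removed = False
    for xv in list(domains[x]):
        if not any(ok(x, xv, y, yv) for yv in domains[y]):
            domains[x].discard(xv)
            removed = True
    return removed

In the applet run the lecture now returns to, every red flash corresponds to a deletion made inside revise, and each deletion pushes the incoming arcs back onto the queue.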
But now I can go to a and I can say, all right, I'm going to put that-- OK, it looks like I have a star at 2, so I'm going to say, I'm going to assign you to 2. Notice what happened. Notice all the arcs, at least in the way that this applet draws them, they've turned blue for pointing into a because a just changed. Everything that used to point to a that used to be declared consistent-- back on the queue. So we can go and we can start enforcing them. So when I enforce the consistency of b to a, right-- that's over here, remember? It's going to say, are there any places that b can't go without being able to be extended to a? And yeah, there are because 1 isn't OK anymore. It was before, when a wasn't assigned, but it's not OK now. And 2 is not OK and 3 is not OK. And so when I do this, a bunch of things are going to disappear from b. 1, 2, and 3 are gone, and a whole bunch of things just went onto the queue because they were relying on previous values of b. And so we can let this applet execute the whole queue, and you'll notice every red flash means I deleted something. Things go back on the queue. And this will keep going-- it won't go forever-- until it quiesces and there's nothing left going on the queue, and everything is arc consistent. You think it's going to end up with a single solution? We're going to find out. So if you look at it right now, a whole bunch of values have vanished from those domains, but I still have to do a backtracking search because I don't actually know whether 4 is going to work out at b, or 5. Maybe they'll both work out. Maybe neither will work out. But let's say I assign one. What would you guys like? 4. Decisive. OK. Let's do it. We could detect. No luck. All right. We would detect that by seeing a domain go empty. And it's-- the whole thing is arc consistent. Now it turns out, even though the other ones aren't assigned, they only have one thing in their domain, and assigning isn't going to do anything. And you can sort of see that this is going to finish now without any backtracking. That's not always true. Arc consistency-- enforcing arc consistency does not prevent all backtracking. OK. See what's next. So let's talk a little bit more about this, then we're going to quickly wrap up with ordering. So after you enforce arc consistency, the following can be possible. You can have one solution left. We just saw that happen. We enforced arc consistency after a couple assignments, and there was one solution left. You can have multiple solutions left and you can have no solutions left and not know it. So let's look at these two examples here. Let's look at this top one. We can look at it and say, is it arc consistent? Well, let's look. This arc pointing up-- assuming these are all inequality constraints, is there anything that you have to delete from the tail to be consistent with the head? No, they'll both work. And I can check this one and this one. These are the tricky ones, right? One's going to be blue. One's going to be green. But both values are consistent. And so that top one is consistent, and you can see there's two solutions in there, right? One where the left goes green and one where the right goes green. How about down here? Are there any solutions left to the CSP, assuming those are inequality constraints? There is no way. There is no solution there, but this arc is consistent because between those two, there are two solutions left. And this arc is consistent and this arc is consistent. All the arcs are consistent. So what went wrong? 
Arc consistency let us down here. What went wrong here? What went wrong is that the problem here is between three nodes, and arc consistency traffics in pairs, right? And it will not detect violations, in general, that are between three. All right. So it also still runs backtracking search. So let's see all of that, and then we'll quickly talk about ordering. I want this one. All right. You're ready for the big graph. All right. Each node has its domains. We're still going to start in the lower left because we haven't talked about ordering yet, and if I assign something to the lower left, I'm going to assign blue, and you can think in your head what's going to happen. First, we're going to be forward checking. So I assign blue and its neighbors lose blue. OK, great. So I'll go to one of those neighbors and assign red, and its neighbors lose red. And look in the upper left. It's got to be green. Forward checking has discovered this. So let's keep going. Blue, red. So we've discovered that this one has to be green and this one has to be green. Forward checking has discovered this. Are you worried? You're worried. Guess who's not worried? Forward checking is not worried. So we continue. So we keep on assigning things. We know it's doomed to failure. All this computation is a waste. It's already doomed. It's doomed. You're doomed. Now it figures it out. Once it actually assigns-- only through assignment do things propagate further into the graph with forward checking. So we are going to backtrack and try again. And I'll play this. So it's going to backtrack. I'll speed it up. It will eventually compute its way through this, but it keeps sort of running into this because the problem happened when? It happened, like-- when did this go wrong? Sort of like instantly, right? It was in that fourth node that we were already sort of doomed. But it'll muddle through. OK. Let's compare that to if we enforce arc consistency, where I assign blue. And again, its neighbors lose blue. I assign red. Notice the upper left turns green, but also, this node down here-- like way in the middle of the graph-- has lost green because we've propagated the consequences of that, which means as soon as we assign blue here, we already know that node that we kept trying to make green-- it can't be green. It's going to be red. And this node here that we had to sort of mess around with until we discovered it needed to be green-- arc consistency knows and doesn't have to backtrack here. But will it get through without backtracking? No. But the backtracking is not quite as bad, and it's pretty local. OK? So that's a lot better. I'm going to quickly talk about one more concept that should be pretty easy now that we have arc consistency, and then we'll be done for the day. OK, and that is the concept of ordering. So you are journeying down your CSP. You're making decisions like which variable do I do next? And within a variable, which values am I going to try next? And there's good decisions and bad decisions. And so far, we've just been filtering. We have not been trying to do a good job at picking variables and picking values. One very powerful idea is that you should pick the variable that has the fewest remaining values. How are you going to know how many values there are? Well, if you're running filtering, you can see, right? So it's minimum remaining values with respect to a filtering algorithm. So right now, everything looks the same. I'm going to assign red. But now, look. I have a choice here.
I could either work on this part of the graph that's sort of next to what I've already done or I could just teleport over here and go into the east or Tasmania or whatever. What should I do? Well, intuitively, I should keep working around where I'm placing constraints by my assignments. Or rather, where constraints are kicking in. And so I should move here to the Northern Territories, and one way to formalize that is to notice that at this point here, the only things left are blue and green, whereas over in Tasmania, I still have red available. So in general, if you see a domain starting to shrink, you know there's action at that variable, and you might want to do your compute there. OK. Why should it be min rather than max? Why don't I go for the variables that are most unconstrained-- most free? The idea here is, unfortunately, you are going to have to assign every single variable. And so if there is a problem in your assignment, given that you're going to have to backtrack, you might as well backtrack now. It's called fail fast. If you think a problem is brewing-- and a sign that a problem is brewing is the domains are starting to shrink rapidly-- you should go focus your compute there. And so you should always rush into the scary door because you're going to have to assign every variable. OK. So let's take a look at this, and then we'll talk about one last thing. Promise. OK. All right. Let's-- forward checking. We need to do some filtering, but let's do minimum remaining value. I'm going to assign-- so this is not arc consistency. This is still just forward checking, but I make an assignment to blue and I make an assignment to red. And remember before, there was this problem of kind of the consequences of that green in the upper left corner and there was propagation from arc consistency? Well, we could just go work on the upper left corner, right? We're running the show here. We can go to the upper left corner and assign that to green. And if we do that, now, where are we going to work? This is the variable that's-- well, it could be either of those, really. Watch. It's going to do the thing I don't expect. No, it goes there. And so you can see we're sort of running down the consequences of our computation so that we can find the tricky parts of the problem and sort of disentangle them right now, where the backtracking will be local and relevant, as opposed to teleporting all over the place. OK. So minimum remaining values. It's called most constrained variable. Also called fail fast ordering. You're going to have to do every variable. You might as well do the hard ones now, right? There's another way to do ordering, which is the ordering on values. So this is-- OK, let's run headfirst to the tricky parts of the CSP and work on them so that we backtrack as quickly as possible and get all that backtracking done in a local part of the search, rather than having to backtrack through a whole bunch of combinatorial stuff. What about values? So let's say we're in this scenario here, where we've assigned red and we've assigned green. And we're trying to decide, should we assign red or blue over here to Queensland? Now before, we made our lives as hard as possible. Go to the hardest part of the map-- work there. Here, the idea is you want to pick the least constraining value. That is the value that rules out the fewest other values. How the heck am I supposed to know what's ruled out? You're going to have to run some more filtering. This is an expensive thing to compute. But it's interesting.
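[Editor's note: as a sketch of the two ordering heuristics, here are minimal versions of minimum-remaining-values variable selection and least-constraining-value ordering, using the same dict-of-domains representation and assumed ok(x, xv, y, yv) predicate as the earlier sketches. The count_ruled_out helper is a made-up name standing in for that extra filtering pass.]

def mrv_variable(assignment, domains):
    # Most constrained variable: fewest remaining values first (fail fast).
    unassigned = [v for v in domains if v not in assignment]
    return min(unassigned, key=lambda v: len(domains[v]))

def lcv_order(var, assignment, domains, neighbors, ok):
    # Least constraining value: try values that prune neighbors the least.
    return sorted(domains[var],
                  key=lambda val: count_ruled_out(var, val, assignment,
                                                  domains, neighbors, ok))

def count_ruled_out(var, val, assignment, domains, neighbors, ok):
    # How many neighbor values would this choice eliminate?
    return sum(1
               for n in neighbors[var] if n not in assignment
               for w in domains[n]
               if not ok(var, val, n, w))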
When we picked a variable, we wanted the hardest variable. But when we pick a value, we want the easiest value. We want the one that has the least impact on the rest of the graph. So why is that? Why is it when we pick a variable, we want to do the thing that is hardest, but when we pick a value, we want to do the thing that looks least likely to fail? It's because it's a CSP, and in a CSP, you have to do every variable. Sooner or later, you have to do it. You might as well do it now. You don't have to do every value. If you play your cards right, you might not have to do very many values at all. And so you might as well do the hard variables first. But if you're picking values, you want to pick the ones that are likely to work out, and maybe you don't even have to try the hard ones. OK. So when it comes to values, you take the easy door. OK. I'm going to show you one more thing. OK. All of this together-- ordering these ideas can let you do very large problems that might have been intractable without these heuristics. Let's take a look. Backtracking-- [INAUDIBLE] discard. All right. Last thing for the day. We're going to do arc consistency and minimum remaining value, and see what happens. So we'll start in the corner, like we always do. Blue. So we're going to filter-- we're going to propagate the consequences of our constraint, but we're going to jump to a variable that has a minimal domain. So maybe we'll do this one. Now we're going to go up to the green one in the upper left, and then we're going to follow this through. And we are both propagating and jumping around the graph to the areas that seem to be hot spots. And so in this case, that's solved quite quickly. We would need a bigger graph to show why you would need the least constraining value here. We're hitting the limits-- our algorithms are being good enough that this toy problem isn't hard. The big problems are still hard. CSPs can be very hard to solve. Next time, we'll talk about when problems are big, what kinds of techniques will work. We'll talk about other methods other than backtracking search and we'll talk about exploiting the structure of your CSP. That's it. See you all on Thursday. STUDENT: I'm going to be using that one. STUDENT: Got it. STUDENT: Oh, really? |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180918_Markov_Decision_Processes_MDPs_Part_12.txt | PROFESSOR: Today we're going to talk about Markov decision processes, which are a generalization of search problems where you're not entirely sure what your actions are going to do until you try them. So why would we consider a kind of nondeterministic search? We had our search problems before. They had states, they had actions, they had costs, and we made plans. We'd perhaps be moving a robot around the grid to accomplish some task or a Pac-Man around a board. The reason why we need to have a nondeterministic search is because in the real world, often you know what actions are available to you, and you know what they might do but you're not entirely sure what outcome will occur. So for example, if you're this robot here trying to cross this ledge, you know you can shimmy to the side and try to get that gem, and it might work, or you might fall into the pit. And so when you take an action, you may know the possible outcomes, but you don't know which one is going to happen. And so you have to make plans or more generally, compute policies that take into account all of the different outcomes that might occur in response to your actions. One example of not knowing what's going to happen is when you hit the spacebar, usually you get the next slide, but sometimes it all freezes up. Luckily, my policy knows what to do when that happens. So we're going to have a couple running examples. Just like when we talked about search, we had running examples of search. So we talked about the specific examples of search, or CSPs, or games. And then we talked about the whole class of search problems, or CSPs, or games. The running example we're going to use for MDPs, Markov decision processes, is Grid World. But just like in all those other cases, it's just a running example. So in Grid World, we have a maze, we have a robot, we have north, south, east, and west, and we're going to have a bunch of Grid World concepts, which are really good for illustrating the kinds of things that happen in MDPs, but we're going to have to be really careful that we don't overfit-- flash forward to the machine learning section at the end-- we don't want to overfit to this particular example. So in Grid World, which is just one of a whole rich world of possible MDPs, we have a problem that looks a little bit like a maze. The agent is going to be on a grid, and they're going to be walls blocking the agents path. I love it when they install stuff for me. Let's go. Get rid of you. There are going to be walls blocking the agent's path. This is a lot like moving Pac-Man around a maze except now there's going to be some nondeterminism. And in this case, in this particular MDP, that the nondeterminism is going to take the shape of noise. So in this Grid World, say 80% of the time, when you take an action, which might be called north, you'll actually move to the north. Remember, in search you would just decide where you want to go, and then you'd be there, but here you try to move north, but it might fail. So in Grid World, your actions might succeed, and that means you're going to move in the direction you attempted to move, or they might fail, which means something else happens. And in this formulation of Grid World-- and we could formulate Grid World a million different ways. 
Formulating a problem so that it matches the real-world problem you're trying to solve is one of the challenges in using these techniques, but in this Grid World, 80% of the time you go where you're expected if there's no wall there. If there's a wall there, you'll just bounce off and stay put. The other 20% of the time you either go counterclockwise or clockwise direction. So 10% of the time, if you go to try to go north, you'll actually go west. 10% of the time, if you try to go north, you'll actually go east. In this Grid World, you're totally safe from going south. That's something that will never happen in this model if you try to go north. Is this is how the real robots work? Not really. This is how the Grid World we're going to use for this lecture and the next one works. If there is a wall in the direction you would have gone, you stay put. All right. That's the rules about where you can move. You can move north, south, east, west, but it might fail, sending you in an unexpected direction. Now let's talk about the equivalent of costs from search. In MDPs, the costs are called rewards, and here we receive two kinds of rewards. They're sort of the primary Grid World reward, which is that you exit the game, and there's good exits and bad exits, and they're labeled with a value. So in this case the gem might be the good exit. If you can get to that square, you take the action exit, it ends the game. You get your plus 1 reward, and you're a happy robot. If you fall into the pit, you also take the action exit, you receive your minus 1 reward, the game still ends, but now you're a sad robot. Those are the big rewards. They can be good or bad, positive or negative. And in Grid World, to illustrate some concepts, we sometimes also have a living reward, which happens every step that you don't exit the game. So for example, in this position here, I might decide to take the action north. Maybe it succeeds, and I actually move to the square to the north of the current robot, and I get a little reward. It could be a tiny little boost, a positive reward, or it could be a tiny little bit of pain or something like that as a negative reward, or it can be big or it can be zero. That's something we can change, and that's something that you get every step of the game. This notion of a living reward and a big reward at the end and a gem and a fire pit, these are all Grid World concepts. But the idea that when you take a transition from a state to another state, you receive a reward of some kind. That's an MDP notion. The goal, loosely, because we're going to revisit this later in this lecture, the goal, loosely, is to maximize the sum of the rewards So you would like to take action so that you get the gem, but you don't fall into the pit. That's a lot like search, when we we're trying to take actions which, in that case, minimize costs, now we're trying to maximize rewards. In a deterministic Grid World, we could have solved the problem with Search. We could have said, all right I'm in this state here, and what can I do? Well, I can move north. I can move west. North is a good move, west is a bad move, so I'll move north. And when I choose that action, I know what's going to happen. There is one outcome per action. That's what it means to say you have a deterministic search problem. In a Markov decision problem, it's nondeterministic, which means when I'm in a certain state-- like the robots right there to the right of the pit-- and I choose to move north, there's multiple things that could happen. 
Importantly, in an MDP, I know the set of possible outcomes, and it's actually going to turn out I know the probability that each one will occur. I just don't know which one will occur in a given run. So when I plan to go north here, I know I might go north or I might go to the east and have to replan from there, or I might go to the west and fall into the pit, and that will be bad. And as I plan, I need to take these things into account. So I might take, for example, a cautious behavior that takes into account all of the possible failures along the way. So in general, an MDP, a Markov decision process, will be defined much in the way a search problem was by a set of quantities that define the problem. There's a set of states. This is very much like in Search. These are the set of configurations of your problem. There is a set of actions, and we had these in Search too. Sometimes we didn't really talk about the actions having labels when we talked about, say, depth-first search, but there were neighbors. There was a successor function. And in your code, in project one, you did actually have to manage the labels of the actions. So this is something that's been there all along: there are states and there are actions. The difference is that in search, when you're in a state and you take an action, there is a successor state and a cost. And in an MDP, there's not just one successor state, there are multiple. So we've got our states like before, we've got our actions like before. But now if I'm in state S, and I choose action A, there may be multiple states, s prime, that can occur as a result. And so what I will have is what's called a transition function, which describes from state S and action A, what S primes can happen and with what probability. So people write T(s, a, s') and call it a transition function, but what you should have in your head is this is for being in state S, choosing action A, and what I care about is the probability that every possible outcome, S prime, happens. So T(s, a, s') is in this sense a conditional probability, P(s' | s, a), that S prime will happen if you were in state S and chose action A. Another important quantity in an MDP is a reward function. So a reward function, in general, is going to be a positive or negative or a zero reward that you get if you're in state S, you choose action A, and you land in state S prime. Sometimes it will only depend on S. Sometimes it will only depend on S prime. Sometimes it will depend on the entire triple. That triple is often called a transition, and in general, we'll write the reward function as being a function of where you started, what you did, and where you landed. So if you're on the cliff trying to cross the cliff, there might be one reward for making it across and an entirely different reward if you fall into the fire pit. In general, there's maybe a start state. There might be a terminal state. This is where things start to differ. In Search, there was this really important notion of a goal test. These are the states where we stop. We've achieved our goal. In MDPs, in full generality, the game just keeps playing, and sometimes you get into a state where it stops, sometimes the game goes on forever, and we'll need to be very careful with our algorithms and our formulations that we understand what it means that a game may go on forever. So MDPs, like I said, are a class of nondeterministic search problems, and they will require a new class of algorithms to solve them. So A* search won't do it. That's for deterministic search problems.
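[Editor's note: to make the (S, A, T, R) definition concrete, here is a sketch of how the noisy Grid World transition model described above might be encoded, with the 80/10/10 split as an assumed noise parameter. The coordinate conventions, wall set, and helper names are illustrative, not the course's Grid World code.]

MOVES = {'north': (0, 1), 'south': (0, -1), 'east': (1, 0), 'west': (-1, 0)}
# For each intended direction, the counterclockwise and clockwise slips.
SLIPS = {'north': ('west', 'east'), 'south': ('east', 'west'),
         'east': ('north', 'south'), 'west': ('south', 'north')}

def transition_probs(state, action, walls, noise=0.2):
    # Return {s_prime: probability} for taking `action` in `state`.
    # walls: set of blocked cells, including the border of the grid.
    # The Markov property: this depends only on (state, action), not history.
    probs = {}
    outcomes = [(action, 1.0 - noise)] + \
               [(slip, noise / 2) for slip in SLIPS[action]]
    for direction, p in outcomes:
        dx, dy = MOVES[direction]
        nxt = (state[0] + dx, state[1] + dy)
        if nxt in walls:
            nxt = state  # bounce off the wall and stay put
        probs[nxt] = probs.get(nxt, 0.0) + p
    return probs

A reward function R(s, a, s') would sit alongside this, returning the living reward for ordinary steps and the plus or minus 1 for the exit action.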
So how are we going to solve MDPs? We're going to talk about a bunch of algorithms today and next class, and then in some sense the week after that too, because it turns out that MDPs are the foundation for reinforcement learning as well. However, you actually already have an algorithm for solving MDPs. It's called expectimax search. When you learned about expectimax search, it might have felt like a weird variant of minimax, where the opponent was rolling dice or something like that. But it's actually the more general algorithm, and here, we'll see that it's very tightly related to the algorithms we use to solve MDPs. So MDPs are like search problems, but instead of knowing what your action will do, you have a distribution over possible outcomes represented by the transition function. Any questions? All right. So there's this Markov guy, and what's he doing in my decision problem? Markov in AI usually means something about the present state, and it means that given the present state, the future and the past are independent in an appropriate sense. And in a Markov decision process, what Markov means is that the action outcomes-- those transition probabilities of what will happen if I take this action in this state-- can depend on the state, and they can depend on the action, but they can't depend on the past. So if I'm the robot and I want to know what's going to happen if I try to cross this bridge, well, it depends on where I am and what I'm doing, but it doesn't depend on the route that I took to get there. So if you remember, this is just like what we had in Search. We had to be very careful to formulate our search problem so that the state had any information stuffed into it that was necessary to let the successor function fully describe the problem. And so with Search, we often formulated it as, all right, my state can't just be Pac-Man's position, it's also got to be which dots I've eaten, or where the walls are, or whatever is appropriate for that variation. MDPs are the same way. We often have to be careful how we formulate our states so that they give us the Markov property: that if I know the state and I know the action, I can tell you the distribution over what happens next in a way that's independent of what's happened in the past. Thank you, Markov. Here's another big difference from Search. In Search, we had this model of the agent that looked at the search problem and did a bunch of offline computation and thought and thought and thought and came up with a plan. What's a plan? A plan is a sequence of actions. And then we would execute that plan step by step, and as we executed that plan, our master design would fall into place action by action, and we'd end up with the goal state exactly the way we planned. MDPs are for a noisier world. And so we can't really have a plan, because it might work or it might not. So let me give you an example. I'm going to pull up Grid World here. All right. Here is Grid World on the right, and here on the left, you can't really tell what I'm pressing, so you'll be able to see what happens for each transition as I press. So here the robot is the blue dot, and you can see the 1, that's the good exit with the gem; minus 1 is the bad exit with the fire pit. And I'm going to press north, south, east, or west, and the robot may or may not go the direction I press. So what direction should we go? Let's try north. I'm going to press the up key on my keyboard. What will happen? Who knows. So it worked, all that suspense.
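To make that transition model concrete before reading off the demo's log: here is one plausible coding of Grid World's noisy movement. The 0.8/0.1/0.1 split between the intended direction and the two perpendicular slips is an assumption for illustration; the lecture doesn't state the exact noise level here. Note the Markov property: the distribution depends only on the current cell and the action, not on the path taken.

```python
# Hypothetical Grid World transition model: the intended move happens with
# probability 0.8, and the robot slips to each perpendicular direction with
# probability 0.1 (numbers assumed for illustration).

DIRS = {'north': (0, 1), 'south': (0, -1), 'east': (1, 0), 'west': (-1, 0)}
PERP = {'north': ('east', 'west'), 'south': ('east', 'west'),
        'east': ('north', 'south'), 'west': ('north', 'south')}

def grid_transitions(s, a, is_blocked):
    """Return [(s_prime, prob)] for taking action a in cell s = (x, y).

    is_blocked(cell) says whether a cell is a wall or off the grid;
    a blocked move leaves the robot where it is.
    """
    outcomes = {}
    left, right = PERP[a]
    for direction, prob in [(a, 0.8), (left, 0.1), (right, 0.1)]:
        dx, dy = DIRS[direction]
        target = (s[0] + dx, s[1] + dy)
        if is_blocked(target):
            target = s  # bumping a wall keeps you in place
        outcomes[target] = outcomes.get(target, 0.0) + prob
    return list(outcomes.items())
```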
This says: started in state 0, 0. Took action north. Ended in state 0, 1. That is a transition. s was 0, 0; a was north; s prime was 0, 1; and I got a little reward. My reward was negative 0.1. In this particular configuration of Grid World, you get a minus 0.1 each time step, and that might affect your behavior. So as we change the living reward up and down, we'll get different Grid Worlds in which the optimal actions are going to be different behaviors. We'll see that in a second. So I'm going to try to go north again. I'm going to press north again. OK, it worked. So far, it's acting pretty deterministic. So we started in state 0, 1. We took action north, and we ended in state 0, 2. We might not have. We might have ended somewhere else, but that's what happened this time. That's a transition, (s, a, s prime), and we got an instantaneous reward, minus 0.1. So far, my rewards sum to minus 0.2. I'm going to try to go east. It worked. Do something random. I went east, and it worked. I'm going to try to go east again. Oh, look at this. Started in state 1, 2. I took action east, and I ended in state 1, 2. That wasn't the most likely outcome, but that's what happened here. And so when I plan, I need to plan knowing that these things might not go the way I expected. So if I had just blindly executed up, up, east, east, east, east, exit, I'd be in trouble, because something didn't go according to plan. Something didn't have its most likely outcome here. I'm going to go east again now, because that's still a good idea. Whenever I'm in that state, that's a good idea. Now, I'm going to go east again. Now, here it starts to get a little bit suspenseful, because I could successfully go into the good exit, or I could move perilously close to the fire pit. Let's try east. Lucky. And then from this point, you'll notice I received only that living reward, minus 0.1. I didn't receive the plus 1. That doesn't happen until I take the action exit from the square. And you might think, who cares exactly when it happens? Well, we're going to use this running example enough, and it's going to be in your project, so it's good to know exactly when it happens so you don't have off-by-one errors all the time. All right. So here I go. I'm going to exit. Boom. And what happened here? Let's scroll up. I was in state 3, 2. I took the action exit. I ended in the special state, the terminal state, and I got reward 1, so my total cumulative reward was like 0.7 or something like that. I'm going to do it again. All right. Let's make this a little more tense. I'm going the scary way. I'm going to go east, east. I'm going to go north. I'm going to walk past the pit. North. All right, I'm going north. Anything could happen. All right. Some years, this is really intense, but not this year. All right. So you get the idea. But even though nothing spectacular happened like falling into a fire pit, you did see a couple of times where I expected the most likely outcome would be that I'd move east, but I stayed put. As a result, we can't have a plan, which is a sequence of actions, because who knows what's going to happen when I take those actions. For MDPs, the analogous concept to a plan is something called a policy. A policy isn't a sequence of actions you take. It is a function which, given a state, tells you an action. It's like an action-recommendation function. A policy: you give it a state, it gives you an action back. In general, when we say we solve an MDP, we're not just interested in any policy, we're interested in the optimal policy.
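As a sketch of what a policy looks like as an object: it can be as simple as a lookup table from states to actions. The specific entries below are made up for illustration.

```python
# A policy as a lookup table (entries are hypothetical). Executing it is
# pure reflex: look up the current state, take the recommended action.

policy = {
    (0, 0): 'north',
    (0, 1): 'north',
    (0, 2): 'east',
    (1, 2): 'east',
    (2, 2): 'east',
    (3, 2): 'exit',
}

def act(state):
    return policy[state]  # no computation at execution time
```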
Policies are written pi, and that little star, throughout these lectures and in general in this class, is going to indicate optimality. So we want pi star, which is an optimal policy. There may be multiple equivalent policies. A policy gives an action for each state, and an optimal policy is one that maximizes the expected utility if followed. And an explicit policy is an actual explicit enumeration, for each state, of where you should go. And so for example, this map here that the robot is looking at for the Grid World, that is an explicit policy. It lists each state and what to do. That means an agent following this policy is a reflex agent. You look up your state, you look up the action, you do it. No computation is required. All the computation went into building this policy in the first place. Once you've got it, you just do it. Sometimes that's realistic, sometimes it's not. So for example, in something like Pac-Man with dots, the state space may be so large you could never compute or even write down the explicit policy, and so we would need different methods. What we're going to talk about today assumes that the MDP is small enough-- the state space is small enough-- that we can actually enumerate the states and write down policies. That's not always going to be true. If you remember expectimax, that didn't actually compute policies either. You just executed it as a computation in a state; it churned on the game tree for a while, and then it said, take this action. So expectimax might be living inside a function, pi, that computes the policy on the fly, on demand. That's not what we're going to talk about today, but that is another class of ways you can build policies. Instead of explicit policies, you could have an implicit policy that requires computation each time you try to access it. All right. Let's take a look at some optimal policies. So you can see the optimal policy right here. So there's the gem, there's the fire pit, and there's this policy that says, all right, if you're here at the start state, you should go up and around and into the gem. OK. Fair enough. Where it gets more interesting is in some of the other squares. So for example, if you're in this square here, this policy says that you should take the short way. You could just take the short way around and risk the pit. Why? Why don't you go the long way around? Well, this is going to be a function of exactly how the living reward, which here is a penalty, balances against the risk you take by walking past that pit. And this is a general example of how we feed in the specification of the model, and the behavior emerges through the computation. So let's look at some examples of different kinds of behaviors all emerging from the same computation of solving an MDP to get an explicit policy for Grid World. So here's an example. There's still a plus 1 and a minus 1, a good exit and a bad exit. And remember, each time step that you don't exit, you get a little reward. In this case, it's negative, so you get hit with a tiny little tax, minus 0.1, each time step. I guess it's more of a toll than a tax. And if you look at this, you can see that, as you'd expect, if you're at the start state, you might as well go the safe way. But if you're here, even though you're quite close to the shortcut, it's still going to send you around the long way. And that's because that living reward is so minor that it's not worth the risk of falling into the pit, which is a significant risk. So you just pay the extra price to go around.
You might slip a little bit as you go, it might take a little longer, but you basically go around the long way. Here's the really interesting square right here. What the heck is that? The living reward is very small, and that means this agent has a lot of patience. It can afford to take a lot of steps to get a large reward later. So it's doing this odd thing where it's moving into the wall. What's up with that? So we fed all the rules into the agent, and what did it do? It solved it, and it basically learned an exploit. Grid World has a bug, and this agent learned it. What is the bug? From this grid square here, if you go north, you might fall into the pit. If you go south, you might fall into the pit. If you head toward the pit, you're probably going to fall into the pit. But if you head towards the wall, there's no chance of falling into the pit. What's probably going to happen? Nothing is probably going to happen. You're probably going to end up where you are, but if you do it enough, over and over and over again, eventually you'll slip to the left or the right, and you will have escaped the danger square without any risk of falling into the pit. So if you remember back, you remember the little AIBOs playing soccer, where the way they shoot is they put their stomach on the ball and then poop it out. This is more or less a spontaneous discovery of a behavior which is optimal but maybe not what you would have predicted, and it's a consequence of the rules of the game. All right. We can turn up the temperature here and charge the agent more per step so that taking your time is no longer going to be as appealing. And so in this case, you still go straight. You prefer the good goal to the bad goal, but here you're going to risk falling in rather than taking 10 turns to shimmy your way out. But you'll still go the long way around if you're in one of these squares. Let's turn up the heat again. Here the reward is minus 0.4. Think about that. It's starting to be almost as big as the rewards at the end of the game. So now this agent is not going to mess around. Given the choice, it would rather take the safe way than the dangerous way, but if you're in that square, you're just going to run for it. Maybe you'll fall into the pit, but that's OK, because if you go the long way around, you're going to accumulate so much negative living reward that that doesn't make sense. What do you think is going to happen if I crank it all the way to negative 2? Think about that for a second. That fire pit starts to look pretty good, because it's better than the reward you get every step. And so what are you going to do? Straight to the closest exit. Get me out of this game. All of these different kinds of qualitative behaviors will emerge from the same kind of computation, just as the rewards and the balances between them shift. Any questions on that? Now, we're going to see how to compute this stuff. That's Grid World. It's really, really easy to think of search problems as happening on a grid, and by now you know that many search problems have nothing to do with a grid or any kind of spatial structure. I'm going to give you another example of an MDP that looks nothing like Grid World, to give you a second running example that we're going to work a lot with today and next time. So this is a super, super simple example of a robot car. It's a very, very simple example that has only three states.
And what you see here is a state transition diagram-- think of a finite automaton-- that describes the states and how they interact in this MDP. So we have a robot car. It would like to travel far and quickly. And it's going to have three states, obviously a huge simplification. There's going to be cool, warm, and overheated. And cool is the good state. The car would like to be cool. Warm is OK, but you start to risk overheating, and if you overheat, the game is over. It's not actually directly good or bad to be cool, warm, or overheated, because the rewards happen on the transitions. And in this case, you've got actions slow and fast, and the way the rewards happen on the transitions is that you get twice as many points for moving fast as for moving slow, regardless of where you land. The rewards can depend on where you land, like they did in Grid World; here, they don't. They only depend, in this case, on what state you're in and what action you take. So let's look and see if we can just wrap our heads around this example. So if the car is cool and you choose slow, you will get your plus 1 reward because you went slow, and with probability 1, you'll stay cool. All right. What happens if you go fast? Well, if your car is cool and you go fast, you get the plus 2 for going fast, but what happens next is not deterministic. Half the time, you stay cool, and half the time, you warm up. If you're warm and you go slow, you get the plus 1 reward for slow. Half the time you stay warm, but half the time you cool down. If you're warm and you go fast, you get a minus 10, and you overheat, and if you overheat, the game is over. So what do you do? What do you do if it's cool? Go fast. What do you do if it's warm? Don't go fast. Yeah, go slow. And if you're overheated, you should reflect back on the decisions that got you there. In this case, you never have to end up there. So there you go. This is a little MDP. It's got a very small number of states, but you can imagine the games can get very long, because there's no real end to this game unless you make a mistake. That's an example of another MDP. All right. So let's think about what to do when the car is cool. I just asked you, and apparently you can just ask CS 188, but if I wanted to do that computationally, how would I figure out what the right action is? You've already got a tool: expectimax search. I can think, all right, if I want to decide what to do from the state cool, I can think out the possible futures just like we did with games last week. And if I think out the possible futures, I'll think, well, I can go fast, or I can go slow. That's going to be under my control, but both are possible. And if I go slow, only one outcome can happen from my taking that action, and that is I end up back in the state I started in. If I go fast, there are two outcomes to that action. I'll either stay cool, or I'll heat up. I don't know which will happen, but I know they both can happen, and I know the probabilities associated with them. Now, you might wonder, what if I don't know the probabilities? I would say, come back next week for reinforcement learning, where we'll learn the probabilities. For now, we know them. And then I can just kind of keep extending this into the future, and I can see this tree, which will look exactly like an expectimax tree.
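For reference before building out that tree: this little MDP is small enough to write out in full as data. Here it is as a Python dictionary of (next state, probability, reward) triples, following the numbers just described; the representation itself is just one convenient choice.

```python
# The racing car MDP as data: for each (state, action), a list of
# (s_prime, probability, reward) triples, exactly as described above.

CAR_MDP = {
    ('cool', 'slow'): [('cool', 1.0, 1)],
    ('cool', 'fast'): [('cool', 0.5, 2), ('warm', 0.5, 2)],
    ('warm', 'slow'): [('cool', 0.5, 1), ('warm', 0.5, 1)],
    ('warm', 'fast'): [('overheated', 1.0, -10)],
    # 'overheated' is terminal: no actions and no further rewards.
}
```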
There will be an alternation between actions that I can choose amongst and then chance nodes, which describe the possible outcomes-- I know a distribution over them, but I can't force an outcome. It's not going to be as bad as the worst case; there's just a distribution. So here's a search tree that describes that. One thing should already strike you about this search tree. Anybody notice anything that didn't really happen in expectimax? One thing is it's the same three states over and over and over and over again. So doing some exponential amount of computation over these three states just can't be the right thing. Because these same subtrees up here-- this tree here is exactly the same as this one here-- we're going to want to be careful of that. In general, whenever you have a Markov decision problem, you can think about there being a tree like this projected from each state. So from each state, there is a tree of futures that has that state at the root and then the actions available to you branching out. When we talk about s, s is a state. The actions are a-- those are the actions available to you-- and we've had those for a while. Now, here's where we get to something that's new and a little bit counterintuitive but super useful. What are these chance nodes from expectimax? What do they represent? Well, s is a state of the world, and a is an action you choose. And then after you choose an action, it's going to resolve to some state, s prime, and there are going to be multiple possible outcomes, which are governed by a probability distribution. But what about this node right here? I was in a state. There could have been other states, but I was in that state. I chose an action. It may or may not have been a smart action, but I've chosen it, but I don't yet know what happened as a result. Being in this situation, where you're in a state and you've committed to an action but it has not resolved, is called a Q state. And it represents having chosen an action but not seen its resolution. This whole sequence here, this little tree, (s, a, s prime), is called a transition, and on this arc right here, when your Q state (s, a) resolves to a particular result, s prime, that's a transition. It's labeled with a probability from your MDP as well as a reward. And here that reward is shown as a gem, but the reward can be negative-- for example, the fire pit and the cliff. Both the fire pit and the living reward are negative. So this little search tree in this slide is the search tree for an MDP. Everything we do-- every algorithm, every quantity-- it all comes back to this search tree, where you have a state, a set of actions that you maximize over, and for each action, a set of outcomes, which you have a probability distribution over. We'll talk about solutions in a bit. Let's take a step back and try to figure out how an agent should think about a sequence of utilities. Because another thing that happens in an MDP, as you saw when we were in Grid World, is that rewards trickle in step by step. So we need to be able to think about our preferences over these sequences. So what preferences should an agent have over a sequence of rewards? First, should an agent like more rewards or fewer rewards? So for example, if I give an agent a choice between a sequence of three rewards-- 1, 2, and 2-- or three rewards-- 2, 3, and 4-- which one should it pick? Well, I guess I have to first define whether positive is good or not.
In this case, we've been talking about them as rewards, so it seems pretty reasonable that the agent should prefer to get more rewards rather than less. Great. If that's all I wanted to accomplish, I would just add up the rewards, and I would say, hey, agent, go act in such a way as to maximize the sum of rewards. But there's one other thing that turns out to be really important in MDPs, and it has to do with a choice between now and later. So for example, here are two sequences of rewards. One is a 0 reward, followed by a 0 reward, followed by a reward of 1. The other is a reward of 1, followed by 0, followed by 0. Same sum of rewards. What should the agent prefer, the reward now or the reward later? Or should it be indifferent? How many people say reward now? Give me rewards. How many people want their reward later? How many people are indifferent? So it is totally reasonable to be indifferent. Sometimes that's an appropriate formulation. But in most cases-- and most people raised their hand for rewards now-- it is also reasonable to prefer rewards sooner. And so how are we going to handle that? Adding up the rewards doesn't actually capture the fact that rewards soon are more useful than rewards later. For example, if somebody comes up to you and says, hey, would you like $100 now or in 20 years? Will you be indifferent? Of course not; it's easy to see that you might say, well, I'd rather have that reward now. But what if they say, would you like $100 now or $110 in 20 years? You probably still want the reward now. And so that means there's going to be an implicit tradeoff between having things soon and having things be big. So it's reasonable to maximize the sum of your rewards. In fact, that's what we've been doing so far with Grid World. It's also reasonable to prefer rewards now to rewards later. One solution to capturing these different reasonable utility functions is to say that the values of rewards are going to decay in an exponential way. So for example, here's a reward, and if you get it now, it might actually be worth 1 right now. But if you think about that same reward being in the future, it could be worth the same-- then you would just be summing rewards-- or we can penalize it. A way that's very convenient, formally and mathematically, to penalize things is to say that in the next time step, it won't be worth 1 anymore. It will be worth less. It will be discounted. It will be worth some amount, gamma, which we can pick. Gamma has to be greater than 0 to make sense, and it has to be less than or equal to 1 if you want things to be worth less later, but there are different points you could set in between. And then in two time steps, it will be worth gamma squared. And so the value of your reward will decay exponentially, and the rate of that decay is something that is actually an input to the problem, rather than something you can derive. So two questions. One is, how do we discount, and the second is, why do we discount. How we discount is actually the easier of those questions. So remember, we think about there being this tree of possible futures, where every time you take an action, the world responds by telling you what the outcome was. So here you are at s. You took action a. You landed in s prime, and at that point, there's this little call which is, hey, I was at s, I took action a, I landed in s prime, so here you go. You can have your reward, R(s, a, s prime). And then you get another reward here.
You get another reward and another reward here. And what you can do, conceptually, is have each of those rewards get hit with an extra factor of gamma. Say, if you are running an expectimax search, you could just set it up so that each time you recurse to the next level of the tree and return a result, you hit it with a factor of gamma, so that each level down picks up an extra factor of gamma. This has some really interesting properties, like things way, way in the future eventually just don't matter very much, because they've been hit with so many factors of gamma that they're just not worth very much to your computation. So why should we discount? One reason is that sooner rewards probably really do have higher utility than later rewards. And sometimes this has to do with the mechanics of the problem itself-- like maybe this is money and you can invest it, or something like that. And there is actually a story where the mechanism is something exponential, something decaying in value. But it also helps our algorithms converge, and there's another reason why these are very convenient, which I'll talk about in a second. So that's discounting. In general, in MDPs, we say either we sum the rewards, or we sum them up with a discount that hits each time step so that things farther out into the future are worth exponentially less. And if you think about it, these are actually the exact same thing, because if you use gamma equal to 1, you'll end up with the undiscounted case. So if you think about a discount of 0.5, that sequence 1, 2, 3 is not worth 1 plus 2 plus 3. So 1, 2, and 3 are the rewards. The utility of the sequence could be the sum of the rewards, or it could be the sum of the discounted rewards, which is, in general, what we'll do. And here, that means it's 1 times 1, because it hasn't been discounted yet, plus 2 at the second time step, but it gets hit with a discount, in this case 0.5. And then on the third time step, you get a 3, but it's been hit with that discount twice, so it's only worth a quarter of what it would have been worth at the current time step. And so the sum here isn't 6; it's 1 plus 1 plus 0.75, which is 2.75-- much less. And that 3, even though it's the largest number, is actually worth less than the 1 at the first time step. So if you look at this, among other things, this means that 1, 2, 3 is not as good in utility as 3, 2, 1. So when we specify an MDP, we have to specify a discount, which tells us how to turn the sequence of rewards into a utility over the sequence. It's either going to be the sum or the discounted sum. All right. Remember when we talked about utilities, we talked about there being, even before you have utilities-- which is going around and assigning numbers to outcomes-- something more primitive, which is preferences. I like two scoops of ice cream more than one scoop of ice cream. And then I can go around saying how many utiles each scoop is, or something like that. We talked about there being a set of rational preferences. These are preferences that it seemed like any reasonable set of utilities should respect, and then we could derive things like: if your preferences obey these rational constraints, then they can be formulated in a certain way as a utility function. There's an equivalent notion here for sequences, and it's a notion of stationary preferences. So there is this theorem that says, let's imagine that your preferences are stationary. That means you prefer the sequence of rewards A1, A2, A3, A4, and so on, to this other sequence B-- B1, B2, B3.
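To put both ideas in symbols (standard notation, not copied from the slides): the discounted utility of a reward sequence is

$$U([r_0, r_1, r_2, \ldots]) = r_0 + \gamma r_1 + \gamma^2 r_2 + \cdots = \sum_{t \geq 0} \gamma^t r_t, \qquad 0 < \gamma \leq 1,$$

and stationarity of preferences says that prepending the same reward to two sequences does not change which one is preferred:

$$[a_1, a_2, \ldots] \succ [b_1, b_2, \ldots] \;\Longrightarrow\; [r, a_1, a_2, \ldots] \succ [r, b_1, b_2, \ldots].$$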
So you like the A sequence better than the B sequence. Great. Who cares why. You like A better than B. If I tell you, hey, how about for the first time step, I'll give you the same reward, R, and then I'm going to give you A or B. Well, if you liked A better than B before, and your preferences are stationary, it means you will also like A better than B shifted one time step, if that first time step has the same reward. It seems reasonable to assume that preferences are stationary. If preferences are stationary, it turns out there are only two ways to define utilities-- which, as we talked about, are basically the same thing-- which is you either sum the instantaneous rewards at each time step, or you sum them with a discount. And you might think that's a little weird. Maybe my preferences aren't stationary. And sometimes they're not. Because for example, if you are playing a game, like, say, Life, where you don't have an infinite number of time steps, it could be that by pushing the rewards out, something just comes too late, something falls off the edge, or something like that. And so when things have finite horizons, sometimes things get trickier and have to be handled in different ways. All right. Let's do a quick quiz. First, any questions before the quiz? Any questions? Yep. STUDENT: Is there a reason that our discounts are exponential rather than subtractive at each step? PROFESSOR: It's a great question. The question is, why are we doing exponential rather than subtractive discounts. So let me separate two things. Very important. There are two things going on in a Grid World-- not in MDPs in general, in a Grid World: robots, north, south, east, west, and so on. There is a living reward, which might be a penalty, at each time step. That is an R. That's a reward. That has nothing to do with a discount. Then there is a discount. So far we have been talking about Grid World as having a discount of 1. It was undiscounted so far. So you can always penalize each time step as part of the reward structure. But mathematically, when you talk about sequences being converted into a single number, that's when we use the exponential formulation. And the reason we do that is, one, it's mathematically very convenient. It helps us prove that things converge. It reflects a lot of real-world things; in many cases, by pushing something out in time-- for example, with interest on money-- things really are exponential. And then beyond that, there's also this theory about stationary preferences, which only holds for the exponential case and not for the additive case. But it's a great question, and sometimes we do actually, like in Grid World, want to have something additive going on where every time step costs the same. And we can do that too. OK. Yep. STUDENT: So this is a sequence of rewards, right? PROFESSOR: I'm sorry, could you ask it again? STUDENT: It's a sequence of rewards. So how can I know the sequence if the future is undetermined? PROFESSOR: So the question is, how can I know the sequence of rewards when the future is undetermined. And before you act, you don't know what sequence of rewards you will get, but what these algorithms do is they plan-- I could take this action, I could take that action. And based on the actions I take, and the actions I might then take later, I have to hedge with all of those possible outcomes and probabilities in the appropriate way.
So we need to be able to compute, OK, if I have this reward, or I have that reward, how would I combine them and how would I think about them. But it's true. You don't actually know which future you're going to get until you actually play the game. So solving an MDP happens offline. You don't actually play it. You consider the probabilities, and you make your policy. You can then go actually execute the policy, and then you're going to get one particular experience. And that's going to be analogous to what we do in reinforcement learning, where we just play the game and something happens, and the next time we play it, something else might happen. And then we'll be in a position of having to learn. Right now we know the rules of the game, and so even without actually playing it, we can solve it in a way that hedges all of those various probabilities appropriately. Great questions. Let's do a quiz on discounting. So this is a Grid World. You can go east, west, and exit. You can only exit at A and E. It's deterministic, so that if you go east, you'll actually go east. And if you exit at A, you get 10. If you exit at E, you get 1. If gamma is 1, what is the optimal policy? So what should I do, say, at B here? You should go this way. Now, it starts to get tricky. What should I do in the center? I should go left. All right. Now if gamma is 1, what should I do over here at D? I should go left, because what sequence of rewards will I get? I'll get 0, 0, 0. And then at the end, it's been pushed farther out, but I'll eventually get that 10, and that's fine because I'm not actually discounting it. However, if gamma is 0.1, I'll give you the easy one. I should go here. I'll give you another easy one in the center. It's going to be the same amount of time either way, but here's the interesting case: what happens here at D if gamma is 0.1? Well, if I go this way, what am I going to get? I'm not going to get 1, because it's going to be pushed out a time step. If I go this way, I'll get 0.1. What if I go the other way? I'm pushed out one, two, three time steps, so I'm going to get 10 times 0.001, which is 0.01, and that means it's better to just go right. Because I'd rather get that 1. By the time I get to the 10, it will be so decayed I don't even want it. So in this case, sooner is more important than more. And that tradeoff is encapsulated in gamma, which, among other things, specifies in a soft way what's often informally called the horizon of the agent-- how far out I'm thinking. Now, in an MDP, you're always thinking all the way out, but in a way that tails off. And the faster that discount tails off, the more greedy you are going to be-- the more you'll pursue rewards sooner rather than larger rewards later. I think we'll just go from there. A couple more things, and then we'll take a break and talk about algorithms for solving these MDPs. All right. So let's say you're that race car, and you can go fast or you can go slow. And if you play your cards right, you're never going to actually overheat, so the game will last forever. You go fast, fast, slow, slow, fast, fast, fast. And what's your total reward going to be? Infinite. If you go slow all the time, what's it going to be? Infinite. If you go fast when you can and then slow when you have to? Infinite. And so that's not good. I mean, it's good if you like rewards, because they're infinite, but it's a problem for algorithms that are going to be trying to decide between different actions if all the quantities we're deciding between are just infinity.
So how do we handle the possibility that games could go on forever, which makes our algorithms difficult? Well, there are basically three solutions, and sometimes they're appropriate, and sometimes they're not. One solution is to talk about finite horizons. In this case, we terminate all our episodes after, say, something like t steps. Life is an example of this. We don't know what t is, but it's not infinity. And so in this case, you say, OK, car, you've got 100 moves, go. Do your best. Having a finite horizon can give rise to non-stationary policies, where the optimal action depends on the time left. At first thought, that might be weird. You're in some state. Why does what you should do depend on how much time is left? And the answer is, you've seen this in, like, every sports game you've ever seen. Some crazy stuff happens at the end right before the timer runs out. With these cars, you might risk overheating at the last second, because who cares. The game is going to end anyway. And so especially as you get close to the end of a finite horizon game, sometimes the policies can change greatly. But that's one way to make things finite. You just declare the end. Another way is to just use discounting. The nice thing about discounting is that even though the sequences of rewards can be infinite and their sums can be infinite, their discounted sums usually are not. So if gamma is between 0 and 1 and the rewards themselves are bounded, the sum, through some infinite series magic-- it's a geometric series, bounded by the maximum reward divided by 1 minus gamma-- is going to be bounded. And then the last thing is some games have what's called an absorbing state, which is that you look at it and you say, OK, I can actually see that the games can go on forever, but with probability 1, it's going to stop. So the probability that you keep lucking out and not having the game end tails off to 0. That's good enough in many cases. All right. We're going to recap, we're going to break, and then I'm going to give you some algorithms. So to recap: Markov decision processes. They're like nondeterministic search problems. They've got a set of states, just like Search, including a start state. They have a set of actions, just like Search, except now instead of the action taking you from an s to an s prime, it can take you to multiple s primes. So we have a transition function, which tells you for each s prime how likely it is, possibly 0 if it's not going to happen. And for each transition, (s, a, s prime), there's a reward that tells you how many points you get on that transition right then, and then a discount, which tells you how to add up a sequence in the appropriate way by multiplying in a discount each time. The quantities that we've talked about so far: we've talked about policies, which are mappings of states to actions, and utilities, which are sums of discounted rewards across the entire game of the MDP. We're going to take a quick break, and then we're going to talk about how to solve them, which will lead into reinforcement learning after the next lecture. Two-minute break, quick break, and then we'll start again. All right. I don't know if that was actually two minutes. It felt like two minutes. It was about two minutes. All right. We're talking about solving MDPs. How do you solve an MDP? The way you solve an MDP-- the input, this is really important, because in reinforcement learning, it's going to look so similar, and it's going to be so different. The input, when you solve an MDP, is an MDP.
That means somebody tells you, hey, here are the states, here are the actions, here are the transition probabilities, here are the rewards, and then your output is a policy: a mapping from each state to the optimal action. In order to do that, we need to define a whole bunch of quantities that are going to be useful in this computation, and that are actually going to be very parallel to things that you computed with expectimax search. We're going to have to define them, and then we're going to have to mathematically state how they relate to each other, which will then turn into algorithms for computing them. So here are some important quantities associated with an MDP. We'll show this for Grid World first. The first is the value of a state, and this is going to turn out to be both the utility of being in that state and acting optimally, and it's also going to correspond to what expectimax would have produced. So values are written v; there is a value for each state. v is a value; v* is the optimal value. That means the value you will achieve, on average-- because it's going to be an expected value-- if you act optimally starting from state s. So v*(s) is the expected utility of starting in s and acting optimally. Some states will have higher values than others, because some states are better to be in than others. It's much better to be right next to the goal than right next to the fire pit. Remember, there are also Q states. s is a state. It corresponds to a max node in expectimax, and it has a value. That value is what you would get if you acted optimally from that point. A Q state corresponds to a chance node. In the same sense that there are good states and bad states, you just have to compute the value of each particular state. Like, you're in this state; you could wish you were in another state, but the state has whatever value it has. Q states have values. You're in the state, and you're performing this action. The value of that Q state, Q*(s, a), is the expected utility of starting out having taken action a from state s and acting optimally thereafter. In the same way that there are some states that are good and some states that are bad, there are some Q states that are good, and there are some Q states that are bad. Their values have to do with what will happen if, from that state or Q state, you act optimally in the future. There is then an optimal policy. Pi is a policy. Pi* is an optimal policy. Pi*(s) is the action recommended by the optimal policy from state s. And it will be something like: in this state, go north. So let's take a look at these in Grid World. All right. So here is Grid World. This is a particular setting. To know these particular values, you need to know what was the noise probability, what was the living reward, what was the discount, and so on. But qualitatively here, you can see that if you are in this state, your value is 1, because you only have one possible future. You take exit, you exit, you get plus 1. If you're in the pit, the value is negative 1. You only have one future: exit, minus 1. Everywhere else, these values represent optimal play from that point, and all the possible outcomes have various probabilities. This is an average, an expectation. We'll figure out how to compute that expectation, but from here, this means that on average, sometimes you'll accidentally slip and fall in-- this 0.85 includes both going straight into the good exit and exiting. It also includes slipping here, trying to get back, but slipping again and then falling into the fire, and so on.
And these arrows-- you can see the numbers are the sums of discounted rewards, including the living reward and the final reward, under the optimal policy. The optimal policy is shown by the arrows. So those are the values of the states. And so you look: this state is really good, because you're just going to get a good reward. This state is pretty good, because unless you get really unlucky, you're going to get a good reward pretty soon. The state down here in the lower left, the one we had sometimes been calling the start state, its value is only about 0.5. Why? Well, by the time you make your way up, some combination of living reward and probability of falling into the pit and discount have whittled away that plus 1 and averaged in a whole bunch of other possible outcomes, and so your state's value is lower. It is better to be in this state than in that state. But the whole point of computing these values is not to notice that this is a bad state, but rather to figure out what is the optimal thing to do if you're there. So those are values. Every state has a number, and the farther you are from the goal in Grid World, in general, the worse your value is going to be. What is that? These are the Q values. So each square is a state, and in each square there are four actions: north, south, east, and west. So each of these little pie slices represents an action. The top pie slice is the north action. And so for example, you can see that that 0.85 from the state up here corresponds to taking the action east. So the value of the Q state (3, 3) with action east is 0.85. There are other Q values for that state. In the same way that states can be good states or bad states, and you'd rather, if you can control it, be in the good states, there are good actions and bad actions. And if you can control it, you'd rather take the good actions. And so if you go south here, you're going to end up in a worse state, and then even if you act optimally thereafter, you're not going to get that same 0.85. And so you can see the Q value here is lower than it is for the other actions branching out of that state. And I just had a little episode. So those are the values. The fundamental operation in solving MDPs is to compute the value-- and here we mean the expectimax, or expected, value of a state. So we need to know the expected utility under optimal actions, and that's going to be an average sum of discounted rewards. Expectimax computed that, but right now we're going to try to find another algorithm, and in order to do that, we're going to write down, recursively, some properties that hold between these values that will let us formulate an algorithm. So let's write this in blue. The value of a state. So let's talk about the optimal value of some state, s. We're always going to think about this little expectimax future. How good is it to be at some state s? Well, I don't know, but I can relate it to some other quantities. I can relate it to how good other states and Q states are. So in particular, I could write down that the value of being in state s is just the maximum, over all of the actions I can take from that state, of the optimal Q value of (s, a). That just says that the expectimax value of this max node is the max of the Q values of those Q states. All right. Well, that doesn't really help, because I don't know Q* either. So what is that value? You can think about it as what I would get if I ran expectimax from this chance node, (s, a). I can write that down too. So I can go-- change colors for the chance node here.
The expectimax value of being in state s and having chosen a-- regardless of whether a was, in fact, a good action; you have to compute this for all of them. Well, I don't know why I got into state s, and I don't know why I took it upon myself to choose a, but I do know what's going to happen next. The average outcome I'm going to get always breaks down into: I'm going to get a reward from state s for action a. That reward is going to depend on the s prime I land in. So I get a reward for the transition. Who knows which s prime it is, but I'm going to get a reward. And then when I land in s prime-- that's down here-- I'm going to then act optimally. So what is that? Acting optimally from state s prime is going to give me a score of v*(s prime) in the future. So my score from (s, a) is the instantaneous reward R(s, a, s prime) plus the future reward v*(s prime). Two more things. One, that future is discounted, so I stick a gamma there. The second thing is I don't know which s prime is going to happen. I'm taking a, but I don't know what outcome s prime I'm going to get. I have to average over them. The way I average over them is I sum them all up, weighted by the transition probabilities T(s, a, s prime), and this thing here is the probability of s prime given s, a. So here's a system of equations. People often inline these things. So you will usually see these things inlined. When you see them inlined, it says: what is the expectimax value of a state? Well, it is the instantaneous reward that you will get in the next time step plus the future reward from the successor state. The future is discounted. I have to average over all of the possible consequences of my action, so I have to sum over s primes. I have to take the weighted sum of them, and then, of course, a is under my control, so I take whichever action is best. This defines a system of equations that relate v* of states to v* of other states. And you look at that and you say, how does that help me? Because now I've written something I don't know in terms of something else I don't know. Moreover, because of that max, this is not a linear system of equations. So you can't just invert some matrix or something like that. So we're going to talk about how to solve these equations, but this is the core system of equations. These are called the Bellman equations. These are the core recursive definitions of values and Q values of states. These are really, really good things to stare at until you are totally, totally sure you get what they're saying, because you will see variations of these over and over and over again. So you can stare at them. I'm going to erase them and replace them with beautiful LaTeX so you can see them on your slides, because no one should have to read my handwriting. All right. So that was a recursive definition of these values in terms of other optimal quantities. I can erase that. I can define the optimal value of a state in terms of the optimal values of Q states. I can define the optimal value of Q states in terms of the optimal values of states, using these lookahead equations that relate values of states to values of states one step ahead in time. And again, these are called one-step lookahead equations, or Bellman equations. Let's look at that racing search tree again. So if we look at this, well, I could run expectimax on this, but when I look at it, I think maybe there is a better algorithm, because those same states seem to appear over and over again.
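Rendered in LaTeX, the system just described (this is the standard form of the Bellman equations):

$$V^*(s) = \max_{a} Q^*(s, a)$$

$$Q^*(s, a) = \sum_{s'} T(s, a, s')\left[R(s, a, s') + \gamma V^*(s')\right]$$

and, inlined into a single equation:

$$V^*(s) = \max_{a} \sum_{s'} T(s, a, s')\left[R(s, a, s') + \gamma V^*(s')\right].$$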
And as we know, their optimal values are all interrelated by the system of equations. So this is the search tree to depth 1. If I continue it forward, that tree is going to grow. It's going to grow rapidly. It's going to grow exponentially fast. And if you look at that, you think it can't possibly be right to run expectimax over this whole giant tree that's growing exponentially fast but, in fact, only has three states occurring over and over again. It's a tiny little system of equations. There should be a way to more efficiently just directly connect all these quantities and compute them. And that's what we're going to talk about now: an algorithm for this. So why should we not just run expectimax? Well, problem one, we're doing way too much work. Among other things, this whole subtree is exactly the same as this whole subtree. So, at a minimum, there is duplicated work. And in fact, there's exponentially duplicated work. That's one problem-- the repeated-states problem. The second problem: this tree is actually infinite, and running expectimax on an infinite tree is a bad idea. And can you fix these problems? Right now, if I stopped the lecture, you could fix all these problems. You could say, all right, well, look, why don't we cache this stuff. Once I compute something, why don't I cache that. And actually, instead of doing an infinite tree, didn't we talk about how things deep in the tree, if they're discounted, don't matter? I'll just truncate this at depth 100. Well, actually, if you put these ideas together, you will have more or less reinvented the algorithm we're going to talk about now, which is an algorithm called value iteration. But the algorithm we have works in the other direction. Rather than starting at the top, doing things recursively, caching as we go, and trying to depth-limit as we go, the algorithm we're going to have is going to start at the bottom and work our way upward until we decide that the tree is deep enough. The key idea we're going to need in order for us to have this algorithm, value iteration, is the idea of time-limited values. And so what we're going to do is step back from v*. v* was a very tricky quantity: it is the value of being in a state and acting optimally, and that's very tricky because it could go on forever. It could be an infinite-depth thing. We're going to define something much more tractable, which is v sub k of a state. v sub k is the expected value of starting in that state and playing optimally, if the game is going to end in k more time steps. Why is this useful? This is really useful. Now, suddenly that tree is no longer infinite, and this quantity is exactly what a depth-k expectimax would give if you started at s, so we already have an algorithm. And in fact, if we save repeated work, we'll have an even better algorithm in a second. So we take a look at this tree, and we say, well, looking at this, there is a computation from, say, state blue of what can happen in two time steps. This is that tree. And if we run expectimax over that, it will compute the average rewards I will get if I act optimally-- I do the right thing at all the max nodes-- and the game ends after 2. So I can just think of this little tree as v2. The root is at depth 2, and it has some value. And if I see this tree occur 100 times, I can just keep plugging that value in, and it represents the value of being in state blue, which was cool, and acting optimally for two time steps.
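One standard way to write the time-limited value in symbols (my notation, matching the description above): the best expected discounted sum of the next k rewards,

$$V_k(s) = \max_{\pi} \; \mathbb{E}\left[\sum_{t=0}^{k-1} \gamma^t r_t \;\middle|\; s_0 = s,\; \pi\right], \qquad V_0(s) = 0 \text{ for every } s.$$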
So let's take a look at how this looks in Grid World. All right. So here is our favorite example of the grid with a good and a bad exit. v0 is very important to understand, and it's also very simple. If I have 0 time steps-- what's a time step? It's a reward. 0 means 0 more rewards. What value will I get under optimal play from the good exit? 0. No time to get a reward. Bad exit, 0. Lower left corner, 0. Everything is 0, because there are 0 more rewards, so my future score from that state is 0. Easy but not helpful. But now I can ask, what about if I allow myself one more reward before I end the game? Well, some of these states will no longer have 0 optimal value. Like the upper right: what can I do in one time step? Well, there's only one thing I can do in one step, which is to call exit, and I get plus 1. How about the fire pit? What can I do in one time step? There's only one choice according to the rules of the game: take exit and get minus 1. Anywhere else, what can I do in one time step? I can move and get the living reward. If the living reward is 0, these are the values for one time step. So this is v sub 1. These are the values achievable under optimal play, on average, for one more time step. Now what if I had two time steps? Think about it. Some of these places, like the lower left, there's just nowhere you can get in two time steps that's going to give you any reward other than 0, presuming the living reward is 0. But here's v sub 2. So there are two interesting things that happened. One, the square next to the upper right exit here became 0.72. Why? What does that represent? Well, what can happen in two time steps? I can move east, succeed, and take the exit and get plus 1. That has a certain probability. I can also move east, fail, slip, go in the wrong direction, try to move back, and run out of time. That's going to have value 0. And so that 0.72 represents a mixture of things where I didn't quite make it to an exit and the one case where I just barely made it into the exit in time, and when you average that all together, you get 0.72. The more interesting one is the 0 right below it, the one right next to the fire pit. Why is that 0? Because I can exit the game in two time steps. I could jump into the pit and get a minus 1, but that's not optimal play. So under optimal play, I will avoid that. I will take some other sequence of actions, which does not include that, and I can basically guarantee myself an average of 0, where I can make sure I don't actually fall in. And then as I look farther into the future-- four, five, six-- you can see that eventually I'm looking far enough into the future that even from the lower left, I can start to see ways of eventually getting to a good reward. Now, I can't see all of them, because some of the ways of getting to that good reward are going to take more than six or more than seven or more than eight time steps, because they involve slipping or some other thing that's going to slow me down. But as I do this and I push this out, eventually, this will stop. Looking farther and farther into the future eventually will not contribute much if there's a discount. I'm going to show you one more thing on this. I'll show that same thing again. I want you to watch not the values, because the green will bleed outward; I want you to watch the arrows. So at first they change a lot. But once I get to here, watch the arrows. The numbers are still fine-tuning, but after a certain point, the arrows are done flipping.
And this is a common thing: as I look deeper and deeper, I have these residual rewards that I can keep accumulating forever. It's like computing the limit of a series. I keep accumulating, but the actual policies implied by that depth of search will eventually stabilize, and that often happens long before the values themselves do. So this actually gives us all we need to define the core algorithm here, which we'll define now and then extend and pick up next time. And that is: I look at this tree of all the possible futures and ways this game can play out, which includes the same repeating units over and over again. And I look at the bottom, and I say, what's down there? What are these little expectimax fragments? Well, they're all depth-0 trees. They don't have any rewards in them. So I know I can summarize this exponentially large number of leaves with just the terminal values, v0, which we know are all 0. And then if I look above that, there are a bunch of repeated subtrees. These are the v1 trees, over and over and over and over again. So I can summarize that whole stripe across this tree by just the values of each state, which I can then reuse and reuse and reuse. And then I can go to v2, v3, and now at the top, there's not actually that much tree, but that value at the top is v4, if I consider this tree to truncate after four more rewards. So this is basically how this algorithm is going to flow. I'm going to start with v0, which I know to be 0, and I'm going to go through and compute a new vector of values. Each pass through this, I will be computing a vector which corresponds to a depth-limited expectimax computation. And each time, the new vector that gets produced for k plus 1 will represent having done expectimax in a larger and larger tree without having to have done it explicitly. Eventually these vectors will stop changing very much, and this process will terminate. So value iteration is the algorithm. It's basically like building a big layer cake, where first I decide what I could accomplish in zero time steps. If I know that, what can I accomplish in one? If I know that, what can I accomplish in two? And I stop whenever it appears to have converged. All right. So here's value iteration, the algorithm. We're going to go through this now, and we will extend and improve it next time. In value iteration, you start with v0; that is, for each state, you compute your average rewards under optimal play for zero more time steps. So it's just a vector of zeros. Then you imagine you are given a vector of vk's, one for each state. This only works when the number of states is manageable and can be enumerated. And so for each state, you have vk. You know what happens if you compute a k-time-step search from each state. And now, using that as a building block, we're going to compute a vk plus 1 value for each. And that means the value-- the average rewards under optimal play-- for k plus 1 more time steps. And that's just the same little expectimax tree fragment. We say, well, if I want to know what I can achieve on average in k plus 1 steps from s, what am I going to do? Well, I'm going to max over the actions available to me. In my state, I can choose an action. When that happens, I'm going to be in a Q state (s, a), but I need to know what s prime is going to happen before I can compute rewards. I don't know which s prime is going to happen, but I know a distribution over them. So I can take an average.
So I sum over all of them. I weigh each option, S prime, by its probability, and now for each outcome, S prime, I'm going to get an instantaneous reward, R of S, A, S prime. And then, well, I'll have k time steps left. How much score will I accumulate over those k time steps? Well, it depends where I land. I landed in S prime, and then I'm going to play optimally but with fewer time steps. So it's going to be v sub k, because that's all the time that's left, for state S prime where I landed, and I can stick my discount in there. This looks exactly like the Bellman equations that related optimal values to other optimal values, except that wasn't an algorithm. This is an algorithm. It relates vk plus 1 to vk, but I know vk. If I have a vector of vk's, I can then visit each state and compute its vk plus 1 with a one-step lookahead, like a one-ply expectimax search, using this update. So how expensive is this? Well, I have to visit each S, so I get a factor of S. And then for each S, I'm going to do a little expectimax, so I'm going to get a factor of A as I consider each action. And then for each action, I need to consider every possible state that results. In the worst case, every state can result in every other state, so I'll get another factor of S. In practice, you don't usually get that second factor of S, because actions have a limited number of possible successors, S prime. But there is your complexity. Is that good? It really, really depends on how many states you have. So for now we're going to imagine our sets of states as small, which is exactly the opposite of what we did when we did search. We imagined our set of states was so huge you could never enumerate it. Well, we will bridge the distance between these two things when we get to function approximation about a week from today. So this thing will converge towards optimal values, but I'll prove that next time. So should we do an example now? You guys are all closing your books, so-- who wants an example? Let's do it democratically. Who wants to just be free? All right. Sounds like we're going to do a quick example. All right, v0. What's v0 from each state? Remember the car MDP? It's 0. Because in zero time steps, you can get 0 rewards. All right, v1. You don't even need to know the algorithm. You can just think, what will I accomplish on average if I play for one more time step? What if I'm overheated? 0. No more rewards for you. You're overheated. Reflect on your life choices. All right. What if you're in the cool state? You have only one time step left. Go for it. Go fast. You're going to get 2. Sometimes. All the time. Who knows what state you'll end up in, but you'll always get the same reward. Yep. STUDENT: Wouldn't you get a minus 10 from being overheated? PROFESSOR: Excellent question. If you are in the state overheated, no, you don't get a minus 10. You get a minus 10 as you transition into that state. In that state, you can imagine there's like a little-- oh, that's unexpected. You can imagine there's a little loop here that says 0. Yeah, that's a great question. So if you're in cool, you get 2. If you are in the red warm state, what can you accomplish in one time step? Well, you've only got two choices. You're either going to go fast or you're going to go slow. And then you can compute. Well, if I went fast, I would be guaranteed a minus 10. If I went slow, I would be guaranteed a 1. That sounds better, and so that represents optimal play. The other one doesn't hurt you because you're not going to do it.
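Before walking through v2 by hand, here is a minimal Python sketch of this whole procedure, applied to the car MDP as I've reconstructed it from the lecture. The transition table below is my reading of the slide, not official course code, and gamma is 1 because no discount is used in this example. Running it reproduces the v1 values just computed and the v2 values worked through next.

```python
# Racing-car MDP, reconstructed from the lecture (assumed transition table):
#   (state, action) -> list of (next_state, probability, reward) triples.
RACING = {
    ('cool', 'slow'): [('cool', 1.0, 1.0)],
    ('cool', 'fast'): [('cool', 0.5, 2.0), ('warm', 0.5, 2.0)],
    ('warm', 'slow'): [('cool', 0.5, 1.0), ('warm', 0.5, 1.0)],
    ('warm', 'fast'): [('overheated', 1.0, -10.0)],
}
GAMMA = 1.0  # no discount in this example, which is why the values never converge

def next_values(V):
    """One round of the update: v_{k+1}(s) = max_a sum_{s'} P * (R + gamma * v_k(s'))."""
    V_new = {'overheated': 0.0}  # terminal state: the game is over, no future rewards
    for s in ('cool', 'warm'):
        V_new[s] = max(
            sum(p * (r + GAMMA * V[s2]) for s2, p, r in RACING[(s, a)])
            for a in ('slow', 'fast')
        )
    return V_new

v = {'cool': 0.0, 'warm': 0.0, 'overheated': 0.0}  # v0: zero time steps, all zeros
for k in (1, 2):
    v = next_values(v)
    print('v%d =' % k, v)
# v1 = {'overheated': 0.0, 'cool': 2.0, 'warm': 1.0}
# v2 = {'overheated': 0.0, 'cool': 3.5, 'warm': 2.5}
```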
Here's where things get interesting. From overheated, you still have 0. The game is over at that point. So let's say you are in the cool state, and you think, OK, I've got two time steps left. I could try to enumerate all of the length 2 futures. There are some number of them. I could try to enumerate them all, but that's not what this algorithm does. This algorithm says, what am I going to do? Well, I'm in this cool state, so I'm at cool, and I've got two choices. I can go slow, or I can go fast. What happens if from cool, I go slow? Well, cool, I go slow. If I go slow, I will receive 1 in the next step. I will be guaranteed to land at cool, and then I will do the optimal thing. v1. I don't have to think about it. Once I only have one time step left, I've got a cache right here, lower on the slide. And so what's that? I'll get 2 in the next step. Or I could go fast. If I go fast, what's going to happen? Well, remember I'm in cool right now. When I go fast, I am going to get two points for going fast, but what happens next depends on whether or not I heat up. If I heat up, which happens half the time, I will land in warm and thereafter receive 1. If I don't heat up, I will land in cool and thereafter receive 2, looking at my cache. And so I'm either going to get 2 plus 2, or I'm going to get 2 plus 1, both equally likely. And therefore, on average, my score here will be 3.5 under optimal play. I don't have to worry about what happens if I go slow, because I'm not going to do that. I can control that. But I do need to worry that when I go fast, I might heat up or I might not, and that's going to recurse into what I've already computed. And then if you look at this value here, you'll undergo the same computation. You'll look at it and you say, well, I'm either going to go fast, or I'm going to go slow. And in each case, you do the next ply of the expectimax. It grounds out in v1 that you've already computed, and you can try it on your own. You'll get 2.5. Now if I keep doing this, what are they going to converge to? They will not converge. This is without a discount. This is one of these defective cases where your rewards will go to infinity. We'll talk about what that means next time. So the answers are on the slide. Next time we'll talk about why this converges, and we'll talk about another class of algorithms. All right. Thanks, everyone. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180823_Introduction_to_Artificial_Intelligence.txt | PIETER ABBEEL: Hi, everyone. Welcome to the first lecture of CS 188 Artificial Intelligence. [APPLAUSE] So we have two instructors for the class. I am Pieter Abbeel. I'm a professor here at Berkeley. Aside from teaching, I do research in machine learning and robotics. In my very non-extensive spare time, I also try to play some tennis or go running. DAN KLEIN: Hi. I'm Dan Klein and I'm really loud. I'm Dan Klein and I'm a 14th-year professor here at Berkeley. And I work on natural language processing. And in my equally extensive free time, I like to do a lot of things. But I'm really into science fiction and video games, and all kinds of things I wish I had more time for, but we'll see a little bit of in this class. And we have a very small course staff here. I'm going to introduce them. And GSIs, if you could please wave your hands or jump up and down or do some webcast-appropriate activity, that would be great. We have Aditya, Alex, Daniel, David, Ignasi, Jasmine, Jonathan, Katie, Laura, Mitchell, Nikhil, Nikita, Noah, Ronghang, Thanard, Wilson, and Yi. So we're really looking forward to getting to know you and letting you get to know us. We're going to be your course staff and we're going to enjoy getting to do 188 with you this semester. I'll give it to Pieter for a little bit. PIETER ABBEEL: All right. So let's start with a few logistical things. There's a website, which went up last night. It's at this URL. Check it out. A lot of the information you need for the course is on that website. And if it looks like what you see on this slide, you're probably in the right place. Communication. There's a website for us to broadcast things to you. There are also announcements on Piazza, which will be our main forum for communication. If you have any questions, ask them on Piazza. You can ask them publicly if you think other students would benefit from the answer, too. You can ask them privately if you think it's something more appropriate for just the course staff. And if you think even that doesn't work, you can email us at the email address listed above. Course technology. We're going to use a few things. There's the website, there's Piazza, and there's also Gradescope. For Piazza and Gradescope, you need to make accounts. If you don't have accounts yet, it's important to make them so that you can participate in communication on Piazza and submit work on Gradescope. The course is also webcast. So if you miss a lecture or you want to rewatch this whole thing or something, you can do that online. This is orchestrated by campus, so we don't control exactly when these things come online, but usually it comes online relatively soon after lecture. We also have some edited videos. So a couple of years ago, we had some GSIs slice and dice lecture videos to cut out any quiet spots and turn a lecture that is normally an hour and 20 minutes into an hour and two minutes. And so we'll post those, too, in case you want to watch some of these things slightly faster. OK. Prereqs. Often it's been listed as CS 61A or CS 61B, and CS 70 or MATH 55. If you can parse that, you're probably in good shape. What we actually recommend is much simpler to parse-- 61A and 61B and 70. There'll be a lot of math, mostly in the second half of the course, and a bunch of programming. But keep in mind, this is not a programming class.
This is programming to put things into practice that you see in lecture, rather than learning to program like you would do in the 61 series. Work and grading. There are five programming projects. They're all in Python. You can either work in a group of just yourself or a group of two. You get five late days as a budget for the entire semester, but you can use at most two for any given project. So if anything comes up-- you're sick for a day or whatever, somebody needs your attention and you can't work on the project-- just use a late day, and no need to contact us for these. You use that budget whatever way you'd like to, but it's capped at five total. There are 11 homework assignments. Each homework will have an electronic part and a written part. Electronic, you work online. It's on Gradescope. You get to interactively solve these things. Gradescope will show you if you got it right or wrong. You can keep working on it till you get it right. You can work alone, you can work in discussion with others. But you have to submit your own work and your own understanding of that work. There's a written component. This will typically be on paper. You can do it on a computer or print it out and do it on paper. Then you'll upload it into Gradescope. The way you'll solve it is, you'll either work alone or in groups. But you still have to write up your own solution and submit your own work. Then there'll be a deadline a week later to self-assess your work on the previous homework. So you'll first work through it and then submit. And then a week later, another deadline to compare your solution with our solution, and you'll be asked to describe the differences. Written homework will be graded based on completion and effort. So if you put in the effort and complete it, and then also self-assess properly, you will get a full grade independent of exactly how accurately you solved everything in the first pass. There will be two midterms, one final. We're using a fixed scale. What this means is that in principle, you could all have an A or an A plus at the end of the semester. Historically speaking, that is not what happens. So don't count on that. But we're not going to curve you. If you happen to be this exceptional year where everybody is just as good as the A plus students of last year, no problem, then that's what it is. Participation can help on the margins. So we love for you to participate on Piazza, help out in discussion. And if at grade decision time we see somebody is close to the margin, we'll go check and see, OK, did they help other students in the class? And if so, maybe we'll bump them up just a little. Academic integrity policy. OK. Hopefully we don't have to say much about this and this is the only time this has to come up in this course. What it boils down to is, don't represent anybody else's work as if it's your own. OK? That's essentially it. You can go read more details on the website, but that's what it comes down to. There are also contests. So throughout the course, about three times, and then at the end of the semester, we'll have a final contest where you get to compete-- submit your AIs to compete with other people's AIs, and see how they fare. This competition will not determine grades. It is for glory, honor, and so forth, which you should try to always get. But it will not affect your grades directly. OK. So exams. There will be three exams total, two midterms, one final. These are the dates. These are the times. Put this in your calendar.
There is no alternative. That's it. Discussion sections. Right now, none of you are assigned to any particular discussion section. There are many, many, many discussion sections. You are welcome to attend any one of them that you like. You can go many times if you want. Maybe you go to all-- whatever-- 20 discussion sections. It's not recommended to do that, but you can. There is a survey later this week where you can annotate your preferences. And that will go out. Watch Piazza. Annotate your preferences. And that will help us balance who goes where. Because you'll see ahead of time what sections are going to be overcrowded or not, and can adjust accordingly. Bear with us. Usually the first two weeks these things are overcrowded. But this will be less so after a couple of weeks have passed by. For section, we'll also have a webcast. One of the GSIs will work through the problems that are covered in section, and then provide a webcast by the end of the week. No section this week. We'll start next week. OK. Textbooks. Who here has bought a textbook this semester? Raise of hands. OK. Still a good fraction of people buying textbooks. OK. Not too many, though. So there is no required textbook for the class. But if you'd like to have a textbook and you'd like to read more, this is the textbook we recommend. Keep in mind that we're not following exactly the way things are explained in this textbook, but it is a good book if you want to read more. Laptops in lecture. So one thing that can happen with laptops in lecture is that the students behind you are curious what's on your screen and it's distracting for them. And so if you love using laptops in lecture, try to sit a little bit to the sides or to the back so you're not a cause of distraction for the other students around you. It also turns out the laptop can distract the person using the laptop. But that's more up to you to decide what you want to do. OK. Let's recap the announcements for this week. You need to go check out the website. Register on Gradescope and Piazza. Homework zero is out. It's a math self-diagnostic. It's a way to check whether you're mathematically prepared for the second half of the course. OK? And so it's important to do now, because by the time we hit the second half, it's too late to decide whether or not you are prepared for the second half of the course. P0 is a Python tutorial to get you going with our autograders and just the general environment we'll be working in. There will be lab hours for this on Friday. Especially if you happen to have trouble getting your Python environment set up or you want to use the lab machines, go to those office hours and you'll be helped out. You don't need an instructional account, but if you want one, there are instructions on Piazza for how you can get one. Also important is that sections will be pretty loosely assigned, so stay tuned on that. We'll start next week. There is still a wait list. Have some patience. Hopefully this will sort itself out. We think it's likely everybody will make it in, but it's never sure. While you're still on the wait list, make sure you do all the work for the class. Otherwise, well, you'll have missed a lot of work and have missed a lot of points by the time you join. OK. So that's all for logistics. Any questions about logistics? Great. Then we want to do a quick thing about the culture we're interested in having in this class and how we see the class.
And not every class is the same in this way, so we'd like to take just a little bit of time to get this across. Any class has to do a combination of instruction and assessment. Instruction for people to learn, and assessment to then measure what people learn. Instruction, in our mind, means you grow knowledge, you get to collaborate, even if you still have to, of course, submit your own work, and you get to keep working till you're successful. And so that's just iterating, iterating, iterating on things. Assessment is very different. It's where you just get an opportunity to show what the current state of your mind is on this topic that we're teaching you, and we'll measure how good that is. You might be stopped before you're successful at solving everything. In our experience, these two don't mix very well. Something has to be either instruction or assessment. It can't really be both at the same time. And so in this class, lecture, section, office hours, Piazza, homework, projects, those are all instruction. They're all entities where you would be learning, get to discuss everything that's covered in them, and get to keep working till you've completed them. For projects, you'll know from the autograder whether you've completed them and have full grade or not. Same for electronic homework. For written homework, you get to make corrections the week after to whatever you didn't have right. So all of these are for you to work till completion and get full grade on if you put in the effort. Exams are the place where assessment happens. And so that's just what we need to do to still have some assessment in here. And you're on your own. So some historical statistics on the consequences of this kind of course structure. So the reason we like to do this is because this way, we can be in everything together, except for exams, where everybody has to do their own thing. But then historically what you might see happen is, homework and projects, you might work alone or together, iterate till you've learned it and nailed it. And this is what the histogram looks like for grades. So pretty much everybody nails it. And then we're four or five weeks into the semester, everybody's looking at fixed scale grading like, I'm 100% till now. This is looking great. Oh, my god, all of us are going to get an A plus in this class. Then the exam comes in. And this is a real histogram from a recent offering, and this is what it looks like. So we just want you to be prepared for this. And the reason is that it's very different to be working on your own than to be working in a group and getting to discuss everything and getting as much time as you might ever want to work on it. So we have a few suggestions for this. First of all, there's a new component this year, the written homework, which allows you to emulate an exam situation on your own. So when you work on a written, you're free to discuss from the beginning. But what we would recommend you do is, you take a half hour or an hour to just work on the written alone. Assess yourself. And based on that, you can see where you're at. Should you study some more or not for this class? And then you go discuss, make sure you solve everything and so forth, and learn all the material. OK. So that's all for logistics and class organization. Dan, giving it back to you. DAN KLEIN: Any questions about any of that? All right. We're going to talk about today, today. So we're going to talk about what artificial intelligence actually is. What can AI do? What can AI not do?
What's the difference between what we think is possible and what actually is possible? And then we're going to talk about, what is this course? What have all of you gotten yourselves into, before it's too late? So one good way to see what people are thinking and dreaming about technology, and in particular artificial intelligence, is to look at things like science fiction. It's basically a story of an interplay and arc between hope and fear. So let's do a little pop culture quiz here. Tell me if you recognize any of this stuff. You know who these are? OK. Good. This is the '70s. They appeared in the '70s. Who's the big gold one? C3PO. Good, you passed your first 188 point. What does C3PO do? STUDENT: The translator. DAN KLEIN: Translator. Google Translate with anxiety. [LAUGHTER] So no, we haven't completely been able to build C3PO, but Google Translate works pretty well. We can translate between human languages, so we got that. How about the other guy? The trash can. Who's that? What does R2-D2 do? Silence, yeah. I don't actually know. He's spunky and does a little bit of everything. In AI, we don't yet know how to make things that are spunky. And actually, much more importantly, we don't actually know how to build AI systems that do a little bit of everything. We know how to build systems that do one thing very well if they have enough data, if they have enough compute. And that's something that will show up a lot in this course. But fundamentally, this is the '70s, and in the '70s, what we think about AI and what that kind of technology could mean for the future, it's optimistic. These are helpful droids. They help us do things, they take some work off our hands, and they make our lives better. Sometimes they complain a lot. But then let's fast forward a little bit. What was going on in the movies in the '80s? Killer robots from the future. It's getting a little darker. If you actually look at it, these Terminators look a lot like C3PO. Right? The difference here is the software. I guess the eyes glow red and that's scary, and it's actually got teeth. That's terrifying. I've never seen that before. Anyway, in the '80s, we start worrying about hardware. Maybe hardware could be scary. Maybe this technology that we're building could turn against us. In the '90s, we realized that software can be scary. Software can be very scary. And in fact, that's the difference from the hardware. And so you can see here, where you're moving from hope that this technology can make our lives better to, what if it doesn't? What if the technology is dangerous? What if the robots come for us? 2000s. What's that? Cylons, Battlestar Galactica. So what's the worry here? Well, we're shifting now from worrying that the AIs will defeat us or rise up, to a little bit more of a subtle worry. What if we can't tell the difference? What if they're indistinguishable from us? And that's a very different kind of worry. What if we get replaced? The fear here is not so much about a war, but a replacement. Although I guess, there's basically a war there, too. And let's look at today. What's that? Westworld. What to say about Westworld. So Westworld-- today, I think our views on technology are complex. And so is Westworld. So here, we have a little bit of the, wouldn't it be lovely actually if we could build systems which are indistinguishable from humans? Or maybe that would be bad and maybe they'd rise up. But maybe that'd be OK, because maybe they'd actually be better than us. I don't know.
Anyway, there's this complex interplay between what we hope-- the ways that technology could make us better-- and what we fear-- things that could go wrong along the way. And that's something that we have to keep in mind as we think about these technologies. But we also need a reality check about where we are and what we can and can't do. So let's switch from science fiction and how we're thinking about the future to what's in the news. So it used to be that science fiction writers were the ones who got to think about, what is AI going to do and how could it change the world? But now everyday reporters get to watch developments and think about, what's happening right now and how can we communicate that to society? So you've probably seen a lot of these things in the news. What's this? This is IBM's Watson. It was inconceivable 30 years ago that we would have an AI system that could beat humans at Jeopardy. It was maybe also inconceivable that we would try. This is probably the most complicated way to make $77,000 using AI technology. But it's actually amazing and it's emblematic of what you can do when you combine computation on top of data. Data: accessing all of these things that we have written down out there on the web and in databases. And computation: to connect that up and do reasoning to figure out how to answer these questions, reason about confidences, and so on. Really cool thing. You probably saw the headlines about Go. Games have been falling one by one. We'll talk about games a little bit more later in this lecture. But this is an example of, I think, a real triumph. For a long time, things like chess and checkers were games that we could play very well and we understood very well how to play. But Go was something that was very hard to make progress on with AI methods. And with a combination of modern methods that we'll talk about later today and later in this course, we were able to make a really big breakthrough. So you probably also saw that. That's amazing. There's all these things that you see on the news that we can do. But there's a lot of things we can't do. There are things that are hard, where maybe we're making progress, but we're not there yet. So for example, automated driving, autonomous driving. It's amazing what we can do with autonomous vehicles right now, but we're not there yet. To go back to science fiction, we do not yet have KITT, if you remember Knight Rider from the '70s. We can't do that yet. We'll talk a little bit more-- actually a lot more about autonomous driving later in this lecture and in this course. These are things we're able to do to varying degrees, which I think have been amazing and in the news. But people are also starting to take these accomplishments and think about what this means for the future. So for example, people are starting to look and say, all right, well, maybe we can totally change how we do health care. But maybe there's issues with that. Maybe there are some things we can do that we shouldn't, or that we should think through before we cross some of these technological lines. Or maybe it's not just that AI raises questions of things that we can do but maybe shouldn't. Maybe there are existential questions. So famously, Elon Musk is worried that the biggest risk we face as a civilization is actually artificial intelligence. What if this stuff works? What if it works really well? What if it doesn't do what we expect or what we want? How do we define what we want, and how do we keep these things safe? And it's not just Elon Musk.
Stephen Hawking famously said that he feared that AI could replace humans altogether. And you can start to see these forward-looking worries starting to reflect what you see in the movies. What if AI ruins our civilization, either in a spectacular or subtle way? What if humans get replaced? What's this future going to look like? But one thing we're going to have to think through in this class is not only the ethics of these issues, but also, where are we? Where are the places where AI is really doing amazingly well, and where are the places where we really don't know? So just as a cute example of something where AI is not going to replace humans anytime soon-- I don't know if you know much about paint. Yeah, I'm actually going to talk about paint. So when you have colors of paint, people name these colors of paint very carefully. This is Island Fog and it's some kind of blue or something. Well, in a very cute experiment, a whole bunch of data about paints and RGB values was fed into a neural net. Can neural nets do everything? Well, they can't do this yet. It was fed into a neural net. And new colors, new RGB values were input to the network, and the network was asked to generate names for paint. Now, you tell me whether you are ready to replace the humans that name paint. Here's some paint names. Clardic Fug. My favorite is Bank Butt. [LAUGHTER] Maybe when the Terminators come for us, Skynet will paint them Bank Butt. So really, there's a huge range of things, things we can't really even do. We don't know how to do them, or we don't have the sort of data necessary to do them, or the methods. So things that we can do amazingly well that have real impact on our society today and are going to have increasing impact. And one of the things we're going to try to do in this course is tease these things apart. What's the difference between what we can do and what we can't do? What are the big ideas that are going to keep recurring as we go through generation after generation of solving things, hitting walls, breaking through those walls, and advancing this technology? So what is AI? Actually, famously, AI is a self-defeating definition. Because if you say, well, AI, that's all those things that require human intelligence-- then as soon as you can do them, well, apparently they don't require human intelligence. And AI is famous for not getting to claim things. Like search, for example, I think has a big AI component. But once we think of something as technologized or solved, sometimes you don't think about it as AI anymore. But let's take a step back and think about, how should we define AI? And there have been multiple historical answers, and I think it's important both to understand the history of how we've gotten here, and also all the different fields that orbit each other in the AI sphere and how they relate to each other. So the oldest answer of what is AI is, it's the science of making machines that think rationally. For now, let's take that to mean, think correctly. And this is the logicist tradition. This goes back to Plato and Aristotle and things like modus ponens, figuring out, what are the rules that govern correct thought? If we have rules that govern correct thought, we can automate that, and we can turn those logical rules into theorem provers and deduction systems. And thereby, turn computation into reasoning, and thereby, behavior. So this is the oldest approach. This is basically what drove AI early on.
In, for example, the '80s, most of the methods were based on this idea of thinking in a correct way. And this didn't scale. There were some things missing from that approach. And people moved from trying to think in the right way to trying other approaches. Another approach that has come up a lot historically is building machines that think like people. This really isn't so much AI anymore. This is actually more connected to cognitive science. This is trying to figure out, what's going on in our heads? What are these thought processes that our brains do? This is actually really important to AI, because we have exactly one general purpose intelligence system. It's in all of your heads. And usually when we have an existence proof of a technology, we can reverse engineer it. Hasn't been so easy. Part of that is we don't really know yet what goes on inside our minds. But it turns out, it doesn't actually matter that systems think in exactly a certain way. And we'll see this as soon as later in this lecture: it became less and less important what exactly the process of reasoning was, compared to the outcome and the action. For example, chess playing systems: they work very differently than humans, but they come up with similar moves. So people decided that maybe we should be building systems not based on how they think, but how they act. This is actually a very important change, from the underlying thought processes to the resulting actions. And who's heard of the Turing test? Very famous initial AI idea from Alan Turing, which says, one way to tell intelligence is to put a computer in one room and a human in another room, and have a human talk, presumably by typewriter, to both entities and see if you can tell them apart. And if they're functionally equivalent, well, you've achieved something. Maybe you've achieved intelligence, or maybe you've just achieved human bluffability. Maybe that's like intelligence, maybe it was not. It's a very influential philosophy of mind idea, the Turing test. But the research that came out of it tended to be driven by the kinds of things that can fool people in the short run. So very important in any Turing test: don't spell too well, don't type too quickly. When somebody asks you for the square root of 7, don't answer. But when they ask you what your favorite Shakespeare play is, you probably should answer. And so these kinds of things are very interesting, but they actually teach us just as much about people as they do about machines. So as you can probably guess, the fourth square is where we will end up in this course. And the modern approach to AI, broadly speaking, has been to build systems that act rationally. That means we focus on the decisions systems make. And rational here means optimally. We build systems that act optimally. And that moved us on from the initial logicist tradition-- this brought in a whole history of optimization theory and let us tap into a whole bunch of new kinds of statistics and optimization and data that have really been important in making progress in AI. So you saw the word rational here a couple of times. Informally, when we talk about rationality, we mean maybe something like, unemotional. But when we talk about rational in this course, we mean something very specific and technical. We mean it's a system that maximally achieves some predefined goals. It's not about what those goals are.
You can have a robot that's designed to clean up dirt as well as possible. You might call it a vacuum cleaner. You can have another robot that's designed to make things as messy as possible. I don't know what you'd call that. Call it a pet dog or something. And those two things might both be rational if they're optimally achieving the goals that they have set for them. So rationality only concerns the decisions that are being made, not the thought process behind them. There can be many ways to get to a correct decision. You can get there through computation, through looking at data, past experiences. And those goals are always going to be expressed in terms of the utility of different outcomes. You're in a situation, you have some actions you can take, it'll affect the world in some possibly unknown way. And then you have utilities over the results. We'll unpack all of this during the rest of the lecture. But being rational, in a nutshell, is maximizing your expected utility. So this is an introduction to artificial intelligence. It would probably be better if we titled this course Computational Rationality. That's more like what we're going to talk about. But probably half of you wouldn't be here today. So we keep the title. So if you take one thing from this course-- if you're going to get a CS 188 tattoo, which I am not endorsing, it would be something like this. Maximize your expected utility. It's a good lesson for life. But it's also really going to be the rest of this course. We're going to take our time unpacking these words. What does maximize mean? What are we maximizing over? What are the algorithms for finding optimal things in that sense? What is utility? What are we trying to achieve? What is the expectation here? How does our uncertainty over the environment, the current state of the world, and the consequences of our actions enter into things? So we're going to unpack this slide for a whole semester. All right. I think one more thing and then I'm going to get Pieter back on the stage here. What about the brain? So like I said, AI's really hard, but we have an existence proof of an intelligent system. There are 700 of them in this room right now. So why aren't we done? Why haven't we reverse engineered this? Well, one, brains are very good at making rational decisions, but they're not perfect, and you can get distracted by their limitations. But really, the main issue is that brains aren't modular like software. We can't pull bits out and look at them, and see what they do, and put them back in, and replicate them, and follow that kind of modularity. To the extent that brains are modular, it doesn't look like software modularity. Not in a way that we've been able to exploit. There's a famous saying that brains are to intelligence as wings are to flight. People spent a long time trying to get mechanical flight to work. And we had existence proofs, we had birds. And one of the key things that made it possible to start getting automated flight was when we stopped trying to make them flap their wings. So sometimes you want to follow the existence proof and sometimes you want to build things differently, because there's something about the engineering context which changes those assumptions. But that doesn't mean we haven't learned anything from the brain. And we certainly have a ton more to learn from the brain. There are a couple of things we've learned from the brain. One of the main ones is that there's really two components to making good decisions.
Remember, that's what AI is going to be about, making good decisions in a context. One is memory. Data. You can make a good decision because you remember your experiences in the past. Or-- an advantage humans have-- remembering reading about other people's experiences, so you don't actually have to make all the mistakes yourself. So for example, one reason I might not touch that fire is, I've done that before and it didn't go well, so I'm not going to do it again. Another way to make good decisions is simulation, which is basically computation. Unrolling the consequences of your actions according to a model. What's going to happen next? And playing what-if in your head, so that you can think through the consequences of things without actually trying them. So maybe I don't touch that fire because I can play it forward in my head and realize this is going to end poorly based on my model of how things work. And of course for humans, those things are all intermeshed. That model came from data and experiences. And so in this class, one of the things we're going to do is talk a lot about both how these two ways of making decisions are different, and also about how to interleave them as we get further into the course. So to look at this course broadly and think about, what are you guys going to go through for the next semester? Well, the first part is really about getting intelligence or smart behavior emerging from computation. So we're going to think about search, satisfying constraints, thinking about uncertainty and adversariality in the world. And this is going to be about algorithms that, through computation, take a situation and figure out something smart to do. So the smart behavior comes from algorithms, from computation. The second part of the course is going to be about making good decisions and having intelligence on the basis of data and statistics. And this is where machine learning comes in. Here are all of your experiences. Here's a new situation. How should you act on the basis of what you've seen previously? And of course then, we'll be able to interleave these things as we get further into the course. And as we go throughout this course, we're going to talk about applications. What are you actually going to do with all of these methods and all of this intelligent behavior? Think about things like natural language, and vision, and robotics, and games. So I think Pieter's going to come and tell you a little bit about what's happened, the story so far in AI. PIETER ABBEEL: So let's take a step back and think about the first time some people really started working on AI. And this was actually back in the '50s. And the reason it started then is because that's when people started building computers. And initially, computers were thought of largely as just big calculators. But some people started thinking, if you can do calculations on numbers, maybe you can do calculations on other things and start building something that thinks. So let's watch a video from the early '60s, reflecting on the progress that was made in the '50s with these early computers and early ideas. [VIDEO PLAYBACK] [MUSIC PLAYING] - The thinking machine. - Hello again. With me tonight is Jerome B. Wiesner, Director of the Research Laboratory of Electronics at MIT. Dr. Wiesner, what really worries me today is, what's going to happen to us if machines can think? And what interests me specifically is, can they? - Oh, that's a very fine question.
If you had asked me that question just a few years ago, I would've said it was very far-fetched. Today, I just have to admit, I don't really know. I suspect if I come back in four or five years, I'll say sure. But it is confusing. - Well, if you're confused, Doctor, how do you think I feel? - We're just really beginning to understand the capabilities of computers. I've got some film that will illustrate this point, which I think will amaze you. - That man isn't playing checkers against a computer, is he? - Sure, and it's playing pretty well. - Now, which color-- - While most computer scientists saw it as a mere number-cruncher, a small group thought that the digital computer had a much grander destiny. Being a general purpose machine, it could be programmed to do things which in humans required intelligence. Playing games like checkers and chess, and solving brain-teasers. The field became known as artificial intelligence. - Can machines really think? Even the scientists argue that one. - I'm convinced that machines can and will think. I don't mean that machines will behave like men. I don't think for a very long time we're going to have a difficult problem distinguishing a man from a robot. And I don't think my daughter will ever marry a computer. But I think the computer will be doing the things that men do when we say they're thinking. I'm convinced that machines can and will think in our lifetime. - I confidently expect that within a matter of 10 or 15 years, something will emerge from a laboratory which is not too far from the robot of science fiction fame. - They had to reckon with ambiguity when they set out to use computers to translate languages. - A $500,000 super calculator, most versatile electronic brain known, translates Russian into English. Instead of mathematical wizardry, a sentence in Russian is to be fed-- - One of the first non-numerical applications of computers, it was hyped as the solution to the Cold War obsession of keeping tabs on what the Russians were doing. Claims were made that the computer would replace most human translators. - At present, of course, you're just in the experimental stage. When you go in for full scale production, what will the capacity be? - We should be able to do, with a modern commercial computer, about 1 to 2 million words an hour. And this will be quite an adequate speed to cope with the whole output of the Soviet Union in just a few hours' computer time a week. - When do you hope to be able to achieve this peak? - If our experiments go well, then perhaps within five years or so. - And finally, Mr. McDaniel, does this mean the end of human translators? - I would say yes for translators of scientific and technical material. But as regards poetry and novels, no, I don't think we'll ever replace the translators for that type of material. - Mr. McDaniel, thank you very much. [END PLAYBACK] PIETER ABBEEL: All right. So this is dated 1961. Let's take a look at the history of things that were being worked on at the time. The '40s and '50s were the very early days. One of the first things people did was build a Boolean circuit model inspired by the brain. Turing wrote his Computing Machinery and Intelligence paper, and the Turing test started to exist. Then a lot of excitement. Early AI programs played checkers, did some theorem proving, and could do reasoning about geometry. The name artificial intelligence was coined in 1956. And then in 1965, a complete algorithm came about for logical reasoning. So a lot of excitement, a lot of progress being made.
And people thought this is going to move very fast. Just like you saw in this video, they said, oh, maybe in five years we'll have machine translation solved. We're 55 years later, and we're maybe starting to get there. So very optimistic views. From there, a transition to knowledge-based approaches. So people thought, OK, we have these engines that can do logical theorem proving, reasoning, and so forth. If we can just put enough knowledge, enough facts together-- fact one implies fact two, those kinds of propositions-- then maybe that allows us to reason about anything in the future. And so people started putting more and more information in these systems. But it didn't lead where people hoped. In fact, things didn't work as well as hoped for at all. Not much money was made with AI. Not much progress was made anymore along these lines. And this settled into an AI winter, where very little work happened on AI in industry, and very little funding from government would go to research labs that tried to work in AI. Then in the '90s, there was a resurgence through statistical approaches, where there was a lot of combining of new statistical ideas with sub-field expertise, leading to an ability to reason about uncertainty, and maybe an AI spring. And then 2012 onwards, deep learning started to come on the scene quite a lot. And people got just as excited again, it seems, as they were back in the '50s and '70s. And I guess we'll see if the cycle repeats again or not. But one fundamental difference right now is that AI is used in many, many industries. And we'll give you some examples in the remainder of this lecture of many application domains where AI is already useful and can be expected to be even more useful in the foreseeable future. OK. So every lecture, about halfway through, we're going to take a two-minute break, which is going to happen right now. And then after the break, we'll start discussing what AI can do today. Hi, everyone. Welcome back. So let's take a look at what AI can do, and let's do it through a quiz. So we'll do a raise of hands for each one of these questions and see what you think. And then we'll see what we think and go from there. So which of the following can be done at present by an AI? Play a decent game of table tennis. Who thinks that's possible? Raise your hand. OK. Yeah, about half of you. Well, let's reveal. Indeed. It might depend on what you call decent. You're not going to have a robot running around the table. But a robot that quietly plays back and forth with you is definitely available at this point. DAN KLEIN: We already talked about this, but how about play a decent game of Jeopardy? Yup. Absolutely. So you might worry here that the computers are going to take all of our game shows from us. So far, that hasn't happened. PIETER ABBEEL: How about driving safely up a mountain road? Raise of hands. So this actually happened in 2011. So this is a done thing. Under certain conditions and so forth, but it's happened. DAN KLEIN: What about, drive safely along Telegraph Avenue? [LAUGHTER] Yeah. I don't know if I can drive safely on Telegraph Avenue. But this is actually really important. We'll give it a question mark. Autonomous driving is getting better and better. But this goes to prove that just because you can do an initial step of a technology doesn't mean that once you get to real world conditions and complex environments and high safety standards, you can still do it in an autonomous way.
PIETER ABBEEL: How about buying a week's worth of groceries on the web? Yeah. This is easy now. The computer just uses Instacart and everything will show up. DAN KLEIN: What about buy a week's worth of groceries at the Berkeley Bowl? That's a lot harder, right? It's packed, you've got to make sure you don't bump your cart into other people. And then you've got to distinguish 73 varieties of apple. And it's tricky, right? I give this one a no. I can't send a robot to go do my Berkeley Bowl shopping yet. PIETER ABBEEL: How about discovering and proving a new mathematical theorem? OK. So a small number of people think this might be possible. We're giving it a question mark. AI is becoming pretty decent at proving things. If you give it a statement, it might figure out how to start from some axioms and get to whether that statement is true or not. But the big question then still is, what should it try to prove? So many things to prove. 5 plus 7 is 12. 5 plus 8 is 13. So many, so many things. And so deciding what is really worth proving and building abstractions around is a whole other question. DAN KLEIN: How about converse successfully with another person for an hour? Can we do this? OK. I would say it depends greatly on the person. [LAUGHTER] But not in general. So there's a whole history of computers basically bluffing. And chat bots, for a while, can bluff. And sooner or later, you figure out that you're either talking to a computer or this conversation is deeply weird in some other way. PIETER ABBEEL: How about performing a surgical operation? OK. So keep your hands up if you say yes. OK. Now, continue to keep your hand up if you're happy to have a robot do open heart surgery on you. [LAUGHTER] A couple of you. We do surgical robotics research in my lab. I'd love to find a few more participants. [LAUGHTER] I'll see you after lecture. Not really, at this point. The surgeries that do happen with a robot tend to happen through a human operator still totally operating the robot to get the surgery done. DAN KLEIN: What about, translate spoken Chinese into spoken English in real time? Yep. All the pieces of this actually work pretty well, at least under limited context and circumstances. PIETER ABBEEL: How about folding laundry and putting away the dishes? OK. Yeah. I think the mixed answer is pretty accurate, in that there are some pretty cool initial results starting to do this. But it's nowhere near the level that you can just get a robot at home that's going to do this for you. DAN KLEIN: What about writing an intentionally funny story? One fun thing about this lecture is, we can more or less keep the same list, and over time, things just move from one column to the other as AI begins to be able to do these things better and better. That said, this is a case where we still can't write intentionally funny stories. The keyword there is actually intentionally. So let me give you some examples of what this means. And not just examples, but what it says about how we need to approach AI and what's going to work and what's not. So let's go back in time to 1984. This is a very famous system from Roger Schank called the Tale-Spin system. It would basically have a bunch of characters. They're all animal-- anthropomorphized animal characters and objects, and it would create stories by chaining things together. And let's see the kinds of things that it comes up with. So here's a Tale-Spin story. "One day Joe Bear was hungry. He asked his friend Irving Bird where some honey was.
Irving told him there was a beehive in the oak tree. Joe walked to the oak tree. He ate the beehive. The end." [LAUGHTER] So this is funny, but there's a glass is 90% full, glass is 10% empty kind of thing going on here. This is ridiculous. Like, what the heck's going on here? The bear is eating the beehive. That's ridiculous. There's actually a lot that's right here. Think about how much is right here. He's hungry. He wants some food. The right kind of food for a bear is honey. So he goes and asks. And the honey comes in the beehive that's in the oak tree. So he has to go to the oak tree. It's all great until the end. One missing link, which is, you have to take the honey out of the beehive first. That's it. One missing link and the whole thing falls apart. And this was actually the story of a major failure case of technologies from this era of AI in the '80s: you try to write down all the knowledge and you write down all the rules, and all it takes is one missing link in the chain and you get crazy stuff. Let's see some more crazy stuff. "Henry Squirrel was thirsty. He walked over to the riverbank where his good friend Bill Bird was sitting. Henry slipped and fell in the river. Gravity drowned. The end." [LAUGHTER] It's a surprise ending, right? So what's wrong here? Again-- a lot of the pieces are here. He's thirsty and then there's water. And then there's this drowning event. What went wrong here was actually linguistic. It would still be a weird story if it said: Henry drowned. But here, gravity drowned. What went wrong? The system knows there's a drowning, and that's appropriate because there's water. And it knows that Henry drowns and that gravity does the drowning. But it dropped the wrong argument. STUDENT: I think what it means is that gravity drowned Henry. But unfortunately, they've just dropped Henry because it [INAUDIBLE] the object. DAN KLEIN: Yeah, exactly. What's gone wrong here is linguistic. There's two arguments and it's dropped the wrong one. That's all it takes. In the first one, there's a missing reasoning chain about containers. Here, there's a missing linguistic rule on how to render it. Crazy pants stories. Let's see another one. "Once upon a time, there was a dishonest fox and a vain crow. One day the crow was sitting in his tree, holding a piece of cheese in his mouth. He noticed that he was holding the piece of cheese. He became hungry and swallowed the cheese. The fox walked over to the crow. The end." [LAUGHTER] It's fine, except there's this long digression about the cheese, and that has nothing to do with the story. So there's nothing really ill-formed here. You can follow the story. The language makes it clear what's going on. But there's something wrong with-- this is not how stories are structured. You don't rathole about this cheese that you discover in your mouth if you're trying to tell a story about a fox walking over to the crow. So there's all these levels of things going on and all these different kinds of abstractions you have to get right. And any one of those things having a missing link, and you get unintentionally funny stories. Now you might think that we've had a lot of progress now-- we have data-driven methods, we have more compute, we have big computers, we have neural nets. Maybe we can write intentionally funny things. Not yet. So here, I think this is a hysterical example. This is from Janelle Shane, who trains a lot of these kinds of neural nets. She's responsible for the paint colors, as well.
She trained a neural net on a bunch of "what do you get when you cross x and y" kinds of jokes. Let me read you some of them, if you can see them. "What do you get when you cross a dog and a vampire? A bungee." Yeah, it doesn't really land. How about, "what do you get when you cross a street and a bungee with a cow? A bungee and a pig with a cow." So, we still can't actually tell either the intentionally funny stories or the intentionally not funny stories. Weird stuff happens whatever you try to do. And in fact, one of the things that we've learned as a field sometimes is, sometimes the right thing to do is know your limitations. So at least at times, if you ask Siri, for example, to tell you a story, maybe it's better to not try. So there are a bunch of areas of AI where we're making incredible progress. And there's a bunch of areas where we're not. And understanding where progress is being made and what walls we're going to hit can be very hard. Because sometimes things you were sure were really, really hard, like Go, turn out to have solutions today. And for other things, you're sure you're almost there, and you're not, and it's really tricky. So what I'm going to do now is start talking about some individual areas of AI and the kinds of problems people work on. And of course, for each of these problems, they get better and better every year. But the overall decomposition of the world into different kinds of problems that interact in certain ways tends to be relatively stable. So I'm going to talk about some. Pieter's going to come and talk about some of the others. And let's see what we can learn. We'll unpack a lot of this during the rest of the course. A lot of the application stuff will come at the end of the course. So natural language-- this is actually the area where I work-- covers a bunch of kinds of technologies that deal with different aspects of how humans communicate with each other, and therefore with computers under appropriate circumstances. So for example, speech recognition is taking the sounds that somebody says and mapping them onto, basically, text. Not necessarily understanding that text, but transcribing it. Text to speech synthesis. This is the opposite. It's going from something you want to say to a WAV file that embodies that speech verbally. These are two areas which have undergone amazing progress in the past five to 10 years. A lot of that's been neural nets. But actually, even before that, a lot of that was just large data methods applied appropriately. There's also dialog systems, which would fit in between. When you say something to a system, what if it's going to actually formulate a response and say something back? This technology is nowhere near as far along. This is something that we're still figuring out how to do, how to build a system that you can have a meaningful sustained conversation with. OK. But let me give you an example on the speech recognition front of the kinds of things that go into it, and how we are and aren't able to do a good job of it today. So this is an automatic transcription system. There have been big advances. The state of the art systems are actually better than this today, but I think it's a good example. [VIDEO PLAYBACK] - Friends, family, and classmates said their final goodbyes yesterday at her funeral in East Falls. [INAUDIBLE] on this day, a major-- [END PLAYBACK] DAN KLEIN: OK. Let me go back to here. This is the important part. So, "friends, family, classmates said their final good buys yesterday."
Good buys, like Best Buy here. Something's gone wrong. And the way we think about speech recognition systems is, there's a part of doing speech recognition which is linking up the sounds to the phonemic units of the language. So figuring out that, with all that noise present, the person said something that sounds like "buys." That's the domain of the acoustic model. That's actually something we're extremely good at and have only gotten better at in recent times. The mistake here is not actually an acoustic mistake. It does sound like "good buys," it's just the wrong kind of "byes." And the way you know that is contextual. And that question of, what words, even if they sound the same, are appropriate in this context? That's the domain of language modeling, and that's something that we are not as good at. It's a much harder problem. Because the amount of context that you bring when you do speech recognition is vast. And it's been very hard to capture that. With modern methods, we've been able to do a better job. But that's still the challenging part in speech recognition systems. So that's speech technologies, but there's a whole bunch of other stuff that happens in natural language. For example, question answering, which you can think of Watson as a case of. But in general, you want to be able to ask a question of maybe a structured database or maybe the web, or maybe something in between like Wikipedia, and get an answer back. That's different than search, where they say, here's a bunch of documents, good hunting. Instead you say, here's your answer. I've compared a bunch of sources, this is the answer. It's a different problem. Machine translation. Remember back to C3PO. How many of you use Google Translate or something like that regularly? How about ever? Let's go for ever. All right. How many of you think it works well for your purposes? OK. I think it depends on the language pair, it depends on the kind of thing you're translating, it depends on your standards. Like, if you have no idea what that Chinese text means and now suddenly you can, more or less, muddle your way through the English translation. That's a lot better than the original. Sometimes machine translation between similar languages, like English and French, can work amazingly well. It depends on the amount of data, the difference between the languages, the amount of compute you put into it, a whole bunch of different kinds of contextual factors. But machine translation works pretty well. Remember that video? Remember in the '50s, where they were like, yeah, we're about one experiment away from solving machine translation? Yeah, it took 50-plus years, and we're still not done. But that feeling like, oh yeah, we're going to do it and maybe there will still be poets, that kind of feeling, that's just the kind of thing that people say when you're at this part of the S curve and you don't think there's the other part of the S curve. We don't know where we are on the S curve right now. We know that sooner or later, there's going to be limitations and that we're going to need new technologies to get past them, or maybe just new scale, maybe more compute. But we don't know where we are on that S curve right now, so we have to be careful. It is the case that machine translation kind of works now and it didn't work at all back then. Think about back there in the '50s, the computers they had, those supercomputers. Your phone has more power, your toaster has more power than a supercomputer from the '50s. And all they could do was look up words.
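Before moving on, here is a toy sketch of the language-modeling point from the "good buys" example above. The words and counts are invented for illustration and do not come from any real recognizer; the idea is just that two acoustically identical transcripts can be ranked by how plausible their word sequences are:

```python
# Toy illustration of language modeling: "goodbyes" and "good buys" sound
# the same, so the acoustic model can't separate them. A language model
# scores each candidate word sequence instead. All counts are made up.
bigram_counts = {
    ("final", "goodbyes"): 50,
    ("final", "good"): 2,
    ("good", "buys"): 1,
}

def sequence_score(words):
    """Product of smoothed bigram counts; higher means more plausible."""
    score = 1.0
    for prev, cur in zip(words, words[1:]):
        score *= bigram_counts.get((prev, cur), 0) + 0.1  # add-0.1 smoothing
    return score

candidates = [["final", "goodbyes"], ["final", "good", "buys"]]
print(max(candidates, key=sequence_score))  # ['final', 'goodbyes']
```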
Now we have better methods, but they're still not perfect. And a whole bunch of other stuff falls under the domain of natural language. Web search, understanding how your query relates to what's on the page and mixing that with things like link analysis. Or the big thing now is click stream data, about what people do and don't end up using and clicking on. Text classification, spam filtering, there's a whole bunch of things that basically anchor into natural language and analysis of natural language. Mapping natural language from one thing to another works a whole lot better than understanding it deeply today. I'm going to hand it to Pieter. PIETER ABBEEL: Another very important thing for an AI is to understand what's around them. And so this field is often called computer vision or perception. And what it comes down to is that you somehow want to take in pixel values, which are just numbers, red, green, blue values of what's in that pixel, and then say something about what's there. For example, in computer vision, people will study the problem of face detection and face recognition. Which is actually starting to work really well, and a lot of companies are putting this to use today. Semantic scene segmentation, where you look at a scene, a picture on the left, and turn it into the understood scene on the right, which then might feed into a control system for your car. Or 3D understanding. So what we have here, the color corresponds to depth. And so we see 3D understanding of what's going on in that scene, and also isolating out the people. Let's go back to Terminator. So this is from Terminator 2. If you haven't watched it, it's recommended. Not for the class, just for free time. Scan mode. Motorcycle, car, motorcycle. OK. So that's the movies. Now let's take a look at a real computer vision system. Cat, frog, fox, dalmatian, bulldog, frog, frog, frog, terminate frog. OK. So the real computer vision systems are starting to catch up with some of the sci-fi things, but it's not a fully solved problem just yet. Now once you have a vision system, you can start making decisions based on what you see around you in the world. Then maybe you can have a robot that actually does things in the world. Now, with robotics, there's many components. Part of it is mechanical engineering. You need to build the system. Part of it is AI. And a big thing to keep in mind is that working in the real world is a lot harder than working in simulation. And so a lot of the challenges in robotics tie into the big challenge of how to get something working in the real world. It's across many, many verticals. It could be vehicles, rescue robots, flying robots, soccer, and so forth. In this class, we're going to focus on the AI side of things. But keep in mind, somebody needs to do the mechanical engineering work to build these robots, too, and there's a lot of advances to be made there, too. So let's look at some examples. So people have built robots-- can we lower the volume a little bit? So this is a lot like kindergartners playing soccer. But things do happen. [APPLAUSE] Now, think back to optimizing your expected utility, rational decision making. What do you think it means when you're a robot dog and you're asked to kick a ball? [LAUGHTER] That's reinforcement learning in action for you. We'll see reinforcement learning later. Here is a view from a self-driving car. This is Waymo footage. So there's many sensors. The Lidar, which is a laser range finding system, sends out laser beams.
And then it measures how long it takes for that laser beam to reflect back, multiplies that time by the speed of light (and halves it, since the beam travels out and back), and decides how far away something is. There's also radar measuring distance. That doesn't get the same resolution, but it has the upside of not being as susceptible to weather conditions. And then, of course, cameras. And the feed from the cameras gets processed to understand the specifics of the traffic conditions that aren't just geometry. For example, the lights, the crossings, where the other cars are. And then a really important problem beyond that is to also understand, what will these other cars do in the near future? Anticipate that so you can do the right thing-- anticipating what they will do rather than just acting based on where everybody is now. Here is a robot that actually lives on the seventh floor of Sutardja Dai Hall here at Berkeley. And this robot is actually organizing the laundry. That's only if you have towels. The video is sped up 200 times. [LAUGHTER] So maybe only if you have five towels. But if you have five towels, this works really well. And actually, another interesting point here is that a lot of the work on this project was done by Berkeley undergraduate researchers. So if you enjoy this class, talk to Dan and me, but also look on the Berkeley AI webpages, the listing of faculty. A lot of us get undergraduate researchers involved and there's a lot of cool projects to be done. And of course, at some point these robots get tired of doing laundry. Also, this one looks a lot like Terminator, but with only one red flashing eye. And so we've seen robots take over our board games, our video games, our quizzes, and now our workouts. Back to you, Dan. DAN KLEIN: I don't know how I can follow soccer puppies, but-- I'm going to talk a little bit-- we alluded before to game playing. So there's actually a really long history, more than we can get into today, but we'll unpack some of it later in the course, about computers playing games. You remember in that video, even in the '50s, computers were starting to do things like play simple games, like checkers. And of course, they got better at those games. But even just being able to play the game, know the rules and code it, pick a move was impressive in the '50s. Computers got better and better. One of the most famous moments in AI history was in 1997 when Deep Blue played and defeated Kasparov, who was the human world champion at chess. So there was this moment when basically this giant refrigerator of a computer, which could do at the time 200 million board positions per second-- that was unheard of back then. Now it's desktop speed. And it could just number-crunch, churn, dig through the search tree to think about what moves to make. And what was interesting is, this game play was described as intelligent, creative play. People understood the moves, even though, as far as we know, that's absolutely not how humans play chess. But the same kinds of moves and strategies came out of the combinatorial structure of the game. Of course, you can do now, like I said before, the same with commodity parts. I think a revealing thing not about AI, but about humans was in '96 when Kasparov beat Deep Blue, but Deep Blue put up a fight. Kasparov said, I could feel, I could smell a new kind of intelligence across the table. I don't know what intelligence smells like. But in 1997, Deep Blue beat Kasparov, and Kasparov said, Deep Blue hasn't proven anything. So maybe that's more a comment on humans, but there are open questions here.
Like, how does human cognition-- how do we deal with this giant combinatorial space? We probably don't deal in the same way, but we come up with similar kinds of solutions. How can humans compete at all? We don't try to compete with computers on taking the square root of numbers. That's just not a thing we compete on. But this is something where we're surprisingly evenly matched. It's really interesting. Other things since: in 2016, almost 20 years later, AlphaGo beating Lee Sedol. This is a huge advance. People used to say, OK, checkers, yeah, that's done. Chess, that's done. Systems are getting better and better. But we're just so far from playing humans at Go. Because Go is this huge giant combinatorial space. The branching factor is so high, the depth of the tree is so big, that we just couldn't conceive of computation not being bogged down in the exponential growth there. But people came up with methods to do these sparse rollouts, combine that with machine learning, combine that with self-play, neural nets. And suddenly, what seemed impossible actually made huge advances. Right now-- maybe even right now, within an hour or two, maybe right this minute, there is a competition between OpenAI Five and Team paiN, who are human pros at Dota 2. And there's some caveats here. Like, it's not entirely the same setup. They have access to slightly different information. But again, this is a case where there is a surprisingly even match and we'll see what happens. One comment from the first match, which the humans won-- go, team human-- was that the AI play was actually completely different than how humans play. They twitch around a little bit, but then do weird strategies, but execute them incredibly well. It's a little bit different than how humans play. And we'll see what happens. There's two more matches that are going to unfold. But again, surprising progress on games. So I don't know if this makes me happy or sad, as somebody who likes video games. Like, well, I guess computers can play my video games for me. But the good news is for at least Dota, if you're in the top 0.001% of players, you can still beat the computer-- for like, the next month. OK. So computers can play your video games for you, they can go to the gym for you. Turns out, they can also probably do your math homework for you. There's a long tradition, like I said earlier, about logical inference. That's one of the earliest places that AI was applied. And we still see a lot of progress in real systems in theorem proving. Also, these logical methods are used for things like fault diagnosis, cases where you need to really be able to trace through what the computer's doing. A big issue with machine learning overall is often the computer does well, but you can't tell when it makes mistakes or what's gone wrong, or why it made the decision it made. And this has lots of consequences. But people have made lots of progress in logical systems taken very broadly. One particular place where there's been a ton is in satisfiability solvers, which are used for all kinds of things. Maybe, as Pieter said earlier, one of the biggest things that's going on is, AI is everywhere now. It's not just chess and checkers. Applied AI automates all kinds of things. In some ways, it's the glue or the electricity, as people say, of a lot of modern business.
So of course search engines, but also all that route planning, maps, traffic, logistics, things like medical diagnosis, help desks being automated, spam and fraud detection, all the intelligence in your camera, in your thermostat. All of that is increasingly being driven by AI to give you smarter devices, product recommendations, and increasingly much more. So not only is AI doing more-- not only can we do more, but we can do more kinds of things. We still can't build one system that can do a little bit of everything. We build systems for each individual task. But there are a lot of tasks that we can improve with AI augmentation now. So, what are we going to do in this course? Quick recap and we'll give you a demonstration to end the day here. We're going to be talking about designing rational agents. That's an agent that perceives and acts. So think about this little guy here trying to get that apple down. The abstraction-- and it's a very important abstraction-- is that you have an agent and you have an environment. You control the agent, you don't control the environment. So what comes into the agent are percepts from the sensors. What goes out of the agent are actions from the actuators. And the question mark in the middle, that's the agent's behavior. It's the agent function. That's what we write in this class. We write the decision-making procedures that map from what you perceive to what you do. The environment responds, you might not know how. We're going to have to build that into our algorithms. So we talked about these characteristics. And that's what this course is about. Lots of different techniques for different kinds of conceptualization of the boundary between agent and environment. We're going to see a lot of examples of agents and environments and where that boundary is set. When you drive, it's your eyes and your hands that are basically the boundary. But if you're an autonomous car, it's the camera and the control lines into the wheels. Something you may not have thought of as an agent is Pac-Man. So we're going to see a lot of Pac-Man in this course. Hopefully you'll recognize Pac-Man. Has anybody never played Pac-Man? Good. OK. If you have never played Pac-Man, but you're too embarrassed to admit it, go play a game of Pac-Man on 188. Pac-Man is an agent. Its sensors perceive the state of the world, which is the labeling of all the dots and ghost positions and all of that. Its actuators are like a little joystick-- up, down, left, right. And then the environment is the ghosts, and who knows what they'll choose to do and how the world will evolve. So to close out the day here, we're going to show you-- you want to come join? You will build this soon. OK. So what I want you to do is, I want you to think about-- this is going to be Pac-Man. My hands are off the keyboard. This is being played by an algorithm. So this is going to be behavior that appears to be intelligent, deriving from-- who knows. You're going to build it. But start thinking today, what's going on here in the agent function? What computations give rise to this kind of behavior? Do you want to do the honors? PIETER ABBEEL: Sure. DAN KLEIN: So there it goes. [CROWD GASPS] [APPLAUSE] All right. Yeah, so we're out of time today. PIETER ABBEEL: See you next week. Bye. [APPLAUSE] |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180913_Search_with_Other_Agents_Expectimax_Utilities.txt | PROFESSOR: Hi, everyone. Welcome to the seventh lecture of CS 188 for the semester. Let's start with a couple of logistics. Homework three, games, has been released. It will be due on Monday, this coming Monday. And just also a reminder that every homework has three parts that are due. There is the electronic homework, there is the written part, and then there is the self-assessment of the previous homework. For the written and for the self-assessment, make sure to use the templates that we provide. Any questions about that? Yes. STUDENT: You can still annotate [INAUDIBLE]. PROFESSOR: Yeah, for the self-assessment, you can go into Gradescope, download the PDF that you submitted, and annotate onto that. If you happen to not submit anything, you can grab the empty templates and annotate those with the differences between what you had-- which would be empty if you submitted nothing-- and everything we have, to show the differences. [INAUDIBLE] You can annotate it electronically or you can annotate it on a printout and then rescan or take pictures, but we want you to use the template format, whether you do it by writing or directly electronically. Project 2, games, is out. In project 2, you get to program minimax, alpha-beta, evaluation functions, and expectimax, which we'll see today, for Pac-Man to play against ghosts. This will be due next week Friday at 4 PM. Also there's an update to the homework policy. We already posted this on Piazza, but I don't think we've said it in lecture yet. There's 11 homeworks total, which is more than usual. We also have three parts to each homework, which is more than we've done in the past. So we've concluded it makes sense to let you drop the two lowest scoring homeworks. So at the end of the semester, we'll automatically look at every subset of 9 homeworks and see which one gives you the highest score and use that one to compute your homework grade. You can independently drop written versus electronic versus self-assessment, up to two of each. You don't have to drop them. You can also just do them all, but our calculation will then still just try to grab the nine highest scoring ones and give you a grade based on the nine highest scoring ones for each separately. Any questions about that? Yes. [INAUDIBLE] Say that again. The written homework is graded based on whether you completed it. Our mindset here is that, with the written, we want you to work on it as if it were an exam, let's say, give yourself a feeling of where you would be at if it were an exam, then after that maybe talk with some other students, and write up after that your full understanding of everything. If it looks like you've completed everything, as you would do in an exam, then you will get a complete grade on the written, whether it's accurately solved or not accurately solved. The self-assessment, similarly: you will assess what the difference is between your answer and our answer, but our assessment of your self-assessment is based on whether your self-assessment is accurate, not based on whether your self-assessment gives you a 0 or a five or a 10. [INAUDIBLE] Sorry, I couldn't hear that very well. [INAUDIBLE] What do you mean with the lowest scores? [INAUDIBLE] So what will happen is that, at the end of the semester, there will have been 11 homeworks.
And for each one of you, we will look at your 11 electronic homeworks, see which nine out of 11 have the highest score among your 11, and those nine will constitute your electronic homework score. Then we'll look at your 11 written homeworks. We'll look at the nine out of 11 that have the highest score, use those as your written homework score, and then we'll look at your 11 self-assessments. And we'll see which nine out of 11 have the highest score there and use that as your self-assessment score. Any other questions about that? Mini contest one is ongoing. It's optional. It's a lot of fun though. It ends on Sunday. Let's take a look at the leaderboard at this point. So this is what the contest looks like. You'll have massive mazes compared to the ones you've done in the regular project. You'll have multiple Pac-Man agents that work together to clear the board, and you'll be time constrained or, better phrased, time penalized. The more time you spend thinking about what to do, the lower your score will be. Let's take a look at the current leaderboard. At number one we currently have Otto. Is Otto here? Over there. Great. At number two, we have Jason Lee. Here, great. And at number three, we have Yoon Shan. Is Yoon Shan here? Yoon Shan? I think Yoon Shan wasn't here last time either, but Yoon Shan has a good entry in the contest. And at its highest point there were, I believe, about 140 teams on the leaderboard. Some teams have one student behind them. Some teams have two students behind them. Also some fun names starting to emerge-- too greedy, muh, team no sleep, don't recommend that, a name for the leaderboard when you're just too tired to come up with something, Joe Mama, Kiwigot, just got some food delivered, should have tried harder, and many more. We'll take another look next time. So this will end on Sunday, and then next week we'll report out on the results of the final ranking and also on the strategies used by some of the top teams. Any questions about the mini contest? Let's start with the technical content for today then. Today we're going to look at decision making when there is uncertainty, and we'll also start looking at what these utilities really are that we've been optimizing all along since lecture 1. So we'll now have uncertain outcomes. Remember this game tree from last time, where the maximizer player at the top gets to choose between two actions, then the minimizer gets to choose. And we see a 10 and a 10 on the left, a nine and a 100 on the right. If we run minimax, what do we end up with? Well, minimax will say this would be 10, this would be 9. So we should go this way. And that's what you want to use if you play against a true minimax player that really optimizes, in this case, against you. You want to take no chances because they're going to stick you with a nine on the other side. But we also alluded to it: what if you play against, let's say, an agent like this one? Maybe if the agent looks like that, you say, I'm willing to take my chances. There is a possibility that they don't play the optimal thing for them and that would give me 100. And a nine is not that much worse than a 10. So I'm just going to go for it. But I was pretty informal. And the question today becomes, how do we formalize something like this? It could be an agent that looks like that, and you say I'm willing to take my chances. It could also be that there is inherent stochasticity.
Somebody is going to be rolling dice, and based on the outcome of the roll of the dice, go left or right in the bottom branch. If that's the case-- and let's say the chances of the coin flip are 50/50-- you could say, well, on average here, I'm going to get 109 divided by 2, so 54.5. On average here, I would get 10, and now we have a formal way of justifying going that way because, on average, we'll do better going to the right than going to the left. When we do this kind of calculation, we'll have a different scheme of annotating the nodes. We'll use circles. So we have up triangles for the maximizers, down triangles for minimizers, and we will have circles for chance nodes. When we follow this procedure, it's called expectimax search. You might ask yourself the question, why wouldn't we know what the result of an action would be? How come there could be a chance node in our model of the world? Well, one possibility is that it's designed that way. Maybe we're playing a game. And as part of the game, there are some dice being rolled, and then it's by design stochastic. Another possibility is that the opponents we are playing with are unpredictable. So we don't know how they're going to act, and maybe we can model them with a probability distribution over possible things they might do. And we think that maybe modeling them with a probability distribution is a better way to model them than to model them as masterminds that work against us. Another reason: it could be that what this is modeling here is that, first, you get to choose between two actions, and after you choose those actions, those actions don't get executed perfectly. Maybe you're a robot. You want to spin your wheels a bit, and you expect they're going to move forward or backward depending on which action you take. But then the wheels might slip, and you might actually not move forward or might not move backward and instead stay in place. And then the chance node here would model the fact that, after you chose your action, the outcome of that action is actually not deterministic. Now when we're in this stochastic world, our values should reflect the average case, which is the expectimax value, rather than the worst-case, or minimax, outcomes. When we run expectimax search, we will compute the average score under optimal play. So optimal play here means that you maximize what you get on average. What does that mean? In a game like this one here, you would still, if you choose the right branch, always either get nine or 100. You would never get the 54.5. That's never an outcome of the game. But what we're saying is, if you were to play this game many, many, many times and you'd take the right action and it's really 50/50 there, then, on average, you will get 54.5. Hence, we call the value of the top node the expectimax value of 54.5 even if we never experience that specific value in any single outcome of the game. How does the algorithm work? Well, the max nodes are just like in minimax search. Nothing different has to happen. Chance nodes are like the min nodes. They are opponents in some sense, but they're not working against us. It's just stochastic. And so rather than saying, in a chance node we take the min like we did for minimizers, in a chance node we say, let's take a weighted average based on how likely each outcome is. Those are called expectations, the weighted average of the outcomes.
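As a minimal sketch of that weighted average, using the leaf values from the tree above, the chance-node computation and the contrast with minimax look like this in Python:

```python
def chance_value(outcomes):
    """Value of a chance node: the probability-weighted average of outcomes."""
    return sum(p * v for p, v in outcomes)

# Right branch of the example: a 50/50 coin flip between 9 and 100.
print(chance_value([(0.5, 9), (0.5, 100)]))  # 54.5 -> expectimax prefers this
print(chance_value([(0.5, 10), (0.5, 10)]))  # 10.0 -> value of the left branch
print(min(9, 100))                           # 9 -> why minimax went the other way
```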
Later, we'll actually see another formalism that's very related to this, starting next week, called Markov decision processes, which is another way to think about expectimax problems. Actually, let's run some demos illustrating what we're doing here. So as typical in the lectures, there's two big parts to how we think about problems. We have the world. We want to turn it into some model that we use of the world. And then we have an algorithm that works on the model and finds a solution based on that model. And so you can ask the question, OK, what does it mean to model the world in a very specific way? For example, let's model the world-- let's go back to a situation we had last time over here-- fairly grim world again for Pac-Man, flanked by two ghosts. We now have a choice on how to model the situation. We could say we're going to model it as ghosts that work directly against us. That would be minimax. If we model things that way, then we have a minimax tree-- Pac-Man moves, then each of the ghosts, then Pac-Man, then each of the ghosts. This repeats, and we have a tree that goes maybe till the end of the game. If we do a minimax search in that tree, what will happen? Well, we saw last time: we lose a point for every time step that passes and we gain points for food pellets, but we're flanked here and we're going to be eaten by the ghosts, so we might as well be eaten as quickly as possible. So we're going to go right onto the red ghost and get eaten as quickly as possible. And that's indeed what minimax, in this case, will give us. So minimax in action goes straight to the nearest ghost, minimizing the damage. What happens, on the other hand, if we set it up as an expectimax tree? In expectimax, when the ghosts choose, it's a coin flip. In particular for the blue ghost, it's a coin flip whether they'll go up or down, and then they keep going the direction they chose. So if we run expectimax, what do you expect to happen? We say, well, just two possible outcomes. Well, there's many possible outcomes. If we go to the right, only one thing happens. We die instantly. If we go to the left, there are two possible outcomes. Either the ghost is running away or coming to me. If it's coming to me, then I am in trouble and I really should have dive-bombed the red ghost. I would have finished things off earlier and lost fewer points. That's a bad outcome. But if the blue ghost is running away from me, then I can run behind the blue ghost, eat all the dots, and get high reward. So expectimax will then say, OK, if that's a 50/50 thing, we now need to average the score you get in one case versus the other case of running to the left. And if the average of that is higher than what you get from running to the right, then you should run to the left. That doesn't mean, after you decide to run to the left, you're guaranteed to do better than you would have done if you had run to the right. It's a 50/50 chance. And so sometimes you'll do really well. Sometimes you'll actually not do well. It depends on the random seed. Let's see what happens here. Expectimax was run. Pac-Man has decided it needs to go left to maximize expected value, but will it be lucky in this run or not? We'll see. It is lucky in this run, and so it gets a very high score. If we run this again with expectimax, it might be lucky again or it might be unlucky this time. It's unlucky. And it has a score that's as negative as it's going to get in this particular game scenario.
But on average, it's doing better than if it would have gone off to the right and run into the ghost instantly. One thing that's important to keep in mind here is that we're making assumptions about how these ghosts function. And these assumptions can be mismatched with the world, and we need to be careful about that. If you run expectimax but the ghost is actually playing against you, then you're very naive and you're going to be taken advantage of. On the other hand, if you play minimax but the ghost is kind of random, then you're very pessimistic about everything, too cautious, and you don't take advantage of opportunities. And so it's important to understand how the world functions to then decide which model you're going to use. Here's what the pseudocode for this looks like. It's a lot like the minimax pseudocode we had last time. There is a dispatch node. The dispatch node decides, is this a terminal state or not. If it is, return the state utility. If it's not, check: is it a max node? Call max-value. Is it an expectinode? Call the exp-value function. The max-value one is the same as before. Then the one on the right, the expectation calculation, is a weighted average. So it loops over all successors, looks at the probability of that successor, and then looks at the value of that successor, multiplies it with the probability of having that successor, and computes the weighted sum of all of them. So let's take a look at this in action. Here is an exp node. We have probabilities that are not equal in this case-- one half, one third, one sixth. So that means we end up with one half times 8, plus one third times 24, plus one sixth times negative 12, which is-- 4 plus 8 minus 2-- is 10. So we have 10 over here. And again, very different from minimax: in minimax, whatever ended up over here corresponded to something we had at the bottom. Here there is no 10 at the bottom. You're never going to have 10. But on average, you expect to score 10 if you land in this node. Let's do another example. Maybe let's have you work through this one. I'll give you 30 seconds, talk to your neighbors, and figure out what is the expectimax value, that is, the value of this node over here in this game tree. Any thoughts on the value of the game here? Anyone? Suggestion is 8. Anybody have another suggestion? Everybody thinks 8? Seems right. How do we get it? Well, it wasn't explicit here. When it's not explicit, we tend to assume that all probabilities are equal. We could sometimes have them not be equal, but we are assuming they're all equal, since nothing else is said. We have to equally average these-- three plus 12 plus 9 is 24, divided by 3 is 8. 2, 4, 6, that's 12, divided by 3 is 4. 21 divided by 3 is 7. Then we have a max node on top of that. The max node will choose the maximum expected value, which is 8. So you had it right. Now last time when we looked at minimax, after we saw minimax, we saw some modifications of minimax that could make it more practical. For example, we saw alpha-beta pruning, which was a meta-reasoning approach that allowed us to think about: is this subtree important to look at, or does it maybe not even matter what's in the subtree? It's not going to affect the outcome of what the value is of this game. Can we do the same thing here? We have an expectimax tree. We've seen the left branch to be 8. So we know that we have an option of 8 going this way. We now see a 2 over here. Can we now skip the rest or not? I see some nos. Why not? Well, you can't know. Right now it doesn't look too good with a 2.
But what if one of these is 1 million over here? All of a sudden, it looks really good. And you want to go there. Of course, you still need to check that none of the other ones are negative 1 billion. But it really matters to look all the way to the last one because, when you average things out, one really extreme number can always dominate the average. And so you can't skip anything in this scenario, unless you know there are maybe some bounds on what values can be achieved at the different nodes. So we can't do the same kind of pruning. The other thing we looked at was doing a depth limited search. The reason there was that, if we have to traverse the tree, even with alpha-beta pruning, where we only have to look at, in the best case scenario, the square root of the size of the tree, the square root of the size of the tree is still often way too big to visit everything. So can we maybe look at less by making approximations? And in depth limited search, we made the approximation that we said we're not going to search all the way to the bottom because we can't get there in the time we have. We're going to stop earlier. So maybe we stop somewhere here. We say after three moves we use an estimate of the true value, using an evaluation function. So I would look at the current situation. We have an evaluation function that says: this situation, we think it's worth this much. Compute that number and use that instead of looking underneath. That evaluation function is only going to be an approximation. It might be a good one if you design a really good evaluation function, and not so good if you don't. A lot of art goes into that. Often also machine learning goes into that. If you have seen many situations before, you've seen how much they were worth. You can in principle machine-learn to predict how much a new situation is worth. What it results in is that you don't need to go as deep in the tree, and you can, in a reasonable amount of time, find an approximate answer to the value at the top. How about these probabilities that we're using now? Let's do a little bit of a refresher of what they are and then build up from there. A random variable represents an event whose outcome is unknown, and a probability distribution is an assignment of weights to those outcomes. For example, we might decide that we don't know what the traffic is on the freeway, on the Bay Bridge, right now. We don't know what it is, but we still want to somehow reason about it. And so we're going to assign probabilities to different possible situations. We might say 25% chance it's empty-- it's probably not the Bay Bridge, then; it's something else-- 50% chance the highway has mild traffic, and 25% chance it has heavy traffic. A lot of modeling assumptions are being made here. For one, we just assumed that traffic can take on only three values. That's a very specific assumption of how we're modeling the world here. You might say, I'm going to measure density with real numbers, or you might say, I'm going to model it in five categories. Those are choices you have to make as the engineer, the designer of your AI system. How do you want to capture what is in the world? And then the second thing we have to do is choose probabilities for each one of them. And the choice of those probabilities will affect what we do when we run an expectimax agent because, if the probabilities are different, then different outcomes are more or less likely, making certain branches better or worse for the agent. But that's what we have to do here for now.
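Putting these pieces together, here is a minimal sketch of the expectimax recursion with a depth cutoff and an evaluation function. The tree encoding and the names are invented for the sketch (not taken from the course projects), and the two worked chance nodes from above are combined under one max node purely for illustration:

```python
def value(node, depth, eval_fn):
    """Expectimax dispatch: terminal -> utility; depth cutoff -> estimate;
    max node -> best child value; chance node -> probability-weighted average."""
    if isinstance(node, (int, float)):
        return node                                 # terminal state: its utility
    if depth == 0:
        return eval_fn(node)                        # cutoff: estimate instead of searching on
    if node["type"] == "max":
        return max(value(c, depth - 1, eval_fn) for c in node["children"])
    return sum(p * value(c, depth - 1, eval_fn)     # chance node: weighted average
               for p, c in node["children"])

# The 1/2, 1/3, 1/6 chance node over 8, 24, -12, and the equal-probability
# chance node over 3, 12, 9, both from the examples above.
tree = {"type": "max", "children": [
    {"type": "chance", "children": [(1/2, 8), (1/3, 24), (1/6, -12)]},
    {"type": "chance", "children": [(1/3, 3), (1/3, 12), (1/3, 9)]},
]}
print(value(tree, depth=4, eval_fn=lambda n: 0))    # 10.0: max of 10 and 8
```

The eval_fn never fires in this tiny tree because the terminals are reached first; in a real game tree, it is what gets returned when the depth budget runs out.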
There are some laws of probability. Probabilities are always non-negative, and they have to sum up to 1. So notice that the sum of all of these is one. It's also the case that, as we get more information, maybe over the lifetime of the agent, the probabilities can change. For example, maybe a priori we think the probability of heavy traffic is 25%, but now we know that it is 8:00 AM. Well, if we know it's 8:00 AM, the probability of heavy traffic is higher because a lot of people are driving around at 8:00 AM. And now the probability has changed to 0.6. We'll talk more about that later. The middle one third of the class is very much focused on dealing with probabilities, updating them based on new evidence that comes in. And so we'll defer that until then. But for now, just keep in mind that these probabilities are things we need to work with and that they can change as you get more information. What does it mean to compute an expectation? The expected value of a function of a random variable is the average weighted by the probability distribution over outcomes. So how long does it take to get to the airport, if this is our distribution over traffic-- and maybe depending on the traffic situation, it could be 20 minutes, 30 minutes, or 60 minutes? Then the expected value of travel time to the airport would be the weighted sum, which gives us 35 minutes. Any questions about this? This is the basic review we're doing. Yes. [INAUDIBLE] So the question is, what if the expected value is the same in both branches, but maybe in one branch it'll be either 500 or 550, whereas the other one would have something close to 0 and 1,000. And that's a much higher variance. You might have a preference for one or the other. We'll touch upon some of that at the end of this lecture. So let's defer it for now, but it's a good consideration. So in expectimax search, we have a probabilistic model of how the opponent or the environment will behave in any state. This model could be as simple as a uniform distribution, a roll of dice. The model could be sophisticated and require a lot of computation. It might be something along the lines of: you have to decide whether to organize a barbecue or an indoor event, maybe October 10th, and then maybe run a complicated simulation of air pressure, air flows. And from that, you have a distribution over what the weather will be like on that day. And then you use that distribution to decide whether one or the other is the better decision. And so it can often happen in practice that the way you get those probabilities is after a very complicated calculation, and then finally you know what the probabilities are. In the game tree, there is a chance node for any such stochastic event where we don't have control. It could be the opponent or it could be the environment, and it might even be that our probabilistic model says that it's likely the ghosts are adversarial to us, but they're not necessarily always adversarial to us. That's one type of probabilistic model, where you say maybe they're mastermind ghosts working directly against us, maybe not. Maybe some probability for one, some probability for the other. For now we assume each chance node magically comes along with probabilities. So the probability fairy visits us for this lecture, sprinkles the probabilities, and we're good to go. One thing to keep in mind, though, is that, even though this is how we model things, it doesn't need to mean that we are right.
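As a quick aside in code, the expected travel time worked out above is just this weighted sum:

```python
# Expected travel time from the example above: a probability-weighted average.
outcomes = [(0.25, 20), (0.50, 30), (0.25, 60)]  # (probability, minutes)
print(sum(p * t for p, t in outcomes))           # 35.0 minutes
```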
For example-- coming back to the point that the model might not match reality-- you might model an opponent as some probability of being minimax and some probability of being random. But that doesn't mean the opponent is actually doing any such thing. It's just your model that you put in this expectimax tree. And hopefully your model is related to what is happening in the real world. For example, maybe you are playing against someone and you don't have enough information to know how they play. You don't know their strategy. And then you might have a distribution over strategies. In practice, they have one strategy, and they're using it. But you don't know what the strategy is. And so you have a distribution over strategies and optimize against a distribution over strategies rather than a single strategy. Let's do another little exercise. Let's say you know that your opponent is actually running a depth-2 minimax and uses the result 80% of the time, and the other 20% of the time is doing a random move to mix things up. How would you put this into a game tree? I'll give you about maybe a minute to think about this, talk with your neighbors. And let's see what you come up with. Any thoughts about how we can model this? Over there, yes. [INAUDIBLE] So the suggestion here is, because we know the opponent periodically makes a random move, we don't want to play too conservatively. And so we want it somehow built into what we do that we don't end up playing too conservatively. Now of course the question is how we build that into it. So we know that's what we want based on analyzing the description here. But what we want to avoid, of course, is that we have to hard-code all kinds of things into it. We'd like to see if we can reduce it to a game tree, so that if we run the right kind of search in this game tree, then that strategy that you want-- that periodically taking your chances-- just automatically emerges from solving the game tree. And so then the question becomes, if we want to achieve this-- and you're right, we do want to achieve this-- how do we set up the game tree such that a search in that tree gives us what we want? Suggestion in the back there. [INAUDIBLE] So the suggestion there is, if we reach an opponent node, we're going to model it as 80% chance of one type of behavior, 20% chance of another type of behavior. On the 20% side, in fact, we reach a chance node again because it's random. And on the 80% side, we reach the minimax strategy of depth 2 that the opponent is using. And once we do that-- and that's the answer we're going to follow here-- we achieve a game tree such that, if we run expectimax in that game tree, we find a strategy that takes advantage of the randomness in the opponent while also being aware that it's only random 20% of the time. So the answer remains expectimax. But to run this expectimax, when we look at these chance nodes, for each chance node, a bunch of calculation has to happen inside that chance node before we can proceed. So when we look at one of these chance nodes, what's the probability of the opponent, let's say a ghost, moving left or right? Well, we need to then look at this description here, and we actually need to go run the depth-2 minimax to understand what happens 80% of the time. So you need to think about that. That tells us what's happening 80% of the time: one specific action, the depth-2 minimax solution.
And then the other 20% of the time, we have maybe equal probability over all other actions, or maybe even including the one that's chosen by the depth-2 minimax. So what we have here, then, is that whenever we encounter a chance node, we have to do a whole minimax calculation just to find the probabilities that live in that chance node. Imagine the ghost did a depth-10 minimax. Then for every chance node that you encounter, you'd have to run a depth-10 minimax search, which could get pretty expensive quickly, to find the distribution over actions they would take. So it's kind of interesting. Imagine it's indeed depth-10 minimax 80% of the time, and 20% of the time just a random action. To solve that kind of problem, in every opponent node you have to run another depth-10 minimax, whereas in regular minimax, you just have to do a depth-10 search once in the original tree and you're done. What's happening here? Why is it so much more expensive to deal with an opponent that does this other thing where sometimes they do something random? It's because now we don't have a perfect match between the assumptions of one player-- player A about player B, and player B about player A. Once these assumptions are mismatched, all of a sudden the tree doesn't nicely align in the entire search and you need to do all these side calculations. And that's what's happening here, whereas in minimax, if one is always the minimizer, the other one is always the maximizer, and they know that about each other and do that consistently, then their tree is the same tree. But here it's not anymore, because there is a mismatch, and we need to do a big calculation in each of the opponent nodes to figure out what they would do in that situation, or the distribution over what they would do. Any questions about this? Yes. [INAUDIBLE] The way we set it up here, the assumption is that this is how we model the opponent. So we model the opponent as doing this. What they actually do-- it might be that. If it is that, then we're doing great by solving it this way. If they're actually doing something else, then doing this is not going to be the best fit to play against that opponent. But all we can be sure of in this setting here is that we have a model for how we think they behave. And if we assume that's a good model, this is how we could solve the problem. Question, yes. [INAUDIBLE] In this case, it will take a constant amount of time longer. So it'll be the constant amount of time that it takes to expand a depth-2 minimax search. And so maybe that is a certain amount of time that is a factor-- I don't know-- 20 or 100 larger than just already having a chance node that tells you the distribution over actions. And then that would be happening throughout the tree: you have to pay that factor of 10 or 100 whenever you consider the opponent. It also shows that, in practice, you might have to approximate your opponent because, if your opponent is somehow capable of looking 100 deep, you might not have the resources to also look 100 deep, and you'll have to approximate it with something else that hopefully is good enough. Let's take a look at the assumptions we were making throughout here. And what are the dangers of optimism and the dangers of pessimism? What are the dangers of optimism? It's assuming chance when the world is adversarial. What does that mean? It means you go around the world, you see everything very rosy, everything's good, everything's good. And in practice, that might mean you get pickpocketed everywhere, you get mugged.
A lot of things go wrong because you're just assuming the world is so good when it really is not as good as you're assuming it is. And so that's the danger of optimism. You get taken advantage of by adversaries. What's the danger of pessimism? Well, what does it mean to be too pessimistic? It means that you assume the worst case when it's actually not that likely. You see a bunny and you think it's a vampire bunny, every single bunny you see. Now that gives you a very, very scary life. You're constantly jumping away from things, not living very optimally if they're just tiny, little bunnies. More generally, what does it mean? Let's look at some examples for Pac-Man. What we'll do here is we'll look at a grid of possible scenarios. So what do we have here? We have-- Pac-Man could play minimax or play expectimax, and the ghost could be adversarial or could be random. If the ghost is random and Pac-Man plays expectimax, that's a good way to play. You're matched up with how the world works. So let's take a look at that first. Random ghost, Pac-Man plays expectimax, and here we go. The ghost randomly stepped out of the way, Pac-Man didn't. Let's watch it again. Pac-Man assumed the ghost would do something random, so wasn't afraid to get too close, and then just ate the one pellet left and won the game. What do you think is going to happen in the next one, where we have Pac-Man play minimax and the ghost is adversarial? Well, the adversarial ghost is going to try to chase Pac-Man, and Pac-Man's going to anticipate that and keep a safe distance and hopefully still somehow get to the food pellet. You might wonder, how come this adversarial ghost doesn't just protect the food pellet? Wouldn't that be a smarter thing to do? Well, that's what an adversarial ghost would do if they could look really, really far ahead in the game tree. But in practice, we can only look so far ahead. The typical evaluation function for a ghost is based on distance to Pac-Man. And if you only look, let's say, four steps ahead, and after four steps you're hoping to be as close as possible to Pac-Man, then you get this kind of behavior where you are constantly trying to move closer, and you might open up the pellet for Pac-Man to eat and finish the game. Now let's think about what happens if we assume-- well, we play expectimax, but the ghost is a minimax ghost. So the ghost is playing directly against Pac-Man. Pac-Man is playing expectimax. So Pac-Man is too optimistic for its own good. So, very optimistic, hangs out, and just made it on this one. Sometimes optimism is good, and it works out. Sometimes it's less good. And so this will play out stochastically because of the way the game is set up-- a little bit of randomness in it. And sometimes Pac-Man will succeed. Sometimes he'll be eaten. It's not playing very safe against an adversarial ghost. How about the other scenario, the scenario in the bottom right here? Now, bottom right-- the random ghost, but then a minimax Pac-Man. Minimax means Pac-Man assumes that the ghost is playing against it. So he's going to play very safe, assume that the ghost has a plan to come and chase it and eat it, and it's going to keep a very safe distance. But in reality, the ghost is actually random. So the ghost is just randomly bopping around. Pac-Man waits off on the left, and then when finally there is room, he dares to go. Let's look at another one of these. The ghost is just bopping around.
Pac-Man is worried, very worried, keeps being worried, but then finally says, now there is no mastermind plan that exists anymore that can get me on the way to the dot. And now I'm willing to go. We ran this quite a few times. Here are the scores we ended up with. So if you want to get statistically significant scores, you need to run more than 5 times. But I think this gets the picture across. If it's a random ghost and expectimax, we get the highest on average, 503, because we're in an easy environment, because the ghost is not playing against us, and we're aware we're in an easier environment and taking advantage of it. Here we're in a difficult environment, and we are aware of it. Our score is lower, but not that bad, because we realize this is a difficult environment and we play accordingly. On the other hand, if it's a difficult environment and we're not aware of it, then we often get eaten and our score is not that great on average. And then over here, we have a random ghost-- so an easy environment, but we're not really taking advantage of it. So we have a lower score than if we take advantage of it. But not that much worse in this case, because we only waste so much time being scared before we dare to go for the food pellet. Any questions about this? So, so far we have seen two types of games. We have seen minimax games and expectimax games. There are some other types-- still turn based. For example, backgammon. In backgammon, there is a maximizer, a minimizer, and chance. And so effectively this is the same thing. We just now need a dispatch node that can dispatch between terminal, max, min, and exp, and we can solve these kinds of problems too. For example, what happens here? It's the maximizer's turn; they choose a move. After that move there is a roll of dice, which determines the situation the minimizer gets to see, in which they get to make a move. And this process repeats-- very simple. In terms of algorithmic change, just a four-way dispatch instead of a three-way dispatch. Backgammon-- it's actually a pretty big game in terms of state space. There's about 20 legal moves each time. If you do a depth two search, then you have 20 moves, but there is also this stochasticity to account for-- all the possible outcomes of rolling dice-- and we get actually a very large tree, even if we only look at a very small depth. As the depth increases, the probability of reaching a given search node shrinks. So what we have happen here is that, because of the stochasticity, each particular branch is in some sense less significant than it would be in a minimax scenario. Because in minimax, maybe an opponent could force us down a path. In backgammon, the opponent can't really force us down anything. They need to have the right roll of the dice before they can force us anywhere. And so no single branch can dominate as much as it can in minimax. Historically, what that resulted in is that, for backgammon, there was actually the first AI world champion for any game. This was built with just a depth two search, a very good evaluation function, trained with reinforcement learning, and this achieved world-champion-level play. And so the intuition here, why you can get away with a depth two search, is that, if you miss this one scenario that could happen somewhere down the tree, actually it's not that likely to happen anyway, because of the averaging that's happening. And so just depth two happens to be enough to build a world champion. There are other types of games.
So this is one variation where you have min, max, exp. Here we have a multi-agent game. So we have a red player, a blue player, and a green player. What does this mean? It means, if we're at a terminal node-- here let's pick one where they are all different numbers-- if we end up over here, the utility for red is 7, for blue is 2, for green is 1. So what we have here is that each player has their own value for a terminal node and will optimize for their own outcome. Minimax-- we kind of had that, too. The maximizer had the number that we were showing, and the minimizer had the negative of that number. So there were actually two numbers sitting there, but one was the negative of the other. So we only showed one. But minimax is a special case of this, where we just collapse those two opposite numbers into one number that we display. Different things can emerge here, though, when these numbers are not just complementary to each other. For example, what if we're over here? What would blue want to happen? Well, blue would love this 6 over here. Can they make that happen? Well, what if they play this way? What will green do? Green will look at, well, I could have six or two. Green will choose the six, which means that blue also gets 6 and gets what they want. So what we have here is that a collaboration between blue and green happens to emerge, because the utilities they have over outcomes happen to be aligned in a nice way, so that they end up working together. That doesn't mean they are told to work together. It's just emerging from these utilities. If the game ended up over here, what would blue want? Blue would say, well, I'd love to get a 7. That's the nicest thing underneath here. Let's say blue goes here. What will green say? Well, there's a one here for me and a five here for me. I like the five. I'm going here, and blue ends up with two. Blue really should have gone this way, where green would have chosen-- let's see-- actually, not sure if it fully works out here. Blue would have preferred-- yeah, let me retake that one. What happened here on the right is that, even though blue would have preferred seven, green doesn't pay attention to that and just picks this one independently. What happens on the left is something very similar. If blue goes here, green will still not pay attention to what blue cares about. Here is a seven. Here's a two, and green will pick this side, and blue will end up with one. And so in both branches here, we see green do the opposite of what blue would have wanted, even though on this side, they're perfectly aligned. And that's just a consequence of whatever their utilities are for the leaf nodes. How do we solve a problem like this? In each node, we now look at the values underneath and pass them all up based on the decision the current node would make. For example, we're here: green checks 2 or 6. 6 is better. So it's being passed up-- 1, 6, 6 is being passed up. Here green says 2 is better than 1. So 6, 1, 2 is being passed up. Then blue takes a look, and blue's is the middle number. 6 is better. So 1, 6, 6 is the value over here. Similarly on the other side, green decides at the bottom, we'll go for the 7. This is a 5, 1, 7. For the other, it'll pick the 5. This is a 5, 2, 5. Then blue will pick based on the middle number. So it'll be a 5, 2, 5. And red will pick based on the first number, and so it will be 5, 2, 5. And this is how the game will play out.
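Here is a minimal sketch of that propagation for the three-player case. The two propagated tuples (1, 6, 6) and (6, 1, 2) match the example above; the remaining leaf values are made up to complete the tree, and the node encoding is invented for the sketch:

```python
def tuple_value(node):
    """Each node passes up the full utility tuple of whichever child is best
    for the player choosing at that node (0 = red, 1 = blue, 2 = green)."""
    if isinstance(node, tuple):                          # leaf: (red, blue, green)
        return node
    children = [tuple_value(c) for c in node["children"]]
    return max(children, key=lambda t: t[node["player"]])

tree = {"player": 1, "children": [                       # blue compares middle values
    {"player": 2, "children": [(1, 6, 6), (1, 2, 2)]},   # green picks 6 -> (1, 6, 6)
    {"player": 2, "children": [(6, 1, 2), (6, 1, 1)]},   # green picks 2 -> (6, 1, 2)
]}
print(tuple_value(tree))  # (1, 6, 6): green's own choice happened to help blue
```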
So it's similar machinery to what we've used for minimax and expectimax nodes, but now keep in mind that you need to propagate a value for every player, not just a single value. Any questions about this? Let's take a couple of minutes break here. And after the break, we'll look at utilities. [INAUDIBLE] Sure. STUDENT: So do you have predefined office hours? Or should we email you-- PROFESSOR: They-- [AUDIO OUT] OK, let's restart. In the second half of the lecture, we're going to look at utilities. Think back to lecture 1: the mantra for the AI class is maximize your expected utility. We've been maximizing expected utilities, but we've not actually looked at those utilities-- where they might come from, and why it's even meaningful to have them. Should they even exist? That's what we're going to do now. So one question you could have-- why should we average utilities? Why not minimax? Let's think about this. Let's say you're going to make a decision, and maybe you're deciding between-- I don't know-- deciding between pizza and Thai food or something. You might say, OK, what are my utilities for each one of them? The Thai food is exciting, but it's sometimes too spicy for me. And then you assign some utility to it. The pizza has some utility. But the Thai food has different possible outcomes. So you could say, well, maybe I should use the worst case, where it's too spicy for me and it's going to be bad. Then Thai food is a worse choice than pizza, and I'm going to take pizza. That would be the minimax approach to choosing what you're going to do. But maybe that's not the right way to do it. If you're constantly afraid of the worst-case scenario-- that's the scenario we talked about, where everything is a vampire bunny-- then you're never going to go out on the street. You're going to say, there's always some chance that one of the drivers makes a mistake, drives onto the sidewalk, hits me. And your minimax tells you to stay inside at all times. Then you'll maybe say, well, if you're inside, maybe an earthquake can happen and things can collapse. And you'll just have no reasonable choices if you run minimax for all your decisions. The only time minimax can actually be useful for you is if you already, ahead of time, exclude all those really special things that rarely happen. But if you want to incorporate that these things can happen, these extreme events, then minimax will never work. So if you want to do fairly comprehensive reasoning about the world, minimax will just not work for you, because you're always afraid about what's going to happen. So then [INAUDIBLE] down to something like expectimax. What's the principle of maximum expected utility? It's that a rational agent should choose the action that maximizes its expected utility, given its knowledge about how the world works-- we'll sketch the pizza-versus-Thai version of this in code below. So then the question is, where do these utilities come from? Let's think about that. Let's say you have an agent that's supposed to clean your house. Where should the utilities for that agent come from? Well, we should specify them for that agent. We should tell the agent: your utilities are based on how clean the house is. We can't just let the agent choose its own utility. If it chooses its own utility, it might just say, oh well, high utility no matter what-- I'm just going to sit here, it's easy, I'm optimizing my utility, I have to do nothing. So to ensure that the agent does something we care about, we've got to be able to give it the utilities that we want it to optimize for.
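Here's that sketch: the pizza-versus-Thai choice as a tiny maximum-expected-utility computation. The numbers and outcome structure are invented for illustration:

```python
# action -> list of (probability, utility) pairs over outcomes
actions = {
    "pizza": [(1.0, 6.0)],                 # reliable, decent
    "thai":  [(0.7, 9.0), (0.3, 2.0)],     # great, unless too spicy
}

def expected_utility(outcomes):
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best)  # 'thai': 0.7*9 + 0.3*2 = 6.9 beats pizza's 6.0
# A worst-case (minimax-style) agent would instead compare minimum utilities
# (pizza 6.0 vs thai 2.0) and always choose pizza.
```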
So that's where utilities come from: we have to give them to the agents. But that means that somehow we have to have utilities that quantify what we like or don't like about outcomes in the world. So how do we know that we even have those, and that it's even rational to have utilities? Does it even make sense to think of the world in terms of every outcome having a utility associated with it? And let's say we do have a number for every outcome in the world-- why is averaging them the right thing to do? And what if you believe that your behavior just doesn't match up with utilities? Is that possible? Is it possible that you just live your life in a way where utilities are not meaningful, and then maybe you can't use utilities for your agents to help you out, because it's mismatched with how you live your life? Well, let's first take a look at some of the new challenges with utilities in expectimax compared to minimax. Look at these two game trees here. What's different between them is that we squared the terminal values going to the right: 0 squared is 0, 40 squared is 1600, 20 squared is 400, 30 squared is 900. Now let's see what happens if we run minimax in both of these trees. Well, in the first one, what did we end up with? We end up with a 0 here, a 20 here. So we're going to play this way. What's going to happen in the second tree? The outcome is going to be the same, because by squaring, the leaf nodes keep the same ordering. So we have the same preferences over leaf nodes, and in minimax, that means we end up with the same outcome. Min still prefers going left there and still prefers going left here, which makes max prefer going right here. The value is now 400 because we squared everything, but the game gets played the exact same way. What if we have expectimax nodes there? Well, this average is 20. This average here is 25. 25 is better, so we take 25. How about on the other side? The average here is 800. The average on the other one is 650. So we prefer the 800, and the decision has changed-- there's a small sketch verifying this below. So by squaring the utilities, we've changed how we play the game. It means that, all of a sudden, when we work with things like expectimax where we average utilities, it's not just about the ordering of outcomes. It's also about the numbers we associate with these outcomes, because the specifics of those numbers do matter and affect what we're going to decide. So let's look at some examples. Our utilities are functions from outcomes-- that is, states of the world-- to real numbers that describe an agent's preferences. For example, maybe there are three possible outcomes: you have an ice cream cone, you have an ice cream cone with one scoop-- that's probably better-- or you have the ice cream cone with two scoops, which you might think is even better. And so you might say it's this kind of preference relationship. That's great. But if you want to do expectimax, it's not enough to know which outcome is preferred over which outcome. We actually need to associate numbers with each one of those to reason appropriately. In a simple game, the numbers are easy. The rules of the game will tell you-- maybe plus one for a win, negative one for a loss, zero for a draw. It turns out there is a theorem that, even beyond games, for any rational preferences, we can summarize them with a utility function, and we'll see the theorem in a moment. So we prefer two scoops over one scoop over just a cone. But when we make decisions, we might not get to decide just between those three. There might be probabilistic outcomes.
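Before continuing with the ice cream example, here's the promised sketch: squaring the leaf utilities of the little tree above (leaves 0, 40, 20, 30) leaves the minimax decision unchanged but flips the expectimax decision.

```python
def minimax_root(subtrees):
    # Max over subtrees of the min over each subtree's leaves.
    return max(range(len(subtrees)), key=lambda i: min(subtrees[i]))

def expectimax_root(subtrees):
    # Max over subtrees of the average over each subtree's leaves.
    return max(range(len(subtrees)),
               key=lambda i: sum(subtrees[i]) / len(subtrees[i]))

tree = [[0, 40], [20, 30]]
squared = [[u * u for u in leaves] for leaves in tree]

print(minimax_root(tree), minimax_root(squared))        # 1 1 -- same choice
print(expectimax_root(tree), expectimax_root(squared))  # 1 0 -- choice flips
```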
With the ice cream, for example, we might get a single or a double. A single makes us pretty happy. A double is more complex: when we get a double, we might drop it-- it's much harder to balance-- and now we only have the cone. Or we might be really happy and have the two scoops to enjoy. Now if we need to decide whether to go for a single or a double, it matters what the probability is of us dropping it, and it matters what the actual utility numbers are here. It's not enough to know that two is better than one is better than zero. We need to know the numbers and the probabilities, and that will determine which choice we make over here. So we need to deal with utilities of distributions over outcomes. Outcomes here-- the terminology that people use is prizes. Don't think about it as some kind of prize you win for something amazing you've done; it's just terminology. You can just call them outcomes, or leaf nodes in one of those trees. So there could be a prize that's called A, one that's called B, and so forth. A lottery is a situation with uncertain prizes-- it's like an expectimax node where the outcome could be A or B. And our notation is that, if you prefer A over B, we write A > B, and if they're equally good to us, we write A ~ B. Now, when we talk about a lottery-- again, this is terminology that means something specific in this context. It doesn't mean there is some kind of game involved where you're scratching things and seeing what's underneath. It just means that a probabilistic event is going to happen, and we end up with either A or B according to some probability distribution. Now let's see what it means to be rational. One thing we want from a rational agent is transitivity of its preferences. What does that mean? If the agent prefers A over B and prefers B over C, the agent should also prefer A over C. That seems pretty logical. Can we make it even more explicit that this is really the only right way to go, in some sense? Well, we can. Imagine we have an agent who does not satisfy this property. Then we can essentially acquire all their money. The way that happens is this. Say they prefer B over C. Then if they had C when they started out, they would happily pay a little bit of money to give us C and in exchange get B-- and we make one cent along the way by giving them B and getting C. Since they prefer A over B, they'll then buy A from us: they give us B and pay a little extra to get A instead. We again made some money, and they now have A. Now remember, we got C from them initially. If they're non-transitive, they decide C is better than A. We have C ready to go, and they'll give us A plus a bit of money to get C back, because C is worth more to them. Now they have C, and we've got some money. And we're back to where we started. So we went around in a circle where, at the end of it, all that happened is we made three cents, and everybody's holding the same prizes they were holding in the beginning. So being transitive seems pretty important. Otherwise, this cycle is going to be put into play against you, and you're going to lose everything over time.
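Here's that money pump as a toy sketch: an agent with the cyclic preferences B over C, A over B, C over A pays a cent for each perceived upgrade, and after one lap it holds its original prize and is three cents poorer. The trading setup is invented for illustration:

```python
# Cyclic (intransitive) preferences: each key is preferred over its value.
prefers_over = {"B": "C", "A": "B", "C": "A"}

agent_prize, agent_cents = "C", 100
our_prize_shelf = {"A", "B"}          # prizes we hold and can offer

for offered in ["B", "A", "C"]:       # one lap around the cycle
    if prefers_over.get(offered) == agent_prize:
        # Agent trades its current prize plus one cent for the 'better' one.
        our_prize_shelf.discard(offered)
        our_prize_shelf.add(agent_prize)
        agent_prize = offered
        agent_cents -= 1

print(agent_prize, agent_cents)  # C 97: same prize as at the start, three cents gone
```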
So what are the axioms that people have posited rational agents should satisfy? Orderability: you are required to either prefer A over B, prefer B over A, or be indifferent between them-- those are the only three options for two prizes. Transitivity, we just discussed. Continuity: if you prefer A over B and B over C, then there must be a lottery between A and C that is equivalent to B, because B lies between A and C-- so some distribution over A and C should match up with B. Substitutability: if A and B are equally good to you, then a lottery that involves A with some probability is equivalent, for you, to the same lottery with B in place of A. Monotonicity says that, if you prefer A over B, and p is a probability higher than q, then the lottery that puts probability p on A should be preferable to the one that puts probability q on A. If you prefer A over B and you increase the probability of A while decreasing the probability of B, then you should prefer that lottery. These all seem like pretty logical things to want from your agents. And there is a theorem that shows that, if your agent's preferences-- or your own preferences-- satisfy these axioms of rationality, then your optimal behavior can be described as maximization of expected utility. So: rationality confirmed, if you satisfy these axioms. The more formal theorem is this-- [INAUDIBLE] the von Neumann-Morgenstern theorem, with a similar result from Ramsey in 1931. Given any preferences satisfying these constraints, there exists a real-valued function U, the utility function, such that you prefer A over B exactly when U of A is bigger than or equal to U of B, and the utility of a lottery is the weighted sum of the utilities of each of the entries-- each of the prizes-- in the lottery. This is what we're doing in expectimax: the averaging step is exactly this. We're averaging utilities based on their probabilities. So this is saying that that is the right thing to do if you have rational preferences. If you're happy with these axioms, then the consequence is that there exists a utility function where averaging is the right thing to do. The maximum expected utility principle is then to maximize that expected value.
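Written out-- this is the standard statement of the theorem, in the usual notation rather than anything copied from the slide:

```latex
% von Neumann-Morgenstern / Ramsey: rational preferences admit a utility function
\exists\, U:\ \text{prizes} \to \mathbb{R}
\quad\text{such that}\quad
A \succeq B \;\Longleftrightarrow\; U(A) \ge U(B),
\qquad
U\big([\,p_1, S_1;\ \dots;\ p_n, S_n\,]\big) \;=\; \sum_{i=1}^{n} p_i\, U(S_i).
```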
Let's now think about human utilities, because if we're going to have agents make decisions for us, we need to think about our utilities and give them to the agents so they can maximize our expected utility. How do you assess the utility values for outcomes? What do we use to normalize them? You could say the best possible thing in the world is a 1 and the worst possible thing in the world is a 0. For example, maybe the worst thing is being dead, and then you could start quantifying things like micromorts: if you have a one-in-a-million chance of death, what is it worth to you to avoid that? You start quantifying things that are extreme, because we want to look at the extremes. Quality-adjusted life years is another metric that's often used. It's also important to keep in mind that, when we say there is a utility function consistent with your preferences, you can actually add a constant to it and rescale it with a positive scalar, and nothing will change in your behavior, because that is compatible with how the averaging operations work out. Now, if we don't consider lotteries, all we can do when asking people questions is understand what they prefer over what and get a ranking. But once we start giving them lotteries, we can start understanding the actual utility values-- and that's what we need to do to infer them. So here's how we do this. A standard way of getting some of these utilities: you say, OK, you can spin the wheel, or pay to pass. When you spin the wheel, there's some small chance it lands on dead-- instant death, that's it. So nobody would like to play this game, probably. But the question is, how much are you willing to pay to not have to play it? And that tells us something about what this risk is worth to you. The best possible prize, in this case, could be life stays as is, or something like that; the worst case is you're dead. And so the game would be: how much would you be willing to pay to avoid death? Well, maybe I'm willing to pay 30 bucks. If it's a micromort situation, where there is, let's say, a one-in-a-million chance of dying-- if that is your decision here, what it means is that, on this standardized utility scale from 0 to 1, $30 to you is worth 0.000001 morts, which is one micromort. OK, so we now have a scale: that's how much you value $30. We can do this for everything. Maybe you ask, how much do you value your laptop? Or how much do you value getting a new bicycle? Then you might say, OK, there's a choice between getting a new bicycle or not having to play this game. And you say, well, I'd rather play the game and get a bicycle than not play the game. You look for the equivalence point: what probability of landing on dead makes you equally happy with one situation versus the other? And that tells you the utility value of a new bicycle for you. How about money? Isn't money the way people measure utility? It turns out money is not the same as utility. For example, let's say we have a lottery: probability p of winning x, and 1 minus p of winning y. Then the expected monetary value is p times x plus 1 minus p times y. That's averaging the money. But the utility of this lottery is p times U of x plus 1 minus p times U of y-- the weighted sum of the utilities of each outcome, which is not the same as the utility of the average money you would get. In fact, often the utility of the lottery will be smaller than the utility of the average money that you would get, which means most people are risk-averse. And when deep in debt, people tend to be risk-prone. Let's make this a little more explicit. Imagine you want to understand the utility of money. This axis is money; this axis is utility. How much is money worth in utility values? Well, who wants a million dollars? Most people. Some people don't-- you missed your chance. Wanting the million dollars means it's worth something positive to you, because you like it. How about a billion dollars? A lot of people. Whoever doesn't want it, come talk to me. How much is it worth to get a billion dollars? Well, it's worth more. How much more? Well, maybe some amount more. Now let's think about it again. Now you have your billion dollars in hand. Do you still want a million dollars? Yeah, probably you still want a million dollars-- more is always better. But the first million is much more special than a million once you already have the billion. See, the additional utility you would get from going from zero to a million is much higher than if, over here, you got an extra million and only end up over here. So when we look at these utility curves, they end up starting to saturate-- not saturate, but climb more slowly as you have more-- and it's the other way around over here.
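Here's that risk-aversion point numerically, as a small sketch with a concave square-root utility-- the square root is a stand-in assumption, not anything from the lecture:

```python
import math

def U(money):
    return math.sqrt(money)   # concave: each extra dollar adds less utility

# Lottery: 50% chance of $1,000, 50% chance of $0.
p, x, y = 0.5, 1000.0, 0.0
emv = p * x + (1 - p) * y                 # expected monetary value: 500
u_lottery = p * U(x) + (1 - p) * U(y)     # utility of the lottery itself
print(emv, U(emv), u_lottery)             # 500.0  22.36...  15.81...
# U(lottery) < U(EMV): a guaranteed $500 beats the gamble -- risk aversion.
# The certainty equivalent solves U(c) = u_lottery; here c = u_lottery**2.
print(u_lottery ** 2)                     # 250.0
```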
On the debt side: once you're in debt a lot-- once you have lost a billion dollars-- you cannot ever pay it off anyway, most likely. So what is it to then lose a little more? It's all the same. And so that's what's happening on this side over here. How can we quantify this? Imagine there's a fifty-fifty lottery between $1,000 and $0. What is the expected monetary value? $500. But most people would prefer some guaranteed amount instead. Would you prefer $10 guaranteed over the lottery? Probably not. Would you prefer $900 guaranteed over the lottery? Yes, definitely-- $900 guaranteed is better than this. How about $400? Maybe $400, because it's guaranteed; on average the lottery gets you $500, and $400 is pretty close, and now it's guaranteed. That's where most people will land, around $400. So what that means is that the certainty equivalent lies lower than the average. That's where insurance companies come in. If you are holding a lottery ticket of this type, an insurance company will say: this is worth $400 to you; I'll buy it off of you for $400, I'll take the ticket. The insurance company now has a lottery, but it has so many lotteries in its pool that the central limit theorem kicks in. For the insurance company, everything averages out, and that lottery ticket is actually worth $500-- even though for you, it's only equivalent to $400. And so that's how you actually get a win-win situation: the difference between the $400 and the $500 is where the opportunity lies. In fact, if the insurance company gave you $450, they would still make $50 on it. So between $400 and $500 is the wiggle room for negotiation, where all the win-wins lie between you and the insurance company. Now let's take a little test. Let's say you can choose between lottery A or B. A is an 80% chance of $4,000, 20% chance of nothing. B is a guaranteed $3,000. Who prefers A? Who prefers B? Most people prefer B. Here's another choice. C is a 20% chance of $4,000, 80% nothing, versus D, a 25% chance of $3,000, 75% nothing. Who prefers C? Who prefers D? So most people preferred C. That's pretty typical-- the general population tends to prefer B over A and C over D. The slides also predicted that. Now let's do a little bit of math. If the utility of $0 is 0-- because we can shift and rescale anyway-- then B better than A means the utility of 3k is bigger than 0.8 times the utility of 4k. If C is better than D, then 0.2 times the utility of 4k is bigger than 0.25 times the utility of 3k-- multiply both sides by 4, and that says 0.8 times the utility of 4k is bigger than the utility of 3k. That's the exact opposite statement. So people who prefer B over A and C over D have reached a contradiction. What does that mean? It means, as far as this framework is concerned, you had irrational preferences, because there's no utility function that can satisfy both at the same time. Or maybe there is-- maybe it's not that you're irrational; maybe there's something else at play. People have studied this paradox a lot, and they say, well, maybe it's not because people are irrational. Maybe it's because we don't model the situation the right way. Maybe the reason you prefer B over A is that, when A is chosen and you end up over here, you don't just get 0. You also get a very, very stupid feeling that you could have had a lot of money and you didn't. So there was 0 plus stupidity here, and the utility of 0 plus stupidity is lower than the utility of 0. And that makes it all work out again. Next time we'll look at MDPs. STUDENT: Professor, I find this stuff really interesting. So I was wondering, what-- |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181106_Machine_Learning_Perceptrons_and_Logistic_Regression.txt | [INTERPOSING VOICES] PROFESSOR: Hi, everyone. Welcome to lecture 21 of CS 188. Today we'll cover perceptrons and logistic regression. A couple of announcements. Your project four is due on Friday at 4:00 PM. Your homework 10 will be released soon. Next Monday is a holiday, so your homework is exceptionally going to be due on Tuesday rather than Monday. And then your midterm two is coming up-- that's on Thursday, so please prepare for that. Any questions about logistics? Yes. STUDENT: What materials are covered for-- PROFESSOR: What materials will be covered for midterm two? It will be everything from the very beginning of the semester all the way through naive Bayes, which was last lecture, and the current homework. What we cover today and onwards will be part of the final, but not part of midterm two. And midterm two will have more emphasis on materials covered after midterm one than before, but it'll be a mix of both. STUDENT: How much is it? PROFESSOR: It's hard to know ahead of time. We just try to put more emphasis on it. Then some questions come out better than others, and that affects which questions end up on the exam. The mindset ahead of time is more emphasis on materials that came later. But sometimes it'll be a mix-- a question about materials that came later and earlier all in one, and you need to understand both to be able to solve those kinds of questions. Yes. STUDENT: Will there be a review session? PROFESSOR: Yes. We'll release a midterm two prep page, probably on Thursday evening. There will be review sessions in the first half of next week, and there will be a practice midterm two that we'll probably do on Monday or Tuesday, a couple of days before the midterm-- similar to how midterm one was. So today we'll look at perceptrons and logistic regression. And before we do that, let's take a quick step back to what you covered last lecture, as well as the lectures before that. The recent five, six, seven, eight lectures were about probabilistic models. If we train a probabilistic model over all the variables we might care about, we can then reuse that probabilistic model to make decisions for specific queries-- for example, what's the probability of a class label y given some feature values f1 through fn? That was one way you could reuse those models, and it's what we saw with naive Bayes: for a specific type of Bayes net, you can use it to answer conditional queries about a class label. Today we're going to look at, again, solving that kind of problem, where we have a set of features and we want to output a decision on what class might be represented by those features. But we're not going to learn a full probabilistic model. We're going to focus on just the decision of going from features to what the class label might be, and skip learning a full probabilistic model, which has some pros and cons. If you want a full probabilistic model, you're not going to get it with the methods from today. But if all you care about is that decision, today's methods zero in more on the particulars of that decision. So what we'll look at is something called linear classifiers. We'll have input-- for example, an email, like you've seen as a running example in the previous lecture. Then the email gets turned into a feature vector-- we'll still do that same thing.
In this case, the feature vector has a count of how often the word "free" occurs, how often your name occurs, how often a misspelling occurs, whether it's from a friend or not, and so forth. The result of that is a real-valued vector, which is what we use to then make a decision from. And then it might be spam or ham, depending on whether we like this email or not. What we'll focus on today is this part over here-- how to go from the feature vectors to a decision on what the class label might be. This part is very important too, going from x to f of x. Traditionally, it requires a lot of art, in that you think carefully about what might be a good way to characterize text. Maybe occurrences of words. Maybe some words should be considered synonyms, and then you count them together. Maybe you should have some special things, like whether it comes from a friend, as a feature, and so forth. Those are an art to figure out, and that's still sometimes how it's done. What we'll also see in the next next lecture is how to do this transition from x to f of x in a way that is also learned. But for now we'll assume f of x is available, and go from f of x to y. Or it could be images. The input is an image, and that image gets turned into a feature vector that represents the image. In this case, the feature vector is a bunch of digits here: whether there's a pixel that's on or off at each location-- so 0/1 values. You could also use grayscale. If it was a grayscale image, it would be a number between 0 and 1, maybe, or sometimes from 0 to 255 if you have a byte representation of images. And then you might have more advanced features, like counting the number of loops in an image, and make that one of your features. And again, today is all about going from those feature vectors, once you have them, to making a decision on what's in the image; in the future, we'll look at the other part. Today's approach will be somewhat inspired by biology and how the brain works. Now, nobody really knows how the brain works, so don't take this too seriously, but it's a loose inspiration from roughly some ideas of how the brain might work that we're going to use here. So your brain consists of a bunch of neurons. Maybe, if you're pretty lucky, you have 100 billion neurons or something, and we're going to zone in on one of those neurons-- just one of them out of, for a human, about 100 billion. So you have a neuron here. What does a neuron do? Well, it has a bunch of inputs. Stuff comes in here, here, here. They're called dendrites-- that's where the inputs come in. This could be coming maybe from your retina, where photons hit your eye, or it could be coming from other neurons sitting in another part of your brain that send signals over. These signals come in. Then the nucleus here, the core of the neuron, based on what comes in, in some sense makes a decision. It decides whether or not it's going to send out a signal over the output channel, which is called the axon. And the axon will then branch out and reach many other neurons, where this signal will become, essentially, the input to other neurons that come later. We're not going to worry about the network of neurons for now. We're just going to worry about one of these. And so you can think of it as f1 coming in here, f2 coming in here, feature 3 coming in here, and feature n coming in here. And then here we have a decision y that's coming out. Represented as a circuit diagram, you have, essentially, a bunch of inputs that are wires.
Those wires could have a high signal or a low signal. If they're very active, they might influence more what comes out on the other side. And then there are weightings here-- see the little weights here-- that correspond to the strengths of the connections. With some neurons you'll be more strongly connected than with other ones, and stronger connections mean that you pay more attention to what's coming from that input. And then somehow the combination of inputs results in an output signal, which would, in our case, be a class label. So the inputs are the feature values. Each feature has a weight, and the sum is the activation: take a weighted sum of the feature values-- that's the activation-- and we feed that out. Let's put that in mathematics. The activation coming out is a weighted sum of features. So this might be f1 of x-- this here, small font, I can't even see what the number is; actually, it looks like the 1 is on top. f1 of x comes in here. This one here is w1, and we compute w1 times f1 of x. That gets fed in here and summed together with all the other weighted feature values-- or, in the dot product notation we'll use a lot going forward, w inner product with the feature vector f of x. If the activation is positive, we'll call the output a positive 1, and we have the positive class. If the activation is negative, we'll say it's the negative class. You might wonder, how can it be negative? Well, these weights could be negative. If a positive feature value comes in and the weight is negative, you get a negative contribution, and if that dominates, the output is negative. Or you could have a feature value that's negative and a weight vector that has a positive entry, and together it becomes negative. Any questions about this? Because this is the basic calculation we're going to be doing for the first half of lecture here. Represented as a network-- and we'll see more of these drawings in the future-- we have features coming in. In this case, three feature values: imagine an image with just three pixels that could be on or off. Three feature values coming in. They're being weighted, summed together, and then a decision is made based on whether the resulting activation is bigger or smaller than 0. So let's look at an example. We have a weight vector, w, and this weight vector w has entries, just like feature vectors have entries-- just as many, and they correspond. So maybe the weight for "free" is 4, meaning you pay a lot of attention to how often the word "free" occurs. The weight for your name is negative 1, meaning that if your name occurs, the activation goes down-- in this case, the activation is supposed to be positive when it's spam, negative when it's not spam. Then misspelled is 1, because maybe spam emails more often have a misspelling. From a friend is negative 3, meaning that you want the output to be negative if it's from a friend and you have no other signal. And this is our weight vector, drawn here in 2D space. But if there are four features, really it would be living in a four-dimensional space. And the dots here mean that, in principle, there are even more features, so it'd be living in a high-dimensional space that we can't draw on a slide.
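Here is that decision rule as a minimal Python sketch; the weights mirror the lecture's spam example, and the two demo feature vectors are invented:

```python
def activation(w, f):
    """Inner product w . f(x)."""
    return sum(wi * fi for wi, fi in zip(w, f))

def classify(w, f):
    """Perceptron decision: +1 (spam) if activation positive, else -1 (ham)."""
    return 1 if activation(w, f) > 0 else -1

# Weights from the lecture: [free, your name, misspelled, from friend]
w = [4, -1, 1, -3]
spammy = [2, 0, 2, 0]    # lots of 'free' and misspellings, not from a friend
benign = [0, 1, 0, 1]    # mentions your name, from a friend
print(classify(w, spammy), classify(w, benign))  # 1 -1
```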
Yes. STUDENT: Is there a bias term? PROFESSOR: Is there a bias term? You can have one-- that's a good question. You could start off with a bias here, which is a feature entry that is always 1. And what that would do is, let's say you have no information: if you think it's most often one or the other class, the bias term could push you in that direction. It would essentially set a higher bar-- if the bias term is positive, it sets a higher bar to be able to conclude that it is not spam; or if the bias term is negative, that would mean you think, by default, it's not going to be spam. So let's say we assume almost everything is spam-- which, behind the scenes, is probably true, and a lot of it we don't see. Then maybe the bias term would be something like 100, in that by default you think things are going to be spam, but if there's enough evidence otherwise, then you might be able to get this below zero and classify it as not spam. I haven't told you how we find these weights. For now, just assume that they're there-- it's like your brain's already been trained, it's fixed, it's going to make decisions-- but we haven't talked about how we actually train it. So this is the weight vector, and the dot product determines whether you're positive or negative. It's very easy to see: if I draw a vector corresponding to some incoming email-- I compute the feature vector and draw it here-- well, it's in the same direction as w. That's a positive; that's spam. If I draw it in this other direction, that would be misaligned. That's a negative; that's not spam. And really what you want to look at is, in some sense, the plane orthogonal to that vector. If you lie on one side of the plane, you have a positive inner product; on the other side, a negative inner product. And that's how decisions get made. So here's one vector which is pretty spammy: a lot of "free," not your name anywhere, things misspelled frequently, not from a friend. And so it's pretty well aligned with this w. But then here's another one that looks like maybe a more benign email, and the feature vector points in the other direction-- negative inner product. So this one will end up in the negative class, and this one in the positive class, according to this vector w. If you change w, your decision process will change. So how do we get these decision rules? We have a binary decision rule in the space of feature vectors. The examples are all points. Any weight vector corresponds to a hyperplane-- the hyperplane orthogonal to that weight vector. So let's draw this out. Here is a weight vector. It mostly points towards "free" and a little bit towards "money," so it would roughly point in this direction. Then we have a bias term. What does that mean? It means that if we did not have a bias term-- let's say for now that it was 0-- then our hyperplane would be right here, going through zero. Well, it's not a perfect drawing; it should be orthogonal, so maybe this way, going through 0. And this would be the positive class; this would be the negative class. But we actually do have a bias term, and the bias term here favors the negative class. So that means this hyperplane is actually shifted over, and it will be-- it's not drawn to scale, but, let's say, qualitatively somewhere here. And then everything on this side is negative, and on this side positive. So you could have a point that lies out here, and you would look at the inner product with the weight vector, but then there's also the bias. If this one's far enough out, it would be positive.
But if it's only out here, well, it's not far enough out along the weight vector, and it would be the negative class. Let's do this in a more typeset drawing. We have a decision boundary due to this weight vector; it corresponds to where this equation holds with equality. One side is negative; the other side is positive. That's how we make decisions. How do we find w? It's going to be a little different from most things we've seen so far. We're going to have an iterative algorithm that doesn't just look at the data and say, here is the result, but instead loops over the data, looking at all the data multiple times, until finally it's found something it's happy with. So it'll do an update, update, update, until finally it converges. A bit like the local search we did in [INAUDIBLE], where you wouldn't right away find a solution, but try to improve it over time until finally you find something good. So we start with all the weights equal to 0, let's say-- that's a reasonable thing to do. Then, for each training instance, we classify with the current weights. So you run the classifier with the current weights. If this is your current classifier and these are data points, then it should be the case that this one is correct: you feed in a data point, and it makes the correct decision. y star means the correct decision; y is the decision the perceptron is making. If they're equal, no change is needed, and you go on to the next data point. On the other hand, if it's wrong, like you might have in this drawing over here with this data point, then an adjustment is made to the weight vector, to try to make it more compatible with the data point we just saw. And then this process repeats until, ultimately, hopefully, it's compatible with all the data points in our data set. Let's look at this a little more concretely. We classify with the current weights: we have a weight vector, we have a feature vector, we compute the dot product, w times f of x, and we check whether it's positive or negative. That tells us whether the prediction is correct or incorrect. Now if it's correct-- say the true label was positive-- our algorithm does nothing, because it was predicted as positive and the true label is positive: no update. But what if the true label is negative? What if y star is negative 1? What will happen? Well, we make the wrong decision here, because the inner product is positive and we should be negative. So you want to adjust the weight vector. f we cannot adjust-- f is just a description of the input; that's how we capture what we're getting. w is our decision-making process, and we can change that and try to improve it. So how can we change it? Well, if y star is negative 1, so we're wrong, what we can say is, we'd love for w to be not as aligned with f. What does that mean, not as aligned with f? Well, in some sense, that means more aligned with the negative of f. So we can make w more aligned with the negative of f, and that will make it less aligned with f. How do we do that? We can just add y star times f. In this picture, what it looks like is: we have y star times f-- with y star negative, it points the opposite way-- and we add it to w, and we end up over here with our new w. Let's call it w prime, the one we get after the update. This new w, w prime, will be less aligned with f, which is what we want, because we want the inner product to be negative. How do we know this for sure? We can do some math. So let's call this w prime.
After the update, we have w prime inner product with f, which equals the original w plus y star times f, all inner product with f-- in this case, y star was negative 1, but let's just say y star for now. That works out to w inner product with f, plus y star times f inner product with itself. The first part is the original, and the second part is the correction. So after we do the correction, the inner product has changed. How has it changed? f inner product with f is always bigger than 0. So when y star is positive, the inner product will have gone up; when y star is negative, the inner product will have gone down. That's exactly what we want. So it's a very, very simple update rule that ensures the inner product moves in the direction we want it to move. Pictorially, what it means is: if y star is negative, we see the weight vector rotate away from f, and if y star is positive, we see the weight vector rotate closer to f. Any questions about this? This is our first algorithm. This is the perceptron. Yes? STUDENT: Is there a way to control the rate of convergence? PROFESSOR: So the question is-- look at this: it gets one example, and it seems to swing all the way over, and maybe that's overfitting; it's paying too much attention to this one example. And indeed, a learning rate is what you would use to make the steps a more appropriate size. We're going to look at that next lecture, not today. It does turn out that, under certain assumptions, this works just fine, even without a learning rate. So it's not necessarily wrong to do it without a learning rate, but there are definitely more sophisticated approaches, where the update multiplies this correction with some alpha, just like we did in Q-learning, to make the adjustment more gradual. There was another question. Yes. STUDENT: Could you relate it to the feature-based? PROFESSOR: Is it related to-- STUDENT: The feature-based Q-learning? PROFESSOR: Is this related to feature-based Q-learning? In many ways it is. In feature-based Q-learning, we had the notion that the Q value is the weight vector times the feature vector. In Q-learning, we wanted the result of that inner product to be a Q value. Here, the way we use the resulting inner product is to make a decision as to whether it's positive or negative, which indicates which class we're in. So there's definitely a lot of similarity. But the learning is very different, because in Q-learning we learn with an approximation to the Bellman equation based on samples, and here we learn essentially through supervision of whether something should be positive or negative. So we get a lot more signal here. In Q-learning you have to bootstrap, and you were doing it based on some estimate of what the Q value would be at the next state-- that was kind of approximate. Whereas here, somebody tells you: this data point, this f of x, should have a positive label, or this f of x should have a negative label. So there's a lot more signal here to learn from.
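Putting the update rule together, here's a minimal sketch of the binary perceptron training loop; the tiny data set at the bottom is invented for illustration:

```python
def train_perceptron(data, num_features, passes=100):
    """data: list of (feature_vector, label) pairs with label in {+1, -1}."""
    w = [0.0] * num_features
    for _ in range(passes):
        mistakes = 0
        for f, y_star in data:
            score = sum(wi * fi for wi, fi in zip(w, f))
            y = 1 if score > 0 else -1
            if y != y_star:                     # wrong: add y_star * f to w
                w = [wi + y_star * fi for wi, fi in zip(w, f)]
                mistakes += 1
        if mistakes == 0:                       # one full clean pass: converged
            return w
    return w

# Tiny separable data set: features are [bias, word_present].
data = [([1, 1], 1), ([1, 0], -1)]
print(train_perceptron(data, num_features=2))   # e.g. [0.0, 1.0]
```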
That was the binary perceptron. In the separable case-- meaning the data points you have are laid out such that there exists a hyperplane, or in two dimensions a line, that separates them into the class categories-- this is how it could play out. Here's your separable data set. You look at one example; it changes your hyperplane. You look at another example; it shifts around again. As we said, it might jump a little much, but over time it'll stop jumping around. It will end up here-- not exactly in the middle, but separating the two classes. When we now cycle to the next data point, it will be correct, which means no update. We go to the next data point; it'll already be correct, which means no update. And the algorithm has converged. How about multiple classes? It's pretty rare that you would only have two classes to decide between. So let's say we have three or more classes, and we'll use three as a running example. How do we go about that? One way to do it is to have a weight vector per class. So instead of having w times f of x positive versus negative determine the class, for each class y we have a w_y. For example, in a three-class classification problem, we have a w1, a w2, and a w3, corresponding to each of the three classes, and each will point in the direction where that class's data points live. You compute the activation for a class y-- w_y times f of x-- and then you see which one has the highest activation. That's your decision. It's not an identical match with the two-class case. If instead of three we had two classes and used this approach, you'd have a weight vector for the positive class and another weight vector for the negative class. In the two-class case, you can simplify that to only one weight vector-- that's what we did. In the multi-class case, you can't really do it that way, and we'll have one weight vector per class. So here's what it could look like pictorially. You have three weight vectors pointing in their own directions, and then the regions in those directions correspond to those classes. It's a little more subtle than that, because we're looking at the inner product. If a vector is really, really long-- let's say this w3 was super, super long, out of the slide, way out there-- then this decision boundary might look more like this, and it might take up more space: because it's so long, its inner product will be larger. Similarly, if this w3 was really short and only came up to here, then this region would shrink a little bit and maybe only be this big. If they're all the same size, then it's just dividing up the space based on the angles. So that's how we make decisions in the multi-class case. How about training this thing? We can use the same procedure, actually, with a very minor modification. We're going to start with all weights equal to zero-- which, if we have a three-class problem, means three weight vectors, each all zeros, with some tie-breaking happening in the beginning. We pick up training examples one by one, and we predict with the current weights. We see what the prediction is. If the prediction is correct, then our weights are good as far as this training example is concerned, and we don't have to do any update. However, if the prediction is incorrect-- let's say, for this feature vector we would predict class y, but the class label is actually y star-- then we need to somehow adjust w_y and w_y star, such that w_y star has a higher inner product with f than w_y. How are we going to do this? Well, we can use essentially the same trick as before, by adding or subtracting the feature vector. For w_y we want a lower inner product, because we didn't want the outcome to be y, so we subtract out f-- this here is negative f being added on to w_y, and we have a new w_y prime. And for w_y star we want a higher inner product, so we add f on, and we end up with a new w_y star prime. And the same kind of check as before applies.
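Here's the multi-class scheme as a short sketch: predict the argmax activation, and on a mistake subtract f from the predicted class's weights and add it to the correct class's weights. The class names and the single demo update are illustrative:

```python
def predict(weights, f):
    """weights: dict class -> weight vector. Return the argmax-activation class."""
    return max(weights, key=lambda c: sum(wi * fi for wi, fi in zip(weights[c], f)))

def update(weights, f, y_star):
    """One multi-class perceptron step on the example (f, y_star)."""
    y = predict(weights, f)
    if y != y_star:
        weights[y]      = [wi - fi for wi, fi in zip(weights[y], f)]
        weights[y_star] = [wi + fi for wi, fi in zip(weights[y_star], f)]

# Features: [BIAS, win, game, vote, the]; all-zero initial weights.
weights = {c: [0] * 5 for c in ("sports", "politics", "tech")}
update(weights, [1, 1, 0, 1, 1], "politics")   # "win the vote"
print(weights["politics"])                      # [1, 1, 0, 1, 1]
```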
Checking the math: if we look at the inner products, w_y prime with f of x is w_y minus f of x, inner product with f of x, which is w_y times f of x, minus f of x inner product with f of x. The first part is the original. The part we subtract-- the inner product of a vector with itself-- is always bigger than 0. So the resulting weight vector w_y prime has a lower inner product than the original one, which is what we want. And for w_y star we can do the same calculation-- there's a plus there, so this minus becomes a plus-- and we find that the inner product has increased. Does that mean that after this update w_y star will win? We don't know that; that's not guaranteed. But we know that at least it'll have a higher inner product than before, and w_y will have a lower inner product than before. So we nudge things in the right direction. Do we nudge them far enough? Too far? We don't know. But we at least nudged in the right direction. OK, let's take a look at this in action in a demo. So let's reset this. We have a bunch of data points. There are three classes-- a blue class, a red class, and a green class-- and random initial weight vectors: a red one pointing this way, a blue one pointing this way, and a green one pointing this way. It's kind of interesting here: the red one actually points into the blue region. How is it even possible that the red vector is not surrounded by red? That's because the blue one is longer, and because it's longer, its inner product is higher and it can take over, in some sense, some of the angular space of the red one, which only has this region over here. So, first data point. We cycle through the algorithm. We're training. We get a data point. It's a blue one that should be classified as red. So the first step is to compute the label. It'll be red. Then-- oops, this is going very fast. Let me reset this. This is going too fast. OK. Let's finish training. Reset. It's on autopilot for some reason. OK. Stopped. So we have a current data point here, which is green. It computed the label, and it's going to check: is the computed label equal to the actual label? In this case it is equal, because it will compute green and it is green. So when we step, it'll skip over the update and look at the next training example. It'll compute the label for the next training sample, which is blue, which is correct. So again, it will skip over the-- oh, that's green. I didn't see that right. So this is green. Apologies-- the green dot under the yellow arrow. So the prediction of blue is wrong. Then what it's going to do is change the weight vectors-- the weight vector for green and for blue. The green one should become more aligned with this data point here. So the first thing we do for the-- let's see. Computed label. Hold on. Let me reset this. We have a weight vector for blue, which is the one over here, and green over here. It computed blue, because the point is in the blue region. It's checking the computed label, which is blue. It shouldn't be blue, so it should subtract out this feature vector, and we're going to see this blue weight vector get this thing subtracted out in the next step here. Then it's looking at the green one. The green one should be more aligned with the yellow one, the data point, so we'll see the green one rotate towards the data point. And then it goes to the next one. The next data point up here is blue, and the computed label is red, so that's not good. It'll say, OK, I first need to make sure that I predict red less strongly.
So red should point less in this direction, and we'll see red rotate away in the first update step here. And blue should point more in that direction, and we'll see it rotate towards that data point. And this process repeats. Right now we're here. It predicted green and it is green, so it'll skip over any updates. Am I seeing the colors wrong again? It's rotating red away and rotating green towards. And this process just repeats. It has the next data point up here. It's going to compute blue as its decision. It actually is blue, so it should skip over the update. It skipped over it. Great. It goes to the next one. That one kind of falls off the screen-- not sure which color it has. It's going to check the predicted color. It'll predict green. Is that equal to the actual label? The label looks like it's blue, so it's not the same. So it's going to rotate the green one away from this and the blue one towards it. It seems to have some bugs, or my color readings are off. Let's finish training. And after it's gone through all the examples until nothing changes anymore, we have all the green in the green region over here, all the blue in the blue region over here, and all the red in the red region over here. At this point, whenever it grabs a new data point, it will see that it already agrees with it, and it will do no updates. It's fully converged. You can reset this and play with the app, and you'll see it cycle through different versions of the data set and iteratively improve the weight vectors. So it's very different from most algorithms we've seen until today, in that there is not a one-time update. You don't just look once at the data and then know the answer. You keep cycling through the data until, at some point, things stop changing. And that's when you declare success. Any questions about this? Question over there. Yes. STUDENT: If the data is linearly separable, are we guaranteed that perceptrons will converge to the correct boundaries? Or is there a situation where you could ping-pong back and forth and never converge? PROFESSOR: So that's a good question: are we guaranteed, if a solution exists, that we find it? It's like completeness in the original lectures on search-- if a solution exists, do you find it? Do we have the same completeness guarantee for the perceptron? For the binary perceptron, if the data are linearly separable, then it might bounce around for a while, but ultimately it will find a decision boundary that separates your data. Otherwise, there are no guarantees. This is an example that-- actually, let's take a break here, and let's work through this example after the break. PROFESSOR: OK. Let's restart. Any questions about the first half? Let's do this example then. Multi-class perceptron. We currently have three weight vectors-- one for sports, one for politics, one for tech-- and we're trying to classify whether a sentence is about one of these three categories. We get in the first sentence, "win the vote." What would be the feature vector for this first sentence? Well, it'll have a bias term. It has "win"-- oops, where did it go? It has "win," which is now a 1. Then it doesn't have "game." Oh, come on. It doesn't have "game." It has "vote," and it has "the." So that's our feature vector for this sentence. If we take the inner product with sports we get-- inner product with this guy, we get 1. Inner product with this guy, we get 0. Inner product with this guy, we get 0.
So sports would win with our current settings of the weight vectors, which is not what we want. Let's say we want this to be about politics, so our y star would be politics. Then what happens? Well, we subtract this feature vector from the sports weight vector, to make sports less aligned with it. So we'll do-- oh, come on, screen. We'll do minus 1, minus 1, minus 0, minus 1, minus 1, which results in a negative 1, negative 1, 0, negative 1, negative 1 weight vector. And we'll add it to politics, which-- politics is all zeros-- results in a 1, 1, 0, 1, 1 weight vector. Then we go to our next example. So we did this one; we'll go to this one. "Win the election"-- what's the feature vector for this one? The bias is always 1. Then, does "win" appear? Yes, that's a 1. Does "game" appear? No, that's a 0. Does "vote" appear? No, that's a 0. Does "the" appear? Yes, that's a 1. We take the inner product with each of these weight vectors. What do we get for the first one? We get negative 1 plus negative 1 plus 0 plus 0 plus negative 1, so we get a negative 3. For politics we get 1 plus 1 plus 0 plus 0 plus 1-- a plus 3. And for technology we get 0. So now the class predicted is politics, which, let's say, we think is the right thing here, the correct label-- so no changes to the weight vectors. We go on to the next one, "win the game." What's the feature vector for this one? The bias is always 1. There is a "win," there is a "game," there is no "vote," and there is a "the." Let's look at the inner products with each of our current weight vectors. For the first one, which is negative, negative, 0, negative, negative, we get a negative 3. For the second one, politics, we get a plus 3. And for technology, we get 0 again. So politics wins. But we don't want politics to win this one-- we want sports to win-- so we're going to subtract this feature vector from the politics weight vector. There's our current weight vector; we subtract out this one, which is 1, 1, 1, 0, 1, and that results in 0, 0, negative 1, 1, 0-- the new weight vector for politics. And we're going to add it to the sports one-- plus 1, 1, 1, 0, 1-- which gives us a new weight vector for sports: 0, 0, 1, negative 1, 0. What happens now? We go back to the first one, "win the vote," and we keep cycling until we do a full pass through all the data points without any update, at which point no updates will ever happen anymore, and we call it done. So, some properties, tying back to the question earlier. Separability is a property of your data: your data is separable if you can find a hyperplane that separates it into the different classes. Or, if it's a multi-class problem, if there are weight vectors pointing in various directions such that each class can have its own region where all the data points from that class fall-- that's separable data. Convergence is a property of a run of an algorithm: your algorithm, when it is run, could either converge or keep jumping around. Some algorithms are guaranteed to always converge. That's not true for the perceptron in general. But if you have a binary perceptron with separable data, then it is guaranteed to converge, and it will find a decision boundary that separates the two classes. How many mistakes can it make along the way? It can make at most this many mistakes. We're not going to prove this guarantee, but let's get some intuition for it. What is this saying?
It says the number of mistakes is at most k over delta squared, where k refers to the number of features that we have-- so the size of the feature vector-- and delta is the separability measure. If you look at this data here, you can find a direction along which there is a separation of delta between the two classes. So that's delta: a non-zero gap between two parallel planes, delta apart, with one class on each side. If your data has a non-zero delta, then this bound is a finite number, assuming you have a finite number of features, and it means you'll make a finite number of mistakes, after which you'll have converged. Now, let's get some intuition for this. It shows that the more features you have, the longer it might take before it converges, which makes sense: the more features you have, the more you need to learn about what these features tell you before you converge to the right decisions. Also, the larger delta is, the fewer mistakes you're going to make. So the further apart your classes are, the easier the problem is, and the fewer iterations you need to find a solution. That's what this is saying over here. Any questions about these properties? Yes. STUDENT: [INAUDIBLE] PROFESSOR: We don't have the proof here, so I can't justify it without working through the proof-- I can't do it on the spot here. But if you were to work through the proof, it would tell you that it's strictly less than, unless we made a typo. Now, there are some problems. Often, your data will not be perfectly separable. Just like when you run linear regression, your data points don't necessarily all fit on a line-- they might deviate from that line, and you might still want to run linear regression. Same thing here. Even if your data is not perfectly separable, you might still want to find something that is a pretty good decision boundary, one that is usually correct. So what to do then? The perceptron will just keep bouncing around: a new data point comes in, it corrects for that, but then it gets another one wrong; next time, it has to correct for that one, and it just keeps happening over and over and over. Another thing that can happen is that you stop when you don't want to stop. Imagine you end up finding this decision boundary over here. Yes, everything's separated, but that's not a very smart decision, because there are some data points very close to that decision boundary. And that means: what if a data point now actually lands right over here? Wouldn't you prefer that one to also be classified blue, because it's closer to the blue ones? But with that decision boundary, it would actually be classified as red, because the algorithm stopped-- it found something where it doesn't make mistakes anymore. So, how to make it a little more wary about being close to making mistakes, and maybe shift that boundary over a bit? It would be nice if you could do that. Another issue, which you've already seen with naive Bayes, is that you could have overfitting. That one's actually not too hard to fix. As you iterate-- meaning you loop through your training data-- you keep updating your weight vectors, and your training accuracy will keep going up, because you're looking at your training data and it's telling you what to change to be more accurate on it. But you'll have a separate stash of data, called hold-out data, and another separate stash, called test data. The test data you can't touch-- that's what you report out on when you're 100% done, so it's representative of data you'll see in the future. Your hold-out data, though, you use during training to monitor what's happening. As you're training, you'll see that accuracy on your hold-out data keeps going up too for a while, but then it starts saturating and goes back down. And you want to stop training right there: once this starts going down, that's the moment your updates are overfitting-- that is, memorizing your training data-- rather than fitting the general pattern that you want to capture. If you stop there, then you'd hope-- since you used your hold-out data for only this one thing, deciding where to stop-- that whatever accuracy you had on your hold-out data is pretty representative of what you'll also get on your test data. So this one's easily fixable: early stopping. You stop training when you see hold-out accuracy go down.
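Here's a minimal sketch of that early-stopping loop wrapped around the perceptron updates from before; the (feature_vector, label) data format and the pass structure are assumed conventions for illustration:

```python
def train_with_early_stopping(train, holdout, num_features, max_passes=50):
    """Binary perceptron passes with hold-out monitoring; keep the best weights."""
    def accuracy(w, data):
        def sign(s):
            return 1 if s > 0 else -1
        return sum(sign(sum(wi * fi for wi, fi in zip(w, f))) == y
                   for f, y in data) / len(data)

    w = [0.0] * num_features
    best_w, best_acc = list(w), accuracy(w, holdout)
    for _ in range(max_passes):
        for f, y_star in train:                          # one perceptron pass
            score = sum(wi * fi for wi, fi in zip(w, f))
            if (1 if score > 0 else -1) != y_star:
                w = [wi + y_star * fi for wi, fi in zip(w, f)]
        acc = accuracy(w, holdout)
        if acc < best_acc:                               # hold-out got worse: stop
            break
        best_w, best_acc = list(w), acc
    return best_w
```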
So it's representative of data you'll see in the future. But your held-out data you use during training to monitor your training and see what's happening. So as you're training, you'll see accuracy on your held-out data keep going up too for a while, but then it'll saturate and start going back down. And you want to stop training right here. Once this starts going down again, that's the moment your updates are overfitting-- that is, memorizing your training data-- rather than fitting the general pattern that you want to capture. If you stop there, then you'd hope, since you used your held-out data for only this one thing-- you only use it to decide where to stop-- that very likely whatever you got on your held-out data is pretty representative of what you'll also get on your test data. So this one's easily fixable. Early stopping. You stop training when you see held-out accuracy go down. How about the first two? How do we deal with those? Well, let's improve on the perceptron and look at a new algorithm called logistic regression that will resolve those two issues. Let's get some intuition first. So let's say we have data, blue and red, and we want to separate it. Red crosses, blue circles. But this data is not separable, so there's no perfect decision boundary here if we have to use a line. You might say, well, maybe you don't want to use a line. Maybe you want to use some decision boundary that looks more squiggly, like this? That might be OK in some situations. But you might argue also that this would be overfitting, that you don't really want to weave through the data in that much detail. And in general, it'll always be the case that if you get a perfect score on your training data by making your boundary squiggly enough, it's unlikely to do well on your test data. So on anything other than your training data, you're likely not going to get 100%. So here's a scenario. We want to separate this with a line. It's not separable. What will your perceptron do? It'll keep bouncing back and forth between different decision boundaries and never converge. What if we think about the problem a little differently? Instead of thinking of it as one side is positive, the other side is negative, what if we think of it more gradually? We make a probabilistic decision? So maybe we say on the line it's 50/50, because that's the decision boundary. That's where we don't know what it is. And then as we move in one direction, maybe it's 70% blue, 30% red. In the other direction, it might be 70/30 the other way around. If we move further away from the decision boundary it becomes 90/10. In the other direction, further away, it becomes 90/10 the other way. If you think about it this way, it becomes a lot more meaningful. It seems somehow more OK that you still have a red one here. You assigned a non-zero probability to having a red one there. You're modeling the fact that sometimes something might lie on the wrong side of the decision boundary. And you could then decide, OK, if I make decisions this way, what's the best way to position this thing such that it reflects what's in my training data? And we'll see that you can actually position this in a way that you converge. After you cycle through your data enough, this will be positioned in a stable way, and you won't want to move it anymore, because you can't improve upon the positioning you found. So how do we make this probabilistic? The perceptron scores with an inner product, w inner product with f of x. We'll call that z.
z can be any number from negative infinity all the way to positive infinity. How do we make that a probability? Well, if it's very positive, we want it to be close to 1. If it's very negative, we want it to be close to 0. So can we find a function that turns what lies on the real line into something that lies between 0 and 1 with these properties? Because if we can, then we're getting close to what we want to do here. The sigmoid function is a function that does this. What's the sigmoid function? It's 1 over 1 plus e to the negative z. If you draw that function, this is what it looks like. So when z is very negative, you're at roughly 0. Then when z gets close to 0, this goes up through 1/2, and at positive infinity, it's at 1. So you gradually transition from 0 to 1 as z goes from negative infinity to positive infinity. So this satisfies these properties over here. So what we could do in principle, going back to this drawing that we just had, if we had a weight vector, w-- and let's say the positive class is that way, so the weight vector would be pointing that way-- we could say, well, if weight times feature vector is positive, it's on that side. The further we go that way, the closer our z, after taking a sigmoid, gets to 1. And the more we go in the opposite direction, the closer we go to negative infinity with our activation z, and the closer we get to 0 for the positive class. So we have the mechanism to do this now. What's the best choice of w? Well, we've actually seen this kind of methodology in the last lecture. We're now making probabilistic decisions. When we make probabilistic decisions, there is a common framework you can use to find good parameter vectors. It's the maximum likelihood framework. It tells you, I have my data, and I want to maximize the likelihood of the data. And the parameter vector that does that is the one we choose. So maximum likelihood. But now the likelihood will be focused on the decision we're making. So we want to maximize the sum of the log probabilities of the label given x, or the features that are a function of x. What you saw in the last lecture was something along the lines of: sum over i, log probability of xi and yi under some parameterization w. If you look at that, that would have been sum over i, log probability of yi given xi under w, plus the log probability of just xi under w. So what we saw last lecture was trying to optimize both for modeling your x's and for modeling y given x, all in one optimization. And what we're doing here, we're dropping this part. We're saying, well, we don't care about w paying attention to getting the distribution over x's right, because we don't care about the distribution over x's. We care about making decisions. We want to know, what is y given an x? And we're going to focus on that part and optimize our parameters to be maximally good at predicting y given x, and not make a trade-off with anything else that we might not care about. So what are these probabilities that we have here? Well, those are using the sigmoid. So the probability of a positive label is this thing over here. So it's, again, the 1 over 1 plus e to the negative z where, again, this thing here is z, which is w inner product with f of x. And then the probability of the other label is, of course, 1 minus the probability of the first label. Once we fill this in, we have an objective we can evaluate. For any choice of w, you can compute: what is my log likelihood of labels given inputs under this vector w?
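As a small sketch (our own illustration, not course code), the sigmoid and the resulting log likelihood of a labeled data set under a candidate weight vector w might look like this, with labels assumed to be +1 and -1:

```python
import math

def sigmoid(z):
    # Maps any real z into (0, 1): very negative -> near 0, very positive -> near 1.
    return 1.0 / (1.0 + math.exp(-z))

def log_likelihood(w, data):
    # data: list of (feature_vector, label) pairs with label +1 or -1
    total = 0.0
    for f, y in data:
        z = sum(wi * fi for wi, fi in zip(w, f))  # z = w inner product f(x)
        p_pos = sigmoid(z)
        total += math.log(p_pos if y == +1 else 1.0 - p_pos)
    return total
```

Any two candidate w's can now be compared by this score; the maximum likelihood choice is the one that scores highest.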
And you can then, if you have an extreme amount of compute, cycle over all infinitely many w's that exist, find the one that maximizes this, and choose that one. Or if you're lucky, you can take a derivative, like you did in the last lecture, set it equal to 0, and just analytically know which one it's going to be. For what we have here, taking the derivative is not going to give us a nice expression. We'll look next lecture at what we can do to find this w. But for now, just think about it as: if you had infinite compute, you could cycle over all of them and pick the one that maximizes this. And that's the one we want to use. That's logistic regression. What it also does, if you have many options-- for example, here, same data set, but two decision boundaries that both separate the data perfectly-- you might say this one is better because it takes more margin from the data points. The perceptron won't care. Whichever one you feed it, it will say, I'm converged. I'm not doing any more updates. With the logistic regression formulation, you're assigning probabilities. And if you assign a 50/50 where there is actually a data point here and here, that's not a great decision. That will give a low log likelihood score as a result. Whereas if your data points are further from the decision boundary, you'll be more confident about them. You will get higher log likelihoods. And so this one here will be preferred if you use this scoring mechanism. And so if you use logistic regression, you'll end up with the decision boundary on the left rather than the one on the right, instead of being indifferent between them like the perceptron would be. OK. So it solves, in some sense, the two issues we had with the perceptron-- the notion of bouncing around when things are not separable, by making it probabilistic; and it now has preferences, in terms of wanting to put the decision boundary away from the data points, so the log probabilities of a label given the data point are high. Let's generalize it to multiple classes. Well, the perceptron had a weight vector for each class. We would score with wy, the weight vector for class y, inner producted with f of x, the feature vector that we're considering now. We see which one has the highest score, and that's the one we decide for. Now, again, if the data is not separable, meaning you can't nicely compartmentalize this plane according to three directions, then updates to the perceptron weights will keep going and going and going and never stop. We can use the same idea. We can say, instead of having deterministic decision boundaries, what if we make it probabilistic? We're OK with a region being 90% label one and 10% maybe label two, or 90% label two, 5% label three, 5% label one. We're OK with that. Well, then we can do the same thing. But how do we get these probabilities? Let's say we have some scores, z1, z2, z3. This one can be anywhere from negative infinity to positive infinity. This one also. This one also. We want to turn them into numbers between 0 and 1 where, if you have a very high score, close to positive infinity, you're close to 1. If you have a very low score, close to negative infinity, you want to be close to 0. This is a calculation that will do this for you. Let's look at this. Let's look at the first one. Let's say I'm going to exponentiate the number z1. What does exponentiating do? z1 lives here. e to the z1 lives here. It's a curve that looks like that and grows pretty quickly, actually, as you move further to the right.
So this will always return a positive number. So one thing we can already see is that all three numbers we generate-- e to the z1, e to the z2, e to the z3-- will be positive numbers. The three that we end up with combined will sum to 1, because we have e to the z1, e to the z2, e to the z3, and we divide each by the sum. So that's guaranteed to sum to 1. So we've turned our original numbers, whether they're positive or negative, into three positive numbers that sum to 1. So we have a probability distribution. Does it have the properties we want? The more you're out to the right, the more positive you are, the higher your e to the z will be, and it'll grow very quickly. And so you'll have a higher score up here if z1 is very high. But it's all relative. It all depends on where z2 and z3 are. Whichever one is the highest will dominate this calculation. Why is that? Because we know exponentials grow very quickly. And so if z1 is a bit higher than z2, then e to the z1 will be quite a bit higher than e to the z2, and it'll dominate the probabilities here. If they're equal, well, then they'll dominate equally. So we get the right kind of properties here. If you're higher than the others, you will have the highest probability. If you're lower than the others, you'll have the lowest probability. If you're in the middle, you'll have the middle probability. The numbers on the left are the original activations, and the ones on the right are called softmax activations. So our perceptron, in some sense, outputs just regular activations. When we do multi-class logistic regression, we'll feed those through a softmax calculation, which turns them into probabilities.
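Here is that calculation as a short sketch (our own illustration; subtracting the max is a standard numerical-stability trick that doesn't change the resulting probabilities):

```python
import math

def softmax(zs):
    m = max(zs)  # subtract the max so the exponentials can't overflow
    exps = [math.exp(z - m) for z in zs]
    total = sum(exps)
    return [e / total for e in exps]

print(softmax([2.0, 1.0, -1.0]))  # roughly [0.71, 0.26, 0.04]: the highest z dominates
```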
Once we have probabilities, we again have a mechanism to optimize the parameters. We can, again, do maximum likelihood estimation. We can say, OK, which w is the best w? Well, the best w is the one that maximizes the likelihood of the labels given the features. So that's this quantity over here. I sum over all data points. I look at the log probability of the label given the input vector under a choice of parameter vector w. And the one we conclude is best is, among all infinitely many choices of w, the one that maximizes this. Again, this is very similar to what you covered with naive Bayes. But the difference is that we directly focus on this conditional distribution. In naive Bayes you looked at sum over i, log p of yi comma xi under your parameter vector w, which is sum over i, log p of yi given xi under w, plus log probability of xi under w. In the terminology that's often used, what we're looking at today is called discriminative classification. A discriminative approach, which focuses on discriminating the points-- do they belong to one label or another?-- focuses on just this first term here, whereas what we covered in the last lecture is called a generative model. Generative models learn how to generate all of the data, and then after the fact might do a calculation to discriminate. But they're trained to generate, and might after the fact be reused to discriminate. Again, why might you prefer what we're doing today? Because if you learn a generative model, you pay attention to two terms. And if ultimately all you care about is the first term, why would you let your choice of w be influenced by the second term, if the first term is what matters? Now, you might also wonder, would anybody ever use the generative approach? Well, it turns out that having this second term can somehow regularize at times. So there are scenarios where having that second term can kind of moderate your choice of w in a way that makes it somehow better at not overfitting the data. So there are some trade-offs there. Sometimes it might be better to go one way, sometimes the other way. If you have a very small amount of data, often having the second term can be good, to make up for the fact that you have a small amount of data. If you have a very large amount of data, just focusing on the first term will allow you to focus on what you care about. And there will be so much data in that scenario that you don't care about any kind of moderation on w. You just want to get it right. You focus on that first term. What does it look like underneath? Well, this here is the activation, z, for label y, for data point i. And here we sum over all other possible labels that we could have. So this is the softmax that we just saw on this slide over here. It's now just written out with the z's as inner products of w's and features. So that's what we have here. So this is multi-class logistic regression. Once we use this, we will not have the issues anymore of bouncing around, or of stopping when we're still close to some data points and could grab a better margin by continuing to train. This will automatically find the better margins, and will automatically still converge, even when the data is not separable. Next lecture we'll look at how we solve this. So far I've told you, well, we have a definition of what it means to be the best w. I have not told you how to find that best w, short of having infinite compute and checking all w's. But that's not practical, because nobody has infinite compute. And so next lecture, we'll look at how we find this w in a reasonable amount of compute time, which we'll then be able to generalize to a lot of other problems too. All right. That's it for today. I'll see you on Thursday. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180927_Reinforcement_Learning_Part_22.txt | PROFESSOR: Hi, everyone. Welcome to Lecture 11 of 188. Let's start with a couple of announcements. Project two was due last week Friday, but there's an optional mini contest in which you get to compete with staff agents and, in the future, other student agents on a board where on one side you are ghosts, supposed to defend your territory, and on the other side you're Pac-Man, supposed to go eat the food pellets and bring them back. But if the ghosts catch you on the other side before you bring them back, then you explode back into food pellets and you get reset. So this optional contest ends on Sunday. Currently on the leaderboard, at number one we have YZS. Is YZS here? Right there? Congratulations. At number two, we have Team No Bug. Are they here, Team No Bug? Over there? Congratulations. And at number three, we have [? Yu-Che Wu, ?] over here. Congratulations. So we have till Sunday for this contest. And we'll round up the results of the final leaderboard likely next week. You can get extra credit by being near the top of the leaderboard, or also by beating staff agents. So independent of what the other students do, if you beat enough staff agents, there's also extra credit for that-- 0.5 points for every staff agent you beat sufficiently often. So that's with the Sunday timeline. Homework five was released, and is due on Monday. As usual, it has three components. There is the electronic part, there's the written part, and then there is a self-assessment of the previous written part of the homework. Project three, reinforcement learning, is also released. And it's due next week, Friday, at 4:00 PM. Any questions about logistics? OK, let's dive into the technical content then. This is our second lecture on reinforcement learning. Just as a reminder, what's going on in reinforcement learning? We will still assume there's a Markov decision process. What does that mean? That there's a set of states; a set of actions, which might depend on which state you're in; an underlying model that says what is the probability of landing in state s-prime if you were in state s and took action a; and a reward function that tells you, if you're in a state s, took action a, and landed in s-prime, how much reward you got, or will get when it happens again. And what we're looking for is a policy pi of s that tells us, for every state, what is the best thing to do, where best is defined as maximizing the expected discounted sum of future rewards. The twist in reinforcement learning is that we don't know T or R, so we have to experiment in the world to figure out how it works-- how the transitions are, and where the reward is-- to then, from that, figure out how we can optimize how much reward we collect. The big idea we covered last time is that you can compute averages, instead of having access to T, and we'll review some of that now. Big picture wise, with Markov decision processes and reinforcement learning, a total of four lectures, what have we seen? In the first two lectures, the MDP was known, and there were two types of techniques we looked at. We looked at computing the optimal value v-star, the optimal Q-value Q-star, and the optimal policy pi-star. We had two algorithms for that, policy iteration and value iteration, and they assume access to the model and to the reward function.
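In symbols, the objective from this recap can be written as follows (one standard way of writing it; the course slides may format it differently):

$$\pi^*(s) \;=\; \arg\max_{\pi}\; \mathbb{E}\left[\sum_{t \ge 0} \gamma^{t}\, R(s_t, \pi(s_t), s_{t+1})\right]$$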
And then we also saw you can evaluate a policy-- somebody fixes a policy, you can evaluate it, see what its value is-- and that's called policy evaluation, which you could do in two ways. It turns out you could do it just like value iteration, but instead of having a max over actions, the action is fixed to whatever the policy prescribes. And so it's like being in a new MDP where only one action is available in every state, and then running value iteration in that new MDP-- it's a very simple MDP where the action is fixed for every state. That's policy evaluation, one way to do it. The other thing that happened is that once we got rid of that max in the value iteration equation, it became a linear equation. So we could also just solve a linear system of equations to find the values of a policy. That's when we know the model and we know the reward function. Then last lecture we started looking at, what if we don't know the model and/or the reward function? And we saw there are two ways of tackling this. The model-based approach is one where we say, OK, let's collect data. From that data, estimate T and R. And then we can reuse the techniques you already know. That's model-based reinforcement learning. So that's one way to do it. Another way is to say, well actually, rather than learning the models, we can directly learn values, Q-functions, policies. We saw this in two ways. We saw first value learning. Under value learning, we saw two approaches-- we saw direct evaluation, where, for a state, every time we were there, we just looked at how much reward we got in the future, and averaged it. And we saw temporal difference learning, or indirect evaluation, where we used the Bellman equation, but a sampled version, to update our estimate of the value of the state we were just in, from every experience. And we saw, then, that we can also do this with Q-values, and that has a lot of benefits. Because if you do it with Q-values, it also tells us what action to take in any given state. So that's what we'll look at more today, expanding on what we covered there. So in model-free learning, what we have is effectively a stream of experience. And from that stream of experience, we want to somehow learn something. And we want to do it whenever there is a transition, a tuple of this type-- one part of that stream. We want to update what we know so far. And the way we've done it is by mimicking the Bellman equation, but doing it in a sample-based way, rather than requiring access to the model. Let's take a look again at how that worked. This is the Bellman equation for Q-values. What does it say? It says the value for being in state s and taking action a, if we have k plus 1 steps yet to live in the life of the agent, is equal to a sum over states that we might land in. And it's a weighted sum, based on the probability of landing in that state, of the reward we get for that transition, plus a discount factor gamma times what we expect to get in the future. And in the future, there are only k steps left now, so this is OK. We start from Q0 equals 0, and can work our way up to any k, going up one at a time. Why a max here? Well, we're trying to compute the optimal Q-values. And so when we think about what is the value of the current state and action, it's the reward we get instantaneously, plus what we expect to get if we act optimally from then onwards. And optimally from then onwards means that we need to take the optimal action over all choices of action available at that next state s-prime.
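Written out, the equation just described is:

$$Q_{k+1}(s,a) \;=\; \sum_{s'} T(s,a,s')\,\Big[\, R(s,a,s') \,+\, \gamma \max_{a'} Q_k(s',a') \,\Big]$$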
Of course, now we want to compute this without knowing T and R-- and T and R feature quite prominently in here. So what we do is, we experience a transition, s, a, r, s-prime. What that gives us is, in some sense, the ability to compute one such term in this weighted sum. So this is our sample estimate, based on just one of these terms. Of course, we don't want to just say that the Q-value for the state and action is equal to this, because there could be many future states s-prime, and this is just one that we experienced out of many possible future states. So we're going to do an averaging over the many times we were in state s and took action a: what was the sample estimate of the value? The way we average this is with a running average, so we don't have to keep everything around. We say, OK, our current running average is Q of s, a. We have a new sample estimate, this one over here. We're going to average them together in a weighted way. If alpha is 1/2, then we have 1/2 of what we used to have, 1/2 of the new sample estimate. In practice, alpha will be much smaller than 1/2, so the update here will be dominated by this term over here, with a little bit of an adjustment based on this term here. Any questions about this? Because this will be the foundation for the first half of lecture today. Yes? STUDENT: [INAUDIBLE] PROFESSOR: Good, that's a good question. So the question is, how do we even choose alpha? If alpha is fixed, then what we get is the latest samples contributing more than the early ones, because the early ones get down-weighted over time. So you could think of it as a feature or a bug. Let's think about this. Well, if the later ones get more weight in the result than the earlier ones, the reason we might like that is because, as we've been doing this for a while, the Q-values become more and more accurate. And as the Q-values become more accurate, the sample estimates that we compute here will be more accurate than the ones we had before. And so the later experiences result in sample estimates that are more accurate than the earlier experiences. And so that's why we actually consider it a good thing that the later ones contribute more than the earlier ones. But in principle, if you wanted them all to contribute equally, you could probably find a calculation to do that. In practice, we don't care about that. We'd rather have the later ones contribute more. And it happens to also be very convenient, because then we can do this exponentially weighted average, for which we don't need to keep as much information around. Any other questions? OK, let's build on this, then. What are some of the properties? The update equation we just looked at-- if we use it often enough, meaning we collect enough experience, the Q-values will converge to the optimal Q-values, even if you're acting sub-optimally. And this is very surprising, because typically, when you're trying to compute values, like we saw in the standard policy evaluation setup, you would compute the value of the current policy. But with this update equation-- and this is because of the max that's happening in here-- that max makes it so the samples are referencing the optimal policy, even if the way you collect your data is not optimal. In fact, it would never be optimal initially, because you don't know how to do things optimally. And so this is off-policy learning.
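A minimal sketch of this update in Python (our own illustration; the representation of states and actions, and the alpha and gamma values, are assumptions):

```python
from collections import defaultdict

Q = defaultdict(float)      # Q[(s, a)] defaults to 0
alpha, gamma = 0.1, 0.9     # illustrative learning rate and discount

def q_update(s, a, r, s_prime, available_actions):
    # Sample estimate: reward plus discounted value of acting greedily at s'.
    # The max is what makes this off-policy: it references the greedy policy,
    # no matter what policy actually generated the transition.
    future = max((Q[(s_prime, ap)] for ap in available_actions), default=0.0)
    sample = r + gamma * future
    # Running average: keep 1 - alpha of the old estimate, alpha of the sample.
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
```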
That max in there allows off-policy learning, allows us to use any policy to collect data-- well, not quite any, there are a few caveats that we will look at-- but any policy that visits every state-action pair sufficiently often is OK, and will result in the optimal Q-values. What are these caveats? The caveats are that you have to explore enough-- more about that later-- and one way to think of this is that you need to somehow satisfy the condition that every state-action pair is visited infinitely often. Your alpha has to go down, otherwise you don't get convergence, because the updates remain too big over time. So you gradually decay your alpha. But don't decay it too quickly, because if you decay too quickly, you've only averaged a small amount of experience together, and it won't have converged yet to the right average. In the limit, it doesn't matter how you select actions, as long as these things are satisfied. So let's look at a demo to remind us of how this works. Here we're going to watch a Q-learning agent, whose position you can see as a blue dot. And it's going to run on its own-- I'm not moving it around, to let it run faster. It's running in this grid world, which happens to have a bunch of terminal states at the bottom. There's also a terminal state all the way at the right-- that's actually a good one. The bottom row is actually pretty bad to be at; I believe it's negative 100 reward when you exit from there. And the policy we see run here is a non-optimal policy that often jumps off the cliff. And the thing to pay attention to here is that, if you look at the optimal path, it's indeed converging onto a value of roughly 10, which is the optimal value if the discount factor is 1. You can also see that even when it jumps off the cliff, it retains the optimal values, because it knows that that was not the optimal action. The update happens based on the max of the Q-values, not based on the action you actually took. And so jumping off the cliff does not affect the values that get propagated back-- except, of course, for the action where you actually do the jumping off the cliff, which it does have to affect: the downward actions from the middle row, from which it's irreversible. There are some places it's barely ever been-- that's something we'll address-- but in principle, there might be something really good there that it hasn't found out about yet, because it hasn't explored very fully. But overall, after a while, it converges to roughly the right values, at least in the middle and the bottom, and this is happening despite acting fairly randomly. So the notion of needing to visit every state-action pair sufficiently often is actually a tricky one, because it requires that you somehow understand where you've been and where you've not been, and somehow make sure that you are in all places often enough. A simple example would be, in the real world, let's say you have your usual place you go for lunch, and it's pretty good, so you're pretty happy every time you go there. But then a new place has its grand opening, and now you have to make a decision. Are you going to try it out? It's risky. It's probably worse, because you're so happy with your usual lunch place. But if you don't try it, you'll never know. And so, if you want to make sure you have the optimal policy, you have to go try this new place. If it's bad, you might never go back. But then maybe it's a good place, and it's more optimal that you keep going there.
But without trying, you won't know. And that's the critical aspect of exploration-- you'll have to try things, and many of the things you try will actually not be good for you. But you didn't know ahead of time; you had to try it to find out. So what's a very simple scheme for exploration? Something called epsilon-greedy. It's very simple, and actually used surprisingly often. Every time step, your agent will flip a coin. The coin is biased: with some small probability epsilon it comes up one way, and with probability 1 minus epsilon it comes up the other way. With probability epsilon, the agent acts randomly. With probability 1 minus epsilon, it looks at the current Q-values, sees which action is optimal based on the current estimates of the Q-values, and takes that action. So this introduces randomness, while usually playing according to the strategy that you think is best. And so what does this do? Well, it ensures that while you're learning, you usually do pretty good things-- which you might want, as opposed to always acting randomly. But it also ensures that you have randomness in there. And if you run this long enough, every state-action pair will be visited sufficiently often, and your Q-values will converge to the optimal Q-values. Let's take a look at this in action on the crawler. So what we're going to watch here is the crawler robot again. You're going to work with that robot in your project three. Now this interface here will start making more sense. What do we have here? We have an explicit epsilon. It's right now at 0.8-- that's actually pretty high. That means that with probability 0.8, actions are taken randomly. With probability 0.2, the current Q-function estimate is followed. There's a discount factor, there is a learning rate, and then there are some things we can use to accelerate the learning-- meaning we don't have to watch everything live, but can let it, behind the scenes, collect a lot more experience. So let's take a look at this up and running with epsilon equals 0.8. So this thing is acting mostly randomly at this point, so it's not really going anywhere. But we also know that's kind of OK-- we know that to learn good Q-values, we don't need to act according to any optimal policy, as long as we get good coverage. And so if we see this thing get good coverage, we can expect it to learn a good policy. Now, we skip forward 30,000 steps-- so it's collected even more experience-- skip forward a million steps, and now, through these random actions, it's seen pretty much everything. But if we look at it, it's still not acting that great. Why is that? It's actually learned the optimal Q-values, but it's still using an epsilon of 0.8. And so most of the time, it's not using those Q-values to decide what to do. And so it's still behaving fairly poorly. But it has essentially learned things underneath. We can't know from just what we've seen so far, but we can hope for that. And soon we will decay epsilon, make it smaller, and see that indeed, when it starts listening to its Q-values, it's going to do really, really well. But as long as epsilon is large, it's never going to work all that well. But it's learning. It's doing a little better than initially, with this large epsilon. But to really see what it's learned, we've got to start decaying epsilon. So let's decay epsilon, bring it down to maybe even 0 completely-- pure exploitation, no randomness anymore-- and it's steadily making progress. And now we can see what it learned.
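For reference, the action-selection rule the demo is using, as a short sketch (illustrative code, assuming the Q dictionary from the earlier sketch):

```python
import random

def epsilon_greedy(s, actions, Q, epsilon):
    # With probability epsilon act randomly; otherwise follow the current Q-values.
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(s, a)])
```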
Something you can, in general, do with your Q-learning agent is periodically switch to an epsilon of 0, to see what it's learned so far. OK, so this worked. Eventually, after we skipped forward many, many, many episodes and lowered epsilon over time, we got to see better behavior. So that's something we need to keep in mind if we want to get good behavior out. But we didn't have anything automatic. And what we're going to look at now is something called exploration functions, which will help us decay the exploration automatically, based on how much has already been learned. Once you've learned a lot, you should maybe explore less. So, random actions are good in some sense, but they explore a fixed amount. A better idea is to pay attention to the parts of the space where you don't know much yet. If you don't know much yet, you should try things you haven't seen yet. But if you already know how things work, there's no reason to, again, do random things. So initially, you don't know, and you need to go check it out. But then once you've been there, you don't want to keep randomly checking it out. What we're going to use is something called an exploration function. Let me say something about that. f here is a function that combines two things-- an estimate of the utility, u, and a count, n. So for each state-action combination, we'll keep track of a utility and a count. And rather than just choosing an action based on its expected utility, we'll choose an action based on expected utility plus this correction. In this correction, k is a constant-- let's see, maybe it's 10, who knows-- some number. n is how often we've seen that state-action pair. Maybe we'll want an n plus 1 here, just to make sure that when we're at 0 it doesn't divide out to infinity. So what happens here is, when you've been somewhere 0 times, this will be k. Once you've been somewhere two times, it'll be k over 2. Once you've been somewhere 100 times, it'll be k over 100. And so the more often you've been somewhere, the lower this second quantity becomes, and so the less you add to how you perceive the utility of exercising that state-action pair. So what would it mean? Regular Q updates look like this, and this is shorthand notation for a weighted update. We already have an estimate of Q. We're going to have a new sample estimate of Q, and we're going to combine them: 1 minus alpha times the old thing, plus alpha times the new thing-- that's what this means. We're going to replace this now. Instead of using this over here, we're going to correct it. We're going to look at all actions that are available in state s-prime. We're going to look at the Q-values associated with action a-prime in state s-prime, but also at how often we've been in state s-prime and taken action a-prime. And this thing here will be computed, which gives a bonus to actions that haven't been taken very often yet in that state. So now, we're going to be favoring actions we haven't taken very often yet. What does it actually mean to favor these actions? What it means is that when we compute the sample estimate in this calculation over here, we'll end up with higher values. So what that really means is that, if we're looking at the value of state s and action a, and we wonder how much it's really worth to take action a in state s, it'll say, well, that depends on whether in state s-prime there are high Q-values.
But also: in state s-prime, are there actions available we haven't taken very often yet? And if there are actions available in s-prime that you haven't taken very often yet, and you can reach s-prime by taking action a in state s, then that will start favoring that action a in state s. So you're kind of thinking ahead here with this update-- you're saying, how good is my action, not just in terms of reward I'm going to get, but in terms of landing in a state s-prime where there are actions available I have not explored much yet. And this will, of course, propagate back to the next one, next one, next one. And what we'll build up over time is Q-values-- not the actual Q-values, initially, but estimates of whether a certain action gives you exploration potential. And if an action has high exploration potential, you're more likely to take it. And so what that means is, if you have a complicated environment-- maybe you need to run down a very long hallway, and behind that hallway there's a huge room with a lot of things to explore-- then this thing will encourage you to go down that hallway until you've explored everything in that room. And once you have explored everything in that room sufficiently often, then these bonuses will start to decay-- because the counts will be high, and the bonus is based on 1 over the count. These bonuses will start to decay, and you'll stop visiting that room. Or maybe, if there's actually real value in that room, thanks to high Q-values there, you'll still keep going there to collect rewards. So it's a much more structured way of doing exploration than just, in the moment, picking something random. Any questions about this? Yes? STUDENT: If you went down a hallway, wouldn't your n updates keep track of the fact that you've been down the hallway multiple times, and no longer treat it as a place to explore, even though you might not have seen everything in [INAUDIBLE]? PROFESSOR: So let's say we have a hallway, a bunch of states. And then here we have a massive room. And maybe here we have a hallway going another way, with only a small room. If you run standard epsilon-greedy based exploration, and let's say we start out here, we'll end up spending less than half of our time in this room, and more than half our time on this side. That's not good. We don't get a whole lot. Now, your question is, why do we do better with this approach? Is it really true that we're doing better with this approach? Let's think about this. So let's say we go down this hallway. The first time we go down, what will happen is, based on novelty, there will be updates to the states before, because we had actions available in the next state that make that state look good. These updates, based on a high exploration bonus-- because n is still low-- will initially favor all the states here. Then your question is, after you've been down that hallway many, many times, what will happen? What will happen is they will still be high, because you will still experience new things somewhere in this room, and they will back-propagate through these updates into the states in the hallway. Until at some point, there is no bonus anymore to be collected from this big room, at which point these Q-values will start dropping down, because there will be no contributions anymore from the exploration bonus.
So the main point here is that, even if exploration bonuses might not be that high anymore intrinsically in the moment, because they propagate into the Q-values-- if you just think of them as rewards-- you'll still continue to favor going down that hallway. Any other questions about this? OK, then let's take a look at the crawler with an exploration function. So, the same setup as before. Epsilon is still 0.1, to ensure that there is at least a little bit of randomness initially, because if epsilon is 0 initially, then you already have a deterministic policy, and you're not going to try enough different things to see the exploration bonuses at work. So your epsilon is non-zero, but pretty small. So your behavior will be dominated by the Q-function. With an exploration function, what that means is that it'll be dominated by a combination of: how novel are the things available to me after I take this action, and how much reward is available to me after I take this action? Let's run this. This time, we're not going to jump forward by skipping 30,000 steps or 1 million steps; we're just letting it run live. And so we're going to, in about 10, 20 seconds, see this thing learn to do things. Right now there's still a lot of exploration, still a lot of things it hasn't tried. And so it's doing a lot of things to learn more. But then at some point, the exploration bonus starts to decay, because it's seen things before, it's tried them before. Then that propagates into the Q-values, which then become dominated by actual reward, and it starts exploiting. And so here, we've got something on the order of a few hundred or 1,000 steps, rather than needing to skip forward a million steps before we get good behavior. OK, so that's Q-learning with exploration. There is another concept that's good to be aware of; it's called regret. It's a term that you know from real life-- it's when you do something and you wish you hadn't done it. Maybe some people are thinking about that right now in Washington, not sure. So there is a notion in AI also about regret, and it's actually quite related to the real-life notion, at least the one of wishing you hadn't done something you did. More formally, when we have algorithms, we could compare them just based on whether they got to an optimal policy, or we could compare them based on how quickly they achieved an optimal policy. For example, we know epsilon-greedy Q-learning and exploration-function Q-learning both ultimately achieve optimal policies. So how can we quantify the notion that we believe one of them is slightly better than the other? Well, one way to do it is to say, let's see how much reward is accumulated during the learning. Let's have this thing run, let's say, for 10 million steps, and see over the entire duration how much reward was accumulated. Not just how good it is at the end, but over the entire process. Then all of a sudden you're measuring how fast the learning is, not just how good the learning is when it's done. So regret measures the difference in that sense. If you had an optimal learner that learned as quickly as possible, versus your learner, what's the difference? And so random exploration will have much higher regret than an exploration-function-based learner. Now, you're never going to have zero regret, by the way, in reinforcement learning, because if you don't know how the world works, you still have to try things out and find out whether they're good or bad. But different approaches will have less regret if they're better.
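Before moving on, here is the exploration-function variant of the update pulled together as a sketch (again our own illustration; k, alpha, and gamma are illustrative constants, and the n + 1 in the denominator is the guard against dividing by zero mentioned above):

```python
from collections import defaultdict

Q = defaultdict(float)   # Q[(s, a)]
N = defaultdict(int)     # visit counts per (state, action)
k, alpha, gamma = 10.0, 0.1, 0.9

def f(u, n):
    # Optimistic utility: the fewer times we've tried something, the bigger the bonus.
    return u + k / (n + 1)

def q_update_with_exploration(s, a, r, s_prime, available_actions):
    N[(s, a)] += 1
    # Actions at s' that are rarely tried look better, so their exploration
    # potential propagates back into Q(s, a).
    future = max((f(Q[(s_prime, ap)], N[(s_prime, ap)]) for ap in available_actions),
                 default=0.0)
    sample = r + gamma * future
    Q[(s, a)] = (1 - alpha) * Q[(s, a)] + alpha * sample
```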
OK, so far we've looked at exact methods-- that is, methods that give us the exact solution if we run them long enough. Now we're going to look at ways of approximating the solution. This is the first type of approximation we'll do, and we'll see a lot more approximation in the later half of the course. But we'll already see one pretty basic machine learning idea today. So what we're interested in is generalizing across states. Basic Q-learning is nice-- it keeps track of a table of all Q-values. And that's good if you can fit all your Q-values in a small table. But in realistic situations, the number of states is very, very large, and you will not be able to store a table with a Q-value for every state-action pair. So if you can't do that, how are you going to run Q-learning? Well, maybe you say, I'll just buy a bigger computer to store everything. But let's say that even that will not do the job for you-- the space is too large; you cannot store a Q-value for every state-action pair. OK, well, the key idea is going to be that we want to generalize-- we want to learn about a small number of training states from experience, and then generalize that experience to do well in new situations we've not been in before, but that are related. A simple example would be, maybe you've never been in some new building before-- but you've been in other buildings. And from having been in other buildings, you know that trying a door is a good idea to try to get into a building. Even though you never tried that door before-- you've never been in that specific state before-- you knew it from other experience. Can we get the same thing here? So you don't need to relearn everything in a new state, especially not if it's related to things you've seen in the past. Let's take a look at Pac-Man in action. So first, we will look at Pac-Man running Q-learning in a very small world. And we'll just see Pac-Man learning from scratch. A lot of losses, because initially the Q-values aren't all that good. In fact, mostly losses. But sometimes a win gets it some better reward, and it might learn from that how to do better in the future. But it doesn't learn too quickly. OK, this is taking a long time. Often in Q-learning, you don't want to watch everything in detail, so we'll close this. Let's now have it silently train for 2,000 episodes, and then after that, we'll watch it again. So it is training silently for 2,000 episodes. It's becoming better. And now it has a Q-function that prescribes a pretty good policy, and it seems to usually win. So let's think about 2,000 episodes. It's actually a very small world-- there are only six positions for Pac-Man, six positions for the ghost, there's one food pellet-- but it took 2,000 episodes to learn. A very long time. Let's look at a slightly bigger one. Still very small, but just slightly bigger. Well, the north action was interesting to try at first. Here it's trying some other things. You look at it, and it seems like it's pretty dumb-- why is it not doing the right thing? Well, actually, it's in new states, experiencing things it's never experienced before. To us, they look similar, because being eaten by a ghost-- it doesn't matter which square you're on, you're eaten. But for Pac-Man, being eaten at square 2-2 might be very different from being eaten at square 2-3, because it's a different state. And so it needs to learn a different value for that, if you run standard Q-learning.
And so you want to get away from that-- we don't want to have to re-learn what getting eaten by a ghost means in every single square. Same thing for eating a food pellet, and so forth. We want to somehow generalize experience. And that should lead to much faster learning. OK, so to make this even more explicit, imagine you run Q-learning, and you have discovered that this state is a bad state. That seems a good thing to have learned. Now, what if you're in this state? That's almost the same state. But actually, it's just a different entry in the Q table, and-- if you do standard Q-learning-- you know nothing about it. How about this one? I mean, this is extremely similar to that first one. Spot the differences: up here, a small difference. One food pellet is not there, but in the Q table, it's just a different entry. And you have learned nothing about that new entry from your past experience, if you use regular tabular Q-learning. We've seen that in action. So what can we do? You've actually seen this before-- in lecture seven, on games, we looked at evaluation functions. Evaluation functions were this thing where you said: to decide how good a situation is, I can't run all the way to the end of the game to evaluate it, because the minimax calculation is just too much. What I'm going to do is just have an estimate. And that estimate is some weighted sum of different contributing factors. So for example, distance to the closest ghost-- that might be important in evaluating how good your current situation is. Distance to the closest dot, number of ghosts, maybe 1 over the distance to the closest dot squared. Is Pac-Man in a tunnel or not? If you're in a tunnel, you might get cornered more easily, so maybe that's not as good a place to be. And many more. And so what you do there is design a set of features, and these features evaluate some aspects of the current situation that you think contribute to how good that situation is. And then, actually, in your project two, you played with those features and how to sum them together in a weighted fashion so that this might come out well. What's new now in Q-learning is that we'll learn how to combine these features in the right way. So more concretely-- we could do this either for value learning or for Q-learning; let's focus on the Q-learning one-- we'll now say our Q-function is a weighted sum of feature values. And the weights here, rather than setting them by hand and us saying, oh, I think the ghost is so much more important than the food pellet-- maybe 10 here, 1 there-- rather than doing that, we're going to run Q-learning, and we're going to expect Q-learning to learn these weights. Such that, once you've learned them, the Q-values computed using those weights are hopefully fairly precise Q-values. Will they be perfect? Probably not. Because it's unlikely that with your weighted sum of features you can represent the exact optimal Q-function. But hopefully you can get pretty close with your weighted sum of features, and as a consequence learn something that's very useful for acting in a new environment. And also, you can learn a lot more quickly, because thanks to these features, if you're in new states where similar features trigger, you already know something about those states before you've ever been there.
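As a sketch, evaluating such a linear Q-function is just one line of arithmetic; the two feature names and the numbers here are illustrative (they happen to match the worked example coming up shortly):

```python
def q_value(weights, feats):
    # Q(s, a) = sum over features i of w_i * f_i(s, a)
    return sum(weights[name] * value for name, value in feats.items())

weights = {"dist-to-dot": 4.0, "dist-to-ghost": -1.0}
feats = {"dist-to-dot": 0.5, "dist-to-ghost": 1.0}
print(q_value(weights, feats))  # 4 * 0.5 + (-1) * 1 = 1.0
```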
A disadvantage to keep in mind is that, if you don't have enough features, it could be that two states, as far as the features are concerned, are identical. And that means you cannot distinguish them anymore with your Q-function. So you look at your features, and if you find two states that you think are really, really different-- one is really good, the other one is really bad-- but they have the exact same feature values, there's no way your approximate Q-learning agent can distinguish them anymore, because the only thing it can do is interpret the state through the feature values. OK, let's take our small break here, and then look at how this works after the break. All right, any questions about the first half of lecture? Yes? STUDENT: [INAUDIBLE] PROFESSOR: So the question is, why do we use this specific form to approximate our Q-values? There are many choices that can be made there. One nice thing about having a linear combination of features is that the learning updates we'll see are simpler to understand. There are ways to make this non-linear, and we will look at that at the very end of the course. But for now, we're going to assume that we have a set of features, and that the Q-function that we want to fit is a weighted combination of those features. We'll revisit that assumption maybe in lecture 20 or 22. OK, so we want to do approximate Q-learning now, which means we want to learn these weights from experience. Let's take a look at how this works. We have a transition, and based on this transition we want to get a better estimate of these weights. Well, just like before, we can look at the difference between the sample estimate of the Q-function for state s and action a, and our current approximation so far. So this is the current sample, and this is the approximation so far. Then, in traditional Q-learning, what we would do is say, OK, we have our estimate, and we're going to correct it by some scale factor alpha times the difference, and nudge it in the direction of the sample we just saw, effectively. But not necessarily all the way there-- that's why alpha is going to be smaller than one-- because it's just a sample, and we need to average many samples to know what it really should be. So we just nudge it in that direction. But now we don't have a table in which to store such updates. We can definitely retrieve Q of s, a for the state and action that we just experienced-- we just need to run this evaluation here, the weighted sum of the features for that state-action pair-- but we don't have a table to store this update into. And we also don't want to have such a table-- we want to, instead, update the weights, rather than a table of Q-values. It turns out that this is the update you end up with, in case you want to update the weights. It's not derived on this slide-- we'll see some intuition behind the derivation on some later slides-- but for now, let's interpret this update. What does it say? It's saying that the weight for feature i is whatever the weight for feature i was, plus some learning rate, times the difference, times the feature value. Let's think about this-- let's say the difference is positive. And for now, let's assume that the feature values are either zero or one. Then if the feature value is zero, nothing changes about the weights.
So this was a state-action pair where the feature value was zero-- it was not active, and nothing changes about the weight, because it's a state where that feature isn't present. Now, what if the feature value is one-- what happens? Well, the feature value is one, and the difference is positive, meaning that the sample was higher than the estimate we have. Then we get a positive update on the weight. What's the result of that? We make the weight more positive. That more positive weight gets multiplied with the feature value, which is one in our current assumption, which increases the Q-value at state s, action a. So we see we have the right behavior, in terms of the direction that it goes. When the feature value is one and the difference is positive, the weight goes up, which results in our Q estimate also going up, which is what we want. It gets nudged in the right direction. Now, what if the difference is negative? Meaning that the experience we got through the sample was lower than we had anticipated, based on our current Q-function approximation. Well, then the weight will be nudged down, and that's also what we want. Because if the weight gets nudged down, then if we reevaluate the Q-value for s, a, it's now going to be lower than it was before, because a lower weight was multiplied with one, resulting in a lower sum of these weighted features. So for the zero-one case, we get behavior that makes a lot of sense. Now, to generalize this interpretation, imagine you have a feature that is more continuous in range-- maybe it can go from negative 10 to plus 10. Then what's going to happen? Let's first think about it being plus 10-- as high as it can possibly get. And let's say our experience is such that the difference is positive-- we got more than we expected from our sample. Well, if our feature value is 10, very high, and we've got a positive difference here, then the weight will go up a lot. If another feature was only one, and the difference is still the same-- positive-- then its weight will go up by only one times that difference. So what we see here is that, if your feature was more active, the weight will go up by more. So whenever something is very active, there'll be a big correction-- how big depends, of course, on the difference. And the direction depends on the difference, too: if the difference is positive, a correction in the positive direction; if the difference is negative, in the negative direction. But we see that the correction is scaled by how big our feature was. And that makes sense. If a feature takes on a higher value, it's a feature that's dominating how we think about that particular state. And so if there's a correction to be made for that state, the weights associated with those features should be updated the most. Whereas, if a feature is very close to zero, then actually, it doesn't play much of a role in that state. And so when we do an update, even when the difference might be relatively high, it gets multiplied with this low number, and those weights won't update very much. OK, so now we know how to interpret this. And if things are negative, things will work out, too-- there will be double negatives that cancel out the right way. So what's happening underneath is that weights get adjusted-- more adjustment when there's a bigger difference, and when your features take on bigger values in an absolute sense. We'll see a formal verification in a couple of slides. But let's first take a look at this in action.
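First, here is the update as a sketch, reusing the q_value helper from the earlier sketch (the alpha value is an assumption chosen for illustration; the numbers mirror the worked example that follows):

```python
def approx_q_update(weights, feats, r, max_q_next, alpha, gamma):
    # difference = [r + gamma * max_a' Q(s', a')] - Q(s, a)
    difference = (r + gamma * max_q_next) - q_value(weights, feats)
    # w_i <- w_i + alpha * difference * f_i(s, a)
    for name, value in feats.items():
        weights[name] += alpha * difference * value
    return weights

# Worked example below: estimate +1, sample -500, so the difference is -501.
weights = {"dist-to-dot": 4.0, "dist-to-ghost": -1.0}
feats = {"dist-to-dot": 0.5, "dist-to-ghost": 1.0}
approx_q_update(weights, feats, r=-500.0, max_q_next=0.0, alpha=0.004, gamma=1.0)
print(weights)  # roughly {'dist-to-dot': 3.0, 'dist-to-ghost': -3.0}
```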
Imagine you are in a Pac-Man world, and you have two features-- one related to the distance to the closest dot, and the other one related to the distance to the closest ghost, maybe. And so maybe you initialize it, and you say, well, I think being close to a ghost is not good. So maybe I should have something where the ghost feature contributes in the opposite way, where distance is actually good. So this is actually not a good initialization. We wouldn't like this negative 1. You really would want ghosts to be far away, so a large distance should be positive. But this is just how you initialized it. OK, let's see what happens. We have a world. We have a current experience. We take the action north. We get from that a next state and a reward. It was a pretty bad action-- we got eaten by the ghost. The reward is minus 500. And the Q-value of s prime-- no actions available, the game's over-- is 0. No reward anymore when the game is over. OK. So that gives us a sample estimate. Our sample estimate is minus 500. What was our actual estimate? Well, that's based on this equation over here. It's 4 times F dot, which is 0.5, so that contributes 2 here. And then negative 1 times 1, which is a negative 1. Sum that together, and it gives us plus 1. So our estimate used to be plus 1 for the state-action pair. Our sample says negative 500. So we need to revise this down. What will the update do? It will say, OK, for each weight we update it by a little nudge in the direction-- the difference says in which direction-- and then multiply it with the feature value. And so here, the first one, the feature value was 0.5. The second one, 1. After the update, we get this approximation for our Q-value. Of course, just one update is not enough. We'll need to keep doing this to get better Q-values over time. But that's mechanically what happens underneath. Now, let's justify this. So actually, let's watch it in action first, before we justify it. We'll run approximate Q-learning in a pretty big grid. No success in the first one. It didn't realize the ghosts are bad. But now it's done an update, and hopefully it realizes being close to ghosts is bad. But it still didn't anticipate it well enough. Definitely those food pellets are good. There's a lot of reward in those. And now it seems to have learned that staying away from ghosts is a good thing. It hasn't explored the power pellets, so it doesn't know anything about those. It's possible that the random initialization of the weight in front of the power pellet feature made it a negative thing. And so maybe that's why it's staying away from it, if we don't explore enough. We see that actually this is all the episodes it's had-- about 10 episodes so far, maybe-- and it's already playing pretty well. Compare this with a really tiny world, where we needed hundreds if not thousands of episodes before we learned anything using tabular Q-learning. Why can it learn so quickly here? It's approximate Q-learning with a small number of features. There's not much to be learned. If your features are just how close you are to the next food pellet and how close you are to a ghost, then it's only two numbers to be learned-- it's how to trade those off. Once you learn those numbers, you're good to go. And that's why, from a relatively small amount of experience, it's possible to learn a Q-function that's pretty good. So how do we justify this a little more formally? We'll refer to something called least squares here. Hopefully, many of you have seen that before.
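As a quick sanity check before moving on to least squares, here is the Pac-Man update above written out as arithmetic. The transcript doesn't state the learning rate, so the 0.004 here is an assumption:

```python
w = [4.0, -1.0]        # [w_dot, w_ghost], the initialization from the example
f = [0.5, 1.0]         # feature values for the state-action pair
estimate = sum(wi * fi for wi, fi in zip(w, f))  # 4*0.5 + (-1)*1 = 1.0
sample = -500.0        # reward of -500, no future value: the game is over
difference = sample - estimate                   # -501.0
alpha = 0.004          # assumed; not given in the transcript
w = [wi + alpha * difference * fi for wi, fi in zip(w, f)]
print(w)               # approximately [2.998, -3.004]: both weights revised down
```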
The notion of least squares is that you have a bunch of points, and you'd like to fit a line through those points-- at least in the simplest version of least squares. So how does that work? So we have a feature value, F1, just one feature. We have a y, the value we want to predict. And we have a bunch of red dots, which show experiences-- samples of what we should have predicted for those specific feature values. And the way we're going to predict in the future is by fitting a line to those points. We're going to say, well, I'm going to predict it as some offset plus a slope times the feature value. And one line is shown there. The question is, what would be a good line to choose? You'd probably say a line that's close to all the points. But what does it mean to be close to all the points? What does that mean? How do we know which w0, w1 make this line closest to all the points? Can we make this mathematically formal? Then ideally, our Q-values would be mathematically, in some formal way, closer to the optimal Q-values. And also, you can do this in higher dimensions, of course. When we estimate the Q-values, it's not going to be just one feature. It's going to be many, many features. There's two features here, but in practice it might be three, four, even hundreds or thousands of features. The generalization for two features would be fitting a plane through a set of points in a three-dimensional space. And then beyond that, we can't visualize it. Actually, one thing I want to note-- on the slides, when we have one of these stars, it means that this is something we think is really useful for you to know, but that we're not going to quiz you on in an exam. That's what the star means here. So it's really good for you to know this. But we also know that we don't require, maybe, in our pre-reqs the background that makes this maximally easy to follow along. And some of you might not have that background. And so we're not going to require you to now go study that background to understand these few slides. But we think most of you do have the background and will benefit a lot from seeing where this comes from. So least squares, what does it do? It says, I'm going to measure the vertical distance of my line to each of the points. And I want the sum of the squares of the vertical distances to be as small as possible. That's how I'm going to measure how good my line is. So that would be this. y i-hat is my prediction. There is y i, which is given. And then I want to look at the difference, squared. Because if we just average the differences, that's not a good measure, because then positives and negatives will cancel out. We need to make sure the positive and negative errors don't cancel out, because both positive and negative errors abound. So we square them, sum it all together, and that's our objective. Our predictions are in the form of a weighted sum of feature values, so that's what it looks like. So what we really want to find, then, is a set of weights-- w1, 2, 3, and so forth-- such that this thing is minimized. In principle, you could try all possible settings of weights, see which one achieves the lowest, and that's the one you want. But these are continuous variables. So trying all possible settings of the weights is not practical. There's infinitely many choices. So we need a better way to find the best setting of the weights. So what we're going to do is something a lot like the local search you've seen in the CSP lectures. So let's say we had just one point, one term in that objective.
And this will generalize to if we had many points. And we have this thing over here. Can we do something close to the local search? What did we do in local search? We had a current choice of our variables. In this case, our variables are w0, 1, 2, and so forth. And can we maybe change that choice to make it better? And then repeat, repeat, repeat, and hopefully end up at something good. Can we do the same thing here? Well, the question is, can we do something maybe even smarter? Instead of just changing it and hoping that it might get better, and otherwise reverting it, can we know in which direction we should change each weight? Should weight 1 go up or down? If we can decide that, then we can, in a much more informed way, get to a good low error. How do we know whether we should move a weight up or down? Well, we should ask, if we change the weight up or down, does the error go up or down? And we want the error to go down, and nudge the weight in the corresponding direction. Mathematically, what that means is we compute a derivative. So you would say, what is my error-- which is the function I'm trying to minimize-- as a function of wm, one of the weights? And I want to see the slope. I want to say, if I change wm, how much does my error go up or down? That's what this thing is saying. It's saying, if I change wm and I do plus 1 for wm, then this is how much my error will change. OK. If we can do that, then we can do an update to the weights. We can say, OK, well, if the slope is zero, no update needs to happen, because that weight does not affect the error. But if the slope is non-zero-- let's say this is a positive slope, so if I make wm go up, the error will go up-- then I just step in the opposite direction, step in the negative direction, and that's exactly what's happening here. There's a negative sign in the derivative, and that negative sign has disappeared in the update-- we step in the opposite direction of the slope, because we want to go downhill. So that's our update here for least squares for that weight. We can do that for all weights, and that's exactly what's happening in Q-learning just as well. In Q-learning, we formulate an error function, a squared error function for the current sample. What is our sample estimate? What is our Q-function? We can look at the squared error, take the derivative with respect to a weight, and then that derivative will correspond to this thing over here. This here corresponds to this over here. And that tells us in which direction to step. And then we'll step scaled by alpha. We didn't fully derive it on the slides-- it's just meant to give you some intuition behind why what we're doing in approximate Q-learning is actually well-justified. Any questions about this? Yes? STUDENT: [INAUDIBLE] PROFESSOR: OK. The question was about the negative. So let's look at this again. So what we're trying to do, we're trying to make the error, as a function of our weights, as small as possible. What this derivative is saying is, if I change wm by increasing it by one unit, how much will my error go up? But I want to move in the direction where the error goes down. So this quantity here will be positive if moving wm up makes the error go up. But if that's the case, you actually want to move in the negative direction. And so that's why, when we take the derivative, we end up with a negative over here, and that negative disappears when we look at the update. Because we want to move in the direction opposite to the direction in which the error goes up. OK?
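To make the derivative story concrete, here is a minimal sketch of fitting a one-feature line by repeatedly stepping each weight opposite the slope of the squared error. The data points and step size are made up; the constant factor of 2 from the derivative is absorbed into alpha, as is conventional:

```python
points = [(0.0, 1.1), (1.0, 2.9), (2.0, 5.2), (3.0, 6.8)]  # toy (x, y) data
w0, w1 = 0.0, 0.0      # offset and slope, initialized arbitrarily
alpha = 0.01
for _ in range(5000):
    for x, y in points:
        error = (w0 + w1 * x) - y   # prediction minus target
        # d(error^2)/dw0 is proportional to error * 1;
        # d(error^2)/dw1 is proportional to error * x.
        # Step opposite the slope -- the minus sign from the lecture.
        w0 -= alpha * error * 1.0
        w1 -= alpha * error * x
print(w0, w1)          # close to the least-squares line for these points
```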
One thing to think about-- and we'll look at this more in the machine learning part at the end of the course-- is that it's not necessarily the case that the more features, the better. You can have too many features if you have a finite amount of data. For example, imagine you are fitting a function to this set of points. You might say, well, let's fit a line. And you say, OK, that's my line. You find the least-squares error line, and you're like, OK. But there's still a fair amount of error. So you might then say, well, I'm not so much a fan of a line-- how about a parabola? Parabola, OK, let's fit a parabola. Maybe now you're a little happier. What does it mean to fit a line? A line means that you say y is w0 plus w1 x. A parabola means that you say y is w0 plus w1 x plus w2 x squared. If you want a third-order polynomial, same thing-- plus w3 x cubed. So as we go to a higher-order polynomial, what that means is we introduce more features. A new power of x is a new feature. And so as you introduce more features, you can start fitting this more and more closely, because you have more flexibility. The set of functions you can represent by a third-order polynomial is strictly more than you can represent by a first-order polynomial. Because, in fact, the first-order polynomials are all included in the third-order polynomials by just setting some coefficients equal to zero. But you can set those coefficients to non-zero, expanding the things you can represent. So you might wonder, well, what if we fit, let's say, a 15th-order polynomial to this? Wouldn't that be nice? Well, maybe, but this is what you'd get. What you see here is that, if you just look at the points that we're trying to fit, it does really well. But then in some other places, it does something that we likely don't want it to do. That's called overfitting. It's trying to get all the points correct, but as a consequence, it ends up doing some crazy things in other places. And so in practice-- and we'll see a lot about this in the later lectures-- you need to guard against this kind of overfitting. For now, the best way to think of it is that you don't always want just more and more features, because you'll have these weird things come up and then crazy things happen, and you might as well have anybody draw some line through the points and call it done. OK. So that's Q-learning. Now, we're going to see a different approach to learning good policies to act in worlds. So we've seen value learning, we've seen Q-learning, now we're going to see policy search. In policy search, you kind of just say, hey, there's many policies to choose from. Ultimately, I want to end up with a policy. Let's forget about learning Q-values, V-values. Let's just try different policies and see which one is the best. And then we'll call it done, because we found the best policy. So the reason you might want this is because, when you have approximate Q-values, even though they might work well, what they're approximating is trying to get this least-squares error down on this Bellman equation. And we know that's related to good performance, but it's not directly tied to good performance. It's approximating something that is related to good performance. Can we do something else? Well, for example, if you look at Q-values, it could be that some Q-values are much better approximations than other ones. But in the way they differentiate between actions, one of them picks the right action, the other one picks the wrong action.
The priority in Q-learning is learning Q-values that are close, not making sure that the action that's the best is favored over the action that's not the best. OK. So what if we directly pay attention to the actions? Policy learning. One way to do this: we could say, we start with an OK solution-- so we have the Q-values-- and then we're going to fine-tune by hill climbing on the feature weights. What does that mean? We have some Q-values, and then we can nudge the feature weights up or down and see what happens, just like we did in local search for CSPs. If it's better, we keep the change. If it's worse, we discard it, and we repeat. And over time, the policy might get better and better and better. Now, what does it mean for a set of weights to be better or worse? Well, you need to execute the policy and see how well it does. Now, you might have to do this many, many times, because the world is stochastic, and so sometimes maybe you're lucky, sometimes not. So it can be somewhat expensive. And in general, policy search tends to be less sample-efficient, which means you need to collect more experience to learn to do well with policy search than you need with Q-learning. But it's often a little easier to understand what's going on. You have this policy. You nudge it. If it's better, you keep it. If it's worse, you discard it. And you go again. OK, so often people will use a combination, where maybe you start with Q-learning and fine-tune with policy search. There are even other ways to combine them, which we're not going to get into here. Let's look at some example success stories. So here is a helicopter. Helicopters are very difficult to control. You might wonder why. You've flown in a plane many times, and you've heard people say, well, if the pilot goes to sleep, the plane will still fly. Because even if you make a paper plane, you just throw it and it flies. It naturally flies. With a helicopter, actually, it's not like that. It's more like a rock in the sky that wants to drop, and you need to put effort into keeping it in the sky at all times. To think about it a little more carefully, what's really going on with a helicopter is closer to this: let's say you have a very flat surface, maybe something like this-- a tray that you can hold, but with no boundaries to it, just flat. It ends at some point, but there's nothing coming up on the side. You have a marble on it. And now you need to walk somewhere, and you need to make sure this marble doesn't roll off your tray. That's effectively what pilots are doing when they're controlling helicopters. So it's really hard. If you don't adjust for what's happening right now-- a wind gust that comes in, or some other perturbation-- then you might be on your way down. So you need to pay a lot of attention as the pilot to make sure to keep it in the air. So it's also hard to design controllers for this. But with reinforcement learning, a controller was designed for this helicopter. And let's take a look at this helicopter in action. [ENGINE ROARING] OK. So this is an extremely stable helicopter flight. The helicopter is pretty much not moving. Very hard to do, even for the best human pilots. It's also upside down. So what's going on there? You might not have flown a helicopter upside down or seen one fly upside down. How come this works? Let's think about this. How do you control a helicopter? You have four control channels. Channel one is the average angle of attack of the blades.
And the steeper you make that, the more air you push down, the more thrust you have. You can actually also make that negative. If you make it negative, you would go down very quickly-- except if you are upside down, in which case the negative angle would keep you up. And it actually turns out it's more efficient to fly this way. Any thoughts why? It's definitely not more comfortable, but-- [LAUGHTER] STUDENT: [INAUDIBLE] PROFESSOR: Sorry, say it again. STUDENT: All the thrust is used [INAUDIBLE].. PROFESSOR: So the proposed answer is, all the thrust that you generate is used to lift the weight of the helicopter here, whereas maybe if you fly the normal way, you're not using all the thrust. And that's exactly-- that's the right direction, but I'm going to refine that answer just a little bit. If you think about what happens when you keep a helicopter up in the air, you're pushing air down. You push the air down, and the helicopter gets the counterforce to stay up. When you push that air down, it's much faster than it was before. It's going down quickly. If it's going down quickly over the body of the helicopter, it's dragging that body down. And so what you have is kind of a double effect of pushing air down. You get a counterforce keeping you up, but also then you generate drag over the body of the helicopter, dragging you down-- or trying to drag you down. When you flip it upside down, that's not happening. There's still air coming over the body of the helicopter, but it's coming in from a wider intake, and it only goes at half the speed compared to the air that's coming out. And because drag is quadratic in the velocity of the air-- half the speed means a quarter of the drag-- you get four times less drag flying this way than flying the other way. Now, you might wonder, how do you control this tray that you're kind of balancing your marble on? One thing is the angle of the blades. The other thing you control is the differential angle of the blades throughout the cycle. So for a helicopter to move forward, you can't just ask, let's move forward. You actually have to tilt the nose down, and then the thrust will be such that some of your thrust is getting it to go forward once combined with gravity. How do you get your nose to go down? A differential amount of air pushed down at the front, back, left, and right allows you to rotate the body of the helicopter. So that's the cyclic control. And so there's something in there called a swash plate that ensures that you go through cycles of different angles of attack as your blades rotate at something like maybe 30 times per second. You have a fourth control, which is to make sure that you're facing the direction you need to be facing. If you didn't have the tail rotor, what would happen is your helicopter body would spin counter to your blades. But the tail rotor compensates for that by also having a differential angle of attack. And you can control that to then also decide which way you want to look. OK, that's helicopters. You might also wonder, why don't people fly upside down? I've wondered that, too. Probably it's just not that easy yet to build helicopters that humans sit in in a way that is comfortable to fly upside down. Otherwise, sustained flight upside down would, in principle, be more efficient. Here's another example. Here we're going to watch reinforcement learning in action for a legged robot. What do you expect to happen here? Well, we know how reinforcement learning works. Initially, it doesn't know what to do. It's exploring.
But then hopefully over time, it becomes better and better and better and figures out what it should be doing. So it's just falling over. It's falling over again, but it's falling over a little later. And it's better to fall over later than earlier. And it knows that. It's getting higher reward now. And now it's getting very high reward. Now, the beauty of reinforcement learning is that you don't have to write new code for a new robot. You just swap in a new robot, run the same code, and it can learn to run. Here, the robot starts on the ground. It has some extra learning to do-- it wasn't on the ground before. It gets rewarded for getting its head as close as possible to standing head height. The closer, the better. And it figures it out. Here, two of these robots are learning together, and they're practicing soccer. [LAUGHTER] One of the beauties here, aside from how they run, is that they can train each other. So they can gradually both improve in what they're able to do. As the goalie gets better, the penalty kicker needs to get better to get reward. And they can gradually get better and better together. You can do this with real robots. Let me pause this for a moment. This is BRETT-- the Berkeley Robot for the Elimination of Tedious Tasks. BRETT lives on the seventh floor of Sutardja Dai Hall, and in this video is learning to play with a children's toy. This video is sped up a little bit. This takes about one hour in real time. But it is reinforcement learning doing something it's never done before. You might say an hour is a long time, because you could do this the first time. At least I hope y'all could do this the first time. But it's not really true that you could do this the first time. Because the first time you did this was maybe when you were a one-year-old. And you might not have succeeded the first time around when you were a one-year-old. And so you need to think of this reinforcement learning agent as really a zero-year-old. It hasn't done anything yet; it is born into this world not having done anything, and it gets rewarded for putting that red block into the matching opening. And it just goes at it until it optimizes how to get it in there. Here's another example. Let me pause this before we run this. This is NASA's Super Ball. It's a research collaboration we've had with NASA on this robot. You might wonder, why would they make a robot this way? And especially, this one is for planetary exploration. So it'll go visit planets that maybe it's hard to send humans to, but a robot can go explore for us. Well, what's beautiful about this robot is that it's a sphere, in some sense. And so what you really get is wheels in every possible direction. And wheels are what you need to get places. The other thing you get is, if you lengthen the cables enough, this thing can be completely flat. If it's completely flat, then it fits in a rocket very nicely-- very small volume. If you then expand it again, it can land with some damping. So you need smaller airbags this way than with a standard rover that you might land. The only tricky part now is that it's very non-intuitive to program how it should do anything. But reinforcement learning was able to solve that problem. And here, in this case, we had reinforcement learning run in a simulated environment that simulates this robot, learning in simulation, after many, many trials, how to control the simulated robot. And that was then deployed on the real robot. And this is tested here at NASA in Mountain View. How about a hand? We use our hands a lot.
Clearly, there's control involved. Can we have reinforcement learning learn to control a hand? In robotics, hands have really proven challenging. And the reason they've proven so challenging is because there are a lot of actions to be taken at any given time. There are a lot of degrees of freedom. And you need to control them all the right way to get a good outcome. Most outcomes will be bad. Most outcomes of controlling this thing will be that the cube drops on the ground, and that's it. There's only a very small set of ways of controlling this that actually lead to a good result. And how do you find that and zero in on that? Here, we're not going to watch the training-- we're just going to watch the final capability. This is a result done at OpenAI. And what we see here at the bottom-right is what it's supposed to achieve. And we see that, indeed, it's capable of reorienting the object it's holding to match up with what's in the bottom-right. One interesting thing here is that this one also was trained only in simulation. But it turns out, building simulators for something like this that are accurate is not practical. So the agent was trained in many, many, many simulators and was trained in a way that it could do well in all of these simulators. And it turned out that if it was good enough to do well in all of these simulators, even though all of them were inaccurate, something that was good for all the inaccurate simulators was also good for the real world. So a strategy to get something to work in the real world is to get it working in many, many different simulators. [MUSIC PLAYING] Go robot. There it is. So at this point, we're not just concluding this lecture, we're actually concluding the first stretch of the course. What we've seen in the first stretch of the course is search, CSPs, games, MDPs, and RL. That's search and planning, with a bit of learning. What we're going to see in the second part, starting Tuesday, is probabilistic reasoning to deal with uncertainty, and a lot more learning-- to learn from data rather than having a simulator available to you ahead of time. OK. That's it for today. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181025_Hidden_Markov_Models.txt | [INTERPOSING VOICES] PROFESSOR: Hi, everyone. Welcome to the 18th lecture of CS188. Two announcements today. The project 2 mini contest results are ready for presentation. I'll present them right after the break in today's lecture. And you have one thing to work on momentarily, which is homework 8, which, as always, consists of three parts: self-assessment of the previous one, then a written part, and an electronic part. And that's due on Monday. Any questions about logistics? OK. Today's topic is Hidden Markov Models, possibly the most frequently used Bayes net model. Before we do the math, let's look at a quick demo. So this is what the Pac-Man world in your project 4 will look like. So what we've got here: we've got Pac-Man in the top left corner. We've got what looks like an empty board. But actually, there are ghosts on there-- they're just invisible. And Pac-Man is more powerful than you're used to in this project. Pac-Man can blast a ghost whenever they're on the same square. So Pac-Man wins out. And the goal is to find the ghosts, track them down, and take them out-- put them into ghost jail, which is the bottom left. So how to do this? Well, you could start running around. This is just me playing with a keyboard. And at the bottom right, you see four numbers hop up and down. Those are the distances to the different ghosts. There are four ghosts, and there are different distances to the different ghosts. And these ghosts are actually moving around. So let's see. Moving closer to the blue ghost-- maybe, oh. We're very close to the blue ghost. Sometimes 0. But we still didn't eat it somehow. Let's see if we can find another ghost. So it's pretty hard to find these ghosts when you can't see them and all you have is distance measurements. But if you manage to run into one of them, you would actually get points. Oh, we ran into the cyan-colored ghost. Actually, they ran into us. And this gave us a ghost in jail, which gives us points. Let's see if another one wants to run into us. You should try to help them, of course, with your program-- not just sit still. Where is the orange one? Looks like we're getting closer. It's not that easy a game to play just looking at these numbers. And that's the motivation for today's lecture. If you have access to these numbers that were shown at the bottom here, can you do a calculation that makes it possible to-- another one got themselves eaten. These are not the smartest ghosts. If you have access to those numbers, can you do something better than this poor play I've been doing here, by maybe somehow extracting information about where the ghosts most likely are, and then moving towards that? And that's what HMM tools will give us. Of course, it's not just for tracking down ghosts. We'll see other applications for HMMs. So this will be all probabilistic reasoning. So, a quick probability recap. What is conditional probability, which will be key here? The conditional probability of x given y is the probability that x and y both take on the values x and y, divided by the probability of y. So it's the fraction of the time we have small x among the times that we have small y. The product rule is a rearrangement of this, and allows us to compose a joint distribution from a marginal and a conditional. The chain rule is repeated application of this-- essentially the same thing.
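Written out, the identities in this recap are:

$$P(x \mid y) = \frac{P(x, y)}{P(y)}, \qquad P(x, y) = P(x \mid y)\, P(y),$$

$$P(x_1, \dots, x_n) = \prod_{i=1}^{n} P(x_i \mid x_1, \dots, x_{i-1}).$$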
You can do this in any order of your choice-- there are n factorial choices of expanding the chain rule. And x and y are independent if and only if, for all values x and y can take on, the probability of x and y is equal to the probability of x times the probability of y. And then conditional independence is the same idea. But in conditional independence, instead of having just x and y, there's also z, and you just condition everywhere on z. And that's generally true of these equalities that we have for probabilities: if you have something that's true, in this case for x and y, you can just condition on z everywhere, and it'll still be true. And that's exactly what we did here. x and y are conditionally independent given z if this condition holds true for all values of x, y, and z. And this is our notation for conditional independence. So we covered a lot of that in the context of Bayes nets. And today we're going to see a special kind of Bayes net that sees a particularly large amount of use compared to most Bayes nets. OK. Well, what we want to do is reason about sequences of observations. Where might this pop up in the real world? Let's say, speech recognition. In speech recognition, you get a sequence of numbers coming in corresponding to the pressure of the air at each time, and you're supposed to decode that into a sentence or something. In robot localization, you have a robot going around a building. But even though we can see where the robot is, for the robot itself to know where it is, it needs to use some sensory information. That sensory information will not be perfect. It might have sensory information from maybe a camera looking around, or from a lidar, which is a laser beam you send out. It bounces back at you, if you're lucky, and then you measure how long it took before it got back to you. And then you multiply that with the speed of light, and based on that, you know how far away that obstacle is. You need a really good timer, obviously, because the speed of light is very, very fast. But that's actually used quite often. User attention: as a user's navigating your website, then goes to another website, what do you expect them to do? What might they be caring about on your website? Medical monitoring: let's say you have a patient in the intensive care unit. You're measuring a lot of signals. Can you, from the aggregate of signals you've seen so far, infer if there's any emergency and you should call in a physician, or whether you can just leave them alone and let them rest? So it's all about aggregating signals from the past into drawing a conclusion about what situation you are in right now. So we'll need to introduce time or space into our models. So here's a Bayes net in which we have time-indexed variables. So x is the state of a system. xi is the state of the system at time i. We've seen this before. In Markov decision processes, we had a state. We called it s. It's called x now. Essentially, it's the same thing. We have a state of the world, and it can vary over time-- be different at every time step. It can also stay the same. What we have here is actually-- this here is a Bayes net, where there's a conditional probability distribution for x2 given x1, a distribution for x3 given x2, and so forth. And what's special about these models is that the parameters we'll use will be tied, meaning that we'll have a generic distribution for xt given xt minus 1 that is the same for every time step. This was actually also true in our MDP models.
It was always the same transition model. We're going to make the same assumption here. And so, even though this Bayes net could, in principle, be infinitely long, the number of parameters is bounded. It's actually not that many-- there are just two tables: one table for the conditional, and one table for the initial distribution. OK. So these parameters here are called transition probabilities-- or dynamics-- and specify how the state evolves over time. And then these are the initial state probabilities. So the stationarity assumption is that the transition probabilities stay the same at all times. If somebody says, we have a non-stationary process, then you'll need a different transition model for every time. So, same as MDPs. But what's missing is actions. We're not going to worry about actions-- they're not going to be present in this lecture. OK. So what are some conditional independencies implied by this Bayes net? Well, the past and the future are independent, given the present. When we know this variable-- let's call this, the present, time t-- then once we know that variable, knowing something about the future does not tell us anything about the past, and the other way around. We can calculate that with d-separation, or we can just intuitively conclude that. Each time step only depends on the previous. To predict what happens at the next time, knowing the current state is the best thing you can know to predict that. Knowing more things about the past is not going to help you. This is called the first-order Markov property. You might say, well, what if it doesn't apply in my situation? What if, in my situation, the state at the next time depends on the state at the current time and the state at the previous time? Well, to still be able to fit it in this format, and to really fit the notion of what it actually means to be a state, you should then combine the state of the current time and the state of the previous time in one bigger state variable that you now call your state. And then you can use this model again. Because now this bigger state-- which includes the current state and the previous time step's state in your original formulation-- is enough to predict the next state. Predict here could be probabilistic, of course. So really, this is just a Bayes net that you can keep growing. And as time goes by, you keep growing it to keep up with time passing. Or as you traverse space, you keep growing it to keep up with the amount of space you have traversed. If we want to do any probabilistic reasoning in this Bayes net, we can just say, OK, we'll look at everything up to the current time, or up to the time we care about, cap it off, and then we can run our standard Bayes net inference algorithms. We can do variable elimination. We can do sampling, and so forth. But what we'll see in this lecture and the next lecture are some variations on these algorithms that are easy to derive from first principles for Markov models, and that are equivalent simplifications of the sampling and variable elimination algorithms you've already seen. OK. Let's look at an example Markov chain. Here we have a state space with two possible values the state can take on. It can be rainy or sunny. And maybe we observe this on a daily basis. So Monday, sunny. Tuesday, sunny. Wednesday, rainy. Thursday, rainy, and so forth. So our initial distribution is going to say, it's sunny on the first time slice.
And our conditional probability table-- our dynamics model-- is given by this table over here, saying that when it's sunny, there is a 90% chance of sunny again the next day, and when it's rainy, there is a 70% chance it's rainy again the next day, and then the complementary probability of being sunny. All right. So now we can start doing some calculations with this. But also, let's look at a few other visualizations. So this is one way to look at it. That's rolling out the Markov model. Another way to represent the conditional probability table is to look at it in this format. So this looks like a finite state automaton. There are two possible values the state can be. So you have rain or sun, and then there are transition probabilities associated with each possible transition. So that's a pretty valid way to think of a Markov model. Another way to represent it, and keep time a little more explicit in it, is to do essentially the same thing, but unroll one step when you draw this thing. Because when you go from sun to rain, it's really at the next time that it'll be rainy. And so this model on the right makes it a little more explicit that you progress in time. OK. Any questions about the formalism? Then let's do some calculations. So let's say the initial distribution is all probability on sun. What is the distribution after one time step? So let's see. Let's think about this. We have the initial distribution. We have a dynamics model for next state given current state. So we should be able to combine both of those to get the distribution at the next time step. OK. The probability of it being sunny at the next time step: well, initially it could be sunny or rainy. When it's sunny, we need the transition from sun to sun. When it's rainy, the transition from rain to sun. That's how a generic update equation could go from a distribution at time 1 to a distribution at time 2. In this particular case, initially we put a probability of 1 on sun. So only this first term here will have a non-zero contribution. And it'll be, in this case-- the transition model was a 90% chance of staying sunny, and we knew it was sunny at time 1-- so we have 0.9 times 1, which gives us 0.9. OK. So what is the probability of x taking on a certain value at some time t further in the future? Well, what we just did, we can actually repeat that process quite easily. Because it gave us the distribution at time 2, and going from 2 to 3 is the same calculation as going from 1 to 2. So we can just iterate this same calculation over and over to progress over time. Writing this out, as on the slides: we know the distribution at time 1. Then for time t, well, we can compute it recursively based on the distribution at time t minus 1, which is this summation over here-- the sum over all possible values we could have at time t minus 1, jointly with xt. This joint itself we don't have available. But we have a conditional of xt given xt minus 1, and we'll recursively assume we have the distribution for xt minus 1. So that gives us this expression over here. And this is an expression we can apply for each time slice to propagate forward to any time we care about. And this here is forward simulation-- we're simulating forward with the model that we have available for how the dynamics work in this world. OK. So that now means we can run things forward in time. If we have a distribution at the initial time, we can use this update equation here to get the distribution at the next time.
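A minimal sketch of that forward update for the sun/rain chain, using the transition numbers from the table above:

```python
# State order: [sun, rain]. T[i][j] = P(next state j | current state i).
T = [[0.9, 0.1],   # from sun:  90% sun, 10% rain
     [0.3, 0.7]]   # from rain: 30% sun, 70% rain

def step(dist):
    # P(x_t = j) = sum_i P(x_t = j | x_{t-1} = i) * P(x_{t-1} = i)
    return [sum(T[i][j] * dist[i] for i in range(2)) for j in range(2)]

dist = [1.0, 0.0]          # all probability mass on sun at time 1
for _ in range(50):
    dist = step(dist)      # first steps: [0.9, 0.1], then [0.84, 0.16], ...
print(dist)                # approaches the stationary distribution [0.75, 0.25]
```

Starting from [0.0, 1.0] instead reaches the same limit, which is exactly the point made next.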
If you think about it, what is actually going on here-- think about the Markov model. You have x1, x2. It's a Bayes net with two variables. You have all the distributions, all the tables. You have instantiated a distribution for the initial time, for x1. And now you're running variable elimination to get the distribution for x2. So you're trying to compute P of x2 in this Bayes net. If you do that with variable elimination, you'll get the exact same expression shown here. It will not be any different. But this is such a simple situation that it's easier to just rederive it from scratch than to go through the very general framework and then finally arrive here. OK. So let's do this. Let's say, initially, it's sunny. We've observed it. So all probability mass is on sun. We can propagate forward in our model. We already did that for one time step. It became 0.9, 0.1. What if we propagate forward again? With the same calculation, we end up with 0.84, 0.16. We can keep repeating this, and we could keep running this forever, if we had the patience. But if we run it long enough, we already see the pattern occurring-- it will converge onto 0.75, 0.25. This might remind you of something we've done before. In value iteration, in policy evaluation, we had an update equation. There it was for the values; now it's for the distribution over states. And if you repeated the update equation infinitely often, it would converge to a stationary point. OK. What about if initially we observe it's raining? So we start from 0 and 1. We can follow the same update equation. We now get 0.3, 0.7. And we can repeat this process over and over and over and see what happens. We actually end up with the same distribution at time t equals infinity. So just like value iteration and policy evaluation, it doesn't matter how you initialize your values. There, it was because the initialization decays over time because of the discount factor. Here, it's a slightly different effect, which we'll discuss in a little bit. But intuitively, it's the notion that if there is some randomness in the next day's weather, over time, the information about the weather on the first day will be erased, because it will not affect anymore what you expect to happen. Let's say I ask you today, what's the weather going to be like in the year-- I don't know-- 3000? You will not be basing that on exactly what the weather is like today. That will not affect your decision on what the weather will be like in the year 3000. It's too far out. And the transition model will dominate, not the initial distribution. I can do this with anything. You can have any initial distribution. And if you do the calculation with that transition model, you'll find you converge onto the same distribution: 0.75, 0.25. Of course, the specific numbers here are specific to the fact that we have the conditional probability tables shown early on here. This conditional probability table is what results in that particular stationary distribution. OK. Let's take a look at this in action. I think you've seen this grid in the value of perfect information lecture. So what are these numbers? They are the probabilities of the ghost being in that particular spot. There's only one ghost in this case. Let's first find this ghost by doing a few measurements. Where is the ghost? Somewhere here, it seems. Pretty high probability. OK. Now this button here-- we're not going to worry about busting the ghost in this lecture-- this button here, time plus 1, will make time progress.
The ghost is moving randomly in this dynamics model. So, equal probability of moving to any neighboring square. So if we make time pass, what do we expect to happen? We expect the probability mass to spread out from being all concentrated here to neighboring squares. So let's see if that happens. Indeed, probability mass is spreading out. So the ghost stays in place with a high probability and then sometimes randomly moves off to a neighboring square. So let time pass again, again, again. The probability mass will diffuse more and more, and ultimately diffuse back out over the entire board that we have here, if we play this long enough. It takes a while, though. A lot of clicking. If we play this long enough, what we'll see is that we'll hit the stationary distribution, where this doesn't change anymore. And it doesn't matter where we started. The distribution will be the same. What do we expect? It'll go back to the uniform distribution, where everything has the same probability. It's starting to get closer. But this will require a lot more clicks before it gets all the way out there. Infinity, in this case, is not super close to us. But we're getting there, at least, with most of the probability mass largely spread out and only very little left on the initial square. OK. So that's the process of probability mass diffusing as time passes by. Now, this becomes more interesting when you have more interesting dynamics. So here we have circular dynamics, which means the ghost will be going around in a circle. So what do you expect to happen when I press time plus 1? I see some "no" nods. Because, actually, nothing's going to happen. These numbers are going to stay the same. Because if the ghost moves around in a circle, well, all that mass will just shift around and fall right back into place. So, time plus 1, time plus 1, time plus 1. We're on the stationary distribution already for the circular motion. And so as time passes, nothing changes. We hit the stationary distribution from the beginning. Now, if I go find the ghost to change this distribution a little bit-- let's see, somewhere here. OK. We got some pretty high probability here for the ghost. If now we let time pass, what do we expect to happen? We expect that high probability to move around in a circle. Actually, for you guys, it'll move around in a circle this way. Let's see if that indeed happens. So there's the ghost moving around in a circle. That's our estimate of where that ghost is. The ghost doesn't need to be here at the red square. We think there's a 50% chance the ghost is there. And probability mass keeps following along with what we expect to happen, but at the same time diffusing, because there is some probability the ghost moves randomly off to the side instead of following the circular pattern. And what do we expect to happen if we keep pushing the time plus 1 button? As more and more time passes, we'll go back to the stationary distribution where every square has a probability of 0.2. And again, this might require more clicks than we have patience for. But we see it's already getting pretty close just a few rounds into what would be a full circle around. Now, right now, we have no idea where the ghost is. The probability mass is largely diffused over the entire board. What's another example of a distribution, or a dynamics model? In this one, it's a whirlpool. So the ghost spins to the center. Wherever you start, you spin around, but towards the center. So what happens if I just do time plus 1 here? This is not the stationary distribution.
Because you're inclined to spin to the center. So no matter where you are, a bunch of probability mass is going to shift around that circle and towards the center. So as I keep pushing time plus 1 here, actually, without even measuring anything, because the stationary distribution is fairly concentrated on those two squares, we actually find out where the ghost is by just letting time pass and doing the appropriate calculation. So, OK. It looks like the ghost is either here or there. We can go check, measure where the ghost actually is. It doesn't look like the ghost ended up there just yet. But it may be pretty close. So we now have observed the ghost is most likely here. We'll let time pass. What do you expect to happen? We'll see it circle around, while drifting towards the center. And once it's in the center, it hops back and forth between those two squares. That's just how the whirlpool dynamics work in this world. So we've seen a few different world dynamics here. The first one was just randomly moving to a neighboring square. The stationary distribution was uniform over all the squares, as we saw. Then we saw going around in a circle, where the stationary distribution was also uniform, and a little bit of noise was in there. And then this one here, drifting to the middle, where the stationary distribution puts probability mass mostly in the middle. So usually, when time passes, we lose information about where the ghost might be. But for very specific dynamics, where things converge together automatically, you can just let time pass and learn about where the ghost is. Any questions about this? Yes. STUDENT: [INAUDIBLE] all the probabilities sum up to 1 because there's only one ghost-- PROFESSOR: There's only one ghost here, so these probabilities should always sum up to 1. I'm not sure if you were doing all the calculations in your head while these were flashing by. It's possible that they didn't always perfectly sum to 1 because of rounding errors from showing only a small number of digits. But under the hood, they will sum to 1 if you show them in full precision. OK. Let's think a bit more about stationary distributions. For most chains, the influence of the initial distribution gets less and less over time. And so the distribution we end up with is independent of how you started out. There are exceptions to this. You can carefully design exceptions to this. But the most common case that you'll encounter-- and if you just put some transition model in place by randomly choosing numbers, this is what will happen-- is that the stationary distribution is the distribution you hit at that point. The stationary distribution is one where, if you apply the transition model, nothing changes. So actually, rather than applying the transition model infinitely often to find the stationary distribution, you can also just set up an equation. You can say, the distribution at time infinity plus 1 has to be equal to the distribution at time infinity. That's what it means to be stationary-- it's the infinite-time stationary distribution. This is the update equation. And so we can solve this system of equations here to directly solve for the stationary distribution. It's just linear equations, because the transition model here is known, and the p infinity of x values-- those are the unknown variables. You might say, isn't 0 a solution? Yeah.
So you need to find a solution to this linear system of equations that is not the solution that makes everything 0. There are actually infinitely many solutions to this equation, but there will typically be only one solution where all entries sum to 1, and that's the solution you want to find. So you want to solve this linear system, in addition to the constraint that the sum over x of p infinity of x equals 1. If you add this equation to the system of linear equations we already have there, then there will be, most typically, a unique solution, which is your stationary distribution. This should actually remind you of something we saw when we covered Markov decision processes and looked at policy evaluation. In policy evaluation, we saw you can iterate the value iteration equations where you remove the max and just choose the action based on the policy, and that gives you the value of that policy for each state. Or you could realize that, once you've removed the max, this is just a linear system of equations, and just solve the linear system of equations directly to evaluate the value of a policy at each state. Same thing here. We could either iterate, which would likely be a lot of work-- if we asked you on an exam to find the stationary distribution and you iterate, you're going to be busy for a long time, and you might not have time to solve some other questions. But if you use this equation and say, I just need to solve that linear system of equations-- and if we ask you to do that, it'll probably be a linear system that's very easy to solve-- then you could just solve that linear system of equations, have the solution, and be done. OK. Let's look at some examples. And let's use the linear system of equations to find the stationary distribution of the rain/sun Markov model. OK. These are the two equations that say the distribution at time infinity should be the same after one update. The right-hand sides are what happens with one time update. OK. So the unknowns are these guys here. A lot of them are the same. There are not six separate unknowns. There are only two unknowns here: p infinity of rain, and p infinity of sun. The other four entries, which I'm drawing lines underneath, are just numbers that we can take from this table over here. Let's fill them out. Now let's solve this system of equations. Do a little bit of work, and we get this here. Actually, we see that both of those equations tell us the same thing. It's really only one equation. But we also have that they need to sum to 1. And that gives us the solution: they sum to 1, and one has to be three times the other one. Then it's going to be 0.75 and 0.25, which is our solution. Any questions about how this was done? Yes. STUDENT: [INAUDIBLE] PROFESSOR: There will always be at least one solution. It is not guaranteed that there is a unique solution. And so, let's think about this. How could we end up with a non-unique solution? Let's think about the iteration case, all right? Let's imagine your initial state could be either in Wheeler or in Soda Hall. And let's say your dynamics model is such that when you're in Soda, you cannot leave Soda, and when you're in Wheeler, you cannot leave Wheeler. Then all of a sudden, your initial state will determine the stationary distribution you get at the end of doing infinite repetition. And when you solve the linear system of equations corresponding to this kind of transition model, you will find that it has two solutions instead of one. It could be more.
It could be that maybe, if you're dropped in Soda Hall, if you're on the fourth floor, you always end up staying on the fourth floor. You can't exit. And the same for floors 1, 2, 3, 5, 6, and 7. And if that were the case, then you'd have seven stationary distributions in Soda Hall, which you hit depending on where you started, and then one for Wheeler. So if you keep things separated like that-- essentially, if there are 0 transition probabilities between certain regions of the state space-- you can end up with multiple stationary distributions. But once you introduce a non-zero probability of transitioning between any pair of states, then typically we end up with a single solution. There is another funny case. And these things are not too important for 188, but just so you've seen it: there's another funny case where, let's say, you have a system where maybe you could either be in front of the classroom, or you could be sitting in a chair in the classroom. And your transition model is: at the next time, if you're sitting, you're in front, and if you're in front, you're sitting. You always go back and forth. Then you will not hit a stationary distribution, either, because the dynamics are such that you always swap. So if you start here, you'll end up there. If you start there, you'll end up here. And so you don't get this kind of convergence. And whenever these special cases happen, it's because there are a lot of 0's in the transitions between states, and so things don't get to naturally diffuse. Once there's some natural diffusion, then you get a single stationary distribution. Any other questions? OK. Let's think about where these Markov models might have already helped you in your life. Who here has used Google before? Who has not used Google before? Anybody? Curious. Everybody has used Google before. OK. That's cool. So you guys are so young. You probably don't remember the time-- but then, you were not alive-- before Google existed. I was alive before Google existed. Google came into existence in '98. And before Google existed, it was very hard to find anything on the internet, and people used to bookmark things, because it was so hard to find something. And so there were some search engines, but they didn't work all that well. And then you might have to scroll down to the 100th option they returned. And then, definitely, you're bookmarking, because you don't want to do that again. Google really changed that. What was the key idea that they came up with at the time? The idea of PageRank. So PageRank is a number assigned to each node in the web graph. So for every website, there is a rank number. And so Google came up with a rank number for every website. Before that, when you were to search something in previous search engines, it would either just return websites that have matching words, but not know which websites are more interesting, or somebody would manually curate and say, I think this website is more important than this website. So if there are multiple websites with the correct word, let's show this one over that one. So, a manual page rank, which is really hard to do. A lot of work. Very expensive to hire people to do this. They'd have to constantly crawl the internet. Google came up with a way for a computer to just crawl the internet and assign the page ranks automatically. What they said is, the computer will start anywhere on the internet, at random. And then when you're on a page with a bunch of outlinks, you click one of them at random. You follow that one.
And then you repeat this process. And as you repeat this, you're actually following a Markov model. There is a distribution over next states, given current state, where state is the page that you're on. And the next state would be one of the states that you can go to from that state. And there's a probability of getting reset, which actually helps with making sure there is a nice stationary distribution so you don't have weird zero probabilities in certain states and so forth. So there's a probability of being reset uniformly at random. But most of the time, we just transition to an outlink page. OK. So let's say you do that. And you follow this process. You look at the stationary distribution. Let's say you were to compute that. And that's exactly what Google did. They computed the stationary distribution. Then which pages will have a high probability to be on at time equals infinity? It's pages to which a lot of other pages link. But not just pages to which a lot of other pages link-- also, if a lot of pages link to a page that, in turn, links to you, that's also good for you. Some subtleties there. It might encourage you to not have any outlinks. So the random navigator gets stuck on your page. And there's fixes for that to make sure you never just stay on a page. But that's what they did. They computed a stationary distribution of this random web surfer Markov model. And that stationary distribution was then used to rank. So you would search, type in maybe two words. It would retrieve all the pages that have those words on them. And among those, it showed them in order of page rank. That's Google 1.0. That was so much better than anything else that everybody switched over to using Google rather than the other ones. And then the next thing happened, of course, is everybody uses it. But initially, you still had to click, maybe, on result number five or result number seven. They log that clickstream data. They machine learn, on top of their page rank, whatever else matters for you to select a particular page. And at that point, it's hard to catch up. Because if you were to build Google '98 now, it wouldn't be as good as Google today. And so you won't get the users. And so you won't get the ability to update how it ranks pages based on what users have done. But that was the foundation. And it was a lot better than anything people were able to do before. It might still be the best you can do when starting from scratch and having no user data of what users prefer. Any questions about this? You actually have seen a Markov model in one other place, already, too, in 188, which is Gibbs sampling. The star here means that this is something a little advanced. And we don't expect you to necessarily fully follow along with what's happening here. But it's good to be aware of. And probably several of you will understand what's happening. So let's say you do Gibbs sampling. What was Gibbs sampling? It's this process where you say, I have a Bayes net. I have a bunch of variables that are evidence variables-- e1 through em. I have a bunch of query variables-- x1 through xn. And I want to infer the distribution over the x variables, given the evidence variables. And let's say your Bayes net is big. And variable elimination doesn't work, because you would need to generate a factor that's so big that you can't represent it on your computer. And it would take too long. OK. You might run a sampling algorithm. Gibbs sampling is one of the most popular ones.
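(Before the Gibbs details, a quick look back at the random-surfer model as code. This is a minimal power-iteration sketch of the stationary-distribution computation just described; the four-page link graph and the 0.15 reset probability are made-up illustrations, not numbers from the lecture.)

```python
import numpy as np

# Hypothetical four-page web graph: page -> pages it links to.
links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}
n, reset = 4, 0.15   # reset probability is an assumed value, not from lecture

# Build the surfer's transition matrix: with probability reset, jump to a
# uniformly random page; otherwise follow a random outlink.
T = np.full((n, n), reset / n)
for page, outs in links.items():
    for out in outs:
        T[page, out] += (1 - reset) / len(outs)

# Power iteration: repeatedly push a distribution through the transition model.
rank = np.full(n, 1 / n)
for _ in range(100):
    rank = rank @ T
print(rank)  # stationary distribution = page ranks; page 2 ranks highest here
```

(Page 2 comes out on top because three of the four pages link to it, which is exactly the "pages to which a lot of other pages link" effect above.)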
In Gibbs sampling, you take a random instantiation of those x variables, then randomly pick one of them and re-sample it conditioned on all the other variables. And sampling one variable in a Bayes net conditioned on all the others is actually a very efficient operation. So you do that operation. And then you randomly pick another variable, re-sample it conditioned on all the others. Repeat, repeat, repeat. And it turns out, if you keep doing this, the distribution over assignments of variables will converge to a stationary distribution. So just like in the weather Markov model, it was always either sunny or rainy. That was never at stake. It was either one or the other. It was never a combination. But the stationary distribution put 75% on sun, 25% on rain. Same thing will happen here. At any step in your process, all these variables have been assigned. Maybe it's plus x1, negative x2, plus x3, plus x4, negative x5, plus x6. But the distribution of what you end up with ends up being a stationary distribution that matches the correct posterior distribution of-- oh, not this one-- this one here. This conditional distribution is the stationary distribution of that process. Now, when you run this by sampling, you only get one sample of that distribution after you run infinitely long. Then you repeat the process. You re-initialize everything, run the process again, get a second sample, re-initialize, run it again, get a third sample, and so forth. And then the combination of samples together will reflect the stationary distribution, which you can then use to make some inferences from. So this star also means that it's not expected just from this slide that you understand how the proof would work, but we just want you to know that this is out there. Any questions about Markov models? The thing about Markov models is that in some sense, yes, they have some applications, but they don't really have that many applications. They have a roughly trillion-dollar application, which is pretty good. But how about more applications? Typically, you need more than just a Markov model. You need something called a hidden Markov model. So let's go back to this situation here. You watched me play this at the beginning of lecture, and honestly, it was pretty hard to play. But in principle, we should fuse the information we get from the readings down here to get an estimate of where the ghosts are. The ghosts' locations follow a Markov model, but we also have measurements, and these measurements have no place in the plain Markov model. Once we have a hidden Markov model, we'll also have measurements. So let's take a look at that. So in a hidden Markov model, there's still a sequence of states shown here, and that's not different. We've already covered that. That's the Markov model. But now in addition, we have observation-- evidence variables at each time-- for example, distance to each of the ghosts. In a hidden Markov model, we don't get to observe the hidden state-- x1, x2, x3, x4. But the hope is that by observing the evidence variables, we can somehow infer a posterior distribution over the hidden states that allows us to do something interesting. Again, this is just a Bayes net, right? If you look at this and you say, we only have four time steps so we cut it off right here, this is a Bayes net. Once you have observed, let's say, e1 through e4, you can run variable elimination or anything else you'd like to run to infer a distribution over x1, x2, x3, x4.
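(And to close out the starred Gibbs aside from a moment ago, here is a minimal runnable sketch. The network and its numbers are the standard textbook sprinkler Bayes net, an assumption for illustration, not a model from this lecture. It also uses one long chain with burn-in and a variable sweep, both common variants of the random-pick, restart-per-sample scheme described above.)

```python
import random

# Tiny sprinkler-style Bayes net with the standard textbook numbers (an
# assumption for illustration): Cloudy -> Sprinkler, Cloudy -> Rain,
# (Sprinkler, Rain) -> WetGrass. Evidence: WetGrass = True.
P_C = 0.5
P_S = {True: 0.1, False: 0.5}   # P(Sprinkler=True | Cloudy)
P_R = {True: 0.8, False: 0.2}   # P(Rain=True | Cloudy)
P_W = {(True, True): 0.99, (True, False): 0.90,
       (False, True): 0.90, (False, False): 0.0}  # P(WetGrass=True | S, R)

def joint(c, s, r):
    """Full joint probability of one assignment, with the evidence W = True."""
    return ((P_C if c else 1 - P_C)
            * (P_S[c] if s else 1 - P_S[c])
            * (P_R[c] if r else 1 - P_R[c])
            * P_W[(s, r)])

def gibbs(steps=50000, burn_in=1000):
    vals = [True, True, True]      # arbitrary init of (Cloudy, Sprinkler, Rain)
    rain_hits = total = 0
    for t in range(steps):
        for var in range(3):       # resample each variable given all the others
            weights = []
            for v in (True, False):
                vals[var] = v
                weights.append(joint(*vals))
            vals[var] = random.random() < weights[0] / (weights[0] + weights[1])
        if t >= burn_in:
            total += 1
            rain_hits += vals[2]
    return rain_hits / total

print(gibbs())   # approaches P(Rain=True | WetGrass=True), about 0.70 here
```

The fraction of post-burn-in samples with Rain true estimates the stationary distribution of this chain, which is exactly the posterior distribution over the query variables that the slide's claim is about.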
What we're going to look at is something that, over time, keeps track of that distribution; that's a little specific to HMMs. But think about it after the fact. You'll realize it's just variable elimination run from left to right. So here is another example HMM. This is from the Russell and Norvig book. They call it the weather HMM, but if you hear the story, you would probably call it the sad grad student HMM. The story goes as follows. There's a grad student, but as it goes with very diligent grad students, they are just in the basement at all times. They don't come out of that basement. But as luck has it, every now and then the professor stops by and says hi to them. And sometimes, the professor has an umbrella, sometimes not. And that, for the grad student, is a way to extract information about whether today might be a sunny day or a rainy day, OK? [CHUCKLING] I'm not saying this is realistic grad student life, but that's the story in the book. I hope it's not realistic. So that's what's going on here. The grad student really wants to know about the weather outside. Doesn't get to look at it, but gets to see the professor carrying an umbrella or not. Now, there's multiple distributions involved. There's an initial state distribution-- initially sunny or rainy. There's a distribution for the next day, like we saw in the Markov model-- the transition model here. And then there's a distribution for evidence given current state, OK? So let's put some numbers in. The transition model we've already seen before. Actually, it's slightly different here, slightly simplified. It's made symmetric. Probability of rain the next day given rain on a previous day is 0.7, and same deal the other way around. So it's 70% chance things stay the same and 30% chance things switch up. Then, turns out, 90% of the time when it rains, the professor has an umbrella. And then, when it doesn't rain, the professor still has an umbrella 20% of the time. So you can't just look at the umbrella or not to know whether it's been raining. You need to do some probabilistic calculus to get a posterior over when it might be raining or not outside. Not sure if it matters to know if you're always in the basement, but it's the calculation we're going to do. OK. Here's another example-- Ghostbusters. So this is a smaller version of the grid that we looked at-- just three by three, uniform distribution initially. We have a transition model where they usually move clockwise in this grid, but might also veer randomly off that. And then there is an observation model where we have sensors that you also covered in last week's lecture-- oh, no, this Tuesday's lecture on value of perfect information-- where the sensor could take on different colors depending on how far away the ghost is from where you made your measurement. And that's our observation given state conditional distribution. OK, so let's take a look at the corresponding demo here. OK, so we'll let time pass. We don't see anything happen because we've hit the stationary distribution of the Markov model, but now we're going to consider it as an HMM. So we'll get a measurement this time. This will change the distribution. Then time passes again. We'll get another measurement. We'll change the distribution. Time passes again. We'll get another measurement, maybe here, get another distribution, and so forth. So you have an alternation between time passing, which tends to diffuse where the ghost probability mass is, and then a measurement, which tends to help us concentrate where the ghosts might be.
And that's the dynamics of an HMM. There's this alternation between time passing, a measurement, time passing, a measurement, and every step along that process, the distribution gets updated, and hopefully, gives us more and more concentrated information on where the ghost might be. OK, so that's the HMM process in action. Let's think about the independence assumptions we make in this model. So the reason we talked about things like d-separation is that when we look at a graph like this, we can just read off, what is it that we're assuming by choosing this Bayes net graph? So what is it that we're assuming? Well, look at the hidden process. The hidden process up here-- x1 through x4. What can we say about that? It's actually similar to what we had for the Markov model. It's still the case that if you know the state, let's say, at time three, knowing anything about the past before time three does not tell you anything about the future beyond time three. So knowing a state at a given time separates past from future. How about if we have observations? What this model is saying is that if we know x3, the evidence we expect to observe, the distribution over evidence, is independent of anything else we could find out once we know x3. It's just that x3 directly influences our measurement, and nothing else has any influence anymore. How about this question at the bottom here? Does this mean evidence variables are guaranteed to be independent? Who thinks they're guaranteed to be independent? Who thinks they're not guaranteed to be independent? Let's see. Very few hands are going up. This is a question you should be able to answer. So either it's just so simple you feel offended you're even asked, or you need to do a little bit of review of this material. I'm not going to ask you which one it is. I'll just answer the question. So are these evidence variables guaranteed to be independent? What's the question there? Well, let's look at some evidence variables. For example, e1 and e4-- are they guaranteed to be independent? Let's see. What paths do we have? This is a path, and it's actually the only path. Is that an active path or an inactive path? Well, let's take a look. The first triple here is active, second triple active, third triple active. All these triples are active because the first one is a common cause, and the next ones are causal chains. And nothing is observed in between, because we asked about plain independence, not conditional on anything-- are they just independent? So nothing's observed. So there is influence running through, and indeed, we cannot claim independence. So no. The answer is no. It changes, of course, if we were to observe, for example, x2. Then all of a sudden, we have independence here. We have e1 independent of e4 given x2. So those are the assumptions we're making. If you're not happy with those assumptions, you don't think they capture how the world works, well, then you should modify your model, because otherwise, the conclusions from the model are not going to be very precise for what you're trying to do. Let's see. Let's take a short break here, and then, let's do mini-contest results and do the remainder of HMMs. STUDENT: Do you have office hours next week or is that-- PROFESSOR: OK, let's restart. Let's look at the mini-contest 2 results. How did the game work? It was a game. Each side controls multiple agents.
Each agent is a ghost in their own territory, which they can then defend, and a Pac-Man on the other side, where it can go collect food pellets, but you only get points when you bring them back. If you get attacked by the opponent's ghost and eaten, then the food pellets re-spawn around you. So you win by returning all but two food dots. Well, you can-- all but two, or it's fine if you return more and return all of them, but that's the minimum. All but two should be returned to win. Or, after 1,200 steps, if nobody has won yet according to this rule, the game ends, and we just look at who has the most food pellets returned so far-- so the running score. Many different layouts-- so it's not about solving one particular scenario. The layouts are different in different games. How are the rankings calculated? For each of the four staff agents on the six different maps, three games per map are played. And for each map, the score of the three games is averaged, and then a total score is the sum of all these average scores. There's also some extra credit involved. On project 2, if you are first place in the ranking, you get two extra credit points; second and third place, 1.5; fourth to 10th, 1 point; and 0.5 points if you win at least 51% of the time against a baseline team we provided, and similarly for the three staff agents. OK, let's look at what happened. 35 participating teams-- many interesting team names, which were always appreciated. Let's see. "Team No Bug," "Get Wrecked," "My Stupid Code Just Cannot Work," "Please Don't Eat Me." Then I'm going to read this as-- [LAUGHTER] "Please Don't Read Me"? I think it has a meaning. I looked it up yesterday, and I'm pretty sure it was not offensive and it's OK to show in class, but I don't remember what it is. And then the three dots at the end aren't indicating there were more team names-- actually, one of the team names was just three dots. Results-- in 10th place, we had Wilson Wu, "I Would Prefer Not To," with 25 points. In ninth place, Victor Chang, "War of Greed." In eighth place, [INAUDIBLE] and [INAUDIBLE] with Ruji. In seventh place, [INAUDIBLE] with "Team No Bug." In sixth place, Chang [INAUDIBLE], "Find A Way." Fifth place, Johannes [INAUDIBLE] with "A Unique Leaderboard Name." Fourth place, Ariel Hirschberg with "Waka x3." Congratulations, everyone in the top 10 listed so far. [APPLAUSE] Then in third place, we have "Don't Forget-- Register to Vote." If you want to get any message across in this class, your team name is your opportunity. The sole team member is Sean Liu. Who is Sean-- is Sean here? Can you raise your hand if you're here, Sean? Oh, you have two people. Oh. What's your name? STUDENT: Hm. PROFESSOR: Adam? STUDENT: Hm. PROFESSOR: H-M. Hm. We'll see more Hm in one of the future lectures, actually. Hm and Sean, congratulations. So score of 99-- apologies for no Hm on this slide. Somehow, clicking in Gradescope did not show your name as a partner, so we might have to do some fixing so you get the extra credit points. Bot description-- minimax, with the offensive agent looking at score: distance to ghost, distance to home, distance to pellets, number of pellets left, number of power pellets left. When the enemy is two away, it's encouraged to go home. Otherwise, it's encouraged to collect more food. So you can re-weight those features in different ways, depending on which situation you're in. The defensive agent is measuring distance to the enemy Pac-Man and trying to get as close as possible.
Then, the defensive agent plays a trick. It flaunts and dances near the border to lure you in. So sometimes at the border, these things get stationary and don't move, but this agent will, well, dance and flaunt, whatever that means exactly, and you will come closer and be eaten. This is what it looks like in action. So this is against one of the staff agents. On the left in red is the third-place team, "Don't Forget-- Register to Vote," working their way. And on the right is the staff team, also trying to work their way, though maybe one of the ghosts is not working their way down much. So the score is still 0, even though food pellets have been eaten, because nobody has brought food pellets back yet. So red is up by 2. [CHUCKLING] So red is still up by 2. Oh, there it is-- lured in the staff agents and ate them. [CHUCKLING] Time to go back, bring back a lot of food pellets. Score is 6 now. Playing more games. Let's fast-forward this a little bit, if we can. So Team Red wins this one. Let's move to the next one. Second place is YZY, [INAUDIBLE] and [INAUDIBLE]. Are [INAUDIBLE] and [INAUDIBLE] here? Both here. Congratulations. [APPLAUSE] On to their search-- I'll play this in parallel to explaining. If the closest ghost is near, alpha-beta is run for two turns. If the closest ghost is not near-- this is when you're an offensive agent-- greedy tree search to just collect as many food pellets as quickly as possible. The evaluation functions have a lot of sophisticated features, more than I could put on this slide. They also pay attention to dead ends very specifically, because of course, you don't want to get trapped, and there's more emphasis on the y-coordinate than x, because y is often the way you can keep people blocked, whereas x doesn't play as big a role in that. And seeing the decisions in action-- it also beats the staff agent. I'm going to move forward through this one to first place. First place is @_@, Philip [INAUDIBLE] and [INAUDIBLE]. Are you here? Philip [INAUDIBLE], over there. Congratulations. [APPLAUSE] So let me play this one while we explain. If there are no opponents on their side, both agents are offensive. When opponents enter their side, one of the agents will turn defensive, often the closer agent. The defensive agent works similar to the baseline defensive agent. During initialization, the dead ends in the map are found, and blocks in the dead-end area are assigned a danger coefficient, which is the number of steps required to escape. This danger coefficient plays an important role in the evaluation function, and the offensive agent runs a minimax of depth 1. Let's see this in action here. [APPLAUSE] Zero food pellets left on the other side. So HMMs-- this is the Bayes net model corresponding to it. What are some real HMM examples? Well, speech recognition-- the observed variables are the sequence of numbers corresponding to the pressure that you observe in the air coming at you, and the unobserved variables are the characters or the phonemes or the words, whichever version of transcription you're using. Machine translation-- the observed evidence is words or characters or symbols in one language. The unobserved is characters, symbols, or words in the other language. Robot tracking-- observations could be sensor readings, like laser rangefinder readings, and the unobserved is the state, the localization of the robot. Filtering, or monitoring, is the process of tracking the distribution over state given evidence.
So we keep track of this belief state-- that's why it's called a belief state-- over the state, which is a conditional of the state xt at time t given all evidence up to time t. We will start with the belief state at time 1, usually uniform-- and then, as time passes or we get observations, we update the belief. The Kalman filter is an example of an HMM that happens to use continuous variables, which we're not going to do, but it was used in the Apollo project and was critical to getting to the moon. So what's an example of this in action for robot localization? This robot could be anywhere in this grid, so everything's equally gray. The observation model is that it senses to the top, the bottom, left, right. It senses whether there is a wall or not. It makes at most one error. So when it's there, it could have five different types of readings-- the reading where there is no error, or the other four readings where you make one error in your four readings. That also means that when you read something, you can't count on it being an exact match to where you are. So at time zero, we have a uniform distribution over wherever the robot could be. Then the robot will move. We know the action of the robot, so we can do a next-state calculation. Actually, will this robot move? Yeah, it'll move. So we first do a sensor reading. We do a belief update based on the sensor reading. We read wall, top, wall, bottom. This is now our posterior. The robot will move to the right. We'll have a transition model for that. It's not guaranteed to move one to the right, but with high probability, it will. Then we get another sensor reading. We see wall, top, wall, bottom. We're showing it here because we know it's there-- we're kind of looking from above and know how the world works. But the HMM itself doesn't know where the robot is. It has a distribution over where the robot might be. Then it moves again, also does a sensor update, and keeps repeating this. And then, when it's here, symmetry gets broken. It knows it's in the top hallway, not the bottom hallway-- not for sure, but very likely-- and knows its localization pretty well. OK, so what are the base cases? We either observe evidence, in which case we want to compute the posterior of x1 given the evidence e1. How do we do this? Well, this is the calculation. What's happening here? The conditional of x1 given e1 is, by definition, the joint of x1 and e1 divided by the probability of e1. Then we use this sign over here saying we're now going to get rid of this thing over here by using this sign. It's kind of weird, but it's just saying we're interested in a distribution over x1. So our variable is x1. e1 is not a variable. Only x1 is the variable. So anything that does not involve x1, we can just remove if it's just multiplied in. Now, here it's divided, and dividing by a constant is the same kind of thing as multiplying by a constant. We can just remove it and say it's proportional to this. And then we can fill in the model and get this thing over here. What would be the result of this calculation? We would have a table for values of x1. We would then have values that are proportional to the posterior of x1 given e1, and maybe it would look something like this. x1 could be 0 or 1, and then proportionality says 0.2 on this and 0.3 on this. If you want to know the actual probabilities, you will say, oh, well, that's only proportional. I now need to re-normalize, so it will be 0.2 over the sum of those and 0.3 over the sum of those again. That will be the actual probability. So that's what's happening over here.
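(In code, that renormalization step is just the following; the 0.2 and 0.3 are the numbers from the example above.)

```python
unnormalized = {0: 0.2, 1: 0.3}   # values proportional to P(x1 | e1)
z = sum(unnormalized.values())    # 0.5
posterior = {x: p / z for x, p in unnormalized.items()}
print(posterior)                  # {0: 0.4, 1: 0.6}
```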
We got rid of this thing, which happens to correspond to the sum of those. This is often done because it's easier to just get rid of things that don't really matter if you know all you need to do at the end is just re-normalize to get the correct probabilities. OK, so that's computing the conditional of x1 when new evidence comes in. No, go away. Then the other thing that can happen is a transition over time. This is the Markov model update. We've seen this before. This is just a joint over x1 and x2, summing out over x1, which is this thing over here, and that's exactly what we've been doing in the first half of lecture. OK, so let's now make this a little more explicit. These are the base cases. See, they're very simple base cases with only two variables. Can we now use this in an HMM, where this process repeats over and over and over? Well, let's see. We have the belief at time t, which is the conditional of xt given all past evidence. After one step passes, we now want the belief for xt plus 1, given e1 through t. This notation here means we're conditioning on all evidence variables from e1 all the way through et. It's just a shorthand for many, many evidence variables, all the way from 1 through t. OK, well, what can we do? We can bring in an extra variable and sum it out. You can always do that with a distribution over one variable. Bring in another one and just sum it out. The reason we bring in xt is because we assume recursively that we already know the belief for time t. So by bringing in xt, we might be able to build on something we already have, whereas for xt plus 1, we don't have the answer yet. So we bring in xt. We sum over it. We can now use the dynamics model, say that, OK, this joint is the conditional here times the distribution for xt given all evidence. This is exactly what a dynamics model looks like, except that everywhere we have additional conditioning on e1 through et. That's the only thing we added in. Otherwise, this is just a dynamics model update. Now, we don't have all of these available. The red one we already have available. The black one we don't. What do we need to do? We need to somehow make an assumption. The hidden Markov model gives us the assumption that this thing here does not depend on these variables. The distribution over xt plus 1 given xt doesn't depend on the evidence. So we can get rid of that. Now we have quantities we know. This one we know recursively from the previous computation. This one is in our dynamics model, and we have the ability to compute, maybe in compact notation, the belief at the next time. So this is exactly the Markov model calculation done again. We're just conditioning on e1 through et everywhere, and the place where our assumption comes in, the one place where we make an assumption in this entire calculation, is when we get rid of this conditioning here. This here is something we can always do. It does not depend on the structure of our Bayes net or HMM. But this transition here-- getting rid of this-- that's using a very specific conditional independence assumption that we have in this model. OK? So we see that beliefs essentially just get pushed through the transition model to get the beliefs for the next time. Often, what it means is that uncertainty will accumulate. We've seen this with the ghost environments. As we push through the transition model, the ghosts will kind of circle around, but also diffuse their probability, and it will take measurements to concentrate that distribution again.
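(Written compactly, with B denoting the belief, the time-elapse update just derived is:)

```latex
B'(x_{t+1}) \;=\; P(x_{t+1} \mid e_{1:t}) \;=\; \sum_{x_t} P(x_{t+1} \mid x_t)\, B(x_t),
\qquad \text{where } B(x_t) = P(x_t \mid e_{1:t}).
```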
So we've seen this in the demo. How about the observation? We want to know the belief after an observation. So we assume we already have the belief before an observation. So this is-- assume we know that recursively from the previous calculation. Can we use this to now get the distribution for xt plus 1, given all evidence up to time t plus 1? This is only up to time t. OK, well, how do we do that? To be able to condition also on the evidence at time t plus 1, this is what we need. And we somehow need to get this in terms of this expression we already have in our measurement model-- the distribution over et plus 1 given xt plus 1, because that's going to tell us how the evidence influences our belief. OK, well, we know that we can expand this by definition this way. And notice that, again, e1 through t are just kind of hanging off the back. If they weren't there, this would look a lot simpler to you. But of course, we need them there to be complete here. Then we do the proportional thing. What does that mean again? Well, we're interested in the distribution over xt plus 1. That's the only variable. This stuff in the back here doesn't have xt plus 1 in it. It's a constant as far as xt plus 1 is concerned, so you can remove it. Note that it needs to be a multiplication with a constant. If it's plus some constant, you can't just drop it. But if you multiply with a constant, you can drop it, and later, re-normalize. OK, then we can use-- and again, imagine these are not here for a moment. Then this should look very simple to you. It's just decomposing the joint over two variables into the marginal times the conditional. And then just everywhere, we also condition on all those past evidence variables. Then, we apply an assumption. Getting rid of this thing over here is saying that the evidence at time t plus 1 does not depend on any past evidence if we know the state at time t plus 1. That's an assumption. This here is where our assumption comes in. Now we have it expressed in terms of what we already had and something that's in our model, our measurement model, and we can do this calculation. What will be the result of this calculation? It will be something that's proportional to what we care about. And then we can just re-normalize, which means we'll have a table with a bunch of entries. They won't sum to 1, but we'll sum them up nevertheless. We'll see it's not 1, and then we'll divide every entry by the sum to get it to sum to 1. And what we're doing in that process is effectively, we're computing this thing over here, which we kind of decided to forget about. And by summing all the entries, we get this thing over here, and then we divide by it to get things to sum to 1. OK, far more compactly, we have that the belief at the next time is proportional to the belief at the previous time multiplied by the probability of the evidence. So if the evidence is very compatible with the next state, then the probability mass will go up for that next state. If the evidence is very incompatible with the next state, the probability mass will go down for that state. That's the multiplicative effect happening over here. OK. So typically, when an observation happens, beliefs will concentrate, as we see going from left to right here. Let's look at the weather HMM. This slide is working through the math. We start with a uniform distribution over here. Then we transition to, again, uniform, because it's just the way the transition model is set up.
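(And the observation update just derived, in the same compact form, with B' the belief after the time update:)

```latex
B(x_{t+1}) \;=\; P(x_{t+1} \mid e_{1:t+1}) \;\propto\; P(e_{t+1} \mid x_{t+1})\, B'(x_{t+1}).
```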
But then we have an observation, which sees plus r, which then moves more weight onto where we see an umbrella, which is correlated with plus r. And so we see more mass shifted onto the plus r, and then this process repeats. These are the numbers corresponding to these tables over there. If you work through the equations on the previous slide, you then get these numbers here. And this is just a very procedural thing you can just repeat over and over to get the conditional distribution of current state given all past evidence. There's another way to write this. Another way to write this is in one big update. So rather than saying I'm doing a time update and I'm doing an evidence update, I'm going to just do one update. So how would that work? Let's say I want this quantity as a function of the quantity at the previous time. I'm first going to say, well, really, I should be dividing here by the probability of e1 through t. But that doesn't depend on xt, so I'm just using this proportional sign here to get rid of that division, and we can later make things sum to 1, and that will make up for this. Then I can write this out as bringing in xt minus 1. Why? Because we want a recursive update equation as a function of what we had at the previous time. So bring in xt minus 1 and sum it out. Then we can decompose this into what we already have, the belief for xt minus 1, then the transition model and the evidence. So we have both things happening at the same time here-- transition and evidence. We can reorganize this a little bit, and this would be the kind of single-update way of getting both a transition incorporated and evidence. It comes down to the same result as doing it one and then the other, but sometimes this might be convenient to do it this way. If you look at the details, you'll see that if you do variable elimination and you eliminate variable x1, then x2, then x3, then x4, all the way to xt minus 1, you end up with the distribution for xt given all past evidence. It's exactly the same as this calculation here, as long as you do your variable elimination going from time 1 all the way to the time you're currently at. Online belief updates, which is most common, is where you would essentially just do these things. So you would update for time, which we derived as this equation. Then you'd update for evidence and repeat. If you have multiple time updates without evidence, then you just repeat this one multiple times. And then, once finally some evidence comes in again, you can do an evidence update. Keep in mind, again, this thing here means this is an un-normalized update. It will be an un-normalized distribution, which you need to normalize by summing all the entries and dividing every entry by the sum. OK. Once we do this, we can reconsider the Pac-Man game that we saw at the very beginning, and we can have an agent play it with beliefs visualized. So it's showing in color the distribution over possible locations of each of the ghosts. And then, of course, you can run a search to try to get as close as possible to the ghost, because you're more powerful than the ghost in this game, and go put it in its ghost jail slot at the bottom left. And if your project 4 is successful, this is pretty much what it should look like when fully up and running. Huh? Difficult one. Ghost is escaping, it seems. Oh, come on, random seed. Give Pac-Man a break. Ah, OK. That kind of showcases that the ghost is not always where the maximum probability is. It might very well be in one of the low-probability squares. So let's see.
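(Putting the two updates together as code: a minimal sketch of this online belief update on the umbrella model, using the uniform initial belief, symmetric 0.7 transition, and 0.9/0.2 umbrella probabilities from earlier. Given those tables, the rain posterior should come out to about 0.818 after the first umbrella and about 0.883 after the second, which is what the slide walkthrough produces.)

```python
def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

# Umbrella HMM from lecture: symmetric 0.7 transition, P(umbrella | rain) = 0.9,
# P(umbrella | sun) = 0.2, uniform initial belief.
STATES = ("rain", "sun")
T = {("rain", "rain"): 0.7, ("rain", "sun"): 0.3,
     ("sun", "rain"): 0.3, ("sun", "sun"): 0.7}   # T[(prev, next)]
E = {"rain": 0.9, "sun": 0.2}                      # P(umbrella=True | state)

def predict(belief):
    """Time-elapse update: push the belief through the transition model."""
    return {s2: sum(belief[s1] * T[(s1, s2)] for s1 in STATES) for s2 in STATES}

def observe(belief, umbrella):
    """Evidence update: reweight by the observation likelihood, then renormalize."""
    like = {s: (E[s] if umbrella else 1 - E[s]) for s in STATES}
    return normalize({s: belief[s] * like[s] for s in STATES})

belief = {"rain": 0.5, "sun": 0.5}
for day, saw_umbrella in enumerate([True, True], start=1):
    belief = observe(predict(belief), saw_umbrella)
    print(day, belief)   # day 1: rain ~0.818; day 2: rain ~0.883
```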
What's left for us? Next time, we'll look at particle filtering and more applications of HMMs. Particle filtering will be the counterpart of the sampling methods that we saw in Bayesian networks, but then sampling methods for HMMs-- this is for when your HMM has too large a state space to be able to run the algorithms we saw today. All right? |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180911_Search_with_Other_Agents_Minimax.txt | [NO AUDIO] [SIDE CONVERSATION] PROFESSOR: Hi, everyone. Welcome to the sixth lecture of CS 188. Couple of announcements for today-- your homework 2 was due yesterday. So this is an after-the-fact announcement, but if you missed it, it's good to know so that next time you don't miss it. Also, your homework going forward always has three components. There is an electronic part, a written part, and a self-assessment of the previous written homework, so just a reminder of that. For the written part and for the self-assessment, use the template we provide, such that it matches exactly up with what we have, and fill in the blanks as you solve the problem. There is a mini contest-- a little more about that in a second. Homework 3, on games, will go out soon, probably tonight, otherwise tomorrow. It will be due next Monday. It will again have three components. And project 2, on games, will go out probably tomorrow and will be due next week Friday at 4:00 PM. The contest that's part of project 1-- this is optional, but we think very exciting. So I want to highlight a little bit what's going on there. So this only counts for extra credit and for glory, not for regular points. What do you get to do in this contest? In project 1, the mazes were relatively small, and there was only one Pac-Man trying to eat the food. In the mini contest, you get to control multiple agents who together are supposed to clear out the board. And in addition to controlling multiple agents on large boards, you also get to be time-constrained. So we all know that if you get infinite time, you can even run uniform cost search on these problems and ultimately return a solution. But that's not useful in practice. In mini contest 1, you get penalized for time used. The more time you use, the more you get penalized. If you can make fast decisions, you lose fewer points. And then if you eat dots, you win points. So this is an example of a board there. So a couple things to note-- this will be closing on Sunday. So there's about a week left. You get 0.5 points of extra credit on project 1 per staff bot beaten. There are two staff bots, coincidentally named staff bot 3 and staff bot 2. And there is 0.5 points for submitting any bot. So just go in there, get yourself a bot, and submit. You get 0.5 points already. And then we'll also rank students based on performance of their bots. And there is a little bit of extra credit related to that. And there's a leaderboard. Here is the current leaderboard. Staff bot's not on top anymore. Jason [? Lu ?] here-- here? Congratulations, you're currently first. That's awesome. [APPLAUSE] [INAUDIBLE] here? [INAUDIBLE]? Anybody just want to claim it? Don't do that. [LAUGHS] Why are you claiming [INAUDIBLE]? [LAUGHTER] You can choose the name of your team freely. It doesn't have to be your own name. You can do it in teams of two, just like for the project itself. And when you submit, your agent will appear on the leaderboard. Right now, this is the first 20. But actually, there are six of those pages, for about 120 total agents in competition now. Some agents have one student behind them. Some agents have two students working together behind them. So I encourage you to check it out. It's set up such that once you've completed project 1, getting a starter agent going should take extremely little time.
Then, of course, to do something more interesting requires some extra thought about what it takes to get two agents or multiple agents to work together and how to work under time constraints. Any questions about logistics? Yes. STUDENT: So is the timing for how long it takes to find a path, or how long it takes Pac-Man to move on the path? PROFESSOR: OK. So you get penalized for the amount of time you use before you decide what your next action is. So the interface is the environment waits for the next action to be given. And the agent is computing, computing, computing, finally gives the next action. Then the environment says, how much time did you take? Based on that, points get subtracted, and then it executes the action. And thanks to the action, you might get new points. And this process repeats. If you have a shorter path, you might collect more points than if you used the longer path. But if you take a very long time thinking to get a short path, you might still do worse than somebody who has an agent who chooses a longer path but can find it quickly. STUDENT: [INAUDIBLE] [LAUGHTER] PROFESSOR: Any other questions about logistics? OK, let's start with the technical content of this lecture, adversarial search. This is about game playing in many ways. So let's take a look at game playing state of the art. Checkers-- one of the first games for which there were good AI players. In the '50s, there was a computer player-- decent. In '94, there was the first computer champion. This computer champion took over from Marion Tinsley, who had for 40 years straight been the number one in the world in playing checkers. So this was somebody who was just superior to everybody else in the world for 40 years. But then a computer came in and took it from him in '94. Then in 2007, checkers was "solved." What does it mean to be "solved" versus beating the best humans? Beating the best humans is something where it's just the level of play that you're at. Being "solved" will become more clear later in lecture, but it means that you know that you can force a win or force a draw if both sides play optimally. And we'll find out more about what that means. But for now, let's already put it on the chart-- checkers solved. Chess-- '97, Deep Blue defeated Kasparov, the human champion, in a six-game match. Deep Blue was able to examine 200 million positions per second at that time, with computers from 20 years ago. It used a very sophisticated evaluation function, which we'll see more about today, and some undisclosed methods that IBM did not tell anybody. And some searches were 40-ply deep, which we'll also see more about today, but that's searching very deep in your game tree of possible scenarios of how things could play out. Current programs are even better, but a little less historic because the best human player has been beaten. Chess is not solved in the sense that checkers is solved, in that we don't know for sure who is guaranteed to win or whether it's guaranteed to be a draw if both players play optimally. A lot of people suspect it's a draw, but it's not been proven. Go-- this is what we would say just a few years ago: human champions are now starting to be challenged by machines. And that's a recent thing as of the 2010s. Before the 2010s, humans wouldn't play against computer Go players, because it was just too boring. It would be an insult to be asked to spend your time that way. Why was it so hard to build good Go players? So in Go, the branching factor is roughly 300 on average throughout the game.
And that means there's a lot of possible future scenarios. So classic programs used knowledge bases. But then a lot of advances started to come in through Monte Carlo tree search methods, which essentially mean something along the lines of: from the current situation, if I let both players play randomly a few times, who wins more often? And you use that to decide whether that's a good position for white or for black to be in. But this changed. In 2016, AlphaGo defeated the human champion using still Monte Carlo tree search, but also learned evaluation functions of a quality nobody had really anticipated were achievable anytime soon. And again, we'll see more about evaluation functions in this lecture. But that's where the big surprise was. A lot of people thought Go was maybe still 10, 20, 30 years away. But the quality of evaluation functions completely changed that. And then Pac-Man-- that's what we all want to know here. Still have to figure it out. Maybe we'll figure it out by the end of the semester. Open problem for now. OK, so let's take a look at Pac-Man in action. So here is-- where is the game? There it is. Here is a game situation. Now there are ghosts. Let's watch a mastermind Pac-Man in action. In the terminal window, you see a bunch of computation happening, which we'll understand more about later. For now, the thing to pay attention to is that indeed it is somehow avoiding the ghosts, eating food pellets, and periodically eating one of the power pellets. And when it eats power pellets, it can eat ghosts and get extra score for eating the ghosts at that time. OK. That's our mastermind Pac-Man. And that's what we're going to be after in this lecture: to try to understand what it takes to build something like this, but while not having to write lines of code of the type "if a ghost is nearby on that side, move in the other direction. If there's a power pellet in that direction, move in that direction." We want it to reason about the consequences of its actions and the ghosts' actions to infer from that what is the right thing to do. OK, so let's first do a quick step back and talk about types of games. There are many different types of games. You've already probably in your free time played a lot of games-- computer games, board games, and so forth. What are some axes along which we can categorize them? Well, one axis is whether they're deterministic or stochastic. So an example of deterministic would be checkers, or chess, or Go. An example of stochastic would be backgammon, where there is a roll of dice that determines which moves are available to you. Then the number of players-- one, two, or more players. Solitaire would be a canonical game to play alone. Two players-- checkers, chess, Go, backgammon. More players-- poker. A lot of board games have more than two players. Then another axis along which you could categorize games is whether they are zero-sum or not. We'll see a little more on that very soon. But the general notion here is, are you all playing against each other, or is there some notion that you're not all playing against each other? And then there is a question of perfect information or not. Do you know everything about the current state of the game when you decide on your action? Or do you not know everything? An example of knowing everything would be chess, checkers, Go. You see the board, and that's all there is to it. An example of not knowing everything would be poker. You don't know the cards of the other players.
So that's an imperfect information game where you have to think about possible scenarios-- what cards might they be holding; as a function of what cards they're holding, what might I want to do; and so forth. What we want in this setting-- any one of those settings-- is somehow an approach that allows us to calculate a strategy, a policy that tells us what to do in a current situation and then again in future situations. So this is going to be a little different, the result of this, than what we get in regular planning. In regular planning, we just generate a sequence of actions that we can just execute. But here, because there's an opponent that we don't control, very often we need to find a strategy which prescribes what to do as a function of the situation we are currently in or going to be in. We'll focus on deterministic games for now; we'll see stochastic games next lecture. Now, the same thing we've done with search and with CSPs is that we want to find a unified interface for all these real-world problems, such that once we have the unified interface, we can then have one implementation of an algorithm that applies to any of the real-world problems that's cast into this interface. So here's one way to formalize the interface for games. There is a set of states and a start state s0. There is a set of players, which often take turns. Definitely in what we consider, they take turns. Then there is a set of actions. This can depend on which player you are, what actions are available to you. There is a transition function, which is a lot like a successor function in regular search-- state and action to next state. There is a terminal test. Is the game over or not in the current situation? And then there are terminal utilities. This is maybe the newest type of thing. But what we're going to do is every outcome of a game will be scored. Simple games might have just three types of outcomes-- win, draw, or lose. But more complicated games might have a wider range of outcomes. For example, in poker, the amount of money you make might be the utility assigned to an outcome. And then it could be a range of values rather than just win, lose, or draw. A solution to something put into this interface is a policy. And a policy is something that tells you, for every possible situation you could be in, what action you should be taking. What's this zero-sum thing that we're going to be focusing on in this lecture? In zero-sum games, agents have opposite utilities. They have to fight over one resource or set of resources that's available. And if one agent gets it, the other one doesn't get it. And so the more the other one gets, the less you get. This allows us to think of utilities not as separate utilities for each agent for each situation. We just need one number. We have the utility for agent 1. And then agent 2, we know, has the opposite utility, the negative of the utility of agent 1. So this models situations where it's pure competition. It's one agent against the other. There are other types of games-- and we'll see some of them in the future-- where agents have independent utilities. And sometimes this opens up more opportunities. This opens up opportunities for win-win outcomes where you say, I might care about the green ones but not so much about the orange ones, and the other way around for you. And so here, blue agent collects all the green ones. Red agent collects all the orange ones.
And in the process, they're helping each other, making it easier to collect the ones they're looking for. Many real-world scenarios, in some sense, are of this type. But often, the essence can still be simplified in many situations to zero-sum games. And those are the ones we're studying today. OK, so to solve zero-sum games, we'll use a set of approaches called adversarial search. It's something where you, as an agent, have to think about, as a function of what you do, what the other agent who's working against you will do, what opportunities you will then see after that, what they will see after that, and keep thinking about what if, if, if, if, and so forth, through, ideally, the end of the game. So let's first look at something we're already familiar with, which is a single-agent game tree. A lot like search, but we'll formalize it as a game tree. And then from there, we'll generalize to multiple agents-- two agents for today. OK, here is Pac-Man trying to collect food-- simple maze. Two actions available-- west or east. Then after that, again two actions available, and so forth. All the way on the right, it's done. The other ones, it can still keep going. In these game scenarios, with each terminal state, we associate a utility. So for example, this has a utility of 8-- which is the standard scoring we have in our Pac-Man games-- minus 1 for every time step spent, every action taken, plus 10 for every pellet eaten. So two steps, one pellet eaten, that's 10 minus 2, 8. And maybe the numbers for the other ones end up being something like 2, 0, 2, 6, 4, 6, and so forth. If you're a single agent, you control where you go in this tree. So things are actually relatively easy. You would say, OK, let me look at the tree. What do I see here? Well, 8 is the highest. So I should do this and then this. And that solves the game. But let's formalize this a little bit more. So each state in the game has a value associated with it, which is the best achievable outcome from that state. So for example, for this state over here, that's very easy. It's a terminal state. The best achievable outcome? Well, there's only one outcome from that state. It's that you terminate the game, and you get 8. So the value of this state is 8. How about a state like this one? What would be the value of that state? Well, it's the best you can achieve starting from that state. And it looks like from looking at this tree that you can end up with a 4, with a 6, or with an 8. 8 is the best achievable. And so the value here would also be 8. How about the value of this state over here? Well, it looks like there's opportunities for a 2, a 0, a 2, a 6. 6 is the highest. So the value would be 6. How about this one at the top here? Well, you can either go to the left or to the right. Either you end up with a value of 6 or with a value of 8. 8 is better. You control that. So 8 would be the value of your state over here. And so the way we did this is that for non-terminal states, we looked at the children and took the max of the values of the children, because we control where we go, and that's the value of that non-terminal state. Let's now generalize this to adversarial games. So here we have Pac-Man and ghosts. They're working against each other. Pac-Man tries to maximize score. The ghost wants to minimize score by eating Pac-Man. So let's see what happens. First, Pac-Man gets to take an action. So we're assuming alternating moves here. Could be going left or right. Then after that, the ghost gets to take an action.
Then after that, things keep repeating. And then for each sequence of alternating actions, at some point the game might end. And then with the end of that game, some utility is associated. If Pac-Man survives and has eaten all the food pellets, it would probably be a high score. If Pac-Man doesn't survive, it'd be a very low score. So let's zoom in on a smaller version of this to annotate this with some concepts. Let's assume that the game is over after each player has made a move. This is not how the real game works, but just to get it on a slide. This is a very short game-- one move for Pac-Man, one move for the ghost. And here are the scores we have associated with that. There is a minus 8, a minus 5, a minus 10, a plus 8. You might say, well, I'd love the plus 8. That sounds pretty good to me. And you might say, well, OK, I should try to get here. But that's not how it works in this kind of game, because actually the ghost controls this node. And if you were to end up in this node, the ghost would prefer to have the lowest possible outcome. So the ghost would say, I'm going to do this, and it's going to end up with minus 10, such that the value of this node over here is minus 10. The plus 8 is just there. It might be tempting you. You might want to try to get it. But you're just not going to get it. The ghost is not going to give it to you. So this value here is minus 10. Similarly, on the other side there's a minus 8, minus 5. The ghost, working against you, will decide it's going to be minus 8. So the value over here is minus 8. Going one level up-- again, from bottom to top-- now, what's the value of this node over here? Well, it's the maximum of the values of the leaves-- not the leaves, but the nodes below it. What's below it? An option to get minus 8 and an option to get minus 10. Minus 8 is better. So the value here is minus 8. Pac-Man will choose this move, after which the ghost will choose this move. And that's how the game will play out if both players are playing optimally. Formalizing this a little bit: for terminal states, we have known values-- v of s is whatever we annotated it with before we started writing on this slide. Then for states under the opponent's control, we have a minimization of the utility happening. And for states under our own control, for the maximizer agent, we have a maximization happening. And again, keep in mind there is a plus 8 here, but we're not going to get that. What we're actually going to get is this outcome over here. OK, let's look at a very simple board game-- Tic-Tac-Toe. Has anyone not played Tic-Tac-Toe before? OK, so everybody knows Tic-Tac-Toe. You need to get three in a row. It starts with an empty board. You can choose any of the nine squares to put your mark in. Then the other player goes. And this process repeats until either one player has three in a row or the board is full, and then the game is over with a draw. What is the value of this node over here? Well, you have to think about this. You have to think about, well, I mean, there is a plus 1 at the bottom somewhere. There's many plus 1's for various outcomes. There's various 0's and various negative 1's. But what can you get to when you're playing against an opponent who's really playing cleverly against you? Well, one thing we definitely know is that either the value here is going to be negative 1, or 0, or plus 1. It's not going to be anything else, because the values are obtained by just taking maxes and mins of the values sitting out at the leaf nodes.
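(As a math sketch, the recurrence being applied at every node here is:)

```latex
V(s) =
\begin{cases}
\text{utility}(s) & \text{if } s \text{ is terminal,} \\
\max_{s' \in \text{successors}(s)} V(s') & \text{if the maximizer moves at } s, \\
\min_{s' \in \text{successors}(s)} V(s') & \text{if the minimizer moves at } s.
\end{cases}
```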
And by taking maxes and mins of negative 1, 0, and plus 1, you never end up with anything else than those three. But which one is it? Well, for that, you have to work through the game tree. You figure out the entire tree. You work through the process that we just did on the previous slide here, taking maxes and mins at each respective node till you're all the way up at the top. And it turns out that if you work through this carefully, you find out that the value is 0, which means that if you're two smart people who are going to play this game, you know ahead of time it's going to be a draw. And that's actually what it means when I referred to, at the very beginning of this lecture, for a game to be "solved." The game of Tic-Tac-Toe is solved because we know that if both players play optimally, the value of the top node is 0. The same thing is true for checkers. People have worked through it with more efficient algorithms than just drawing out the whole game tree, but have worked through it in efficient ways to conclude that for checkers it's also 0, I believe. But I'm not 100% sure whether it's 0 or 1. So that's what it means to be "solved." OK. So now let's design an algorithm that is more computer-executable than us looking at trees. Yes. STUDENT: Back to the last slide for a second. PROFESSOR: OK. STUDENT: A game with the possibility for an arbitrarily long cycle of states, how could you end up solving it? PROFESSOR: OK, so the question is, what if there is a way to keep playing? So the tree would be infinite in that scenario, right? For example, what if in Tic-Tac-Toe maybe you could erase something and then you could keep going? So a lot of games have setups where you cannot repeat the same move over and over, to avoid getting into loops. But if there is a possibility to get into a loop, then indeed the tree could keep going, and then you'd somehow have to-- well, you then have to decide what that even means. If both players decide to keep playing forever, is that a draw or-- I mean, it's not very clear what the definition is of what that means if both players just keep playing forever. But you somehow have to come up with a definition of what that means. Let's say you call it a draw. Or what some games might do is just forbid going through the same cycle again. Then all of a sudden, you have something again: once you've gone through a cycle a certain number of times, that's the limit, and now it's a draw. Trees could also be infinite in other ways without having repeats. Maybe you have some kind of infinite board you can play on in principle, and you can keep expanding in many, many directions. Then you might need a little bit of extra machinery compared to what we're seeing today in lecture. So we're not going to specifically look at that. OK, so let's formalize this. We've been looking at deterministic zero-sum games. Examples are Tic-Tac-Toe, chess, checkers. One player maximizes the utility at the leaf nodes. The other one minimizes it. In minimax search, we have a state space search tree. Players alternate turns. And to compute minimax values at each node, we can work from the bottom to the top. For example, for this one here, the minimizer says this is 2. The minimizer says this is 5. And then the maximizer knows that the value at the top is 5. And the game would be played out this way. That's the minimax solution to the game, with both players playing the minimax strategy. This is illustrated on the slides. How do we formalize this?
Well, at a max node, what we're doing is we're initializing our estimate of the value of that node as negative infinity. And then we start inspecting all the children, see what the values of the successor nodes are, and if they're higher than what we have so far, we increase, and ultimately return the maximum value we've seen among all successors. Min is the exact counterpart. Start out at plus infinity when you've seen nothing. And then whenever you have examined a new successor, you check if it's better for the minimizer than what you've seen before. And if so, you update the value. And you keep looping till you have gone through everything. Of course, if you just use this, this is a recursion where you go from min to max and from max to min and keep going. There is no base case here. So this actually wouldn't really work. So in your code, you'll have to install a base case of some type, so you'd have a dispatch function that computes the value of a state. It first checks: am I hitting a base case? If I'm hitting a base case-- that is, a terminal state-- return the value of that terminal state. If not, then check what kind of node I'm in. Am I in a maximizer node? Then call the max value function. If I am in a minimizer node, call the min value function and execute that. And to the earlier question, if the game keeps going forever, then on some of these recursions you will never hit a terminal state, and you'll have to somehow decide how your algorithm is going to stop early. It might say: after I am a million moves deep in the game, I'm just calling this branch done, and treat it as a terminal state nevertheless, because it's been going for so long. STUDENT: Question. PROFESSOR: Question-- yes. STUDENT: So if there's an algorithm, are we [INAUDIBLE] to move step-by-step? For example, the one Pac-Man question where we first moved Pac-Man, then the ghost moved-- is it a different type on all [INAUDIBLE]? PROFESSOR: Correct. In everything we're looking at here, it's going to be turn-based games, where the moves alternate between players. There are other types of games where players take decisions simultaneously. We're not going to look at those in the current two lectures. OK, now we have an algorithm. This is essentially the counterpart of our tree search or graph search from when we did search in lectures 2 and 3. And we can now run this. This is a lot like depth-first search. It's just traversing the entire tree. It's not skipping anything. It's not prioritizing anything over anything else. Just going successor by successor, and going deeper, deeper, deeper before it's coming back up. So if you had a game tree and the program was running through it, it would start with the start state. It wouldn't draw out the entire tree like we've been doing and then push things up from the bottom. It would actually start at the top. It would say, OK, what are the successors? Three successors. Let me look at the first one. Let me dispatch that one. It's a min node. OK, let me call min value on that. What does it mean to call min value on this one? It means to generate its successors and loop through them. Oh, looks like the first successor is a terminal state, value of 3. That can be passed back up. And now we have v equal 3 here as our estimate. Next one is 12. Since we're minimizing, v stays 3. Next one is 8. v stays 3. At this point, we're done. We've seen all successors. And we can pass this back up. And our estimate at the top here is now 3.
But it might still change, because we've only seen one of the successor nodes, not all of them yet. Now the depth-first search goes down the next branch. Oh, it's a minimizer node again. Look at its successors-- terminal state. This now says, oh, the value here is 2. But it might still change as we see other successors. 4 is not going to change that. 6 is not going to change that. So it's 2. We pass this back up. The maximizer says, I had 3 so far. This new option is 2. That's worse. So I'm sticking with 3. Going to the next successor, again a minimizer. Here, the value would be 14 right now. Then the minimizer looks at the next successor. 5-- so it updates to 5. Then sees 2. Updates to 2. 2 over here. That gets passed back up. The maximizer says, well, 2 is worse than 3. 3 is the best I've seen. I've seen all my successors. 3 it is. And I will take the corresponding action, which is going this way, after which the minimizer would take the corresponding action, which is going this way. And the game would end up over here. So that's the procedural execution of how the minimax search would work. Any questions about this? OK. Then let's take a look at some properties. Let's say we have this tree over here. And the question we're asking ourselves here is along these lines: we're faced with a real-world situation in some sense, and we've formalized it into a game tree. And what will minimax give us? Well, look at this. What's the value for the top player, max? Top node? STUDENT: 10. PROFESSOR: 10? Saying 10. Why 10? STUDENT: Because if you go in the other direction, the other player is going to do 9. PROFESSOR: So the answer is: well, it looks very tempting here to go for the 100 maybe. But there is a 9 there, and the minimizer would force the 9. So the value here would be 9. On the other side, it's 10. So the value of the game is 10. OK. So we have a value of the game that's 10. So that's when you play against a perfect player, like a mastermind player. You would always assume that they get it really, really, really right. But what if you play against a player like this one? Maybe you are willing to take your chances. Maybe you don't think playing Tic-Tac-Toe is a waste of your time. You can actually win a few games. Even though the value of the game in principle for Tic-Tac-Toe is 0, against this player it might be plus 1 for you, because they might make mistakes. And so when you play against a player that might make mistakes, maybe you want to go this way, because maybe they would go there. And if they don't, 9 is not that much worse than 10. But if they do make a mistake, a hundred is so much better than 10. And so, if just periodically they were to make a mistake, it's worth it going that way. So now we're doing some probabilistic calculation, really. It's like, what are the chances that they might make a mistake? And based on that, maybe we go that way, if the chances are high enough and the payoff is high enough for when they make a mistake. (As a quick sanity check: if the opponent blunders with probability p, that branch is worth 100p + 9(1 - p) = 9 + 91p, which already beats the guaranteed 10 once p is above 1/91.) That's a different kind of way of looking at modeling the world. It's modeling the opponent not as an optimal mastermind playing against you but as somebody who might do some things stochastically. We're not going to look at the details of how that works in this lecture. That's for next lecture. But let's look at some of the consequences-- how these different ways of looking at the world will affect what you do. So here we'll look at-- well, some apologies for the situation we're putting Pac-Man into here. Pretty grim situation-- no dots nearby, just ghosts.
What's our scoring system? Plus 10 for eating a dot. Negative 1 for each action you take. So what do you think Pac-Man should do in this scenario? STUDENT: [INAUDIBLE] PROFESSOR: Some people are pointing to the right. What does that mean? There is a ghost right there. Why would you go to the right, dive-bomb the ghost? Well, it turns out to give you a higher score. If you can manage to be eaten by the ghost in one step, you only lose 1 point due to time. If you get eaten after more steps, you lose more points. So you want to get eaten as quickly as possible in this case. [LAUGHTER] That's the minimax solution. Let's see what happens. So Pac-Man playing minimax. Boom-- one step, and the game is over. But now let's consider this again. And let's think back to the not-so-smart players and look at the ghosts. Maybe they're not that smart. You say, they look a little bleary-eyed. They're not that smart, probably. Maybe I should take my chances. Well, what does it mean to take your chances? The red one is always going to come towards you. It only has one way to go. But the blue one could go up or down. So maybe half the time it will come towards you, half the time it'll go away from you. If it goes away from you, then you're lucky, because the ghosts keep running in the direction they're going whenever there's no intersection. And it'll keep going away from you. You'll be able to eat all the dots and win the game. Of course, you're taking the risk of living longer and losing more points. If the blue ghost does come towards you, you would have been better off just dive-bombing the red ghost. So let's see what happens. Of course, if the ghost is really taking random actions to choose whether to go up or down, we don't know right now what's going to happen. But let's see what happens. Pac-Man took their chances. And indeed, the blue ghost moved away. And Pac-Man got lucky and ate everything just in time. Now, if we play the game again, it might be that it doesn't work out the same way. I'm going to try this. OK, let's see what happens this time. Got lucky again. But in principle, after it runs a few times, half the time it would be lucky, half the time unlucky. But it might be better than always dive-bombing into the red ghost. So the second scenario here is when you're saying, well, I'm playing against one of these kinds of players. So it's worth taking my chances. And we'll see how to compute optimal strategies for that in the next lecture. How much compute do we need to do this minimax computation? Well, we already covered depth-first search and how much compute it needs in the second lecture. What was it? Well, time is on the order of the branching factor to the power of the depth of the game, so b to the m, where m is the depth of the game. Space is b times m. So space complexity is pretty good, but time complexity is pretty bad. For chess, the branching factor is about 35. Of course, it's not the same in every moment of the game, but some average-- 35. Maybe a hundred moves total. Then 35 to the 100-- on the order of 10 to the 154-- which makes it completely infeasible to search the entire tree. So that's not how Deep Blue beat Kasparov. It's not by searching the entire tree. Other machinery was needed to get there. And we'll see some of that machinery now. OK, since we only have finite compute, we cannot explore the entire game tree. So what are we going to do? A first method we'll look at is game tree pruning. The question here is, do we really need to look at the entire tree to find a solution? If we look back at previous weeks in solving CSPs, we had backtracking whenever filtering said that there would be no solution down there.
It's not worth doing more work because we've already determined that the domain of one of the variables is empty. So it's a waste of time to go look further there. We don't have domains of variables here that can go empty. But maybe there's something similar we can do, where at some point we can conclude this part of the search tree is just not useful to explore anymore to find the solution that we need to find. So let's do this by example first. Here's our simple minimax example again. And let's say we run depth-first search. As a reminder, what happens-- we keep going left to right through the tree. We go straight through and collect values. So at this point, this is a 3, which has been passed up. This guy thinks it might be 3, but it might still be something else. Here we now have 2. And now it stays 2. Stays 2. The 2 gets passed up. This one stays 3, and so forth. Do we really need to look at all these numbers to solve this problem? Another way to phrase this: are there any leaf nodes at the bottom here whose value I could have changed to anything else while the outcome stayed the same-- the value at the top would have still been 3? Because if that's the case-- if I can change the value of a leaf node to anything else and the value at the top stays 3, and I could know, while I'm doing the search, that no matter what that leaf node's value is, the evaluation at the top will remain the same-- then we don't need to go look at that leaf node, and we can skip it. So that's what we're after now. OK, let's see how we can achieve this. We start out just the same way. Right now we have no information about the game. So it's going to be very hard to make any claims like, oh, I can skip this because such and such. We don't know anything yet. After we've gone through the left bottom-most node and we know this value is 3, we can propagate this back up. We have maybe 3 here. Now we have a little bit of knowledge about the game. We know that one way of ending the game is by going to this next-to-last node where the minimizer will choose 3. Could we have skipped any of these nodes? No, because if one had been lower than 3, the minimizer would have chosen it. So there's nothing we could have skipped here so far. Now go to the next one. Is there anything we can do here? Can we just say, let's skip it? No, we can't, because the minimizer is sitting there. And what if the minimizer's options are only a hundred and above? Then definitely we need to know about that. But if they're below 3, we also need to know about that. Essentially, what we'll need to know is whether there are going to be options here below or above 3, because that's what we have right now at the top. So let's take a look. The first one is 2. What does that tell us? This tells us that the value of this minimizer node is less than or equal to 2. We don't know if it's exactly 2 or some lower value, but we know it's at most 2. Once we know it's at most 2, we know the maximizer would never prefer this. It would always prefer going the other way. So it does not matter what the values are here. Because no matter what they are, the maximizer would never let the game get to that node, because it would be worse than going to a node where you get a 3 guaranteed. The guaranteed 3 it already has up here is better than anything that could happen down there, no matter what lives over here. So we can just skip over these. Don't need to look at them.
And note this would be true even if these were not terminal nodes. Imagine there were a whole big tree living underneath here-- a whole big tree living underneath here. We don't have to explore any of that tree, because we know the maximizer would never let the game go that way: the minimizer will be able to make it less than or equal to 2, and the maximizer can force a 3. And so we could prune massive amounts of computation if there are big subtrees over there. We continue. OK, we have a minimizer node again. At this point, we know nothing yet about this branch. We know the maximizer can force a 3. We don't know if this might be a better or worse option than the 3 it can already force. Now we see a value of 14. Oh, wow. This looks promising. We know the value here is less than or equal to 14. And if it could be 14, that would be great. Can we stop here and just ignore these? No, we can't, because we're not guaranteed to get 14. It could be negative infinity for all we know, and then we don't want to go there. So we need to keep looking. Now it's 5. So we know this one is less than or equal to 5. 5 would be great. It's better than 3. But there are still other parts that we haven't seen. And as long as we haven't seen them, we don't know what the minimizer will force us to get there. So we need to keep looking. What is it? A 2? Now we know that it's actually 2. That's worse. And the maximizer will conclude to go this way, and the game will end up over here. What computation did we save? All of the computation that would have happened over here. In a small tree like this, it might not be a lot. But this principle can be reused everywhere in your game tree. It's called alpha-beta pruning. So what's the general version? We have a game tree. And in this game tree, we're computing the min value at some node, n, over here. To do that, we loop over n's children. As we loop over n's children, the estimate of the value of that node will keep dropping. It will never go up. Any time you see the next child's value, your estimate can only go down, never back up. So suppose at some point our estimate says the value of this node is less than or equal to some value, x. If at any time x is smaller than a-- a value the maximizer can already guarantee over here-- then we know that no matter what else is still in the other successor nodes, it's only going to make it worse than x. And so what we know is that it's already worse than a. The maximizer can force a. So the maximizer will never let this path happen, because that would open up the opportunity to end up with something worse. So what we now know is that the maximizer will never let the game get over here. So we don't really care about the exact value of the node over there, because the game will not end up over there. We know it's a worse value than what the maximizer can force over here. And that's all we need to know. And we can stop exploring the successors and just pass up that value, x, as a substitute. We know the value is not necessarily really x. But we know it's x or worse. And that's a good enough value to pass up for the maximizer to conclude, I'm never going to go there, and we're good to go. And so this way you might be able to prune a large amount of computation over here. Note, when looking at the comparison-- when we compare with this value a over here-- what are we actually comparing with? We take a path from the node that we're currently exploring all the way to the root of the tree.
And anywhere along the path to the root, we look at maximizer nodes. There's a maximizer node here. There's one here. Maybe one here. And for each of the maximizer nodes, we see: what have they already been able to guarantee themselves? And the best thing that's been guaranteed by any of the maximizer nodes so far is the thing we compare with. And that's what we call a. Formally, what does this look like-- a, or alpha? Alpha is the maximizer's best option on the path to the root from the current node we're working on. And there will be a counterpart, beta, which is min's best option on the path to the root. We were talking about min. So let's go here. We're working on a min value node. As before, we initialize with plus infinity. We loop over successors. There's a recursive call in there to get the value of the successor. You compare the value of the successor with the current value estimate. If it's lower, you update, because that means you can get even lower than you thought before. You do pass along alpha and beta in the recursive call. And what are alpha and beta? They're about the path from where you currently are all the way to the root. For alpha, you check what's max's best option already available. And for beta, you check what's min's best option already available on that path. So those you can just pass on to your child. Then if the value we see is less than alpha-- so min can do something that guarantees less than alpha, which is worse than what max can already force somewhere else-- then we know we're done. This part of the tree is not going to be visited, because max will not allow it. So we might as well just return the current v that we have, which is not a precise value of that node. It's just a value that signals it's going to be v or worse. And that's enough information for max, further up the tree, to know never to go there. If, along the way, as min is checking options, it sees something that is a better option than beta-- the best thing it's seen on the path to the root-- then of course it should update beta. Note that these betas are local. If it updates beta here-- I'm going back into this drawing. If this min node has a beta that goes down by seeing something, that does not get passed back up. Beta is about things you can guarantee on the path to the root. And if you're over here, that option down there is not on the path to the root. So the update of beta will only get passed into children, not passed back up. What's being passed back up is only v. OK. Any questions about this? There. STUDENT: Doesn't the calculation of the value of a node require you to traverse the tree? Because you get the value of the node, but to get to the value of that part, you have to traverse the tree or have some sort of heuristic or-- PROFESSOR: Yeah, so the question is-- if I can rephrase it, there are really two parts to the question. One part of the question is, when we're doing this work-- the value of a successor-- what does it mean to do that work? And the second part of your question was, can we even afford to do that work? Because it sounds like a recursion, and that recursion could be really deep. And the answer is yes. You will have to do a recursion. That's what it is doing. And this could go very, very deep, branch very widely, and could be very expensive. We're hoping to reduce how expensive it is by the alpha-beta pruning, which allows us to skip over subtrees in various places. But even then, there might still be too much work. And we'll look at how much we can prune soon.
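Folding those alpha and beta parameters into the earlier sketch-- again a minimal, illustrative version with the same assumed interface, pruning on strict inequality (one of the variants discussed shortly):

    import math

    def ab_value(state, alpha, beta):
        if state.is_terminal():
            return state.utility()
        if state.to_move_is_max():
            return ab_max_value(state, alpha, beta)
        return ab_min_value(state, alpha, beta)

    def ab_max_value(state, alpha, beta):
        v = -math.inf
        for successor in state.successors():
            v = max(v, ab_value(successor, alpha, beta))
            if v > beta:
                return v              # min already has a better option on the path to root: prune
            alpha = max(alpha, v)     # alpha is passed down into children, never back up
        return v

    def ab_min_value(state, alpha, beta):
        v = math.inf
        for successor in state.successors():
            v = min(v, ab_value(successor, alpha, beta))
            if v < alpha:
                return v              # max already has a better option on the path to root: prune
            beta = min(beta, v)       # beta is passed down into children, never back up
        return v

    # At the root: ab_value(start_state, -math.inf, math.inf)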
And then we'll see what else we'll need in terms of additional machinery, for many, many games, to make it all work together and get to something that you can actually run. This will probably still run out of time just running it this way on a big game. You're absolutely right. What are the properties of alpha-beta pruning? One is that it has no effect on the minimax value computed at the root. Because we only skip over things where we know that, no matter what the values are there, we'll end up with the same result. That's how we started out inventing this process. The intermediate nodes might have incorrect values. So when you go through this process, if you were to then keep the tree around and look at the values passed around, they will very often not be exact, because you've decided not to compute them exactly, to save time, because you don't need them exactly to know what to do at the top. And all we care about is to do the right thing at the top. Then, actually, we can repeat this process. When it's our turn next time, we can run this again. So effectively, we end up with a replanning type of agent. The next time, we just run it again. Note that done in the most naive way, you don't keep enough information. Because if all you have is the value at the top, you might say, oh, it's 0. We shouldn't even play. And then maybe you can convince the other person it's 0 and you shouldn't play. But actually, the other person says, no, no, no. I want to play. Just knowing that it's 0 doesn't tell you what to do. You need to know which of these moves is the right one. For example, if we run alpha-beta pruning, we would first go down here. We would get a 10. Pass that up. We say 10 so far. Then we go down here. Min says it's going to be less than or equal to 10. Well, we already have a 10 that we can force. So we don't need to look at any of this. No matter what's there, we wouldn't do better than the 10 we can already force. So we pass up the 10. And we end up with a 10 over here. But we actually need to remember in the process not just that we have a 10 at the top, but that this 10 is one that we're able to guarantee. What came from the other side was also a 10-- but that didn't mean you're going to get a 10 there. That meant it's going to be a 10 or worse. So probably don't take that one. Take the one where you're guaranteed a 10. How do we keep this information around? There are a few different ways of doing it. The simplest: it turns out that the first child you find that attains the final value is the one whose value you actually guaranteed-- it's the one that told you, for the other ones, not to do all the work. So if you just remember which one was first, you keep that one around, and you can take that action. If you have a hard time bookkeeping that, other options are: instead of running the search from the top, you could run it from the children of the top node and compute the exact values for those nodes. And then you can say, OK, these are the exact values for each of my children, obtained by alpha-beta pruning. Let me see which child is best. Or you could decide that you only prune on strict inequality, not on equality. So you'd say-- oh, that's a little big-- I'll prune on strict inequality, strictly smaller than, rather than also allowing for equality. So there are three different ways you can do this: keep track of which one was first, because that one you guaranteed, whereas for the other ones you skipped computation and just said, might be this or worse; run it on the children; or prune only on strict inequalities.
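As a sketch of the second of those options-- run the search on each child of the root with a fresh window, so each child's value is exact, then take the best one. This reuses ab_value from the sketch above; actions_and_successors is an assumed, illustrative method returning (action, next_state) pairs:

    import math

    def best_action(state):
        # Exact alpha-beta value per child, then argmax over the root's actions.
        best_a, best_v = None, -math.inf
        for action, successor in state.actions_and_successors():
            v = ab_value(successor, -math.inf, math.inf)  # fresh window: exact value
            if v > best_v:
                best_a, best_v = action, v
        return best_a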
Good child ordering improves the effectiveness of pruning. What does this mean? Well, you get to prune when the other player already has a good guarantee somewhere else-- and what does it mean to have a good guarantee? That means that the other player somehow looked at some good options already. So if somehow you're clever about exploring good options first and then the bad options, then this pruning might be triggered more often. With a perfect ordering, you can show that the time complexity drops to order b to the m over 2-- the square root of the amount of computation of regular minimax-- if you have the perfect ordering of how you explore the tree. This doubles how deep you can look. If you have a fixed compute budget, you can look twice as deep, which is a lot. If you look twice as deep as your opponent, you're probably going to beat them. But even then, just the square root-- to the question earlier-- is often not enough to solve something like chess all the way to the bottom. This is a simple example of meta-reasoning. What do we mean by meta-reasoning? It's where you do compute to decide about what you're going to compute. So we compute alpha-beta values, which allow us to then decide that for a certain subtree, we're not going to do any compute. And so we do a little bit of extra compute to then be able to discard other compute. Let's take a one-minute break here. And then after the break, let's do a quick quiz and look at what we still need to solve problems like chess and so forth. [SIDE CONVERSATION] All right, let's restart. Any questions about the first part of lecture? Question there. STUDENT: Can you go back to the last one? PROFESSOR: Sure. STUDENT: So what is m here on the [INAUDIBLE]? PROFESSOR: Oh. The question is, what is m? m stands for the number of moves before the game ends. So let's say in Tic-Tac-Toe it takes nine moves to fill the board. I mean, sometimes it stops earlier. But then m would be 9. And with alpha-beta pruning, theoretically, with the optimal expansion ordering, you would have an effective depth of 4 and 1/2 instead of 9. STUDENT: A different question also-- so if we do strict inequality for pruning, would that mean that we have to expand more nodes that-- PROFESSOR: Correct. If you prune only on strict inequality, you might lose a lot. Especially if a lot of nodes have similar values, you would lose out on a lot of pruning. So your effectiveness will go down as a consequence of that approach. STUDENT: Of those three methods at the start, which one is the best? PROFESSOR: The one that requires the least compute will be the one where you carefully keep track of which was the first one that guaranteed you a certain outcome. And you prune also on equality. OK, let's do a quick quiz here. Let's have you look at this game tree. Talk with your neighbors and think about what would happen if you run alpha-beta pruning on this game tree. I'll give you 30 seconds. Just talk to each other. What's the value of the game? Which nodes might you be able to prune? [STUDENTS DISCUSSING QUIZ] OK, let's see. What is the value of this game-- the minimax value of the game, or the alpha-beta value of the game? Any thoughts? What's the value? Anyone? STUDENT: 8. PROFESSOR: 8-- value 8. Do we need to run alpha-beta pruning to know the value is 8? No, we don't, because we know alpha-beta pruning will give us the same value as minimax. So if I ask you, what's the result of running alpha-beta pruning on this, you could say, well, that's a complicated procedure.
You need to think about a lot of things. But I know it computes the same as minimax. And minimax I can just read off. On one side, min forces 8. On the other side, min forces 4. 8 is the better thing to be forced into. So the value of the game is this. And this is how the game will play out, with a value of 8. So even though the question might have been about alpha-beta pruning, you could have solved it and found the answer without doing it. Now, what will happen when you run alpha-beta pruning? Are there any nodes-- and this is a question we love to ask. It's like, OK, which nodes will not be expanded when running alpha-beta pruning? Well, let's see. And let's assume we go from left to right, and then see where we get to skip expansion of nodes, or inspection of terminal values. OK, initially, we have no way of skipping, because we don't know anything yet. Then here, we don't know anything yet either. So keep going. We see a 10. Can we skip the next one? No, because it might be a better option for min. And it actually is better. It's 8. Now it's 8. That's being passed back up. We have an 8 over here. What now? Can we skip anything? Can we skip this next one? No, because maybe there are only good options down there, and min cannot force us to anything bad. So we need to go look. Min looks here, finds a 4. That means things are going to be less than or equal to 4 down here. Max can already force an 8 that way. Doesn't matter what's here. No matter how good or bad, the game's not going to end up over there. We can just pass up this 4 and continue. And also, it doesn't matter how big the subtree is. Even if it doesn't happen to be a terminal node-- it could be a huge tree there-- you can completely skip over it. How about this one here? What's the value of the game, and which nodes can be pruned in the process? I'll give you a minute to think about it. And feel free to talk with your neighbors, see what you conclude. [STUDENTS DISCUSSING QUIZ] All right. What's the value of the game? What do we get from running minimax or alpha-beta pruning? They all give the same result. Anybody? STUDENT: Is that a 10? PROFESSOR: 10, OK. How do we get 10? Again, whether we run alpha-beta pruning or minimax, it's going to be the same thing. We can just look: this is going to be 10. This is going to be a hundred. This is going to be 2. This is going to be 20. That means min can force a 2 here. It can only force a 10 over here. 10 is better. Value of the game. But now we looked at everything. The question is, if we ran alpha-beta pruning, could we have skipped looking at some things? OK, let's see. What nodes can be skipped? Any thoughts? Over there. STUDENT: The third branch. PROFESSOR: Can you give the letter on the branch that you're referring to? STUDENT: I. PROFESSOR: I? So the suggestion is that we can skip I. Any other things? STUDENT: L and M. STUDENT: After we look at I, we can skip L. PROFESSOR: After looking at I, we might be able to skip L. So that would be something that's not just a terminal node, but a tree that's bigger than just a terminal node. Any other branches we can skip? G? G over here. So we can skip going down that one. Any other ones? OK, let's see if it's true-- what happens. We go down this way. Can't skip anything. No information yet. We always have to do the left bottom-most one first; along that first path we're going to have to look at everything. That's just always going to be the case. If you work through it, you'll see this one has 10. Once we have this, we can pass it up.
This means that min can get less than or equal to 10 over here. We go down here. Max is looking at some things. Max says, I already see a hundred. So I'm going to get greater than or equal to a hundred. But min can force less than or equal to 10 over here, and can force 10 on that side. So, for min, the hundred branch is worse. No matter what's here, it doesn't matter. We can prune it. G was correct. Doesn't need to be looked at. The hundred gets passed back up, and min concludes it's 10. And then the 10 gets passed back up. And now max knows they can get at least 10 over here. Now we can go down the other branch. Min has to look at things here. As long as you haven't seen anything, you can't really say, I can skip the rest. So you have to go look here. Then here we're at a max node. Max has to look around. Hasn't seen anything yet. And min hasn't guaranteed anything above this max node. So there's nothing max could skip here. So we have to look at this one. It's a 1. Then 2 is better. It becomes 2. Pass 2 back up. Now this node here can force something. It can force the value to be 2 or less. It specifically forced a 2 by going this way. Now we look over here. Actually, we know that we can force 2. We look up to what max can force. Max, on the path to the root, can force 10. So that means max will never let the game get over here. This can get cut off. And we pass up the 2, and we're done. So L was also correct. We chopped off L and G in our expansion and saved some time. OK. That was alpha-beta. In terms of intuition, it's probably one of the harder things to wrap your head around. But after you've seen a few more examples that you've done on your own, it'll probably start coming together pretty well. Now, even when we use alpha-beta pruning and we have maybe only the square root of the number of nodes to be expanded, it's still often too much. We cannot go all the way to the bottom of the tree. So what do we do? We do depth-limited search. So we might say, instead of going to the bottom, we stop here. And we call these, in some sense, terminal nodes. They're not really terminal nodes. We say it's done after two moves, associate values with those nodes, and then propagate that up. If these are the actual values of those nodes, that's great. Typically, we wouldn't know the actual values, because that's exactly what we're trying to find. And we'll have some approximations there. And we pass up the approximations. And we hope that these approximations are good enough. Remember A* heuristics, where we had an approximation of distance to the goal? Same thing here. We will have some approximation of the value of that node and work with it instead of the real value to achieve something-- in this case, solve for minimax, evaluated against these numbers. So now suppose we have a hundred seconds. We can explore 10,000 nodes per second. That means we can check a million nodes per move. Alpha-beta can then reach a depth of about 8 on a decent chess program. So we would do this at depth 8. The guarantee of optimal play is gone, because these numbers will not be precise. And once they are not precise and we use them to decide what to do, we're not solving the actual game. We're solving some approximation that we hope is going to be good enough. And so the deeper we can go, the better, because the later we have to bring in our approximation. And often in practice, if you don't know how much time you have left, you might run iterative deepening. You say, I'll go up to depth 2. And if I still have time left after I'm done, I run again up to depth 3.
If I still have time left, I run up to depth 4. And just like when we tried to combine the best of breadth-first and depth-first search through iterative deepening-- where the last search takes essentially all the time, because the next level of the tree is much larger than everything you've seen before-- the same thing happens here. The shallow depths go very quickly, and each next depth takes more and more time. But if at some time in that process your clock runs out, you have the solution from the previous search to tell you what to do. OK, let's take a look at this in action. So demo 6-- we run depth-limited minimax here, or alpha-beta; it'll give the same result. Depth 2. This is the behavior we get. What's going on here? Why does it keep thrashing back and forth? Well, let's think about this. We have Pac-Man in this world. Two options-- left or right. Then only one option, then two options again. In our scoring, this would be an 8-- let's see: plus 10 for one dot, negative 2 for two time steps. This one would be a negative 2. And this would also be an 8. So what happens here is that there are two options that are equally good. Going to the left or to the right is equally good. It doesn't really distinguish, breaks ties, and happens to go to the right. Then once it's here, it's effectively in a symmetric situation. It happens to break ties the other way, goes to the left, and keeps going back and forth. Because what is it doing? It's solving a new type of game. It's not solving the original game. It's solving the game: if I look 2 ahead, what is the best thing to do as my first move? And if you only get to look 2 ahead, the best you can do is collect the food pellet. And it doesn't matter in which direction you start-- you can still collect the food pellet within the lookahead either way. So that's the problem here. Our evaluation function, which just looks at the score after two steps, is not very indicative of how good that situation is. So maybe we need to use something better. So if we look at it, this situation versus this situation-- well, the one on the left is clearly better, because you already made progress getting to the next dot. So maybe we should get a bonus here, a plus 1 or something, for already getting closer to the next one. Now we have an 8 plus 1 here. This branch gives us 9. The other one gives us 8. And we will go that way. So this is one of the things you might see in your own agents too. If your evaluation function is not good enough, doesn't give enough signal, and you only look that deep, you might get weird behaviors that are not what you're hoping for. But you can solve it by having a better evaluation function that better evaluates how good a situation is. Don't just use the score in the game. Use something smarter. So here's what it looks like when fixed. And it clears the board effectively. So what are these evaluation functions in practice? Imagine you're building one for the game of chess. We just heard that you can only go 8 deep. After 8 moves deep, usually the game's not over yet. How are you going to evaluate the board situation? Well, you might have heard things like: the queen is worth a lot. Maybe a rook is worth something. A bishop is worth something. And maybe you take a weighted sum of the quality scores of each of the pieces that you have left, minus some weighted sum of the quality scores of the pieces the opponent has left. But maybe you also have something about the quality of the mobility on the board and so forth.
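A minimal sketch of how the depth limit, the evaluation function, and iterative deepening fit together-- the evaluation function shown is a made-up weighted-feature example in the spirit of the "plus 1 bonus" idea above, not the one from the demo, and the state methods are the same illustrative interface as before:

    import math
    import time

    def dl_value(state, depth, evaluate):
        # Depth 0 is treated as if it were a terminal node: fall back on the
        # evaluation function instead of searching deeper.
        if state.is_terminal():
            return state.utility()
        if depth == 0:
            return evaluate(state)
        values = [dl_value(s, depth - 1, evaluate) for s in state.successors()]
        return max(values) if state.to_move_is_max() else min(values)

    def evaluate(state):
        # Hypothetical weighted sum of features of the position: game score,
        # plus a small bonus for progress toward the nearest dot.
        return 1.0 * state.score() + 1.0 * state.progress_toward_nearest_dot()

    def iterative_deepening(state, deadline):
        result, depth = None, 1
        while time.time() < deadline:   # simplified: real versions also abort mid-search
            result = dl_value(state, depth, evaluate)
            depth += 1
        return result                   # the last fully completed depth wins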
So you come up with some way of evaluating the quality of the situation. And how good you make this will drastically affect how well your agent plays the game, because that is what it's going to use to evaluate what to do until you're very close to the end of the game, where you can search all the way till the end and use the actual outcome of the game. So a lot of thinking can go into this. Also, machine learning can often go into this, to learn what are good or bad positions to be in. For example, for Pac-Man, let's say you design an evaluation function. You're in this situation here. You might say, oh, I have an evaluation function. You run it on the situation. You get a number back. Then you can sanity check. Now here is another situation-- better or worse? Well, this might look worse. If you think this is worse, you can check your evaluation function to see if indeed it gives this a worse score. If it doesn't, then maybe you should change your evaluation function-- or change your intuition and conclude that maybe this is better. How about this? Maybe even worse, because you're getting cornered more by the ghosts, getting closer to dead-ends than you were before. How about this? Even worse. Even worse. And so you want to have a few sample situations on which you run your evaluation function and check that indeed it gives you good numbers back. So now we're going to look at it from the perspective of the ghosts. The ghosts also run minimax. They're just the opposite player. Here's what you get with ghosts running minimax. They can't think all the way till the end of the game. So they use an evaluation function. The evaluation function says getting closer to Pac-Man is good. But then at the end, something else happens. Why does the behavior change at the end? Let's look at this again. What's happening here? You evaluate based on the evaluation function. You can't get to the end of the game in your search. The evaluation function says close to Pac-Man is good. But then you're at a point where your search can reach the end of the game. Once you realize that this is how you can finish up the game, that's what you're going to do. Here's a zoomed-in version. Even more difficult situation. So the ghosts initially will just try to follow Pac-Man. But when they can look till the end of the game, they'll change their strategy, and flank, and win. Depth matters. The deeper you can look, by being fast at computing things, the better your evaluations will be. Your evaluation function will typically be better closer to the end of the game. There's a trade-off here. If you spend a lot of computation on your evaluation function, you'll have less time to run search-- just like in A*, if your heuristic function is very expensive, then you don't have as much time to run the search. What does limited depth do to you? Here is a limited-depth execution. Pac-Man-- well, execution is also probably the right way to phrase the outcome. Pac-Man's looking only 2 ahead. What happens? Doesn't know what to do. Gets eaten. But what if Pac-Man can look much-- oh, wrong one. This one is unsolvable. What if you look much further ahead? What if you can look 10 steps ahead? What can you do? You can actually anticipate that once blue chooses which way to go, it'll keep going that way. So if you can just hold off-- rather than running away from the red ghost, run towards it to hold off and gain time to see where blue is headed, and then choose where you go-- you can actually win this game.
And so this is an example of looking further ahead giving you the edge compared to an agent that doesn't look nearly as far ahead. One thing you might wonder-- are there any synergies between what we've seen in terms of evaluation functions and alpha-beta pruning? We've looked at them as two orthogonal things. The evaluation function is a way to effectively call a node a terminal node rather than keep searching. Alpha-beta is a way to skip parts of the tree. Well, the amount of pruning you get in alpha-beta depends on the expansion ordering. The evaluation function can give you guidance about what to expand first. We said that if you find good options first, then you get to prune more. Well, the evaluation function for your successor nodes can tell you which successor nodes are more promising for you and which ones are less promising. You can go to the more promising ones first and that way increase the amount of pruning you get in alpha-beta pruning. Another thing that's happening is that the value of a min node will keep going down. Once the value of a min node is lower than a better option available to max on the path to the root, you can stop. Here's an alternative way to decide to stop. Suppose you have an evaluation function that is a bound-- you say, I evaluate this node, and I know that the value of this node is this much or less. If the evaluation function is quick, you can quickly evaluate whether you get a certain value or less at a specific node. That is enough information to potentially already prune. So rather than going down your first child and hoping you find something promising that allows you to prune, your evaluation function, if it's a bound in the right direction, will also allow you to prune right away. I strongly encourage you to think about these two ideas a little bit on your own time, because I think if you fully understand what's going on here, that means you have a lot of the intuition in place for everything we covered in this lecture. See you next time, and we'll cover uncertainty. [SIDE CONVERSATION] [NO AUDIO] |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181011_Bayes_Nets_Independence_DSeparation.txt | PIETER ABBEEL: Hi, everyone. Welcome to the 14th lecture of CS 188. Let's start with a couple of announcements for today. Your midterm was earlier this week, I hope you know. Somebody knows. Solutions have been posted on Piazza. Check them out. If you disagree with any solutions, definitely let us know. Grading is in progress. We'll hopefully be done pretty soon. Next thing up for you to work on is homework 6, which was just released and will be due on Monday. It'll be, as usual, a homework with three parts. There will be a self-assessment of the previous homework. There will be an electronic part of the homework and a written part of the homework. Any questions about logistics? OK. Let's start with the technical content then for today. Today will be about Bayes nets. You already had one lecture on Bayes nets where you got to understand what they are. And you had a lecture before that on probability. And we'll be building directly on both of those lectures today. Quick recap. Conditional probability. The conditional probability of, let's say, capital X taking on value small x, which we shorthand this way. Just as a reminder, this is a shorthand for the probability that random variable X takes on the value small x, given that random variable Y took on the value small y. So the conditional probability of X given Y is the probability of them occurring together divided by the probability of Y. That is saying: whenever Y occurs, what fraction of the time does X happen? That's the conditional probability of X given Y. You can reorganize this. And this gives you the product rule: the joint probability of X and Y is the probability of Y times the probability of X given Y. You can also reorder the variables to get the other version, Y given X times the probability of X. Then the chain rule, which is a generalization of the product rule. And it's a way to write out joint distributions over many variables. The joint probability over all variables X1 through X10 is the product of the probability of X1, then the conditional of X2 given X1, then X3 given X1 and X2, and so forth. Written in shorthand notation, it's the product of all conditional probabilities of each variable conditioned on all the preceding variables in your ordering. If you had only two variables, n equals 2, it becomes this thing over here. For n equals 2, there were two ways of doing this. You start with X, or you start with Y. When you have n variables, you have n factorial ways of writing out the chain rule. And depending on the problem you're solving, one way of ordering the variables might be more beneficial than another way. Independence. X and Y are independent if and only if, for all values of X and Y, the joint probability of X and Y is equal to the product of the individual probabilities. So that's just a definition of independence. If this is true, then we call these variables independent. There's something called conditional independence, which is going to be true a lot more often, and hence a pretty useful concept in what we're doing. X and Y are conditionally independent given Z if and only if-- and what will show up here is something that looks exactly like what we had before, but now everywhere there is a conditioning on Z. That's the only change. So everywhere we now condition on Z. And, of course, that has to be true also for all values of Z. Shorthand notation: X independent of Y given Z.
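Collected in one place, the identities from this recap:

    P(x | y) = P(x, y) / P(y)                                       (conditional probability)
    P(x, y) = P(y) P(x | y) = P(x) P(y | x)                         (product rule)
    P(x_1, ..., x_n) = product over i of P(x_i | x_1, ..., x_{i-1}) (chain rule)
    X independent of Y        iff for all x, y:    P(x, y) = P(x) P(y)
    X independent of Y given Z iff for all x, y, z: P(x, y | z) = P(x | z) P(y | z)

Bayes nets.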
Bayes nets are a way of encoding joint distributions over many variables in a way that is more efficient. It also gives us, maybe, a bit more insight into the structure of the distribution, because there is a graph associated with it. For example, here we have a bunch of random variables. Age, good student or not, mileage, risk aversion, vehicle year, extra car, [? senior ?] training, make, model, and so forth. And this graph is sketching out, in some sense, how these variables relate to each other. For example, age might affect driving skill. And age might affect medical costs in case somebody gets in an accident, and so forth. Questions we can ask once we have a Bayes net encoding a joint distribution are all the questions we can also ask of any joint distribution. Because a Bayes net is just one way of representing a joint distribution. The joint distribution is still there. We can still ask questions about it. So given a fixed Bayes net, we can ask: what is the probability distribution for X given some evidence variable E? So maybe, what's the probability of high medical costs given a certain student status and yearly mileage of the person you're looking to insure? You can also ask representation questions. Given a Bayes net graph, what kind of distributions can it encode? That's what we're going to look at a lot today. What kind of assumptions do you really make when you put down a Bayes net? What do you gain? What do you lose? And then there are the modeling questions: in practice, what Bayes net is actually a good model for the things that we see in the world? And to do the modeling well, you want to understand what independence assumptions you are making by choosing a particular structure. So let's recap the semantics of Bayes nets. A Bayes net is a directed, acyclic graph with one node per random variable. That's the graph part. Then in addition, there is a conditional probability table-- we call it a CPT-- for each node. And that table says: what is the probability of this variable given its parent variables? Together, all these conditional probability tables, if you multiply them together, give you the full joint distribution-- P(x1, ..., xn) equals the product over i of P(xi | parents(Xi)). So Bayes nets are a way to build full joint distributions by only specifying conditionals given parents in the graph. In practice, that might be a lot more feasible. Specifying a full joint distribution can be very difficult. Because, let's say we go back to this graph: for the full joint distribution you'd want to specify a number for every combination of values of maybe 24 variables here-- say, OK, for each of those variables taking on a certain value, what is the probability? Those are very hard numbers to come up with. But if all you need to do is say, given the socioeconomic status, what is the probability this person has an extra car? That's a much easier thing to think about. And so by using a Bayes net, it might be easier to specify the model by specifying pieces of the model, which you then multiply together to get the full joint distribution. Here's the canonical example that many books use. What are the variables here? B, burglary. E, earthquake. A, alarm. J, John. And M, Mary. And it encodes a story. The story here is that there could be a burglary, or there could be an earthquake, or there could be neither. And as a consequence, your alarm might trigger. But you're not at home. And I guess this is back in the days when your alarm wouldn't talk to your phone directly.
But you might have neighbors who hear the alarm go off in your house and might then decide to call you and say, hey, your alarm is going. Either John or Mary or both could decide to call you. And then the tables here encode the probabilities of these things happening. For example, when the alarm goes, which is plus a, John has a probability of 0.9 of calling you, 0.1 of not calling you. For Mary, when the alarm goes, there's a probability of 0.7 of calling you, 0.3 of not calling you. And here's the bigger table. This one encodes the conditional of alarm given burglary and earthquake. The highest probability of the alarm going off-- let's see, plus a-- is 0.95, and happens when both burglary and earthquake are happening. If it's just burglary, it already has a high chance of going off. If it's just earthquake, it also already has a pretty high probability of going off, and so forth. And so these numbers might be much easier to specify than a table that has, in this case, 2 to the 5 entries, one for every possible combination of all variables. If you want to then compute an entry in that table-- which is hard to specify directly, but which we still want to reason about-- all we need to do is find the corresponding entries, multiply them together, and that would be the entry in the full joint distribution. (For this network, for instance, P(+b, -e, +a, +j, +m) = P(+b) P(-e) P(+a | +b, -e) P(+j | +a) P(+m | +a).) OK, that's the recap. Any questions? Because we're going to be building on this for the rest of the lecture. OK. So let's start asking some questions about the benefits you get from using Bayes nets. Let's say we have a joint distribution over n Boolean variables. Each variable can take on two values. How big is the table representing this distribution? Well, this table will need an entry for every possible combination of these Boolean variables. How many combinations are there? 2 to the power of n. So this table will be of size 2 to the power n. That's a lot of numbers to specify, and also difficult numbers to think about. How about a Bayes net with n nodes? So there are still n variables, and each node can have at most k parents. How large is the sum of the sizes of all the tables that we need to represent a Bayes net with this kind of structure? Any thoughts? Over there. STUDENT: n times 2 to the k? PIETER ABBEEL: The suggestion is n times 2 to the k. What would be the reasoning there? Well, let's think about how big each individual table is going to be. And then we can multiply by n to see how big the overall thing is going to be. So, one table. How many entries does it need? Well, if a variable has k parents, you need to specify a probability for each possible combination of values of the parents. If there are k parents, there are 2 to the k combinations. So this table would need 2 to the k entries-- specifying, for each of those combinations, let's say, the probability of positive for this variable. And then the probability of negative will just be 1 minus that probability. If you explicitly represent the probability of the negative, then you'll have just twice as many entries. So if you explicitly also represent the negative, you have 2 to the k plus 1 entries-- one entry for each combination of the parents and the variable itself. This is for one CPT. And there will be n CPTs, so that's the total size. Look at the difference. Exponential growth tends to be the one that gets you. Linear growth is not too bad. It's often pretty hard to avoid, actually. But exponential growth gets you in a bad way. Let's say you have a hundred variables.
2 to the 100 is a really large number. But if you just have 100 times 2 to the k, this shows that as long as you keep the number of parents per node small, the storage you need for your Bayes net is relatively small. (With k equal to 4, for example, that's 100 times 2 to the 5, or 3,200 numbers, versus 2 to the 100, which is about 10 to the 30.) And so you can always start thinking about this when you design Bayes nets. Often the name of the game will be to ensure that no single variable has a large number of parents. Because if you have any single variable with a large number of parents, that conditional probability table will dominate what you need to do. Both, by the way-- the full table and the Bayes net-- allow you to calculate the full joint distribution. But the Bayes net has lots of space savings, as long as you don't give any node too many parents. It's much easier to put the numbers in these smaller conditional probability tables than in the full joint tables. And next lecture we'll see it's also quicker to answer questions of the form, what is the conditional distribution of X given some evidence, when you have a Bayes net than when you have a full joint table. OK, so that's a recap of representation. The topic for today will be the next thing here, conditional independencies. And then inference will be for next lecture, for exact inference, and the lecture after that for approximate inference. And then after that, we'll also start looking at learning Bayes nets from data. Representation is going to be all about independencies, at least in this lecture. So let's recap those definitions and notation. X and Y are independent if, for all x and y, the joint is equal to the product of the marginals. This is the notation. X and Y are conditionally independent given Z-- this is the notation for that-- if the exact same thing holds as above, but now everywhere we condition on Z. That's the only change. In fact, that's true for many, many things in probability. If you have some equality that's true, and then you introduce an extra variable and everywhere condition on that extra variable, probably that same equality would still be true. Conditional independence is a property of a distribution. So when somebody gives you a distribution, you can go check. You can check all those conditions and see if they're true. Then you can claim independence or conditional independence. For example, let's say this robot is making a fire in the house, or could not be making a fire in the house. A fire might cause smoke. And then the smoke might cause the alarm to go off. What is the conditional independence we would have here? Well, if we know there is smoke-- so we're conditioning on smoke-- then knowing whether or not there is fire doesn't tell us anything new about whether the alarm will go off or not. Because we, in this model of the world, assume the alarm is triggered by smoke. And so once we know there is smoke, it doesn't really matter what caused that smoke. The alarm will be triggered based on that smoke. And so the alarm going off or not is independent of fire, given we already know there is smoke. And by the way, the ordering up front here doesn't matter. You can just as well say fire is independent of alarm given smoke. If it holds one way, it holds the other way. Because if you look at the equations here about independence, the roles of X and Y are completely symmetric. So what assumptions do we make when building a Bayes net? If we want to represent any distribution, we can always represent it by applying the chain rule.
A product of conditional distributions of the current variable given all preceding variables. What does a Bayes net do differently from just applying the chain rule in the general way? It says, instead of conditioning on all past variables, just condition on the past variables that are my parents in the graph of the Bayes net. By doing this, if we select a parent set that's smaller than all preceding variables, we have made an assumption. Because the chain rule always applies, but once we leave out one of the preceding variables, we lose some expressiveness. There are certain distributions we cannot represent anymore. We lost something. We also gain something. We gain that it's easier to write out conditional tables with a small number of parents. But we lost something in that some distributions will not be possible to represent by a Bayes net if the parents of a node are not all the previous nodes. Now, this we know we're doing when we write out the distribution as a product of conditionals: by conditioning only on parents, not on all preceding variables, we make an assumption. The question we're going to answer today is, is there anything that we implicitly assume in addition to this? Where by assuming this, we're forced to also assume other things that are not already listed in this equation here. And what we'll see is that, actually, we can often read these things off from the graph. So we will not have to go look at the actual numbers. We can just look at the graph of the Bayes net and draw conclusions about assumptions that are made about the distribution. Of course, that's important once you start building models. Because we want to understand what assumptions we built into our models. Let's do an example. So here's a very simple Bayes net. Four variables, X, Y, Z, W. In the first bullet point, let's look at the conditional independencies that we have introduced just by going from using the chain rule to conditioning on just the parents. So the chain rule would say P(X, Y, Z, W) = P(X) P(Y | X) P(Z | X, Y) P(W | X, Y, Z). That's the chain rule. What does the Bayes net say? The Bayes net says, I'm OK with instead using still P(X), still P(Y | X). But then for Z, X is not a parent, only Y, so we just use P(Z | Y). And then for W, only Z is a parent, so we use P(W | Z). We drop the X over here. We drop the X and the Y over there. Those are assumptions we made by using a Bayes net of this structure and not using the chain rule. What would you do if you didn't want to make those assumptions? Let's say you were not happy with those assumptions. You said, that's not right. I need W to be conditioned on X, Y, and Z. Well, then you need to introduce more arrows in your Bayes net. If you want W to be conditioned on X, Y, and Z, then you need to add this arrow. And you need to add this arrow. Once you do that, you get W conditioned on X, Y, and Z. But that's not the Bayes net we have here. We're just having Z as the parent. So how do you write that out in terms of conditional independencies? This one here corresponds to saying Z is independent of X given Y. And this one here is saying-- should be writing this in capitals, and it's W we're working with here, not Z-- W is independent of X and Y given Z. So these are the two assumptions we made by choosing this graph and by just looking at the chain rule simplified to what the Bayes net prescribes.
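As a small illustration of that factorization in code (the structure is the X to Y to Z to W net above; all the numbers are made up for these notes):

def bern(p_true, value):
    return p_true if value else 1 - p_true

pX = 0.5                               # P(+x), made-up number
pY_given_X = {True: 0.8, False: 0.3}   # P(+y | x)
pZ_given_Y = {True: 0.7, False: 0.2}   # P(+z | y): X is not a parent of Z
pW_given_Z = {True: 0.9, False: 0.1}   # P(+w | z): only Z is a parent of W

def joint(x, y, z, w):
    # Bayes net factorization P(x) P(y|x) P(z|y) P(w|z), instead of the
    # chain rule's P(x) P(y|x) P(z|x,y) P(w|x,y,z)
    return (bern(pX, x) * bern(pY_given_X[x], y)
            * bern(pZ_given_Y[y], z) * bern(pW_given_Z[z], w))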
Are there any other independencies than these two that are guaranteed to be true for any distribution of this form? Not that one. This one here. Any others that are guaranteed to be true? Any thoughts? Let's think about this intuitively. What else might be true? If we look at this graph, it seems like in this particular case, we have a chain, a cause-and-effect chain. X seems to cause Y, cause Z, cause W. That's what this roughly encodes. So knowing Y takes any influence of X on W away. So another independence we can write is W is independent of X once we know Y. We haven't proven this here, but that's an intuition that we can read off from this graph. And it's the kind of conclusion that today's lecture will be about. You can generally read off from graphs which independencies are guaranteed to be true in addition to the ones that the Bayes net definition already gives you. So an important question about Bayes nets-- are two nodes independent given certain evidence? If the answer is yes, you can prove it using algebra. There's a definition for independence. So you can work through the algebra and prove it. This can be tedious in general. And so today's lecture is really about avoiding needing to do all this algebra and having a graph algorithm that can tell you the answer. If the answer is no, you can prove it with a counter-example. So you could say, well, this independence is not true. Because here's an example distribution showing that this independence is not true. And the example distribution can be represented by the Bayes net that you just gave me. For example, here is a Bayes net. We can ask the question, are X and Z guaranteed to be independent? Who thinks X and Z are guaranteed to be independent? Who thinks they're not? OK, why not. STUDENT: Well, they're connected through Y. PIETER ABBEEL: So they're connected through Y. That's why the thinking is, well, they're not guaranteed to be independent. So what can you do? You can see the reasoning: X can influence Z, and this story over here is one way to explain it. Maybe there is low pressure, which causes rain, which causes traffic. And I can associate a distribution with this. I can actually put numbers down and say, for the probability of Y given X, maybe Y is always equal to X, and Z is always equal to Y. And if that's the case, then we know that X influences Z and they're not independent. And to show that there is no independence, just one example is enough. If we did want to show independence, we'd have to show it to be true no matter what numbers you put in the conditional probability tables, which is what we'll do most of this lecture. One question. Is it possible, if somebody gives you a Bayes net like this, that X and Z are actually independent? Who thinks that's possible? Why? STUDENT: So the only condition for independence is, if you take the individual probabilities-- if the product is equal to the joint probability, they're independent. So if somehow the product works out to be [INAUDIBLE] and independent. PIETER ABBEEL: OK, that's the right kind of reasoning. Now to complete it. So the reasoning was, if somehow the product works out such that things are independent, then we'd still achieve independence even if this is the graph's structure. And now the question is, can we come up with conditional probability tables for X, Y, and Z such that indeed there will be independence? This is a little bit of a tricky thing. Because it's very counter-intuitive. Because why would you use a Bayes net like this if X and Z are independent?
But the question here is, mathematically speaking, could we still make them independent? And the answer is yes. Just make X a coin flip, 50/50. You make Y a coin flip, 50/50, independent of the value of X. And same for Z. Now everything is independent coin flips. But you can still represent it with this kind of Bayes net. It's just not very smart to use this Bayes net to represent that distribution. Because this Bayes net is overkill for a distribution over all-independent variables. But you could do it. Just every entry in every CPT will be one half. OK, let's start thinking about how we can come up with graph algorithms to conclude independencies between variables in Bayes nets. The way we'll do this, we'll first study this for triplets. And then for more complex cases, we'll refer back to the triplet cases and say, OK, this complex case can be solved by just looking at the triplets that constitute the more complex case. And the thing we'll come up with ultimately is something called d-separation. OK, here's one triplet. X points to Y points to Z. For example, low pressure could or could not cause rain. Could or could not cause traffic. This is the distribution that we get when using the above Bayes net structure. First question, can we guarantee that X is independent of Z? We just talked about that. We cannot guarantee that. That was on the previous slide. The answer is no. Because X could influence Z in this kind of Bayes net. How do we show this? Well, we can show it by thinking through a story. But even more concretely, to mathematically show it, we could come up with some numbers. We can say, here's a story. Let's now put it in numbers. Y is always equal to X. Z is always equal to Y. And then let's say X is 50/50. Then there is clearly an influence from X onto Z. And we cannot claim independence. So if somebody asked you, given a distribution represented with this Bayes net, but I did not give you the numbers in the CPTs, I just gave you the graph of the Bayes net, can you claim independence of X and Z? The answer is no. How about, can we guarantee X is independent of Z given Y? Intuitively, yes. That's the whole intuition of this kind of causal chain, that one causes the next, causes the next. And if you know the one in the middle, it doesn't matter anymore what the one before it was. So our tentative answer is yes, X is independent of Z given Y. And, again, to make this question very explicit. The question is, if somebody tells you this is the structure of the Bayes net, but is not giving you any conditional probability tables. Just saying this is the structure. That's all you know. Without knowing the tables, can you conclude that X is independent of Z given Y? And we're saying yes, and we're going to prove it. So let's write it out. The conditional of Z given X and Y, by definition, is equal to the joint of X, Y, and Z, divided by the joint of just X and Y: P(z | x, y) = P(x, y, z) / P(x, y). Now for each of those, we can fill in how they are represented in the Bayes net. The joint of X, Y, and Z is the product shown on top there, P(x) P(y | x) P(z | y), and the joint of X and Y is the product shown at the bottom, P(x) P(y | x). Now we can simplify. What we're left with is just P(z | y). So what we have here is the conditional of Z given X and Y equals the conditional of Z given Y, which means Z is independent of X given Y. And what it shows is that if we have evidence along such a chain, it blocks the influence from the early variables onto the later variables.
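Here is the same conclusion checked numerically, as a brute-force sketch (code assumed for these notes, not from the lecture): build a chain X -> Y -> Z with random CPTs and confirm that P(+z | x, y) does not depend on x.

import itertools, random

random.seed(0)
pX = random.random()                                # P(+x)
pY = {x: random.random() for x in (True, False)}    # P(+y | x)
pZ = {y: random.random() for y in (True, False)}    # P(+z | y)

def bern(p, v): return p if v else 1 - p

def joint(x, y, z):
    return bern(pX, x) * bern(pY[x], y) * bern(pZ[y], z)

def cond_z(x, y):
    # P(+z | x, y), computed from the joint with no shortcuts
    num = joint(x, y, True)
    den = joint(x, y, True) + joint(x, y, False)
    return num / den

for x, y in itertools.product((True, False), repeat=2):
    # P(+z | x, y) should match P(+z | not x, y): only y matters
    assert abs(cond_z(x, y) - cond_z(not x, y)) < 1e-12
print("Z is independent of X given Y for this (random) chain.")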
Now, just to be very explicit about where the Bayes net came in: where did we use the fact that we have this specific Bayes net, and why can we reach this particular conclusion? It happens in this step over here. So when we go from here to here, in general, when we apply the chain rule, we can't just write P(Z | Y) here. We have to write P(Z | X, Y). But this Bayes net structure assumes we can write the probability distribution this way. That's where the assumption comes in. Next we can simplify and reach our conclusion of conditional independence. Let's do another one. Common cause. So maybe a project could be due or not. Then based on whether our project is due or not, forums could be busy or not, and labs could be full or not. So we have one node at the top, which might cause two separate effects. This would be the joint distribution encoded by this model shown here. We call this common cause, because it's two effects that are caused by a single thing. Are X and Z guaranteed to be independent? The answer is no. Why not? Intuitively speaking, when projects are due, it's more likely that both the forums are busy and the labs are full. And when there's no project due, it's more likely that they're both not busy and not full. So clearly X and Z are correlated. They're not independent. They tend to be on and off more commonly together than not together. So that means they're not independent. So all we need to do to show this is not guaranteed-- if somebody gives you a Bayes net, just the structure, not the CPTs, and asks you, is there an independence, you say no. All we need to do to prove that no is to say, here is a conditional probability table that has it not be true. So for example, this is the story we just told. And in numbers, we make X take the same value as Y. We make Z take on the same value as Y. And if that's the case, then X and Z will take on the same values. And clearly they're not independent, then. So it's an example of a choice of conditional probability tables that shows just proclaiming this structure of your Bayes net is not enough to be able to conclude anything about independence of X and Z. In fact, generally it won't be true. How about if we condition on Y, the node in the middle? Are X and Z conditionally independent given Y? Intuitively, yes. Because that's the common cause. And once you know the cause for Z, it doesn't matter what X is doing. Because it's Y that's causing Z. We can now again go through the math. We have, what is the conditional of Z given X and Y? We write it out, same definition as before. Next step, we're going to use the structure of the Bayes net to fill in the specific instantiation of the joint distributions for top and bottom. Again, an assumption has been made. The last variable here, Z, is only conditioned on Y. If we applied the chain rule, we'd have to condition on X and Y. Just Y-- that's the Bayes net assumption for this Bayes net structure. After we do that, things simplify. And we're left with just P(Z | Y), which shows that Z is independent of X given Y. Because once we know Y, adding X into the conditioning does not change what we know about Z. So observing the cause blocks the influence. Here's another one. Third type of structure. With three variables there's only so many things we can do. This is the third type of structure that's available to us, where we have two variables pointing to the bottom one. For example, the story could be, it could rain or not. There could be a baseball game or not. And then, as a consequence, there could be traffic or not.
Are X and Y independent in this kind of Bayes net? So again, what does that mean? If I ask the question, are X and Y independent in this kind of Bayes net, that means that no matter what choice of conditional probability tables you associate with each of those variables, the joint distribution should exhibit the fact that X and Y are independent. Who thinks yes? Somebody there. Red shirt. You want to explain why? STUDENT: There's no [INAUDIBLE]. PIETER ABBEEL: So the intuition here is X and Y are drawn independently. There's nothing causing them to be correlated. And so we expect independence between them. What do we need to do to prove this? We'll have to again do the math that we've been doing all along. Not doing it on this slide here, but it's the same kind of math that we did on previous slides, where you would write things out using that this distribution is of the form P(X) times P(Y) times P(Z | X, Y). And by using that structure, you will be able to show that the joint of X and Y equals just P(X) times P(Y). Are X and Y independent given Z? So once you know Z, once you know there is traffic, is there now any influence between X and Y? Or are they still independent? Who thinks they are now not independent anymore? Why? STUDENT: Maybe [INAUDIBLE]. PIETER ABBEEL: OK, so the intuition here is correct. I'm just going to rephrase it. Because it's hard for everybody to hear you. The intuition here is that, imagine you know there is traffic. And you're wondering, will there be rain? Will there be a ballgame? You're not so sure, but because there's traffic you say, well, my probability for both has gone up. Because there's traffic. And either one could have caused that traffic. So higher probability of rain. Higher probability of ballgame. Then somebody tells you there is a ballgame. At that point, your probability for rain will drop. Because there were two possible causes of traffic. Initially, you didn't know which one might be the one. And so the probability for both went up. But then, all of a sudden, you know there is a ballgame. So that's enough explanation for there being traffic. And that now decreases the probability of the other explanation. All right, so whenever there are multiple possible explanations-- that's the scenario here. There are multiple possible explanations for something to happen. When that thing happens, the probability of all of them goes up. But then when somebody tells you one of them is true, that'll drop the probability of the others. So that's the story here. And you can actually then write out a probability distribution that has exactly this story encoded in it. And then if you check the condition for independence of X and Y given Z, you'll see the condition doesn't hold true, which is proof that we cannot claim this.
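Putting that story in numbers, a minimal sketch (all probabilities here are made up for these notes): in the net R -> T <- B, observing traffic raises the probability of rain, and additionally observing the ballgame lowers it again.

pR, pB = 0.1, 0.1                                  # assumed priors
pT = {(True, True): 0.95, (True, False): 0.9,      # P(+t | r, b), made up
      (False, True): 0.9, (False, False): 0.1}

def bern(p, v): return p if v else 1 - p

def joint(r, b, t):
    return bern(pR, r) * bern(pB, b) * bern(pT[(r, b)], t)

def p_rain(t, b=None):
    # P(+r | T=t) or P(+r | T=t, B=b), by enumerating the joint
    bs = (True, False) if b is None else (b,)
    num = sum(joint(True, bb, t) for bb in bs)
    den = sum(joint(r, bb, t) for r in (True, False) for bb in bs)
    return num / den

print(p_rain(True))        # about 0.36: up from the prior of 0.1
print(p_rain(True, True))  # about 0.10: the ballgame explains the traffic away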
Now let's look at the general case. What have we seen so far? Or what do we want to do now? We want, for any Bayes net, to figure out whether two variables are independent given some evidence. We want to do it by just looking at the graph, not by looking at the conditional probability tables and doing some tedious work there. And we'll do it by breaking it down into the canonical cases we already covered. So let's take a first pass at this. Let's say you're given a Bayes net, and somebody asks something about, are these two variables independent given the other variables? You might say, well, let's just check if there's a path between the variables. And then do something with that path. And a naive approach could be, oh, to make it easier to think about this, I'm going to shade evidence nodes. So if we observe, let's say, R, we will shade R. And then maybe we might ask, is L independent of B given R? Might be a question. L independent of B given R could be a question we ask. And then we could say, well, maybe all we need to do is look at the path between L and B. And, oh, there is a shaded node in between which might block the influence. And we're all good to go. Turns out it's a little more complex than that. Because we've seen in the triples we just analyzed that, if the shading happens in one of those bottom nodes, it actually activates the influence, rather than deactivates it. So we need to be a little more careful. So let's spell this out explicitly. So whenever we have a path, we're going to break it up into triples. These triples are going to be overlapping triples that together make up an entire path. And if we ask the question, are X and Y conditionally independent given some evidence variables Z, we will check paths between X and Y, check every triple along that path, and check for each triple if it's active or inactive. With the work we already did, we know that these are the active triples. And these are the inactive ones. We actually did not do the work for the bottom one here. So let's do a little bit of explanation about this one. What's this one here? This one is saying, if I have a common effect, or there are multiple things that could cause something, it's not just the case that influence can travel in this scenario here. The evidence that we observe can also be much further down. And that will still trigger influence between the variables. It's like if we go back to the example we had here, we had traffic. But what if we, from traffic, hang off another node-- maybe an accident has been reported or not. Then it's also going to be the case: initially you observe nothing. Then you observe there has been an accident. The accident having been there makes you think, probably, there is more traffic. Probably there is rain or a ballgame. This model would assume the more traffic, the more likely there's an accident. And just knowing there's an accident will already cause influence between these two variables up here. So that's the bottom one here. And then we have the inactives on the right. So to draw any conclusions, we'll break it up into those. And we just need to remember which one falls on which side. For any triple, you'll want to be able to recognize that easily. Just remember, essentially, typically, when a node is shaded, it'll block the path. But the exception is when it's one of those V-structures, where it will actually activate the path rather than block it. So that's the triple categorization. Now we'll get a query. Are two variables Xi and Xj independent given some evidence variables? OK, we'll check all undirected paths between Xi and Xj. For every path we check, if that path is active, then independence is not guaranteed. Because once there is an active path, some influence can flow. And that's it. There is no independence. If you have checked all paths and have not found any active paths-- they're all inactive-- only then can you conclude there is independence. Because there's not a single path that can influence one variable from the other one. OK, let's take a short break here. And after the break, let's work through a few examples of how this works.
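To have next to the examples that follow, here is a compact sketch of that whole procedure as code. This is illustrative code written for these notes, not the course staff's implementation; the net is assumed to be given as a list of directed (parent, child) edges, and path enumeration is done naively.

def descendants(node, edges):
    # all nodes reachable from `node` by following directed edges
    kids = {c for p, c in edges if p == node}
    out = set(kids)
    for k in kids:
        out |= descendants(k, edges)
    return out

def triple_active(a, b, c, evidence, edges):
    # Is the triple a - b - c active given the evidence set?
    if (a, b) in edges and (c, b) in edges:          # v-structure: a -> b <- c
        # active iff b itself, or anything below b, is observed
        return b in evidence or bool(descendants(b, edges) & evidence)
    # causal chain (either direction) or common cause:
    # active iff the middle node is unobserved
    return b not in evidence

def undirected_paths(x, y, edges):
    # naive DFS over simple paths, ignoring edge directions
    nbrs = {}
    for p, c in edges:
        nbrs.setdefault(p, set()).add(c)
        nbrs.setdefault(c, set()).add(p)
    def dfs(path):
        if path[-1] == y:
            yield path
        else:
            for n in nbrs.get(path[-1], ()):
                if n not in path:
                    yield from dfs(path + [n])
    return dfs([x])

def guaranteed_independent(x, y, evidence, edges):
    # True iff x and y are d-separated: every undirected path is inactive
    for path in undirected_paths(x, y, edges):
        if len(path) == 2:                           # direct edge: always active
            return False
        if all(triple_active(a, b, c, evidence, edges)
               for a, b, c in zip(path, path[1:], path[2:])):
            return False                             # found one active path
    return True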
All right, let's restart. Any questions about the first half of lecture? Yes? STUDENT: Why is it necessary to look for all conditional independencies in your Bayes net? PIETER ABBEEL: So the question is why is it necessary, or maybe interesting, to look at all the conditional independencies in your Bayes net? The answer is that when you want to model a joint distribution over many random variables, a Bayes net can help you express that distribution without requiring you to write out a full joint distribution, which is extremely large to write out and whose numbers are difficult to come up with. So it's nice. The Bayes net helps you with that. But then when you choose a certain Bayes net structure, you're making assumptions. And so the reason is to understand what assumptions you are making, depending on how you structure your Bayes net. Such that you don't make assumptions that you realize are actually faulty assumptions and will limit what you can do with the model that you come up with. Because otherwise the Bayes net graph is constraining you, not allowing you to represent effects that you know are there, because you introduced some independencies that made them impossible to represent. Let's do an example. First one. Is R independent of B? How do we do this? What's the procedure? First step, we try to find all paths that connect R and B. There's only one path. This is the path. Along that one path, we check all the triples, and check whether they're active or inactive. There's only one triple here. So we need to check one triple. Is that triple active or inactive? Well, we can look at our categorization. Our categorization shows that this kind of triple, a V-structure with no evidence observed in the bottom of the V-structure or anywhere below, is inactive. So this is an inactive triple. That means this triple ensures that along this path, there can be no influence from R to B. This is the only path. So the only path between R and B is not able to carry any influence between R and B. That means that R and B are independent. Next one. How about R independent of B given T? There's some evidence. So the first thing we do, we're going to shade the evidence node. Then we look at all the paths between R and B. Still only one path. This is the one path. That one path is only one triple long. We look at that triple. Is this triple active or inactive? Well, it's a V-structure, but there is evidence observed in the bottom of the V-structure. We know in those scenarios that influence can run through that triple. That triple constitutes the entire path from R to B. So that means influence can run in the entire path. Because it's only one triple long, and that one triple is active. So there can be influence all the way from R to B given T. So we cannot claim this conditional independence to be true. So we can't claim anything about conditional independence here. How about R and B given T prime, which lives at the bottom here? Well, let's go through the same procedure again. It's actually very mechanical and fairly straightforward. We look at all paths from R to B. Only one path. We'll look at the triples along the path. Only one triple. That one triple here. Is this triple active or inactive? Well, for this kind of structure, activity depends on whether there is evidence at the bottom or below the bottom. Indeed, there is evidence below here. So this triple is active. This active triple constitutes the entire path. So the entire path is active.
If we have an active path from R to B, then we cannot claim conditional independence. So no claim of conditional independence there. Next one. Slightly bigger network. How about, is L independent of T prime given T? So how do we do this? Given T-- so we'll shade the T variable. We're concerned with L and T prime. The only path connecting them is this path over here. This path has multiple triples along it. Here is a first triple. And this is important. The second triple starts here and runs this way. So you can see there's a lot of overlap when we pick all triples along a path. Here there are two triples along the path. We want to check, is this path active? OK, let's check the first triple, the one up here. It's a causal chain, L to R to T. Causal chain with no evidence in the middle. So it's active. So the first triple is active. How about the second triple? Is that one active? It's also a causal chain, but a causal chain with evidence in the middle. A causal chain with evidence in the middle is an inactive triple. So this one is inactive. For a path consisting of multiple triples, if any single one of them is inactive, that one blocks the flow of influence, and so the path becomes inactive. So this path is inactive. It's the only path between L and T prime. The only path between L and T prime is inactive. That means L and T prime are independent conditioned on T. Next one. L and B. For L and B, let's have you take, let's say, a minute. Talk with your neighbor and think through whether or not we can claim independence of L and B given this Bayes net structure. OK, let's see. Who thinks yes, we can claim this independence? Raise your hands. Two people. OK, over there. Kara, why? STUDENT: Because you're not given any information. And then also, you have to have that set of [INAUDIBLE]. PIETER ABBEEL: OK, so the reasoning here was that there is no evidence along the path. And these are triples that, when there is no evidence, might or might not be active. I think we need to go into a little more detail than just repeating what you just said. So let's step through this in detail. L and B, no evidence. This is the only path between L and B. How many triples along the path? There's a triple here. And another triple here. Look at the first triple here. It's a causal chain. The middle node is not observed. So this one is active. The second one is a V-structure. There's no evidence in the V-structure. So this is inactive. Once you have one inactive triple along a path, the path becomes inactive. And this is the only path. The only path is inactive. What's going on here? The only path is inactive. So that means we have conditional independence of L and B given zero evidence. So yes, most of you were correct. We have conditional independence here. Now, one thing you can do-- let's think about this one again. We have L and B we care about. And I said, well, this is the only path. And we have a bunch of triples. A slightly faster way to get through it than what I just did is to directly identify a triple that is inactive. Instead of linearly going through all the triples, you say, oh, I know that this triple is inactive, so I don't have to check any other triples. Because I already found one that is inactive, which will break the influence in that path. And we now know this is an inactive path. And we're done with checking this path. Next one. Is L independent of B given T? Well, this is the same reasoning as we just went through. We're looking at L and B.
But now this one here is observed. This triple before was the inactive one blocking things. Now, thanks to the evidence here, this is an active triple. So this triple is active now. The other triple along the path is also active. So the entire path is active. And we cannot claim any conditional independence. What if, instead of T, we observe T prime? Does that change anything? No, it doesn't, right? Because the triples between L and B remain the same. And whether this triple here is active or not only depends on whether there is any evidence anywhere from there down. It doesn't need to be in T. It can be in T prime. It all has the same effect. So also here, we have a path that is active. So we cannot claim conditional independence. How about this one? We are still looking at L and B. We observe R and we observe T. Who believes we have conditional independence here? A few people. Why? STUDENT: The LRT triplet is inactive. PIETER ABBEEL: OK, so the answer was, we have only one path. Along that path we're able to find an inactive triplet, namely LRT, sitting over here. Once we find an inactive triplet, we know the entire path is inactive. And at that point, if that's the only path, we can claim conditional independence. So yes. Here's another example. In this one, sometimes there's more than one path between nodes. The story is, it could be raining, which could cause traffic, and which could also, independently, cause the roof to drip or not. And then you could be sad either because the roof is dripping or because there is traffic. OK, questions? Is T independent of D? Let's have you think about this for about 30 seconds. Talk with your neighbor and see what you come up with. Who thinks we can claim independence of T and D for this Bayes net? Not a lot of you. That's great. We can't. Why not? T and D. To check if we can claim independence, we have to check all paths. And if we find a path that is active, then we cannot claim independence. Well, the path along the top here has a single triplet. That triplet is an active triplet. So we have a path of influence. So we cannot claim independence. You might wonder, how about this bottom path over here? Well, that one is inactive. So we have a situation where there are two types of paths, active and inactive. But once there is a single active path, it voids our ability to claim independence. So we can't claim it. How about once we condition on R? Well, now there is all of a sudden independence. Because the bottom path was already an inactive triplet. And now the top path has gotten blocked too. Also inactive. Both paths inactive, so we have independence. How about if now we also condition on S? So we condition on R and S. What happens? Well, conditioning on S will activate the path along the bottom. Once that is activated, there is an active path. And we cannot claim independence.
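As a sanity check, running the guaranteed_independent sketch from the break on these two networks reproduces the conclusions reached by hand (the edge lists below are assumed reconstructions of the slides, with T prime written as T2):

net1 = [('L', 'R'), ('R', 'T'), ('B', 'T'), ('T', 'T2')]
print(guaranteed_independent('R', 'B', set(), net1))       # True
print(guaranteed_independent('R', 'B', {'T'}, net1))       # False
print(guaranteed_independent('R', 'B', {'T2'}, net1))      # False
print(guaranteed_independent('L', 'T2', {'T'}, net1))      # True
print(guaranteed_independent('L', 'B', {'R', 'T'}, net1))  # True

net2 = [('R', 'T'), ('R', 'D'), ('T', 'S'), ('D', 'S')]
print(guaranteed_independent('T', 'D', set(), net2))       # False
print(guaranteed_independent('T', 'D', {'R'}, net2))       # True
print(guaranteed_independent('T', 'D', {'R', 'S'}, net2))  # False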
OK, now let's go back to the question that came right after the break. Why and how might we investigate all independencies, and what do we do with this? So given a Bayes net structure, we now have an algorithm called d-separation, which we stepped through in our heads and on the slides. But in principle, you can code it in a computer program. You could write a computer program that, given a Bayes net structure, finds all the paths in the graph, along each path checks the triples and whether or not each triple is active, and then decides whether or not there exists an active path. And if there is an active path, then it says, cannot claim anything. If there is no active path, it says independence. So we could actually write that program now. What can we do with this? What it actually does-- the list that we can generate this way, if we try to look at all of them, is a list of assumptions we make by choosing a specific Bayes net structure. So let's try to do this. Here are a few Bayes net structures. Let's compute all the independencies that we can claim for each of those Bayes net structures. How about the first one? Y, a common cause of X and Z. What independencies are present there? Only one. It's X independent of Z given Y. How about the next one? Causal chain X to Y to Z. What are the independencies? Also only one. In this case it's X independent of Z given Y. Actually, the same one as before. They have the same independence. And it's the only independence you can claim based on this structure. How about the next one? X and Z point into Y. What are the independencies that we know? Well, when there's no evidence, X and Z are independent. And that's it. How about the last one? Where we have Z to Y, and then Y and Z together to X. What independencies do we have here? None. Why none? Because every node has all the previous nodes in the ordering as its parents. That means we have an equation for this Bayes net that matches with the chain rule. And we made no assumptions. So here we have the empty set. So one interesting thing already comes up here. If you look at these two Bayes nets, they actually end up with the same set of independencies. What that means is that if you want to represent a distribution over three variables, X, Y, and Z, that represents something in the real world, whether you use the first one or the second one, you'll be equally capable of capturing that distribution. If somebody represents it with the first one, you can turn that into a new Bayes net structured like the second one that represents the exact same distribution. Because the assumptions you make by using the structure are the same. And so they can represent the same distributions. On the other hand, if somebody gives you a distribution represented by the bottom Bayes net over here, that Bayes net makes no assumptions. So it's very unlikely that if you get a distribution represented by this Bayes net here, you can then represent it by any of the other Bayes nets. Because the other Bayes nets make assumptions, independence assumptions. And if they're not holding true in the distribution you're given, then you cannot capture that distribution. Some might wonder, why do we even have both versions if they can represent the same distributions anyway? Sometimes it's easier to specify one than the other. If the way the world works is really something along the lines of Y tends to cause X and Z, then the probabilities in the table of this Bayes net will be much, much easier to come up with than if you represent it with some chain Bayes net. Similarly, if the effects really chain, then it's going to be much easier to use the second Bayes net to represent that probability distribution than the first one. You still could use the first one. They are interchangeable, mathematically speaking. In terms of practical ability to come up with the numbers in the CPTs, it might still matter which one you pick.
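And a sketch of what that program's outer loop might look like, reusing guaranteed_independent from earlier (illustrative code: it simply tries every pair of nodes with every subset of the remaining nodes as evidence). The printed lists match the independencies just read off for the four structures.

from itertools import combinations

def all_independencies(nodes, edges):
    found = []
    for x, y in combinations(nodes, 2):
        rest = [n for n in nodes if n not in (x, y)]
        for k in range(len(rest) + 1):
            for ev in combinations(rest, k):
                if guaranteed_independent(x, y, set(ev), edges):
                    found.append((x, y, set(ev)))
    return found

nodes = ['X', 'Y', 'Z']
print(all_independencies(nodes, [('Y', 'X'), ('Y', 'Z')]))  # common cause: X ind. of Z given Y
print(all_independencies(nodes, [('X', 'Y'), ('Y', 'Z')]))  # chain: X ind. of Z given Y
print(all_independencies(nodes, [('X', 'Y'), ('Z', 'Y')]))  # v-structure: X ind. of Z
print(all_independencies(nodes, [('Z', 'Y'), ('Z', 'X'), ('Y', 'X')]))  # fully connected: none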
So taking a further step back, let's look at distributions over three variables. And you can think of a distribution as, let's say, a point in the space here. So in this space, any point corresponds to a distribution. Now we can say, if we look at distributions that can be represented by this Bayes net over here-- X, Y, and Z, zero connections-- then all independencies you could possibly write down between three variables all hold true. You can represent only a very small set of distributions with this kind of Bayes net. And that's this kind of small, green thing in the middle here, where a different point in that set corresponds to a different distribution. Like maybe one distribution has probability one half/one half for each of X, Y, and Z. Another one might be a biased coin flip for X, maybe 0.9, 0.1, and Y and Z still have 0.5 and 0.5. Yet another one could be X 0.9, 0.1. Y 0.8, 0.2. Z 0.2, 0.8. So any such choices will be representable by this Bayes net here and live in that small cell. Now if you take a different Bayes net structure, one shown here, what do you get for any one of those three? You get the same conditional independence assumption. Only one, that one-- X independent of Z given Y. That is a lot fewer assumptions than you made over here. So it's a larger set of distributions that you can represent if you're only forced to make this one assumption, rather than being forced to make a lot of assumptions. And the three structures can represent the same distributions. Any distribution represented by the first one, you can re-formulate as represented by the second one or the third one. It's interchangeable from a mathematical point of view, even if from a practical point of view, you often will still prefer one structure over another one. Here, how about these Bayes nets? These are all Bayes nets that make zero assumptions. So if you use any of these Bayes net structures, you're not forced to make any assumptions. And you can represent any distribution over the three variables. How come there are six of them? It corresponds to the six possible ways of ordering those variables. It's like the chain rule. We have three variables. So we first have three choices of which one goes first, then two choices left for who goes next. So 3 times 2, six choices of ordering. And each of these corresponds to one of the six orderings in the chain rule and in constructing the Bayes net. Again, if somebody gives you a distribution, for example, represented by this Bayes net over here, and you wonder, can I instead use this other Bayes net? The answer is yes. They are equivalent. These structures make the same assumptions, namely no assumptions. But if somebody gives you a Bayes net like this and then says, can you please give it to me in that form factor? Usually, that's not going to work. There might be some coincidence where some very specific choice of the conditional probability tables makes it still work out-- that is, there was a hidden independence that was just there because of the numbers. But typically it won't work out. Because here you'd need the assumption built in to use any of these structures. And so going back to the earlier question, this is what you think about when you build models. You think about, what assumptions am I making? What is the entire set of assumptions I'm making when I choose a Bayes net structure? And also, how can I structure my Bayes net in a way that makes it easier to come up with conditional probability tables for it? And so even if you say, this is the only assumption I'm going to make, you might still say, well, among the choices I have available, maybe the first one is the one that makes it easier to think about what is really going on here. Any questions about this? Yes.
STUDENT: We don't see any models that are shaped like a V. PIETER ABBEEL: Oh, OK. So the question is, how about a model maybe that looks like this? Y, X, Z. So what are the independencies here? Just X independent of Z. Nothing else. So maybe if I had purple or something, we could have another set of models. What would happen now, if we want to draw this on here? This one here, purple, makes fewer assumptions than the green one. So everything that lives in the green set, purple can handle. That Bayes net can do it. Now, if it really lives in the green set, it would be a very wasteful choice of Bayes net structure, to use this one with two edges in it if you could have gotten away with zero edges. But you can do it. You can represent-- so everything in green can go in and will be subsumed in purple. How about everything in red? Well, this is kind of a different assumption here. These assumptions are not directly comparable. X independent of Z versus X independent of Z given Y. So what's going to happen here is that you're going to have a purple set that-- let me just color it in. It will have all the green included. Not this thing here. So all the green it'll have. And then, just like the red one, it encompasses the green, but not everything. Because it still makes an assumption. There would be something similar, but we wouldn't want it to overlap with the red one. So it's hard to draw. But let's say we stopped the red one over here and over here. Then the purple one could come out of here and maybe run here. Or something like that. And there will be others. Because you could reorder the variables. Instead of Y at the bottom, you could put Z at the bottom or X at the bottom. And same thing for these red ones. You could put not Y in the middle, but X or Z in the middle. And then you'll have similarly other things coming out. A little hard to draw, but it will have the same kind of effect, where they will capture the green. But they will have no other overlap with the other ones. And then the blue one will always have them all included. Because the blue one makes no assumptions. So it can capture everything any Bayes net with fewer edges can represent. Essentially, whenever you remove an edge, you make an assumption. When you have all the edges, you've made no assumptions. Any other questions about this? So, quick summary. Bayes nets compactly encode joint distributions. And the reason we care about that is because we want to reason about large numbers of random variables-- hundreds, maybe thousands or millions of random variables-- to do complicated things. And encoding the joint distribution directly is not going to be practical. By using a Bayes net we introduce some guaranteed independencies that we can read off directly from the graph structure. The algorithm we saw today, d-separation, gives us a way to methodically check independencies. If you then run it for all sets of variables, you get the full listing of all independencies that characterize what you assume when using a specific Bayes net structure. Last bullet point here. This one's a little subtle. But it is possible to have independencies that your d-separation algorithm does not find. What I mean by that is that d-separation will only find the independencies that are true no matter what numbers you put in the conditional probability tables. That's what it does. But if you put in very special numbers in the tables, there might be additional independencies that you cannot read off from your graph structure.
Such as if you put one half everywhere. Then there will be complete independence. And you won't be able to read it off from your graph structure. Because the graph structure doesn't tell you there is one half everywhere, it just shows you the structure of the Bayes net. Where does that put us? Last lecture we covered representation. Now we covered better understanding what assumptions we make when using this representation. Next lecture will be probabilistic inference, exact. And then the lecture after that will be-- well, to do it exactly might still be too expensive in some scenarios, so we might need something approximate, through sampling. And then after that, we'll start looking at learning the Bayes nets from data. OK, that's it for today. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181127_Robotics_Language_and_Vision.txt | PROFESSOR: Hi, everyone. Welcome to lecture, the 24th lecture of 188, the one before last. Let's take a look at what's left. So we have the final contest, which is due tonight. In the final contest, you get to design an agent that plays together with another agent to try to collect food pellets while not getting eaten by ghosts. And there's a few staff agents to compete with, as well as a leader board, against other students. So submissions for that, your last chance to submit, are tonight at midnight. And on Thursday in lecture, we'll discuss the results. What else is left? I think there is a project due next week. There is still a section this week. And I think that homework is all wrapped up, but you would still have a self-assessment of your last homework that will be due next week. And then there's a final exam the week after that. Any questions about logistics? STUDENT: [INAUDIBLE] PROFESSOR: You will be graded only on your last submission. So yeah, don't ruin it in your last submission. Any other questions? OK, let's dive into technical materials for today. So today's lecture, as well as Thursday's lecture, will be mostly on advanced applications. The idea behind these two lectures is to look at advanced applications where we have covered a good amount of the material and the ideas behind those applications-- not necessarily everything, but enough to get an idea of how these systems might work. We will not quiz you on these application lectures, on the final exam, or anything like that. It's meant to give you more perspective rather than extra study materials for the final. So far, we've looked at foundational methods for search, for acting in adversarial environments, for learning to do new things, and for dealing with uncertainty and noisy information. We've seen it all in the context of Pac-Man trying to solve a wide range of problems. Let's take a look at some counterparts in real world applications. So search is very prevalent in language processing, and we'll cover some of that on Thursday. Adversarial games. We'll look a bit more at AlphaGo, arguably the most prominent result in gameplay in the past few years. Learning to acquire behaviors. We'll look at some real robot behaviors in flight and in legged locomotion. And dealing with uncertainty. We'll look a bit at the current state of self-driving cars. So AlphaGo. Today's state-of-the-art in Go is that there are computer players better than the best human players. But actually, if you went back to March 2016, that was not the case yet. There's a headline from the first week of March 2016, a little over two years ago. "Google is trying to make artificial intelligence history and it could happen this week." This was alluding to the match AlphaGo was going to play against Lee Sedol later that week. What happened is AlphaGo actually won the majority of those matches and, in some sense, became the new world champion. And given what it did, we had to update this graph that you saw at the very beginning of the course, where we already had checkers fully solved, which means we understand the outcome under optimal play. Chess, computers play beyond human level, and Go, now also the case. So how do you make an AI for Go? Let's go back to what we were looking at in the lecture on games, MiniMax. MiniMax is about solving games in adversarial environments. You have your own player, but then there's the opponent, who plays against you.
And you reason about what you will do, what they can do, what you can do-- all possible ways the game can play out. For a game like Tic-Tac-Toe, you will find out that you can force a draw, and that means fully solving the game. But for a game like Go, this is actually pretty hard to do, and it's even much harder than chess. And why? Let's take a look at chess. This is an animation made by DeepMind, showcasing the branching factor in chess. It's large. It's much bigger than Tic-Tac-Toe. You're not going to easily compute all possible scenarios in chess. Hasn't been done yet. But it's still somewhat reasonable. For Go, here's what it looks like. The branching factor is much larger. It's a 19 by 19 board, so there are 19 times 19 positions to choose from in the first move. And then one less, of course, the next move. But the branching factor is enormous. So if you tried to run an exhaustive search through this kind of game tree, it's not going to work. We also saw alpha beta pruning. And in the ideal expansion scenario, it can reduce the size of the tree by square-rooting it, as if the tree were half as deep. Still too much. You can't work through this, even if you had the perfect ordering in alpha beta pruning. So what else can we do? What we saw is that maybe you can decide not to search as deeply by having an evaluation function. Now, evaluation functions can be difficult to design, and what's special about AlphaGo is that the evaluation function was learned. A deep neural network was trained to provide a good evaluation function at various stages of the game. And so when you think about your move, maybe you only have to think two deep instead of all the way till the end of the game. Because after two deep, you look at the evaluation function and it might be good enough. We know neural networks are universal function approximators. In principle, they could represent the exact evaluation function. Of course, that would require a lot of data, and, in fact, data of optimal players, which we don't have. But maybe if you have enough data from pretty good players, you can train a neural network to evaluate the value of a position. Let's say a score between 0 and 1, corresponding, let's say, to a win probability. But even then, because the branching factor is so large, the evaluation function-- sure, if you want to look one step ahead, no problem. But if you want to look a little more than one step ahead, the large branching factor is still a bit of an issue. But many of the moves are not that useful. And so the question you can ask is, can we learn another neural network that can tell us how to reduce the set of moves to consider when we do our depth-limited search? And so a policy network can be trained, which assigns probabilities to all possible moves. And then you might only consider the moves with high probability when running your search, or you might just sample from that distribution to run your search. So AlphaGo learned two networks: a value network, which learns the evaluation function, and a policy network, which reduces the branching factor. So let's look at the whole pipeline. For AlphaGo, there was a bunch of data collected from human play. Then supervised learning was run, so a deep neural network was trained to predict who will win from a certain situation. So we've seen examples of predicting what digit you're looking at. This would now be predicting who wins when this game is over.
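As a cartoon of how the two networks fit into the search idea described here (a sketch for intuition only: the real AlphaGo uses Monte Carlo tree search, and value_net, policy_net, and the game-state interface below are assumed placeholders, not DeepMind's API):

TOP_K = 5   # only expand the few moves the policy network likes

def search(state, depth, value_net, policy_net):
    # Depth-limited adversarial search: the policy network prunes the
    # branching factor, and the value network acts as the evaluation
    # function at the cutoff. Values are win probabilities in [0, 1]
    # for the player to move, so negamax-style: mine = 1 - opponent's.
    if state.is_terminal():
        return state.utility()     # assumed: 1 if the player to move won, else 0
    if depth == 0:
        return value_net(state)    # learned evaluation function
    moves = sorted(policy_net(state), key=lambda mp: -mp[1])[:TOP_K]
    return max(1 - search(state.play(m), depth - 1, value_net, policy_net)
               for m, _ in moves)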
There are two things being learned from that. The first thing being learned is the policy network, which is deciding which moves somebody makes. Once you have that, you can use that policy network to play against yourself, and you can keep improving your policy from playing against yourself. This gives you a lot of data. All that data can then be used to learn who is likely to win from a given situation, which is the value network. That value network, also a big neural network, can then be used with your lookahead. And so at test time-- that is, when you're actually playing with these networks-- you will sample from your policy network the likely branches you might go down and your opponent might go down. And then you'll cut it off at a certain depth, and use the value network there to propagate the value of [INAUDIBLE] One little additional thing that is pretty interesting about Go, and that a lot of the previous Go systems built upon pretty heavily, and AlphaGo built upon, too, for supervision, is that Monte Carlo rollouts end up being reasonably good at predicting who's going to win. What do I mean with that? Let's say you're in a given board situation, and from that situation, you just play randomly. If you do random play, it is likely that whoever is in the better position to win will also win in the random playouts. It's not guaranteed, of course, but they'll typically have a higher win probability. So that means you can self-supervise with pretty poor play. In fact, you can self-supervise with random play to learn who is likely to win from a given situation.
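A sketch of that rollout idea (assumed code with a generic game-state interface, legal_moves(), play(), is_terminal(), winner(); nothing here is specific to AlphaGo):

import random

def rollout_win_probability(state, player, n_rollouts=1000):
    # Estimate P(player wins from state) by finishing the game with
    # uniformly random moves for both sides, many times over.
    wins = 0
    for _ in range(n_rollouts):
        s = state
        while not s.is_terminal():
            s = s.play(random.choice(s.legal_moves()))
        wins += (s.winner() == player)
    return wins / n_rollouts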
These things combined led to mastering the game of Go with deep neural networks and tree search, bringing together two topics we've covered in this class. A year later, just about one year ago, another paper came out of DeepMind. This is October 2017. "Mastering the game of Go without human knowledge." This one is called AlphaGo Zero. And what they did there is they skipped the training on human play. So when we go back to the training pipeline, instead of training on human expert positions, from the beginning, it was just playing itself. And since we know that random rollouts tend to favor whoever is in a better position, from the beginning you already get some signal about what the right or wrong moves might be, even if your policy is still pretty poor. And then you can do this whole process from scratch without any human input in terms of what are good or bad positions. And in fact, AlphaGo Zero was then generalized to Alpha Zero, which was used to also learn to play chess and, I believe, Shogi, or some other board game that's strategically similar to chess, but a little more complicated. So this shows that the combination of what we've covered at the very beginning, in terms of MiniMax, alpha beta pruning, and so forth, with deep neural networks to help guide the sampling process through the tree and to guide what the valuations are when you cut off the search, can get you really good performance in games. Any questions about this? We'll cover a sequence of different applications one after the other, and this is the part for Go. So quick question. What happens when there is no human input? It's trained by just playing against itself. And so when you play against yourself, initially, you're playing pretty poorly, of course, because you don't know how to play yet. It's a random neural network, or a network that's randomly initialized, making decisions. But what happens is that half the time you win, half the time you lose. And so it turns out that, in terms of reinforcement learning problems, problems you can phrase as playing against a version of yourself are much easier to learn than most other things. Because half the time you win, half the time you lose. That maximizes the amount of signal you get from the data you collect. Oh, the final level achieved is beyond human level, way beyond. So there are some Elo scales they look at for Go. And AlphaGo Zero, which learns from just playing itself, gets to a level beyond where AlphaGo was. Strongest player in existence so far. And the longer it keeps playing itself, the better it keeps getting. There are often a few tricks you need to take into account when you do this self-play kind of thing, which is, you don't want to just play exactly yourself, necessarily, because you might overfit to your exact current way of playing. So usually, you might keep around some past versions of yourself and play against a set of past players, and make sure you're good against all of them rather than just good against whatever your current strategy is. And sometimes the human knowledge put into it is looking at past games of humans, and training a value network and a policy network to predict what happened in there. That gives you a better initialization of the neural network than random initialization. And so the question you're asking, effectively, is, OK, is it possible that that better initialization maybe steers you in the wrong direction, and maybe you're going to get stuck in a local optimum that is a lot like humans? But maybe if you learn from scratch, you can find something even way better? I don't know the answer to that. But empirically, it seems that learning from scratch reaches levels beyond the levels reached with the previous methods. But yeah, who knows? There is no final answer yet. STUDENT: Is there any way of knowing if Go [INAUDIBLE] optimal search, or could it be at a local optimum [? moment, ?] another [INAUDIBLE] PROFESSOR: There is something called fictitious play. In fictitious play, you kind of play against yourself in a slightly more complicated setting than this, so I believe it includes this setting. The result is, by playing yourself, you're guaranteed to reach an equilibrium. But it's a subtle way of reaching an equilibrium. Actually, what reaches the equilibrium is the average version of all your past selves. So rather than your actual current network being the equilibrium solution, it's the average of all your past selves. Then there's something called neural fictitious play, which is where you train a neural network to match up with the average version of all your past selves. That way, you can do it more efficiently-- otherwise you need to keep around all your past selves, and that's a lot to keep around, and not very convenient to use, especially if you need, let's say, thousands or millions of updates. But if you train a network to match up with the average of your past selves, then you effectively get the same thing, and that will reach an equilibrium, which tends to be local. It doesn't need to be globally optimal. Yes? STUDENT: [INAUDIBLE] PROFESSOR: So AlphaGo Zero-- well, let me maybe see if I can pull this up easily, because I have a slide on that, just not in this deck, and then you can see it. It was still climbing, but let me see if I can pull something up that might showcase it. OK, so here's AlphaGo Zero. The cursor's somewhere here, yep.
Yes? STUDENT: [INAUDIBLE] PROFESSOR: So AlphaGo Zero-- well, let me maybe see if I can pull this up easily, because I have a slide on that, just not in this deck, and then you can see it. It was still climbing, but let me see if I can pull something up that might showcase it. OK, so here's AlphaGo Zero. The cursor's somewhere here, yep. So AlphaGo Zero is compared with AlphaGo Lee, which is the version that played Lee Sedol, and then AlphaGo Master. So what happened is there was AlphaGo that played Lee Sedol. Then there was AlphaGo Master, which was released for online play, and was pretty much, essentially, beating everyone in somewhat faster games that are played online. And then there was AlphaGo Zero, which was more recent. So if you look at Elo rating on the vertical axis and number of days of training on the horizontal axis, this is what happens. AlphaGo Zero has no prior knowledge. It actually starts at a pretty negative Elo rating; it loses all the time initially. It's playing itself, but then gets tested against a set of players to calibrate its Elo rating. And the green line is where AlphaGo Lee was. At the blue dotted line, after 21 days, it goes past where AlphaGo Master was, which was an improved version of AlphaGo Lee. And then it was still creeping up after 40 days. Now, I think what you're asking is-- I mean, there's some sense that, at some point, if you keep going, you will be limited in that you will, in some sense, solve the game. Go, even though it's a massive game, in principle has a solution. And so in principle, if you have training working out perfectly, you reach a point where your neural network essentially knows, from every situation, who's guaranteed to win or whether it's guaranteed to be a draw, and then knows how to play that. I don't think this would be getting to that level yet. But at some point, you could reach that level. Once you reach that level, essentially, there's no further to go, because you've solved the game. And I don't know what Elo rating would correspond to that. Any other questions about Go? Yes? STUDENT: [INAUDIBLE] PROFESSOR: Well, that's a good question. So the question was, do we think we can solve the game with enough compute power? Now, with infinite compute power, for sure. That's a given; then you can definitely solve it. With reasonable compute power that traverses the whole tree, even with alpha-beta pruning, I don't think that'll happen anytime soon. It's a really big tree. It's essentially 19 times 19 options for the first move, times 19 times 19 minus 1, times 19 times 19 minus 2, and so forth, and that adds up to a very large number. In fact, before deep learning became so successful, when people were speculating about how long it would take to get human-level performance on the game of Go, a lot of people were thinking in that direction and were thinking, OK, how long will it take us before we can really traverse a very large part of that tree? And that's maybe when we reach human-level play. And when people made guesses based on that, they would say, well, maybe 100 years from now, maybe 50 years from now, maybe 20. But pretty much no one would say less than 20, and most people would say 50 or more. It might be even longer if you do a more brute-force-style approach-- not 100% brute force, but a more brute-force-style approach where you don't have good value functions and you don't have good policy functions. That was expected to take decades before we reached compute levels that can do that.
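To make "a very large number" concrete, here is a quick back-of-the-envelope calculation of the number of move sequences in the naive Go game tree, 361 times 360 times 359 and so on down to 1:

    import math

    # log10(361!) gives the order of magnitude of the move-sequence count.
    log10_sequences = sum(math.log10(k) for k in range(1, 362))
    print(round(log10_sequences))  # about 768, i.e., roughly 10^768 sequences

For comparison, the number of atoms in the observable universe is usually estimated at around 10^80, so exhaustive traversal is hopeless.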
Yes? STUDENT: [INAUDIBLE] PROFESSOR: That's a good question. What is the advantage of not using the human knowledge? So one possible advantage ties into the earlier question. It could be that, because you only find local optima when training neural networks, using human knowledge puts you in some kind of basin of attraction for a local optimum that may not be as good as another one that might be out there. I don't know if that would be the case or not, but that's a possibility. It also depends on how much randomness you have in your exploration. If you have enough randomness, then initialization will have much less effect than if you have limited randomness in your exploration. Generally, if you want to solve a problem, usually you want to bring to bear everything you have. So you'll bring to bear any data from humans, any human demonstrations you can get-- we'll see some examples in a few other problem settings soon-- and then reinforcement learning on top of that to further improve. That's kind of the standard way to solve a problem. From a research point of view, it's very interesting to see where you can get from scratch. So from a research point of view, it's a very clear question: OK, what can we do if we don't have any prior knowledge? How good can this system get? Is it even possible to learn good Go play by just playing against yourself? That's something people did not have an answer to until this experiment was run. OK, switching gears to helicopters. Here's a motivating example. Imagine you want your helicopter to do something like this. It's called a tick tock, after how old-school clocks would have something that swings back and forth. This is a very hard maneuver to do. Only extremely capable pilots can do this. I don't think they do it with helicopters you sit in; this is an RC helicopter. How do you get a helicopter to do this autonomously? And by the way, this is done autonomously here, but how do we get to something like this? Well, what does it mean to fly a helicopter? What are the challenges? There are two key challenges. One is tracking where the helicopter is at any given time, because if you don't know where it is, you'll not be able to control it very reliably. That ties into things we saw with HMMs. And then the other thing is to decide what to do, what controls to send to the helicopter. So what do you get to send to the helicopter? Typically, you have a remote control, which has two joysticks. You send controls from that to the helicopter. The helicopter might listen to those, but you need to decide what to send, or you might have a computer that has some kind of calculation that feeds into the back of this remote control to send things out. Then the helicopter often has some sensing on board, maybe an inertial measurement unit, which measures the acceleration of the helicopter. It's a three-axis measurement, x, y, z: how much is the helicopter accelerating relative to gravity? Free fall is measured as zero. Then usually there is a three-axis gyroscope, which measures, around three axes, the angular rate of your helicopter. And often there's a magnetic compass, which measures which way the helicopter is facing relative to the direction of the Earth's magnetic field. So those are all measurements you get on board. Actually, you might have a GPS, but it might not be that reliable when you fly such aggressive maneuvers, and GPS tends to be somewhat imprecise. It's hard to stabilize a helicopter when you only know where you are up to a couple of meters. It's much easier if you have more precise measurements. So, for example, we might put cameras on the ground that can position the helicopter reliably. So how do we track the helicopter? None of these measurements are super precise. So we set up a hidden Markov model, where we consider the state unknown.
The state is x, y, z, then the three angles, the velocity, and the angular rates. And the measurements are the measurements I just described. So by running inference over time through this HMM's forward algorithm, we can, at any given time, know with some reasonable level of accuracy what the state of this helicopter is. We also need the transition model for that, which goes from S0 to S1. So there will be some function that says, given the current state and action, what's the next state? But only up to some noise; it will not be perfect. So how about, then, solving the MDP itself? What are our controls? We have four control channels, two in each joystick. Let's start with the collective. The collective is the action for the main rotor's collective pitch. It's the average angle of attack as the blade goes through the air, which modulates how much vertical thrust you generate. There are the cyclic controls, longitudinal and lateral, which determine the difference in blade angle from front to back and from left to right as the blade goes around. That way, you can generate a torque that allows your helicopter to roll or pitch, based on how much differential thrust you have from front to back and left to right. Then the tail rotor has a variable pitch also, and that pitch allows you to modulate how much thrust you get from the tail rotor. So that's how you control a helicopter. You cannot directly ask it to fly forward or sideways. If you want to fly forward, you've got to rotate nose-down, and then you can accelerate vertically in the helicopter frame. But vertical in the helicopter frame will now be partially forward and partially up; the up part compensates for gravity, and the forward part is what lets you fly forward. So you can build a model for this by collecting data from your helicopter, let's say, and learning a bunch of parameters that predict the next state given the current state and action. Can we solve the MDP at this point? Actually, we need one more thing. We need a reward function that describes what we want the helicopter to do. For hover, we saw this. You can have a reward based on distance to a target location, x star, y star, z star, and maybe a penalty for nonzero velocity, since you want it to hold still. Actually, we saw this example in the RL lecture of a helicopter reliably hovering, which is a very hard problem, but reliably solved through reinforcement learning-- and, in fact, reliably solved for flying upside down. Upside down is harder. How do you keep yourself upside down? Well, the main rotor can have a negative angle of attack, and a negative angle of attack flying the normal way brings you lower, but if you're flying upside down, it keeps you up in the air. And it's actually more efficient, because when you pull in air, you accelerate it. When you are flying normally, that accelerated air goes over the helicopter body, pulling it down, whereas if you're upside down, that accelerated air does not drag your helicopter body down. Now, what if you want to do something more complicated, though, like tick tocks and so forth? What are you going to do? Or maybe you want to flip. In the video we watched, the helicopter had already been flipped. The pilot flipped the helicopter and toggled autonomous control for inverted flight, and that's what we saw in the video: autonomous control in inverted flight. We did not see autonomous flipping. Maybe you could say, well, let's write down a reward function by hand, because we get to choose what the helicopter should do.
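A minimal sketch of the hover reward just described: penalize squared distance to the target location, and penalize nonzero velocity so the helicopter holds still. The weights here are made-up illustrative values:

    import numpy as np

    def hover_reward(state, target, w_pos=1.0, w_vel=0.1):
        # state: [x, y, z, vx, vy, vz]; target: [x*, y*, z*]
        pos, vel = state[:3], state[3:6]
        position_error = np.sum((pos - target) ** 2)  # stay at the target
        velocity_penalty = np.sum(vel ** 2)           # hold still
        return -(w_pos * position_error + w_vel * velocity_penalty)

Writing down an analogous reward by hand for a flip is exactly what turns out to be hard.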
Let's just write down maybe a path the helicopter should follow. And think about: what should a helicopter do when flipping? Well, it should be spinning. But if we just pay attention to spinning, it's going to be dropping, so it should also stay in place. But staying in place is actually not possible, because when you are spinning, you're not able to generate direct vertical thrust; if you generate that vertical thrust in your helicopter frame, you move forward or backward in some way. So you've got to be a little careful about what you do there. So then maybe you watch a human fly and say, well, what I've seen them do is, when they're horizontal, they give a big push so the helicopter accelerates up. Then they start the half flip. Then it's falling down. They're inverted. They give it another push, accelerating it up, do another half flip, and repeat. So maybe we decide, as a target for this helicopter, to do this kind of maneuver. But exactly what will the details be? It's not easy. Exactly how much do you push up? How fast do you spin? Difficult things to decide. A few years ago, as a PhD student, I tried to design this kind of thing. So we designed a trajectory-- this was me, together with Adam Coates and Morgan Quigley-- and then had reinforcement learning learn a controller to follow the trajectory that we designed. We ran many, many simulations, and it was looking pretty good. And then we ran it on our helicopter. So this is our helicopter, indeed, flipping, which is great. It's moving more than we want it to move-- a lot more than we want it to move. And it went into the trees. So let's think about what happened here. In simulation, this all worked out perfectly. But in the real world, it turned out it didn't work. What happened in the real world is we're trying to follow this path that we designed, where the maximal reward is, and it had learned a control policy that, in simulation, is really good at following this path. But in the real world, this path was not dynamically feasible, because a helicopter cannot fly just any path. Only some paths are possible. And so we had asked it to fly a path that's not flyable. So it starts deviating from that path. As it deviates from that path, what it learned in the simulator becomes less and less relevant to the real world. It learned to control around the path, but now it's on this different path. It doesn't know how to control around that path. It started overcompensating. That's why you saw it making these wild motions, overcompensating. It pushed the controls so hard that the engine died. It applied so much control that the engine just couldn't push it, and it died. At that moment, the blades stopped spinning, or they slowed down. You lose control over your helicopter, more or less, at that point. Then what happened is our human pilot took back control to try to save the helicopter. And believe it or not, he actually saved this helicopter. Incredible. So the human pilot saw the engine died and took control. The blades were still spinning, and as long as you have inertia in the blades, you have a little bit of control. He got the helicopter right-side up with the power still left in the blades, the inertia left in the blades. Then he got into something called autorotation mode, which is where you use the airflow as you're falling-- the airflow going through the blades-- to spin up the blades again, so you get new inertia in your blades. Then it went behind the trees. You couldn't see it anymore. But he knew from past experience. He could predict.
His dynamics model was pretty good, I guess. In 188 language: his dynamics model was just phenomenal. It's behind the trees. He doesn't see it. He knows roughly what's happening. And right before it lands, he changes the collective control to not absorb energy into the blades anymore, but to push it all back out. The kinetic energy that's in the spinning blades now goes into pushing air down to slow down the helicopter and have it land on its feet. It landed a little harder than you want to land, but it landed on its feet, and it could be recovered from that, which was pretty crazy. So clearly, human-level control, at that point, was much better than our autonomous controller. There's the same video. So we started looking at some human pilots. There's a picture of Alan Szabo, considered one of the world's best human pilots. He has a video out there called Sunday at the Lake. And Sunday at the Lake is a video where he is at this lake and he's flying a helicopter in ways that you could not imagine a helicopter could be flown, and it just works, unlike what you saw with our autonomous flight there. We said, well, clearly the issue was that we asked the helicopter to follow a path that's not flyable. So what if we collect paths from a human pilot and then ask the helicopter to fly those paths? But it turns out, when we collect paths from a human pilot, they tend to be noisy. They don't do exactly what they want to do. And also, our tracking of the helicopter is imperfect. So our reconstruction in the computer of those paths is still often not a flyable path. So we said, let's collect many demonstrations. So we collected many demonstrations. And we figured that this set of demonstrations captures the essence of what we want the helicopter to do, but no single one of them is precise enough or is exactly what we want. And just averaging them is not going to work, either, because they're timed differently and they're oriented differently, and averaging is not going to lead to a good path. So how do we, from this, get a specification of what we should be flying? Well, we could learn the trajectory from these as noisy observations. What methodology do we have for that? Hidden Markov models. If we have something we don't know that evolves over time, but we have some noisy measurements of it, we can run an HMM to recover what we actually want. So we want an HMM that captures this. We have the hidden sequence of states, and we have multiple demos, typically five to ten, with only two shown on the slides here. Now, one little catch here is that they're not timed the same. So you can't just hook them all up one by one and expect that this will work out, because they're in different phases of this airshow. So what do we do? We use something called dynamic time warping. What does that do? It can align two trajectories. So let's say you have many demonstrations. Then you have a hidden trajectory, which is just initialized with one of them. Then you run dynamic time warping, which aligns each trajectory with the hidden one-- which was just a guess-- but in the process also aligns all the demonstrations, because they're all aligned with this one reference. After you do that, then you can run the inference in the hidden Markov model, just standard Bayes net inference, which, in this case, is an extended Kalman filter/smoother-- a forward/backward pass, similar to what we covered, but done for continuous variables rather than for discrete variables.
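A minimal sketch of dynamic time warping as used above: the classic dynamic program that aligns two trajectories by letting either one pause while the other advances, charging the state distance at every matched pair:

    import numpy as np

    def dtw_cost(traj_a, traj_b):
        # traj_a: (T_a, d) array of states; traj_b: (T_b, d) array of states.
        # D[i, j] = cost of the best alignment of traj_a[:i] with traj_b[:j].
        T_a, T_b = len(traj_a), len(traj_b)
        D = np.full((T_a + 1, T_b + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, T_a + 1):
            for j in range(1, T_b + 1):
                dist = np.linalg.norm(traj_a[i - 1] - traj_b[j - 1])
                # advance both, pause traj_b, or pause traj_a:
                D[i, j] = dist + min(D[i - 1, j - 1], D[i - 1, j], D[i, j - 1])
        return D[T_a, T_b]  # backtracking through D recovers the alignment itself

In the helicopter pipeline, each demo is aligned this way against the current guess of the hidden trajectory before re-running the smoother.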
You run that. You might get this out. But remember, our alignment was kind of noisy, because we aligned over time based on some random initialization. So now that we have something a little better, let's actually realign each demo to what we found. Let's then re-infer, through probabilistic inference, what the hidden state might be. Keep repeating this until we reach some fixed point, and that will be our target for the helicopter to fly. What does this look like? Here, in white, is the target found through the hidden Markov model inference, and in color are still the demonstrations. The demonstrations are now time-aligned, because that's a side effect of running this inference. What we see here is something that's better than any single one of the demonstrations. Let's take a closer look. In black here is the inferred target trajectory, and in color are the others. The colored ones are not that great, but the black one summarizes the essence into something that we want. OK, so now we're set. We have collected data to learn a dynamics model for the helicopter. We have collected data to infer a target trajectory. We can now define our reward to penalize deviating from the target. And then we can run reinforcement learning in simulation, in this learned simulator, to find a good controller and run it on the real helicopter. Now, it turns out we need to do a little more than that. The controller we learn in simulation is still a little optimistic about really following that path. So while we fly the helicopter, we'll do depth-limited search to improve what we have. As we're flying, we'll run a search over the next 40 time steps to see what the best sequence of controls is. And you might say, why only 40 steps? We control at 20 Hertz, so that's looking two seconds into the future. Really, you want to look further into the future. So what do we do? We have a value function. What we really get from running reinforcement learning in simulation is a value function that evaluates how good each situation is at any given time. And we use that value function to be able to look ahead only two seconds, rather than needing to look ahead much further. That value function tells us, OK, how good is it to end up here? We also have a reward at each time tick. And our search over those two seconds, capped off with the value function, is what results in the control we apply.
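A minimal sketch of this depth-limited lookahead capped off by a value function. The simulate, reward, and value functions are hypothetical stand-ins for the learned dynamics model, the trajectory-tracking reward, and the value function obtained from reinforcement learning:

    def lookahead_control(state, candidate_plans, simulate, reward, value, horizon=40):
        # Score each candidate control sequence over the horizon (40 steps at
        # 20 Hz = 2 seconds), then cap off with the value function estimate.
        best_plan, best_score = None, float("-inf")
        for plan in candidate_plans:  # e.g., perturbations of the nominal controls
            s, score = state, 0.0
            for t in range(horizon):
                s = simulate(s, plan[t])  # learned dynamics model
                score += reward(s)
            score += value(s)  # estimated value of everything beyond the horizon
            if score > best_score:
                best_plan, best_score = plan, score
        return best_plan[0]  # apply only the first control, then replan

This is the same cut-off-and-evaluate idea as in the Go search, just in a continuous control setting.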
So this brings together a bunch of different ideas: search, learning value functions, and learning models and reward functions through inference. Here's what we get. Fully autonomous. Takeoff, flipping over during takeoff. Hover-- so this method can also learn to hover, no problem. Then it goes into forward flight, and it's going to do something called a split S, where you do a half roll and a half loop. It's a way to change direction without burning all your energy to friction. A stall turn is a climb, then a 180, and coming back down-- another way to change direction without burning your energy to friction. A loop-- I think its only use is to show off your control capabilities. If you want to show off more, you can pirouette around at the top. A stall turn again, coming out tail first. Hurricanes are fast backward flying circles. The fastest we flew this helicopter was close to 55 miles per hour, so almost highway speeds. The helicopter's only this big, so that's pretty fast for something of this size. Inverted flight-- we know how that works. Knife edge fall. And now come actually some of the hardest maneuvers. Why are these the hardest maneuvers to execute-- stationary rolls and flips? Why is flying 55 miles an hour not harder than this? Well, when you do something like this, and here the tick tock, the hardest maneuver in this airshow-- the reason this is so hard, when you do things in place, is that you're maneuvering in airflow you just generated yourself, which is very complicated airflow. Compare that with flying 55 miles an hour, where you're just going into new, clean air, and it's much easier to predict what's going to happen. And ending in an inverted hover. So with this methodology, it was possible to fly this helicopter at the level of the best human pilots, assuming you had demonstrations from those pilots to piggyback on to get to this behavior. OK, that's it for helicopters. Any questions about helicopters before I move to legged robots? Yes? STUDENT: [INAUDIBLE] PROFESSOR: So then, after this project, at Berkeley, PhD student Woody Hoburg took charge in seeing how far we can get without human experts, for some simpler things-- not for all the maneuvers here. What Woody did is he essentially set up the helicopter to learn from scratch. He set it up on the ground, and it was supposed to run reinforcement learning from scratch with a little bit of guidance. A little bit of guidance meant that he would be there, shutting it off if it started doing something weird. So essentially, he would shut it off whenever it started tilting itself; it lands on some pretty wide landing gear, so it's more stable. But that was the only human input required. He was able to have it learn to hover reliably with the only human input being: shut it off when it looks like it might start doing something dangerous. We did not push that further to flying those maneuvers. There is some related work. With Woody, Woody was shutting things off. Then recently at OpenAI, there's been some work on robots learning to do backflips, and that was one step further. It wasn't just shutting it off. You would watch your robot try things. And the human input would be not specifying a reward function-- which is very hard to do for things like backflips, just like it was hard here. What they did is they said: the human watches it and says which one is better or worse among a set of attempts. And then from that, it learns a reward function. It learns the reward function, then optimizes against that reward function and does a few more attempts. The human, again, says which one's better or worse. It updates its learned reward function and goes again. And so gradually, over time, it acquires a reward function. So the human input there was guiding what reward function it learns. And we'll see some examples of that here, too. They did it for backflips. I think the other way to get minimal human input, for the helicopter case, is that if you put the helicopter very high in the sky, where it has more time, and if it already has a recovery controller, then you can imagine that, at first, you learn a recovery controller with a little bit of human input. And then, once you have recovery control, you just let it learn on its own and let it switch to recovery mode when needed. And Claire Tomlin's group here at Berkeley has done some work in that direction, where they have a safe controller and a learned controller. The learned controller is learning on its own while the safe controller keeps things in check so the helicopter doesn't crash. Yes?
STUDENT: Imagine the helicopter [INAUDIBLE] PROFESSOR: Yes. STUDENT: [INAUDIBLE] Can that [INAUDIBLE] PROFESSOR: Yeah. Actually, it happens to work the other way around. It uses roughly a fixed amount of fuel per unit time anyway. So it's more that it has less weight to carry as it uses up more fuel. So it actually has more power, until the very end, of course, when it has no fuel left. Yeah, the mass properties change a little bit, but the fuel tank is pretty central, so it's close to the center of mass; it's not like the inertia changes in weird ways. It just gets a little more power. Maybe a question related to that is: how much power does this thing actually have? This helicopter had, in inverted flight, where it has more power, 3 g's. So it can generate three times the acceleration of gravity. Of course, you need to generate one of those just to stay up in the air, but it still had two g's left to do other things with. And regular flight had about 2.5 g's maximum acceleration. OK, let's take a short break here. And after the break, let's do legged locomotion and manipulation. All right, let's restart. Any questions about the first half? Yes? STUDENT: Which method was used for [INAUDIBLE] PROFESSOR: So the question is, which reinforcement learning method was used underneath what I showed you for the helicopter? What we used there is a model-based reinforcement learning method. We learned a dynamics model for a simulator from data that was collected. To learn a good controller in the simulator, we used something called iterative LQR. Iterative LQR essentially looks at a trajectory and tries to find a linear feedback controller at each time slice. So the parameterization of the controller is a separate linear feedback controller for each time slice. And the way you learn that feedback controller for each time slice is by doing a forward pass to see what your current sequence of controllers achieves, and then a backward pass-- which is, essentially, a value iteration pass over that same trajectory-- to find the optimal sequence of feedback controllers. So essentially, value iteration, but in a continuous space. In a continuous space, it's always harder to represent things, and so that's why you make a simplifying assumption: we assume that we are just going to use a sequence of linear controllers. And actually, we approximate the dynamics with linear dynamics locally, because a lot of functions are locally linear. So as long as we stay local enough, linear can be fine. And we use a quadratic reward function. If you have locally linear dynamics and a quadratic reward function, it turns out that's the one continuous state-action space scenario where you can run exact value iteration, even though everything's continuous. And it gives you a sequence of linear feedback controllers. And then we repeat. We run that sequence of linear feedback controllers, and that will follow a new path. Along that new path, we linearize the dynamics-- the dynamics are not really linear, so we approximate them linearly-- do another backward pass along that, and keep repeating this until it's converged. Once we've done that, we have value functions everywhere and we have linear feedback controllers everywhere.
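A minimal sketch of the backward pass at the core of the iterative LQR just described, for the simplest case: fixed linear dynamics x' = Ax + Bu and quadratic cost x'Qx + u'Ru. It is value iteration in a continuous space, and it produces one linear feedback gain per time slice:

    import numpy as np

    def lqr_backward_pass(A, B, Q, R, horizon):
        # The value function stays quadratic, V(x) = x' P x, so the backward
        # pass just updates P and reads off a gain K_t with u_t = K_t x_t.
        P = Q.copy()
        gains = []
        for _ in range(horizon):
            K = -np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A + B @ K)
            gains.append(K)
        return gains[::-1]  # feedback gains ordered from t = 0 to horizon - 1

Iterative LQR wraps this in a loop: roll the controllers forward, re-linearize the dynamics around the new trajectory, and run the backward pass again until convergence.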
If there is no wind, you can actually just run the linear feedback controllers, and it will be fine. But if there are wind gusts that could throw you off, then instead of using the linear feedback controllers, you want to use the value functions, and the two-second lookahead against those value functions, to do the controls. STUDENT: Something like [INAUDIBLE] PROFESSOR: Yeah. Training a unified policy across the entire space might work. At the time-- we did this experiment in 2008-- that was not something that anybody could get to work on anything, so definitely not on such a hard problem. It could be interesting to revisit that now, with the current understanding of how to train these networks. Can it be done with one unified network? Likely it can. It might take some work figuring out exactly how to do it. OK, legged locomotion. Maybe a little bit of perspective. When you think about robotics, it's often easy to also think about what humans do. So if you think about what humans do, you might say, OK, we can run. We can grab stuff. But we cannot fly. And so you might think, OK, well, flying must be the hardest, because we can't even do it. But the truth is that while flying is hard-- and, definitely, helicopter control is not an easy problem-- it's in some sense easier than walking. And the reason it's easier is that if you're flying, you're up in the air, and everything is the same: the air here, the air there. Unless there are some weird airflows, that's the only thing that really changes. Whereas when you're walking, the surface could change all the time. You need to react to that and understand what's happening there. There's a lot more change in your environment. And so walking tends to be harder than flying for robotic control. Here's an example of how hard this can be. This is a video from 2015. In 2015, there was the DARPA Robotics Challenge, which was held in Pomona, just east of Los Angeles. People had two years to work on this. And what did the robot have to do? It had to, essentially, drive a car or walk, but driving the car was recommended. Then get out of the car, walk a little bit, open a door, grab a drill, drill a hole, walk some more, and that was essentially it. So it doesn't sound that complicated. But actually, it turns out it's very complex to get a robot to do that. And here's some footage from the competition. Keep in mind, these are teams that worked on this for two years, typically. It's indicative of how hard it is to do walking with robots. So, a lot of falling here. And you might wonder, why is all this falling happening, even on a flat surface? Often, the robot is thinking it's going to grab something, but then it doesn't grab it. And it's generating a force to pull it or rotate it, but there's no counterforce. So it did not sense correctly what is going on in the world, and so it did not react correctly to the world. Now, I was there for the competition. And it's very funny to watch the video, but it turns out at the competition, everybody was really sad for the robots when a robot fell. Nobody was laughing. It was like, oh no, poor robot. Very different. It was very interesting to see how people really connected with these robots and really felt for them and wanted them to succeed. Now, this is indicative of how hard this can be. And why is this so hard? Why did this not just work? Well, underneath a lot of what was going on in these approaches is that they would build a model of the world. They would have sensors, and those sensors were supposed to build a model of the world.
Based on that model, you could simulate the world and predict what you should do-- not unlike what we did with the helicopter. We learned models of how the helicopter dynamics work, think about what will happen as a function of which actions we take, maybe have a value function, and then take the actions that lead to the good outcome. The thing is, modeling these situations proved even harder than modeling helicopters, because your sensing needs to understand whether or not you're already making contact. Making contact or not-- you can be very close but not have contact-- is a very subtle thing. If you don't have contact, you don't get to apply any forces and don't get any forces back. So it's very hard to do this. Now, what's changed recently, in the past few years, is that through advances in deep learning, it's been possible to better map from raw sensory information to the controls you might want to apply. And so, actually, you've seen this video before. This is from a little later that same year. It's in simulation, but this robot is, over time, learning to control itself to do walking. And ultimately, it learns to walk. Now, of course, never believe it's as easy in the real world as it looks in simulation-- it's definitely always harder in the real world. But what you see here is a direction that enables the system to just learn on its own, to go from sensory inputs-- and the given sensory inputs are just joint angles, joint velocities, and coordinates of the center of mass-- to controlling the robot reliably. And the beauty, of course, with reinforcement learning-- which is behind this, rather than the more model-based approach we saw in the earlier videos-- is that the reinforcement learning algorithm can be reused directly on other robots and can learn to control those other robots. Now, with this in mind, if you look at the amount of data collected, that's still a little tricky. It takes a good amount of data collection before this robot has learned. If it's a physical robot you're trying to train, it gets tricky, because you might break it before it has learned enough. And even if it doesn't break, it could take just unmanageably long to get the amount of data you need. But it shows, in simulation at least, a lot of progress toward learning intelligent locomotion behaviors. Here, the reward function is: the closer your head is to standing head height, the better. So sitting is better than lying on the ground, and standing is even better. So how about other motions? At this point, it is possible, if you run enough simulation, to train up a very wide range of skills. These are results from Jason Peng. He's a student here at Berkeley. And at this point, if you have a motion you want a robot to master, Jason's methodology can pretty much make it happen. So this can work directly for building video games. If you build video games, you want your main character to move in a realistic way. You can have it sequence together motions like these and dynamically simulate how they interact with the world, rather than keyframing every little detail. Same for animated movies. How about real robots? Let's say you want to get this robot across this terrain. Well, if you think about it, there are really two problems here. You could think of it naively, just as low-level control, applying torques at the motors. But then it's a very long-horizon problem, and when you solve it that way, you don't have much perspective on what you're trying to do.
It could make more sense to think of it as: well, I want to find a path across these rocks, and once I find a good path, I want to control the robot to follow that path. So there is a low-level control problem, which is about placing your feet in the right next position, and there is a high-level control problem of: what's a good path to follow? The high-level control problem is actually A* search. If you have a cost function for this terrain-- what would the cost function be? Well, what could it be? Where do you want to be? Where do you want to place your feet? Well, you maybe don't want to be next to a big cliff, because if you're right next to it and you slip a little bit, you'll slide your foot down, and that's not good. Maybe you don't want your feet at very different heights, because then you're tilted, and it might be less stable. Maybe you look at the three feet on the ground and the fourth one that's moving: for the support triangle of the three on the ground, if you project down the center of mass of the robot, it should fall within that support triangle, because then it won't fall over. So there's a bunch of considerations you might have. When we thought about this problem, we came up with 25 features that we thought matter for the reward function-- or the cost function-- when you run the search, or the value iteration, which is more or less equivalent, to find a path across this terrain. But if you choose the trade-off between the features differently, you'll find different paths. So how do we set that trade-off? We tried by hand. Maybe a little more weight on the support triangle margin. Maybe a little more weight on height difference, and so forth. Very hard to do. Instead, what you can do is actually learn the weights on the features. So: reward learning. The way you do reward learning is you demonstrate a path across this terrain. Demonstrating doesn't mean just drawing a line; it actually means choosing a sequence of footsteps that the robot executes on, assuming it does so well. That assumes you have a low-level controller, but that part is well understood: low-level control can be done by just running search in a relatively small space. So you choose the next footstep, next footstep, next footstep. Then you learn a reward function that is such that, if you were to run value iteration-- or, calling it a cost function, if you were to run A* search-- the path you demonstrated is the result of running value iteration or A* search. If you can find that reward function, that reward function is a good explanation of what you want this robot to do, including in new situations, new terrains, where it can then also calculate the reward. We use that reward function on a new terrain and then plan on that new terrain with that reward function. And let's see how well it works. So we'll first watch this without reward learning. Here we said everything gets equal reward; it's just about making progress, just as if you're on flat terrain. So it's going to kind of steadily march forward. Let's see what happens. Pretty good so far, but now it's on the rocky terrain. And one of its feet slipped down. Another one is slipping down. One foot got stuck between the rocks. The low-level planner is trying to find a way out, to make that foot go to the next spot, but that doesn't work. So it just had a bad choice of footsteps, which led to this result.
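A minimal sketch of the reward learning idea described above, in one simple variant: the cost of a footstep is a weighted sum of hand-designed terrain features, and the weights are nudged so the planner's best path moves toward the demonstrated path. The planner and features_of helpers are hypothetical placeholders, and the perceptron-style update is one of several ways to fit such weights:

    import numpy as np

    def footstep_cost(features, weights):
        # features: terrain feature vector for a candidate footstep
        # (height difference, distance to a drop-off, support margin, ...).
        return weights @ features

    def reward_learning_step(weights, demo_path, planner, features_of, lr=0.1):
        # Make the demonstrated path cheaper and the planner's current best
        # path more expensive, then re-plan with the updated weights.
        planned_path = planner(weights)  # e.g., A* search using footstep_cost
        demo_feats = sum(features_of(step) for step in demo_path)
        plan_feats = sum(features_of(step) for step in planned_path)
        return weights + lr * (plan_feats - demo_feats)

When the planner's path matches the demonstration, the two feature sums agree and the update vanishes, which is exactly the condition described above: the demonstrated path should come out of the search.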
What if we use the learned reward function-- the reward function that, if used on the terrains where we demonstrated, would result in our demonstrated paths? Well, let's take a look, with reward learning, at the robot in action. The feet will still slip, but they're placed cleverly, such that a little bit of slippage does not result in the robot getting stuck, and it actually gets across very quickly. Any questions about legged locomotion before I go to autonomous driving? OK, let's switch to driving. So the big recent push in autonomous driving started around 2005. In 2005, DARPA organized a competition. In that competition, you're supposed to have your autonomous car drive a desert race. It's a time-trial-type race. There are no other cars that you have to overtake; you're on your own, and you try to do it as fast as possible, 150 miles off-road. Well, it's on a road, but it's the kind of road that is not like a regular road. So it's pretty hard to distinguish road from non-road, and if you steer off the road, you might lose your car if you go down some kind of ravine or something. So a bunch of teams participated. In fact, there was a race the year before where nobody got more than a few miles. But then they did a second iteration of the race, hoping somebody might get further with their car than just a few miles. Now, this video actually has sound. I'm not sure if sound works here. I'm not hearing anything. Let's see if this can be made to come out. Headphone. It should work. I'm not hearing anything. [VIDEO PLAYBACK] - The first time it's ever been done, autonomous vehicles. PROFESSOR: Does this work? - As dawn breaks over the desert, robots prepare to boldly go where no robot has ever gone before. - And we have Luke from Stanley, ladies and gentlemen, the start of the DARPA Grand Challenge. - After months of tireless effort, there's a lot at stake. - And now Sandstorm. - A vision they all share will now be put to the test. Each one leaves the chute with confidence, a far cry from the first Grand Challenge, where many faltered within sight of the start, and no robot went beyond seven miles. During the first eight miles of the race, Highlander gains two minutes on Stanley. PROFESSOR: That's the rival school. - Behind Stanley, Sandstorm is closing the gap and may pass the Stanford robot. At mile 26, Team DAD's laser comes unbolted from the roof. Behind him, Stanley is rapidly approaching. PROFESSOR: So the front car is the autonomous car. The one behind it is a follow car with human drivers who can stop the front car, just in case something goes wrong. At that point, you're out of the race, though. Once you're stopped, that's it. So you don't want them to have to press Stop. - Five hours after leaving the starting line, Stanley now leads the pack, and just five robots remain on the course. To finish, they must wind through a treacherous mountain pass. A blue dot appears in the distance. After driving six hours and 53 minutes, at an average speed of 19 miles an hour, Stanley is about to become the first vehicle in history to drive 132 miles by itself. [CHEERING] [END PLAYBACK] PROFESSOR: So that was 2005. Not everything went so well. There were other participants doing other things. Here is a bit of the blooper reel. This was a really hard problem at the time, to get this to work, and it was very impressive that, in fact, four cars finished the course. That's a Berkeley entry-- the only motorcycle in the race. Yeah, you would not want to take
a seat in the autonomous cars of 2005. Now, what goes onto those cars? There are a few different things. There's an IMU, like we had on the helicopter, and a lot of computers. A GPS compass: if you have multiple GPS units and they're very precise, you can know which way you're facing. Regular GPS to get position. [INAUDIBLE] Lasers, where you shoot out laser beams, and based on how long it takes them to get back, you know how far away the nearest obstacle is in that direction, assuming it's an obstacle that reflects back to where the laser came from. Cameras, radar, a control screen, a steering motor. For the steering control, usually you would have a high-level planner choosing a path and then a low-level controller that is good at following that path. How do you decide which path to follow? Often, your sensor readings will tell you whether there might be obstacles or not. So your laser would see how far away things are. If there is an obstacle here, it would see that the readings are different and decide it needs to steer around that, hopefully. In an urban environment, there'll be a lot more obstacles, and a lot of readings will be noisy. So it's typical to run something like an HMM to de-noise those readings-- GPS, IMU, laser readings-- into a more reliable estimate of the geometry of the world around you. Raw measurements might give you 12% false positives; with an HMM, you get 0.02% false positives on where there might be obstacles. Cameras are important, too, of course. With a camera, you can often look further ahead. So you might wonder, why do we need cameras if we already have LIDAR? LIDAR sends out a laser beam and measures how long it takes to get back. But usually, it doesn't work beyond 50 meters; you don't get enough signal back. Also, sometimes if you have a very dark object, it might not reflect back. Or if you have a mirror, it will reflect in a way that doesn't come back to you. So there are many reasons to still want cameras. Here, it's specifically about looking further: you want to look far ahead, and a camera will be better at that than a LIDAR. OK, now, how do you avoid running up a tree if you use your cameras? Well, you need labels. Somebody needs to tell you what is road and what is not road. The way you could build vision for a car-- because labeling is expensive-- is you could decide that you drive your car, and wherever you ended up, the LIDAR can tell you what is road and not road, because it's nearby. And you can, after the fact, back-trace what that spot looked like two seconds before that, five seconds before that, in your video stream, and then automatically label it as road or not road. And that way, your camera system can be trained to look further ahead. It's called self-supervision. And self-supervision is a trick that's very widely used to reduce labeling effort. So the camera now knows all the red here is road.
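A minimal sketch of the self-supervision bookkeeping just described. Everything here is a hypothetical placeholder; the point is only the data flow: near-range LIDAR labels get projected back into older camera frames to produce free training examples for a long-range road classifier:

    def harvest_self_supervised_labels(frame_buffer, lidar_patches, project_back):
        # frame_buffer: recent camera frames; lidar_patches: (patch, is_road)
        # pairs the LIDAR has just labeled (it is only reliable up close).
        examples = []
        for patch, is_road in lidar_patches:
            for past_frame in frame_buffer:  # e.g., from 2-5 seconds ago
                pixels = project_back(patch, past_frame)  # where the patch appeared
                if pixels is not None:  # the patch was in view back then
                    examples.append((pixels, is_road))  # a free training label
        return examples  # feed these to a road-vs-not-road classifier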
In urban environments, there's even more need to recognize things, not just road versus not-road. A lot of progress has been made. This is video from 2013. So, after 2005-- 2005 was the desert race, where Stanley won-- 2006 and '07 was the Urban Challenge. After that, secretly, Google started a project on self-driving cars with many people from the winning teams. That project came out a few years later. And here is a video from 2013, the Google self-driving car project. The encoding is a little messed up at the very beginning, but this is a video from the viewpoint of the driver-- though it's an autonomous driver here, the Google car in 2013. Very impressive; it can already do a lot of things. And this is only getting better. This was before deep neural networks were heavily used for this kind of thing, and it's only getting better at recognizing what's in scenes, thanks to deep neural networks. Instead of classifying an image into categories, you classify each pixel, as to what's in each pixel. That way, you get a semantic segmentation of what's in front of your car. Now, that video was from 2013, already very impressive, and it didn't use any of this, as far as I know-- definitely not the latest advances. So what does it tell us? Well, the devil is really in the details, in the long tail of special events that can happen when you're driving. In fact, when you're driving, typically you don't get in an accident. But periodically, some humans get into accidents. And it's because something very special might happen: something unexpected, a lapse of attention, or something really weird, and all of a sudden, you don't react quickly enough. Same for self-driving cars. You can measure progress by just demo videos, which is one way, and it gives you some kind of feel for what's going on; the 2013 video is already very impressive. Another way to measure progress is to see how these cars are doing relative to human drivers. So left and right are the same plots, but the right is on a log scale so you can see more detail. It's the number of events per 1,000 miles driven-- 10 to the negative 5 events per 1,000 miles driven for humans. Red there is human fatalities: so 1 in 100,000 times, when a human drives 1,000 miles, they get into a deadly accident. Then yellow is human injuries, which is about 10 to the negative 3 per 1,000 miles. Then human crashes, which is between 10 to the negative 2 and 10 to the negative 3 per 1,000 miles of human driving. And then in green is the Google slash Waymo disengagements: when the safety driver decides they want to take control because they don't trust the autonomous system, right then, to avoid an accident. And we see that how often that needs to happen is going down, but it's still a bit removed from where humans are at. Where does this data come from? If you test in California, you have to report this data to the DMV. Google's not the only one doing this. Early on, they were the only ones, as far as I know, but there are more companies now. And you can look at them on these plots and see which companies are how far along, in terms of how many disengagements they need per 1,000 miles driven-- still, so far, quite far removed from human accident-rate levels. Another thing people have been pushing, as a consequence of all this, is lower-power neural networks. For example, Kurt Keutzer at Berkeley has a project on this called SqueezeNet, where you say: when I have neural nets making so many decisions, if the networks are gigantic, they use a lot of power, and that's a problem. Let's see what we can do to build smaller networks to make decisions. What else did we not cover yet? Personal robotics. I want to spend a little more than two minutes on that, so let's keep that for Thursday. That's it for today. Bye. [SIDE CONVERSATIONS] |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181101_Machine_Learning_Naive_Bayes.txt | [NO SPEECH] PROFESSOR: OK, let's get started. So the main topic for today is Naive Bayes models, but the real context here is that today will be our first lecture on machine learning. Up until now, we have been assuming that somebody gives us a model. They give us a search problem; there are weights on the edges. There's a game. There are probabilities at the chance nodes. We have a Bayes net, and there's a bunch of conditional probabilities that we use to do computation with. And we've looked at how to use that model to make optimal decisions, whether that's thinking about sequencing actions in the first part of the course, or managing inferences over uncertainty in the second part of the course. And now, we're going to shift gears and look at machine learning, which is about how to acquire a model, how to acquire the parameters of a model, from data and experience. So in the end, we want to build good systems. We want to build accurate systems. Where do you get accurate systems? You get them from good models. And good models come from good data, and we're going to look at that last part now. So what are the kinds of things we could learn? We could learn parameters. These are the individual numbers and other details that determine exactly how our model works. For example, the probabilities that live in each conditional probability table of a Bayes net-- that's an example of parameters. We can learn structure. For example, given a bunch of random variables and a bunch of data showing observations of those random variables, we could learn something about correlations, or maybe even causation, between those variables, and use that to build a Bayes net. We can also learn hidden concepts. We can take data, we can cluster that data. We can look for patterns. Neural nets, which are a very big topic in machine learning now, are, in a lot of ways, all about learning hidden representations and hidden concepts. What we're going to do today is we're going to start with model-based classification, and, as an example of that, we're going to work through some details of how Naive Bayes models work. Then in the next few lectures, we're going to work through a sequence of different takes on machine learning that are going to highlight different subsets of the big ideas on this topic. So today, we're going to talk about classification. We're going to have a couple of running examples. One of them is the spam classifier that pulls out all the emails you don't want from your email. Another one we're going to look at is digit recognition. These are simple classification problems in some ways, right? The inputs are sort of well understood, and the outputs are well understood. But we can already see a lot of the big ideas in machine learning, and you can also see how something like spam classification starts to give you a little bit of a window into how other natural language tasks work. And something like digit recognition will start to give you a window into how other vision tasks work, and we'll see more in-depth examples and more structured examples of these kinds of problems later on when we talk about applications. So here's an example of a spam filter. This is an example of a classification system. Classification is not the only kind of machine learning, but it's probably the biggest.
So classification has an input and an output. And the whole point is to automate the prediction of the output on the basis of the input. So the input might be an email, and the output is the decision: is this spam, or is this a good email, which people who work on spam call ham? So how would this work? Well, first, we need some data. And we'll see today exactly how data kind of goes through the mill and gets turned into a model. But you need data to learn what is a good email and what is a spam email, and so you get a large collection of these. In practice, in the real world, getting the right kind of data is often one of the hardest parts of building and deploying a machine learning system. So where are we going to get a large collection of example emails? Well, we could take some email accounts, get people to agree to let us look at those emails, and have humans hand-label them; that's one way. You could also build a kind of ecosystem that naturally generates this data. For example, a lot of you, in your email account, see spam. You mark that as spam, and congratulations: you've just labeled some training data for a classifier. Somebody has to hand-label this data, and that can be expensive, or it can just fall out of the ecosystem that you have around you. And how easy it is to deploy machine learning systems often depends on how natural it is to collect that data or to discover that data-- how costly, what you have to do in order to make that data reasonable to produce. We're not going to talk a lot about that in this class, but when you actually get into the real world and you're thinking about deploying machine learning systems, data is often the first thing and the biggest thing you worry about. So how are we going to make these decisions? Well, we have a bunch of data like what's shown on the right. I'll read you some of these. These are from an actual corpus-- a labeled collection of data-- that people use. This one is relatively small, and it's an older corpus, but this is a corpus of email that people use to test this problem. So here's one. You guys, as the CS188 classifier, using the parameters in your head, are going to decide spam or ham. "Dear sir, first I must solicit your confidence in this transaction. This is by virtue of its nature as being utterly confidential and top secret." What do you think? Spam? Looks really important. Yeah, it's spam. Here's the next one. "To be removed from future mailings, simply reply to this message and put Remove in the subject. 99 million email addresses for only $99." Spam? I mean, that's like a million email addresses per dollar. Yeah, that's bad. You don't want that one. All right. "OK, I know this is blatantly off-topic, but I'm beginning to go insane. Had an old Dell Dimension XPS sitting in the corner and decided to put it to use. I know it was working pre-being stuck in the corner, but when I plugged it in, hit the power, nothing happened." Ham or spam? Yeah, so this is ham.
Some people just click on an email they wish people hadn't sent them, even if it's like from their mom. How are you going to make this decision that you all just did so effortlessly? What is it about the top two emails that lets you conclude that they're spam, and how could we automate this? Well, machine learning is going to do some amount of work, but something has to power this. There has to be something about those first emails that's going to give you the clues that something's fishy here. And so, what kinds of features can you include? Defining these features is a big part of deploying machine learning systems. So the basic features in a task like spam detection are the words. And so there are going to be some words that are big giveaways that we have spam on our hands. So there's probably something about "only $99" that's probably a sign of spam, but it's not like those words couldn't occur in ham. Or this "utterly confidential and top secret"-- that phrase is probably a bad sign, but it's not like that couldn't occur. And so each of these features is going to be something that you can think of as a noisy indicator. It's going to give you a little bit of confidence-- in the probabilistic sense-- that perhaps there's spam here. But no one of these things is just going to put that to rest. And so what we're going to need to be able to do is build a model that can aggregate that information, combine all of those little bits of weak evidence, manage that uncertainty, and then give us a prediction. So the words present in spam are really important, and that's sort of the basic first cut, because these are documents and you want to put them into categories. But there's a lot else you can look at. For example, maybe any dollar sign followed by some numbers is a bad sign, and that's a feature that abstracts over individual words. And you can either put some effort into coming up with these features, or you can look at even more advanced machine learning techniques that would automate their induction. In this second one, "to be removed from future mailing," one big sign that this is spam is just all caps. That's a bad sign. It's not any individual word being in caps that's a bad sign; it's sort of that aggregate feature, and so that's a feature you might want to add. In practice, for actual spam detection, a lot of the evidence of spam versus ham comes not from the words, or even the content of the email in any way, but rather from its relation to other things in the ecosystem. For example, is the sender of this email in your contacts? Well, if it is, this is probably not spam, even if it's got some sort of marginal contents. Has this email been widely broadcast within a short amount of time? Obviously, your email account can't tell this, but your email account provider can, because they can see the behavior of similar emails appearing in lots of different inboxes all at once. So you're going to collect some number of these features, and then some math is going to happen in the middle, where we're going to build a model and make predictions, and then out pops a classification of either spam or ham.
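A minimal sketch of turning an email into the kinds of features discussed above: bag-of-words indicators plus a couple of hand-designed aggregate features. The particular features and thresholds here are illustrative, not from any real spam filter:

    import re

    def extract_features(email_text):
        # Map an email to a dictionary of feature name -> value.
        features = {}
        for word in re.findall(r"[a-z']+", email_text.lower()):
            features["word:" + word] = 1  # bag-of-words indicators
        # Hand-designed aggregate features:
        features["has_dollar_amount"] = re.search(r"\$\d", email_text) is not None
        letters = [c for c in email_text if c.isalpha()]
        caps = sum(1 for c in letters if c.isupper())
        features["mostly_caps"] = bool(letters) and caps > 0.8 * len(letters)
        return features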
Let's do another example. This example is another classic one. The input's going to be images. Think of them as a matrix, a grid of pixels. They can be black and white, they can be grayscale, whatever. The output is going to be a digit, 0 through 9. What's the setup for this? We're going to have to collect a large number of example images. Each one has to be labeled. This is a seven, this is a two, this is a four. Somebody has to hand-label all this. Or, somebody has to design an ecosystem where the stuff is sort of self-labeling. We want to be able to predict labels of new images that are not the ones we've already seen, OK? So that's actually subtle, but it's super-important. This is not a Pokemon-style collection task where we can go collect every digit, every image, right? Every image you see of a digit is going to be unique. It's going to be at least one pixel off from something else you've seen. So you can't just collect all the data. You can get data that is similar, but then, in the end, you're going to have to generalize. And so, we'll talk about how that's going to work. What features might you use to detect digits? Well, somebody gives you a grid of numbers. Your eyes and your visual processing system are already doing all kinds of processing, and people who work on computer vision replicate some of that processing. You're doing all kinds of processing when you look at a digit and you say, oh, that's a four. But here, you know, this one might be a zero. We could go and we could label these things. And we're going to try to teach a computer to do the same thing. What's the next one? It's probably a one. I'm not totally sure. What's the next one? A two. That slash is probably a one. And then, I don't really know what that last one is supposed to be. And so, this is actually a real issue, right? There are a lot of inputs that are really noisy, and your training set might be hard and expensive to label because the inputs are noisy. You might get them wrong. You might mislabel them, and then your algorithms have to be robust to a certain amount of noise in your training set. And then at test time, you're going to make mistakes. You're going to make mistakes because machine learning is not perfect. You're going to make mistakes because some inputs are just really, really hard, and they're going to look like this, and we're not all even going to agree on what the heck they're supposed to be. So if you wanted to make a decision that that top thing is a zero and the next one is a one, you're probably going to look at the pixels. Those are the basic features. But if you think about it, that's not really the most invariant representation, because I could take that zero and just shift it up a couple pixels to the side, and it's going to be an entirely different set of pixels that are on, but it's still the same number. So people who think about computer vision think about invariances. What are better representations, so that if the thing gets tilted, or it's a little bit lighter, or a little bit smaller or bigger, it's not the exact pixels being on that we care about? But the pixels are something we could use. We can look at other kinds of patterns. We could look at how many connected components of ink there are. What's the aspect ratio? How many loops are there? It's increasingly the case, especially for problems like this, that we feed in low-level features like pixels, and higher-level features like edges tend to get induced automatically as our machine learning methods get better at doing that. We'll talk about that in a couple weeks when we talk about neural nets. All right. There are tons of classification tasks. It's probably the most widely-used application of machine learning. There are also a lot of applications of other things, like clustering.
Classification: you're given inputs, you predict labels, or classes, y. Inputs are called x, outputs are called y, and there are tons of examples of this. Medical diagnosis could be classification. The input is the symptoms, the output is the disease. Fraud detection could be-- think about your credit card company. There's some account activity and you want to red-flag accounts that are suspicious. Think about all the different kinds of features you could put in, in order to make those kinds of decisions, both at the individual transaction level and also at the network level. Automatic essay grading-- auto-grading can be a machine learning problem. Customer service email routing. You have a whole bunch of customer service agents that do all different kinds of things. Emails are flying into the system. You'd like to automate the routing of that. Review sentiment. Here's a bunch of reviews of my product. Which ones are good? Which ones are bad? Have they gotten better in the past 10 days since the new announcement? And so on. You can do that with classification. Language identification. Here's this document. What language is this, anyway? You've got to do that before you can do things like translation. Of course, when you do translation, that's not a classification task anymore. That's a much more structured natural language processing task, where you actually need to generate new language that means the same thing in a different language. So classification is important. At the end of the lecture today, you will have enough information to go and build a basic classifier. But there's a whole bunch of detail behind all this. First thing we're going to do is talk about model-based classification. Remember we were talking about reinforcement learning? We talked about model-based reinforcement learning and model-free reinforcement learning. There's a very similar distinction in the world of classification as well. In model-based classification, rather than directly learning from errors that you make in the world from experience-- think model-free reinforcement learning-- instead we're going to learn by building a model from our data, and then doing inference in that model to make predictions in the world. After today we'll look at the model-free methods. So in the model-based approach, you're going to build some model. In this case, we're going to build a Bayes net, and it's going to be a super-simple Bayes net called a Naive Bayes model. You build a model where the output label and the input features are random variables. There are going to be some connections between them, and maybe some other variables too that might help you build a better model, either in terms of statistical efficiency-- having the representations be more compact-- or because that's data that's easier for you to elicit. Now you've built this model, this Bayes net, and down here somewhere are going to be your features, and up here is going to be your class. And on the basis of the features, x, that you observe, you're going to predict the class y. We can do that with probabilistic inference-- variable elimination, for example. So you're going to instantiate your observed features, you're going to query for the distribution of the label you care about-- the output, conditioned on those features-- and you'll get a probability out. Now you can ask questions like, how often does my model get the right answer? That's accuracy.
You can also ask questions like, these probabilities that are falling out of my model-- are they even right? Some kinds of classifiers don't even give you probabilities. They just give you a prediction. So what are the challenges? What structure should the Bayes net have? Today, we're going to give it the simplest structure that could possibly work, and it turns out it often does. And then, the thing we're going to mostly talk about today is how we should learn the parameters of a model from data once we've decided its structure. So here's what a Naive Bayes model would look like for digits. It's in fact super-simple as Bayes nets go. So what we will do is we will assume that there is one special node; it's going to be the label. Here, that's written out as Y, and that label's domain will be the various digits, 0 through 9. Then there are going to be a bunch of features. What are the features? Well, you might have a feature for every pixel in the grid that's binary-valued-- either on or off, based on whether it's above or below a certain threshold. So the model itself might look something like this, where the class is the cause, and it independently causes each of those features. Now if you look at that, you might think, that doesn't sound quite right to me, because there are correlations between the features that are present even when you know the class. And in this Naive Bayes model, the conditional independence assumptions say that conditioned on the class, the features are independent. And maybe that doesn't feel right, because if I know it's spam, or if I know it's ham for sure, are the first two words independent? Well, no, they're not. So the Naive Bayes assumption is an extremely radical assumption as far as probabilistic models go, but for classification, it turns out to be really effective, and it goes something like this. So if we have a single-digit recognition version of this, we might have a feature for each position of the grid. They'll all be binary-valued. That means this number one here maps to this feature vector, so it'll be a big vector of zeros and ones, which is really just the image. But if my features were instead, how much ink is there on the left side? How many loops are there? Then there would be zeros and ones as well, but they would no longer correspond to the raw image. There are a lot of features. Each one here is binary-valued, and the Naive Bayes model is just what's drawn here. We say that the probability distribution over y, the class, and all the features-- which collectively are basically your input x-- is the prior probability of the class-- that's what lives at the node Y-- times the product, over all of the other variables, of the probability of that variable given its parent, which is the probability of the feature given the class. And that means when you go to make a prediction, it decomposes into a product of a bunch of different feature conditional probabilities, and we'll see examples of that and unpack the inference for that. OK, so what do you need in general, because that was one example? What do you need in general? A general Naive Bayes model places a joint distribution over the following variables: y, which is your class, and some number of features, which you get to define. You're going to have to write code which extracts them from your input. So if your spam feature is, have more than 10 people received this email in the past hour?
You have to write code which extracts the value of that feature, but the machine learning will do the work to connect the probability of that feature taking on a certain value up to the class. In addition, the way this joint probability is going to work is, you're going to have a prior probability of the class, and then a bunch of little conditional probabilities which directly connect each feature up independently to the class. So this whole network, this whole joint distribution that we're describing, is enormous, right? It's going to be exponential in the number of features. But the thing we actually build is quite compact. All we have to have is |Y| parameters, one for each class value, and then, for each feature, a little description of how likely that feature is for each class. And that's it. So the Naive Bayes model will be linear in the number of features, whereas the joint probability distribution that you're implicitly describing is exponential in the number of features. So basically, the thing we have to specify is how each feature depends on the class, and we're going to get that from data. All right. Couple things to get out of the way before we look at the actual data stuff. One is, it's a model. So in order to do predictions, you have to do inference. Turns out you already know this. There's nothing new here. In fact, it's a simple case of inference by enumeration. So the thing I actually want to compute is the probability of y-- that is, the distribution over all the different class labels-- given my features. That's what I actually want to compute. But I know that I can just compute the joint version of that, Y comma the features. What is that? Well, if that were a lowercase y, it would be a scalar. It would be an entry of the joint probability table. Since it's a capital Y, it's a vector, right? It's a one-dimensional vector here that has an entry for each value of y. So there's P of y1 and the features, P of y2 and the features, P of y3, and so on. If I had all of these probabilities and I normalized, I would have the conditional probability of the class, given the features. All right, so this is what I'm looking for. I'm looking for this vector of probabilities of feature and class. How do I get them? Well, each one of these entries of the joint distribution here, I can rewrite according to my Bayes net, which is going to be a product of all these little local probabilities. Each one is the product of a prior probability of the class, which says whether or not, before you see the evidence, this is a class that's common or not, and then a product of a bunch of feature probabilities: how likely is this feature for this class, how likely is this feature for that class, and so on. All right, so you compute all those products. You normalize them. You're done, OK? That's it. It's just variable elimination. OK, so what do we need? We need an inference method. We've just seen this. We have a Bayes net. In this case, it's super-simple. It's one that assumes all the features are conditionally independent, given the label. We need all of the component probabilities, nothing new. What's new is, in order to make this work, we also need to know what those probabilities are. How likely is it to have the word free in spam? How likely is it that the center pixel will be on for the number seven? These questions can only be answered by going to data. All right.
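To pin down what was just set up, in symbols: the Naive Bayes factorization is

```latex
P(Y, F_1, \ldots, F_n) \;=\; P(Y)\,\prod_{i=1}^{n} P(F_i \mid Y)
```

and prediction is inference by enumeration over that product, followed by normalization. Here is a minimal sketch in Python; the parameter dictionaries are placeholders standing in for the numbers that, as the lecture says, have to come from data.

```python
def naive_bayes_posterior(features, prior, cond):
    """prior[y] = P(Y = y); cond[i][y] = P(F_i = 1 | Y = y).
    features is a list of binary (0/1) feature values."""
    joint = {}
    for y, p_y in prior.items():
        p = p_y  # start with the prior probability of the class
        for i, f in enumerate(features):
            # multiply in one evidence term per feature
            p *= cond[i][y] if f == 1 else (1.0 - cond[i][y])
        joint[y] = p  # this is P(y, f_1, ..., f_n)
    z = sum(joint.values())
    return {y: p / z for y, p in joint.items()}  # P(y | f_1, ..., f_n)
```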
So the things we're going to need to do is, we're going to need to figure out the prior probability over labels, and for each feature-- which is each kind of evidence-- we're going to need to compute a bunch of conditional probabilities, one for each class. These things collectively-- all these probabilities that we use to plug and chug and get our numbers out-- are called the parameters of the model. Formally, they're usually denoted by theta. And as I change the parameters of the model, it's still, say, a Naive Bayes model over spam versus ham. But as I change the probabilities, different things are going to blink in and out, as in, I think this is spam now, I think this is ham now. So those numbers, collectively, are going to determine which predictions it makes. So which parameters we want has to come from data. All right, let's see some examples of what these conditional probabilities in a Naive Bayes model would look like. So here's an image. This is the number three. The first thing we need in order to apply Naive Bayes to this is a prior probability over the class. And in this particular vector, in these parameters, each class, 0 through 9, is equally likely. What do you think it would be if I went to a real collection of data? Well, I could get 1,000 examples of the number three, and 1,000 examples of the number four, and 1,000 examples of the number five. Probably, I should just get like a million, or as many as I can get of each number. If I constructed the data set in that way, well, then all these prior probabilities would be equal, because in my data, 10% of the examples are a one and 10% of the examples are a two. What do you think is real? Which number do you think is most common in real data? Or are they all equally common? So 0 might be common. It depends on the kind of data you're looking at. If you're looking at zip codes in California, what's the most common thing? Nine. If you're looking at lots of round numbers, maybe it's 0. If you're looking at sort of general numbers, it turns out one is particularly common. You can think about why that might be. So these come from data. And this actually underscores the point that depending on how you collect the data, it can actually shape the distributions that you are imagining are going to exist at test time. We'll come back to that as well when we talk about risk minimization. All right. In addition to the prior probability of each label-- which is already maybe a little tricky to get right-- from our data we can compute things like, what is the probability that pixel 3 comma 1 is on, given each class? This isn't a distribution over on or off. What I'm showing here is just the probability of that pixel being on, for each class. And it's going to be some number. So for example, the pixel in that position might be pretty likely for the number six, but pretty rare for the number one. And these probabilities will encode that. OK? And you'll have that for each of the different pixels. So this is what the conditional probabilities in a Naive Bayes model look like. Where did they come from? Well, this number came from saying, all right: in all of my examples of the number five, how often was pixel 3,1 on? OK, 80% of them. Done. That comes from the data.
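A minimal sketch of exactly that counting, assuming the training examples come as (label, pixel-grid) pairs; the data format here is an invented convenience:

```python
from collections import defaultdict

def estimate_pixel_conditionals(examples):
    """examples: list of (digit_label, pixels), where pixels maps
    (row, col) -> 0 or 1. Returns the empirical P(pixel on | class).
    Note: no smoothing yet, so a (class, pixel) pair never seen 'on'
    implicitly gets probability zero -- the over-fitting problem
    discussed later in the lecture."""
    on_counts = defaultdict(float)     # (label, position) -> times on
    class_counts = defaultdict(float)  # label -> number of examples
    for label, pixels in examples:
        class_counts[label] += 1
        for pos, value in pixels.items():
            on_counts[(label, pos)] += value
    return {(label, pos): cnt / class_counts[label]
            for (label, pos), cnt in on_counts.items()}
```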
It works a little bit differently in practice when people look at models for text. When you look at models of something like an image, you have a bunch of these pixels, but a pixel here and a pixel there mean very different things. The standard model for text is to say that the features are the words, and that the random variables are the words at each position. So you would say something like, well, I have a joint probability over the class-- which could be spam or ham, or document classes, or positive or negative sentiment, or whatever-- along with the rest of the document, the words. The standard way to write that in a Naive Bayes model is to say it's the prior probability of the class, and then the product, across each position in the document, of the probability of the word at that position given the class. And so again, this is a Naive Bayes model, but the random variables now are not the presence or absence of individual words; they're the words at each position of the document. So this is actually interesting, because in the image case, I had lots and lots of features, because it's a big grid, but each one was either on or off, right? With a document, one, they can be of any length. And two, each position has a large event space. There's a whole bunch of words that can be at position 23 in the document. But for the purposes of detecting spam versus ham, it doesn't really matter whether a word occurs at position 23 or 24, right? When you see print cartridge-- I guess that's two words-- at a certain position, it probably doesn't matter what position it's in. And so for text, people look at models which have an even stronger assumption. They assume not only that the features are conditionally independent of each other given the class, but also that they're identically distributed. And that means the probability distribution over the words at position 23 and the probability distribution over the words at position 54 are the same single probability distribution. And that means for a Naive Bayes model for text, the only thing you actually have to learn is, for each class, what is the histogram of words? What is the probability distribution over words in that class? These are tied distributions. This is called a bag-of-words model, and this is different from the standard case, because in the standard case, each feature gets its own distribution. Here, we assume the features are all identically distributed, but there are multiple copies of that feature for the different positions. And this is nice, because when the document is longer or shorter, you don't really need to change your model. It's not like the first time you see a document that is one word longer than the longest thing you've ever seen, you're like, oh man, what does that last word mean? It means the same as it means in any other position. OK. Let's do an example of spam filtering here, in this bag-of-words model for text, which means the probability over the class and the words is the product of the probability of the class times the probability of each word independently, given the class. And remember, the words are identically distributed. So I just go through, and each word that comes in is going to get a probability for each class. And there's going to be a race to see which class wins in terms of probability. So what am I going to do? I'm going to have to see the words as they come in, but I'm going to, in the end, compute two things.
I'm going to compute the probability of spam along with all of the words, and I'm going to compute the probability of ham along with all the words. And then, those will be numbers. They won't sum to one. In fact, they're going to be really, really small numbers. You multiply 100 probabilities together, you get a small number-- small enough that you would have to worry about underflow in practice. So we're going to compute these probabilities. Whichever one is bigger, that's going to be my prediction. If I want to know the probability that I assign, I have to re-normalize. I have to take these two numbers and divide them by their sum. So let's do it. Let's see. We're going to compute incrementally, as the words come in, position by position. We're going to compute the probability of spam and the words, and the probability of ham and the words. So here we go. Let's look at the tables first. I will give you this one, actually. Maybe I should have hidden it. It turns out, in this corpus, 2/3 of the emails are ham, and 1/3 of them are spam. How would you say this relates to the emails you get? Do you get more spam than this, or less? Probably a lot more. So spam has gotten worse since this corpus was collected. And again, this underscores that there's the distribution your data reflects, and then there's the real world. And you want those to be as close as possible. If there are major, systematic ways in which your data, and the distribution it was drawn from in its construction, do not match the distribution in the real world that you're going to field your classifier against, you're going to have issues. One major issue you can have is reduced accuracy. All right, so beyond the prior-- which is going to turn out to be less important than the other components-- if I'm going to build a Naive Bayes model for spam filtering, I need to compute, for spam, what is the histogram of words, and for ham, what is the histogram of words? So let's look at the spam one first. What do you think is the most frequent word in spam? I heard free. What else? Any other guesses? I heard the. You laugh, but here's the answer. What's going on here? I thought free meant spam. Well, this is not an odds ratio. This is not words that are much more likely in spam than ham. This is simply, in this model, the probability of word given class. And in spam, the most likely word is the. Somewhere far down that list is the word free. How about for ham? What do you think is the most likely word? Yeah, it's the too. So it turns out that if you just look at the most common words-- with a couple of weird exceptions like 2002; any guesses when this corpus was collected?-- this is not where the information is. In fact, the information in a model like this is in the relative probabilities, because of this race that I talked about, where we're accumulating two probabilities. And as we go, the joint probability is going to drop, because this particular message is not especially likely under the model for either class. But for one of the classes, it's going to be, relatively speaking-- meaning the ratio-- much more likely. And so what matters is actually the ratio of these frequencies. So somewhere down there is the word free, which is presumably much more likely in the spam. All right, so where do these tables come from? They come from data. Let's do that example I promised you. So we're sitting here.
We're the spam classifier. We're a Naive Bayes classifier, and we're going to produce a joint probability by multiplying in evidence terms as the words come in. Right now, no words have come in, which means we only have one term in the product, which is the prior probability of each class. So right now, our running total here would be 0.33 for spam, and 0.66 for ham, which means if you asked me now, I would say, that's ham. And if you asked me how confident I was, I would say I predict a 2/3 chance of ham. All right. The terms that are going to show up on the left here are going to be the evidence terms for each word as it comes in. The thing on the right here is the product of all of these terms. Wait-- minus 1.1? It's not literally going to be the product, because that product is going to get very small very quickly. It is going to be the log of the product, OK? So, so far, we think it's ham, because the log of that product is higher. All right, next word comes in. Gary. Remember, Gary is not particularly likely under either distribution. This is a generative model, and that means that we have the probability of the features given the class, and Gary is not particularly likely for either ham or spam. But which is it more likely for? It's much more likely in ham, right? Direct address, actually knowing who you're writing to, that's much more likely in ham. But it's not particularly likely there. Why? Most people aren't named Gary. All right. But if you look, suddenly now, if you stopped me and you said, OK, right now, it's time to make a prediction, ham or spam-- what would I pick? I would pick ham, because there's a bigger number, right? They're negative, because they're logs of probabilities. And if you asked me how confident I was, I would say I'm much more confident. I'm 10 times more confident than I was before, because I've seen this word Gary. All right. Gary, would. This is one of those words that's actually quite common, but it's not, you know, that asymmetric. It doesn't change my beliefs that much. You can think about these as belief updates. Think back to the HMM. Evidence is coming in, and my beliefs update as I multiply in terms of probability. Maybe it makes me think a little more ham, because would is one of those nice, harmless, common words that occurs in natural text. Gary, would, you. You is a mildly suspicious word. It's super-common, but it turns out it's a little bit more common in spam. But you can see how it's not like I'm going to delete every email that has the word you in it. So we get little bits of weak evidence that need to get aggregated. In this model, the way they're aggregated is by multiplying their conditional probabilities. Gary, would you like to lose weight while you sleep? And then once you've seen the whole email, and you look at the end, you'll notice two things. One, the total probability is very small, because I multiplied a bunch of probabilities. That's fine. That's always true in Bayes net inference. I have to divide both of these very small numbers by their sum to get the conditional posterior that I want. And if you look at this now, it thinks it's spam. Somewhere in there, somewhere around lose weight, it changed its mind. You can see, weight is a pretty strong indicator. Apparently so is sleep. OK. So this is what it's like to be a Naive Bayes model. Features come in, you aggregate all of the weak evidence, and then you output.
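Here is a sketch of both halves of what was just demonstrated: estimating one tied word distribution per class, then multiplying in evidence word by word in log space. The corpus format and the printout are illustrative inventions, and note that a word absent from a class's table would crash this as written-- exactly the zero-probability problem taken up later.

```python
import math
from collections import Counter

def train_bag_of_words(docs):
    """docs: list of (label, list_of_words). Returns, per class, the
    maximum likelihood P(word | class), tied across all positions."""
    counts, totals = {}, Counter()
    for label, words in docs:
        counts.setdefault(label, Counter()).update(words)
        totals[label] += len(words)
    return {label: {w: c / totals[label] for w, c in wc.items()}
            for label, wc in counts.items()}

def classify_incrementally(words, prior, word_probs):
    """Prints the running log-joint after each word, like the slide."""
    log_joint = {y: math.log(p) for y, p in prior.items()}
    for w in words:
        for y in log_joint:
            log_joint[y] += math.log(word_probs[y][w])
        print(w, {y: round(v, 2) for y, v in log_joint.items()})
    return max(log_joint, key=log_joint.get)
```

Started from prior probabilities of 0.33 and 0.66, the initial running totals would be about -1.1 and -0.4, matching the logs on the slide.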
Another nice thing-- even though you've got to take a step back to see it-- is that there is actually conflicting evidence in this example. There was some evidence that it was ham, and there was some evidence it was spam. There are examples of both. And this all needed to be weighed, and that's what's going on here in the conditional model. All right. Probability of spam, 98.9. Gary was off to a good start, but went downhill from there. Any questions? Yep. STUDENT: [INAUDIBLE] PROFESSOR: The question is, can you use the log for the final answer? It's actually very common, when you're multiplying probabilities, to just add log probabilities instead. In the end, when you want to turn them into probabilities, you do need to sum them. And summing the logs won't do that. You need to do a sort of log-sum, and one way to do that is to convert them back to probabilities by taking exponentials. That's actually not quite the way you would do it. You would shift them by their minimum or their maximum, as appropriate, so that you don't get underflow. So when you actually do these things, you've got to worry about that. But conceptually, you multiply all the probabilities, you get a small number, and then you re-normalize. There's a sketch of this trick below. Yep? STUDENT: [INAUDIBLE] PROFESSOR: Yep. That's a great question. Here's another model. Here's y, your class. Here's word one, word two, word three, word four, and so on. I can assume that each word depends on the class and also on the previous word. What do the probabilities look like? Well, except for the beginning-- and I can get rid of that by having a start symbol-- the workhorse term for this model is, what is the probability of word k, given word k minus 1 and the class? OK? This part without the class is what's called a bigram model. You're predicting words based on the previous word, so you're looking at two words at a time. And now we've made it conditional on a class. This is a better model of language. If you sampled from this model-- started with the class and cranked out a pretend document-- it would look significantly more like a real email than if you just did a bag of words, where it would just be a bunch of English-looking words in some random order. However, will it be more accurate for classification? It really depends. In general, it will be a little more accurate, but at the cost of having significantly more complicated conditional probabilities to estimate. How much more accurate will depend on the degree to which the bag-of-words assumption is dangerous. So if I take a spam document, and I permute all the words randomly, it is definitely no longer syntactically valid English. But it's probably still spam, right? In the same way, say I take a document that's a sports document, and I'm doing sports versus politics. I permute the words, and now it's this whole mess of words. And I can't read it, but it's like, you know, goal, team, win, right? The class still hasn't changed. And so if you're looking for a class which is not strongly connected to the actual ordering, Naive Bayes is really good. Otherwise, you add other correlations like this to fix it. OK? Other questions? Good questions. All right.
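The underflow-safe re-normalization mentioned in the first answer above is the standard log-sum-exp trick: shift all the log scores by their maximum before exponentiating, so the largest term becomes exp(0). A minimal sketch:

```python
import math

def normalize_log_probs(log_joint):
    """Convert log P(y, evidence) scores to P(y | evidence)
    without underflowing, by shifting by the max first."""
    m = max(log_joint.values())
    exps = {y: math.exp(v - m) for y, v in log_joint.items()}
    z = sum(exps.values())
    return {y: e / z for y, e in exps.items()}

# e.g. normalize_log_probs({"spam": -800.5, "ham": -805.0})
# works even though exp(-800.5) by itself underflows to 0.0.
```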
A couple more general slides, and then we'll take a break. Actually, I changed my mind. Let's take a break now. We are now into the machine learning section, which means we are done with the spooky Halloween-time ghost-busting section. But I have candy. So during the break, everybody get up and come grab candy if you would like. This gets applause? All right. Come up and grab some, please. I'm not allowed to take it home. All right, we're going to get started again. Let me say, your candy consuming I would rate as middle of the road. You can come back up at the end of the class if you would like to grab more. All right, so let's talk about training and testing. We said we want to build classifiers, and we're going to do it on the basis of data, because how the heck am I supposed to know what pixel 7 comma 3 looks like for the number eight? I've got to get that from the data. How am I supposed to know what the probability of a certain word is for a certain class? You go to a collection of data, and you find it there. But there's something missing there, which is, why are we doing that? We're doing that because we want to do well on the final exam. What's the final exam? The final exam is, this classifier is released into your inbox and it actually manages to find all of the spam with high accuracy. So there has to be some connection between what's going on in your data-- which is what you have-- and this future use to which you're going to put the classifier. And a lot of machine learning theory is really based on trying to say something precise about the connection between those two things. So what we're going to talk about now is a couple of things relating to training: the mechanics of how you do parameter estimation in this particular model. But really, the reason we're doing this is not just for Naive Bayes, or probabilistic model estimation in general, but to see examples of more general phenomena and tradeoffs and concerns that occur in machine learning more broadly. So remember, the general setup is going to be: we're going to have a training set, we're going to build a classifier, and we're going to unleash it on the world. So the basic principle of machine learning is something called empirical risk minimization, and we'll get into this in a great amount of detail in the next few slides, and then particularly in the next couple of lectures after this one. The principle of empirical risk minimization goes something like this. We would like to find the model, classifier, whatever, that does the best-- whatever the best means-- on our true test distribution. So maybe we would like to somehow get the Naive Bayes spam classifier which is most accurate at finding spam in real people's inboxes today. We don't actually know that true distribution. We don't actually know what the classifier is going to see at test time, in the same way that when you go to your final exam, you don't actually know what questions you're going to get. You don't know what distribution they're going to be drawn from. So how do you make progress? Well, what you do know is you have a training set. And since you can't pick the parameters which are going to do best for your true test distribution, instead we try to pick the best model on our training set and hope there's a connection between those two. Finding the best model on your training set is usually phrased, at some level, as an optimization problem. Today, we're going to appeal more to direct estimation of probabilities, but by the time we get to optimization-based methods over the next couple of lectures, you'll see this more generally. It's usually an optimization problem.
Optimize some quantity on my training set in the hope that that quantity will remain optimized on the test set, or at least nearly so. So what can go wrong here? The main worry-- and this will be a little abstract now; we'll see some concrete examples coming up-- the main worry is that you over-fit. The main worry is that in picking the parameters of your model-- for example, the probabilities of various words-- on the basis of your training data, you do a really good job of capturing that training data, but it doesn't generalize. This is like you download all the exams from past years, and you optimize. You learn all those answers. And then you go to the final exam, and it's totally, totally different questions that look nothing like those. That's going to be a problem. So you worry about over-fitting. Now, there are a couple of different things that can go wrong here. One thing that can go wrong when you deploy a classifier is that the training distribution does actually represent the test distribution-- those practice exams were written by the same people for the same course as the real exam-- but you didn't actually have enough of them. You just had a couple of samples, and those samples didn't really shine light over the whole space of things that you might encounter. That's a problem of not enough training data. So we try to get as much training data as we can. What else could go wrong? You could have a ton of practice exams, but you could over-fit as a learner. You could just rote-memorize those examples, those test questions. You get to your final exam and you're like, wait, it's not one of the ones I memorized. OK? There, the problem might not be with the training set. The problem is that the learning over-fit to the training set. How do we limit that? Mechanically, we limit that in a couple of ways. We limit the complexity of the hypothesis space-- we'll have to say precisely what that means later. We penalize overly-specific models in various kinds of ways, and we'll see some examples of that even today. And then, there are other ways this can go wrong. So how can you do badly on the final exam? You can have not enough training data, so there's just no way to have seen the whole space. You can have plenty of training data, but learn in a way that fails to generalize. And then, you can have tons of data drawn from the wrong distribution. What would this mean? This would be like you study, and study, and study all these practice exams. You spend weeks doing practice exams for CS189, and then you walk into the CS188 final. You're like, something's wrong here. I learned really well. I understand the concepts, but it's just not lining up. OK, that's a drift in distribution. That's where your training examples were plentiful, but they were drawn from a distribution which does not match the one that you're going to see at test time. Machine learning theory has the most to say about the first few things: what's the danger in having a small amount of training data, which means high sampling variance? What's the danger in having hypothesis spaces that are either so big that they can over-fit, or so small they can't capture what's going on in the data? Things like that. This idea of the test distribution being non-stationary relative to the training distribution is something that's really important in the real world, and it's much harder to say precise things about.
OK, let's see some important concepts, and then let's estimate some parameters. So, one important concept is data. These are labeled instances, like emails that are marked spam or ham. In general, when somebody gives you a big vat of data, you're going to split it into pieces. You take one piece and you say, this is my training set. Someday it can all be your training set if you want it to be, but when you're doing experiments, you're going to try a bunch of different things. You want to see what works best. Does this Naive Bayes thing work? Maybe a neural net. So, you're going to take a piece of your data-- most of it, usually-- and make it your training set. Then, you're going to have to have a test set, which is not the real future use that the classifier is going to be put to once it's deployed, but you need something that is not in your training data to check against. This is why you might, when you're studying, take some of those practice exams and not look at them until right before the exam, because you need to check your understanding. And so we take our data and we break it into training, where we learn our parameters, and test, where we check our accuracy. If we check our accuracy on the training data, we will find it's very, very high. But that won't be true for the test data. And in practice, there are usually other little shards of the data that you're going to want to have. So, for example, one common one is held-out data. We'll see today and in future lectures what that's for. So you're going to take your big vat of undifferentiated data and break it into pieces, as sketched below. You're going to have features, which are these attribute-value pairs which characterize the inputs. Classically, these are little pieces of code you write to detect how many times the word free occurs, or whether or not the sender of this email is in your contact list, and so on. You go through an experimentation cycle that looks something like this. Get your model and learning algorithm ready, and then you're going to learn the parameters-- like model probabilities-- on your training set, which involves scrolling through the training set, counting things up, sometimes making predictions, sometimes taking derivatives and taking gradient steps, whatever it is. You're going to learn the parameters of your model, which starts off having no idea between spam and ham. You're going to go through your training data, and after one or more passes through, you're going to know your parameters. Then you're going to have some other things which aren't parameters. Parameters are things like, what's the probability of pixel 7 comma 3 for the number eight? Then there are hyper-parameters, like, do I want to have features for the lowercased version of the words, in case I've seen the word but never uppercased? Right? These are questions about whether this or this or this is going to work better. More formally, hyper-parameters are things like the amount of smoothing. We're going to see that today. And you usually select those on the basis of some held-out data. And then finally, once you're ready to see how your experiment turned out, you take that model and you test it on the test data. You don't want to test your classifiers on the data that was used to train them, because they will do surprisingly well.
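A minimal sketch of that three-way split; the 80/10/10 proportions and the fixed seed are just illustrative choices:

```python
import random

def split_data(data, train_frac=0.8, held_out_frac=0.1, seed=0):
    """Shuffle once, then carve the vat of labeled data into
    train / held-out / test pieces."""
    data = list(data)
    random.Random(seed).shuffle(data)
    n_train = int(len(data) * train_frac)
    n_held = int(len(data) * held_out_frac)
    return (data[:n_train],                    # learn parameters here
            data[n_train:n_train + n_held],    # tune hyper-parameters here
            data[n_train + n_held:])           # final check only -- no peeking
```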
It's like if you go back through and you try those same practice exams: you're like, wow, I really know this stuff now. It's like, no. This was your training data. You always know your training data. The question is, do you generalize? This can happen to your classifier too, so you always want to test your performance on data that was not used to train it. However, you as a researcher are going to be tempted to grab that test data and look at it in great detail and be like, ah, I see, I got this one wrong. And there can be a slow leak of your test data into your training data if you're not careful. So you try not to peek at the test set, and that's another reason why we have held-out data, which gives you something you can peek at. You want to evaluate. I ran 20 experiments. How did they go? Am I doing well? Is this thing good enough to release? You need to have some metric, and there are a lot of possible metrics. An easy one is accuracy: for how many of these emails did I make the correct decision? The fraction of instances predicted correctly. But actually, accuracy is not a great metric for spam detection. Any ideas why? What's wrong with accuracy? STUDENT: [INAUDIBLE] PROFESSOR: OK, so the answer was, the classes aren't balanced. This could manifest itself either as your training set not matching your test set, or as the classes simply not being equally costly to get wrong at test time. Was there another comment? So how bad is it if that offer for free print cartridges sneaks into your inbox? What are you going to do? You're going to not read it. It's OK. How bad is it if that email from your boss makes it into your spam folder? It's pretty bad. And so, the actual loss-- or cost-- of different kinds of mistakes may not be the same. It may not be symmetric. And so accuracy isn't always what you want. What you really want is a utility here. You want to know, what was my utility? And you should have different costs for these different mistakes. And so sometimes people do that. There are also cases like machine translation, where you're always going to be a little bit off, a little word here or there, but there's a difference between being completely off and a tiny bit off. And so there are a lot of different metrics people have for different kinds of tasks. OK. And again, we're going to talk a lot today and next time about over-fitting and generalization. We want a classifier which does well on the test data. We don't have the test data-- or at least not the true test data-- so instead, we build a classifier that does well on the training data. And then we try to come up with methods where the training accuracy is going to mean something about the test accuracy. Over-fitting means fitting the training data very closely, but not generalizing well. There's also the opposite, which is under-fitting, where you're just like, I don't know what's going on, I'm just guessing randomly everywhere. That's not over-fit, but it's not going to work very well. The problem there is not just that your test accuracy is low; your training accuracy was also low, because you didn't learn anything. We'll investigate these things formally in a few lectures. I had a really good question during the break, which I want to answer for everybody, which is, couldn't you just defeat this Naive Bayes spam classifier by pasting the word Gary 100 times at the end of your offer to lose weight while you sleep? The answer is yes, you could.
And I don't know whether folks here remember, but, you know, 10 years ago there was a period of time where all the spam I would get would have chapters of Pride and Prejudice appended to the end. And the classifiers would be like, oh, look, Pride and Prejudice, and then it would get through. So spam detection is, in some ways, a very poor example of a canonical classification problem. Because if you're building a classifier to detect the number 7, the number 7 is not trying to squirm its way out from those pixels to avoid detection, right? You can just get better at the task. And if you get perfect at the task, great. Spam is being generated by people who are trying to defeat spam filters. So whatever technique you use, whatever features you use, you're going to capture those kinds of spam emails. Others are going to make it through, and spammers are going to double down on what's working. And so you get Pride and Prejudice at the end when you're looking at language, but then suddenly, you start using methods like primarily looking at sender information. And now you have spammers who want to buy contact information, or whatever it is, so that they can spoof that. And so if you have features like, did the same email get sent to a lot of different people? What do spammers do? They're going to start modifying that email in some templated way. Now you have some feature that detects templates. There's an arms race here. And so in that sense, over time, spam classification doesn't actually look like a standard classification problem, because it's adversarial. OK. Any questions before we talk about generalization and over-fitting? OK, so in these images, you want the hat to fit right. You don't want it to be too small-- if you over-fit, you're not going to be able to generalize. But you don't want to under-fit either, because then you're going to fail to learn the information that's actually in your training data. Here's an example of this tradeoff. In general, we're going to do discrete classification, but for this example, let's imagine the thing we're trying to do is to fit a curve to this data. So I can pick a model. You've probably done this in other classes. I could decide to fit a line. I could fit a curve. I can fit different kinds of models. So one thing I could do is fit a constant function. What constant function should I fit if I want to optimize the fit? So you say, what is the fit? Is the fit getting as close as possible to the last dot? Let's say the fit is the sum, across all the data points, of the squared distance or something. So what is my constant approximation to this? Does anybody want to hazard a guess? Let's call it five. Imagine that was a slope-zero line. OK, did I fit something about this data? Yeah, I fit something about the data. I fit basically its mean. Did I capture the major trends? No. I would call this under-fitting. All right, let's try again. Let's fit a linear function. OK. It's close, right? It's a better fit than the constant function. Notice that when I went to a linear function, the space of hypotheses grew. Instead of just constants, now it's lines with slopes and intercepts. OK, so as my space of hypotheses grows-- in general, when there are more hypotheses-- I can fit my data more closely.
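You can reproduce this whole progression-- including the higher-degree fits about to be discussed-- with a few lines of numpy. The data points here are made up just to have something to fit:

```python
import numpy as np

x = np.arange(8.0)
y = np.array([5.2, 4.1, 3.0, 3.9, 5.1, 6.2, 5.8, 7.0])  # invented points

for degree in [0, 1, 2, 7]:  # constant, line, quadratic, very wiggly
    coeffs = np.polyfit(x, y, degree)
    train_error = np.sum((np.polyval(coeffs, x) - y) ** 2)
    print(degree, round(train_error, 4))
# Training error only goes down as the degree grows; with 8 points, the
# degree-7 polynomial threads every point exactly. Held-out error will not
# cooperate the same way.
```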
All right. But I missed something about it. There is this sort of dip. So maybe I could go to a second-order function, a quadratic, if I can draw this. OK, quadratic. I'm starting to fit my data better. Could I fit it even more closely? Yeah, I could. How about this? A degree-15 polynomial-- a way better fit than the quadratic. So it's not just about fitting your data, because you can always fit your data more. It's about fitting your data to the point where the patterns that you are capturing are ones which generalize to test, and that's a tricky balance. This is definitely over-fitting. And so, you can't just judge by your training accuracy, which you can drive higher and higher and higher. You need some measure of whether you've gone too far in the fitting process. And in this case, we talked about hyper-parameters. A hyper-parameter could be something like, what's the maximum degree of polynomial I'm allowed? Right? And I could detect, on held-out data, as I'm adding more and more terms to the polynomial, that my training error-- the residuals here-- is getting smaller and smaller and smaller, but on some held-out data, suddenly it's gone crazy. Because there's this point here in the held-out set, and you're nowhere near it. OK? That's the general idea here. But over-fitting shows up not just on these continuous functions. It also shows up on discrete functions. For example, let's imagine, in a hypothetical digit classification, we might say, here is an image I've never seen before. Let's use Naive Bayes to classify it. So what would we do? We'd do our running total. We'd say, all right. Well, before I look at any features, the numbers two and three, let's say, are equally likely. So let's multiply in evidence terms. Well, for each pixel-- for example, this pixel-- maybe this pixel is equally likely for a three as for a two. OK, so they're still tied. This pixel is much more likely for a 3 than a 2, let's imagine. So now at this point, I'm thinking this is looking like a three, and I would have a bunch of these terms, one for each feature-- in this case, maybe one for each pixel. All right, this pixel being off is much more likely for a three than a two, because a two has that diagonal line. So, so far, a three is winning. But eventually, I'm going to get to some pixel, maybe like this one here. And in my training data, this is almost never on. This is in a corner where there's no number. And maybe it turns out that this pixel happened to be on once or twice in the data for the digit two, but zero times in the data for the digit three. So when I multiply together all these probabilities, which are all roughly reasonable, who's going to win? Two's going to win, because it didn't have that zero. Well, that's bad. This is an example of over-fitting, because this probability versus this probability, that is about the idiosyncrasies of the samples I have in my data, whereas perhaps the fact that this is 0.1 and this is 0.7, that might actually be a more enduring fact that transfers from my training to my test. All right, let's look at examples of over-fitting. Remember we talked about what's the most likely word, given ham? The. What's the most likely word, given spam? The. I can instead ask, when I do these multiplications together, into that running product, which are the words that swing the product most one way or the other? I can look at odds ratios, which is the ratio of the probabilities in the two classes.
And if the ratio is one, it means it's equally likely in both. Whether it's common or uncommon, it doesn't affect the competition. It's things that are more common in one than the other that have a big impact on these odds ratios. So let's look at words. What do you think, in my training data for ham versus spam, the things with the highest odds ratio for ham would be? These are things that are significantly more likely for ham than for spam. Words like Gary-- except when I look at my data, it's actually a mess. It turns out, there are a bunch of words in this data which occur in ham once and in spam zero times. And if you just say that the probabilities in your model are the frequencies in the data, you're going to give probability zero to a lot of things through over-fitting. Take the probability of southwest, which occurs once in ham and zero times in spam. It's not zero in spam. Just like that pixel being off wasn't zero for the number three, it's just zero in my data. It's really dangerous to give things probability zero. That's one of many kinds of over-fitting, where the exact details of which sample points you drew when you collected your data get captured in a way that doesn't generalize. Then, there are going to be a bunch of other things which occur once in spam but never in ham. So, something went wrong here. What went wrong is, really, we should not go about giving probability zero to things just because we haven't seen them yet. And the exact mechanics of over-fitting are going to vary from model to model. In Naive Bayes probabilistic models, over-fitting usually shows up as sampling variance, which usually shows up as zeros in your probability tables. For other methods, it's going to show up in totally other ways. OK. All right, we actually talked about all of this. So to do better, we need to smooth, or regularize, our estimates. So let's figure out some ways to do that, to illustrate what it would look like to limit over-fitting. We already know one way to limit over-fitting. We could take that polynomial, and we could limit the degree of the polynomial. That's shrinking the hypothesis space. As you shrink a hypothesis space, you fit less. Shrink it too much, and you under-fit. We can also do that with words. We could say, I'm only actually interested in the hundred most common words. That would be shrinking my hypothesis space. That's one way to do it. We can also regularize, which is, we can try to come up with estimates for our probabilities-- or weights in general-- which are not completely driven by the data, but are also balanced against some regularizing function or smoothing function that makes things a little flatter, trimming the hedges. So let's take a look at the distribution of a random variable, just to show why we need to do these kinds of things. How are we going to figure out the distribution of a random variable? We can do elicitation, right? You can ask a human. You can go to a doctor and say, hey, I'm building a classifier. What fraction of people with meningitis will present with a fever? And a doctor can give you a guess, right? It may be qualitative. You could also do it empirically. You could use training data. You could go collect a bunch of records of patient treatment or something like that. And this is basically what learning does. You take your training data, and you take the trends out of the training data. The simplest version of this is, for each outcome, to look at the empirical rate.
So, for example, if I am a jelly-bean-counting robot, and I am trying to figure out, in this vat of jelly beans, how many reds versus blues there are, and I draw three jelly beans and it's two reds and one blue, well, what can we do? There's the maximum likelihood estimate-- or relative frequency estimate-- which says, OK, the probabilities are just the relative counts in the training data. That means the maximum likelihood probability of red for this data is 2/3. Is that right? Well, the more samples I draw, the more accurate that's going to be. In general, we don't have as many samples as we want, so we're going to want to do something to prevent things like zeros in these estimates. OK. Why is this called maximum likelihood? So, here's my corpus. I'm going to call it D. There are a bunch of different probabilities I could assign to red. I could say it's 50-50. I could say it's 100% red. For each probability I assign to red-- with one minus that going to blue-- I can compute the probability of D. This is something you could try writing out for yourself. Of all of those probabilities, the one that matches the frequency of the data is the one that maximizes the probability of the data. So, that's a thing you can do, and it's totally reasonable. But in practice, you need some smoothing. So I guess we have some Halloween ghosts after all, even though we're now into the post-Halloween lectures. We want no surprises to our model. We want our model to assign probability to events it's never seen, so that one errant pixel, or one word that happens to be rare, doesn't completely torpedo an otherwise very nuanced balancing of evidence. All right, so what was the maximum likelihood estimate? The maximum likelihood estimate-- basically, you have to work this out, right? Maybe we can just go back and do it real quick. OK. So let's say r is my probability of red, and one minus r is my probability of blue. What is the probability of this data? Well, basically, I got an r, and then I got another r, and then I got the other thing, which is one minus r. So as I change the probability of red, this term, which is the likelihood of the data, is going to go up and down. And if you set it up carefully, take derivatives, and find the extreme point, you'll get the relative frequency answer out, as worked below.
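Working that maximization out for the red, red, blue draw, with $r$ the probability of red:

```latex
L(r) = r^2 (1 - r), \qquad
\frac{dL}{dr} = 2r(1 - r) - r^2 = r(2 - 3r) = 0
\quad\Rightarrow\quad r = \tfrac{2}{3},
```

which is exactly the relative frequency of red in the data.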
And if I think zeros aren't very likely, then there's going to have to be some balance that is struck. OK, so this doesn't have a closed-form solution without giving you more information. You can look at CS281A to learn a lot more about that. But here's a basic idea of how you might approach it. This is actually due to Laplace, hundreds of years ago now, a philosopher who kind of worried about things like, well, how do I estimate the probability? Like, what is the probability the sun will rise in the morning? Every morning so far it's risen, so probability one. But I know that can't be right, because at some point, perhaps quite far out, the sun will once not rise. So I know that this estimate is wrong, and I need some way of mechanically incorporating the fact that there are events which I haven't seen, but which I know to be possible, or at least that I'd like to model as being possible. Why do we want to do this? Well we'd rather not have our robots just like totally freak out at unseen events, though, I suppose actually if the sun doesn't show up, that's probably grounds for freaking out. But here's Laplace's procedural estimate. Laplace said, well, basically, it's a pretty good idea to take into account the counts in your observations, but you should add an extra pretend observation for everything you didn't see to reflect it potentially happening at some point in the future. So basically, add one to all your counts, including the ones that are zero. So the maximum likelihood estimate for red, red, blue, if I say, what's the probability of red, comma, probability of blue? 2/3, 1/3. Laplace's add-one estimate would say, instead of saying there's two of one and one of the other, which normalizes to 2/3, 1/3, add one to each. So instead of two reds, there's three. There's those ones, plus my pretend red. And instead of one blue, there's two, because I have my pretend blue. Now what do I get? I get 3/5 and 2/5. Red is still winning, but this distribution has gotten flatter. And if there had been zero blues it would no longer be given probability zero. So pretty reasonable. We can do better. We can say we imagine you see each outcome k times, instead of one time, because maybe one time is too many, or maybe it's too few. Formally, if you derive this, it comes from adjusting the strength of the prior. And you can imagine wanting to do more or less than adding one. And so if I add zero, if I take Laplace's extended method and I add zero, then I just get my 2/3, 1/3 estimate from red, red, blue. We know if we add a count of one and we pretend, well, there's one phantom red and one phantom blue, we'll get 3/5. But if I add 100, there are now 100 phantom reds, and there's a whole bunch of phantom blues, too-- 100 of them. Now how many reds do I have? Well, I do my computations as if I had 102 reds and 101 blues. And suddenly, even though there are still more reds than blues, in my posterior estimate here, it's pretty close to 50-50. So as I crank up k, I have a stronger prior, and I fit less. If I crank down k, I fit more, and so now I have a dial which can trade off the amount of fitting against generalization. It is certainly not the case that this is the only way to estimate probabilities or that estimating probabilities is the only kind of machine learning. We will see a whole bunch of other things in the next few lectures. 
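(To make that dial concrete, here is a minimal sketch, not from the lecture, of the add-k estimate on the red, red, blue draw; the outcome names and counts are just the running example.)

from collections import Counter

def add_k_estimate(samples, outcomes, k):
    # Add-k smoothing: every outcome gets k phantom observations on
    # top of its observed count, then everything is normalized.
    # k = 0 recovers the maximum likelihood (relative frequency) estimate.
    counts = Counter(samples)
    total = len(samples) + k * len(outcomes)
    return {o: (counts[o] + k) / total for o in outcomes}

data = ["red", "red", "blue"]
print(add_k_estimate(data, ["red", "blue"], k=0))    # MLE: red 2/3, blue 1/3
print(add_k_estimate(data, ["red", "blue"], k=1))    # Laplace: red 3/5, blue 2/5
print(add_k_estimate(data, ["red", "blue"], k=100))  # 102/203: nearly 50-50

Cranking k up flattens the estimate toward uniform, which is exactly the fitting-versus-generalization trade off just described.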
But what is important is that in general, there will be knobs you can turn, which cause you to do more generalization or less generalization, and that can control fitting. I'm not going to talk about the conditionals. I think I'm going to skip this, too. All right, so let's go to real Naive Bayes. So in a real classification problem, you have to smooth if you're going to use Naive Bayes, and so, for example, I can go into my spam, and instead of computing odds ratios on the maximum likelihood-- or empirical relative frequency estimates-- I can instead do some smoothing and see after that smoothing, what has the biggest odds ratio? And suddenly things that only occurred once, they don't percolate to the top, because they haven't occurred enough to overwhelm that flat prior that I'm associating them with. So this is the top of the odds ratios for ham on the left, and favoring spam on the right. Some of these maybe make sense. Like, there it is. Money. Free is probably in there somewhere. If you see money, that's a good sign that it's spam. Or capital Order, or credit, presumably credit card, I don't know. There are some things that indicate ham. This looks like general English text. What is going on there? Helvetica vs. Verdana. Spammers use Verdana. What is this? This reflects the default fonts that were in use at this time across different platforms. And so one of the things you find in machine learning is, you know what you think the features are going to be, or rather, which features are going to be useful. But you might be wrong. Sometimes things surprise you, and that's why it's always good to like actually look into your model and see what has been learned here? Is there something that I can learn about this problem from what the machine has learned about the problem? All right. Let's see. We talked about tuning. So let's say I build my Naive Bayes model for spam, for digits, whatever. I've got my features. Let's say they're mostly words and pixels, though on your projects, you'll see you can do better. And I have some tuning to do. Like, again, this is for Naive Bayes. Every method is going to have a different incarnation of this. But I can change the strength of my priors. If I crank k to zero, I'm not going to smooth at all, and I'm going to fit my data very well. If I crank k to a million, I'm definitely not going to over-fit, but I'm also not going to learn anything. So somewhere in between is the right amount. How do I figure out the right amount? Well, I can look at my training data and see where am I more accurate? But the k that's going to be most accurate on my training data is zero, because that's what actually fits the training data best. That's the maximum likelihood estimate. And so what you need to have, is in addition to your parameters-- which are these kind of basic counts and things like that-- that you're going to set on your training data, you need to have some held-out data that you can do things like, all right, given these counts, how much smoothing am I going to do? What's that right balance between fitting enough, but not too much? So we learn our parameters from the training data, and what the parameters actually are is going to depend on the model. We tune them on some different data, like some held-out data, because otherwise, you'll get crazy results. And then eventually, you're going to take the best value and do a final run on the test set. 
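(A sketch of that train/held-out/test discipline; fit and accuracy are hypothetical helpers standing in for whatever classifier you're actually tuning.)

def tune_smoothing(train, heldout, candidate_ks, fit, accuracy):
    # Pick the smoothing strength k by held-out accuracy. Scoring on
    # training data would always prefer k = 0, the maximum likelihood
    # estimate, so candidates are scored on data the counts never saw.
    best_k, best_acc = None, -1.0
    for k in candidate_ks:
        model = fit(train, k)
        acc = accuracy(model, heldout)
        if acc > best_acc:
            best_k, best_acc = k, acc
    return best_k

# Usage: choose k on held-out data, then report one final number on
# the test set, which is never used for tuning:
#   k = tune_smoothing(train, heldout, [0.1, 0.5, 1, 2, 5, 10], fit, accuracy)
#   final = accuracy(fit(train, k), test)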
I'm going to say a very little bit about features, because I think it's important for when we start to get to neural nets, where the story here is going to change. So in general, your model is going to make errors. We're talking a bit more about this starting next lecture, but your model's going to make errors. Here's some examples of errors that the quick Naive Bayes system I whipped up makes on this training set. Here's one example. You're a Globalscape customer. We've partnered with Scan Soft to offer you something. And then there's this other one that's also an error, which is, to receive your $30 Amazon promotional certificate, click on this. OK, one was spam that got classified as ham, and one was ham that got classified as spam. These are tricky cases, and it's actually very hard to tell from the words which one's which. And in fact, in this case, it might actually just be noise in the data. So what are you going to do when you make errors? Well, one thing is, in general, you're going to need more features. So in spam classification, we found out that it wasn't enough to just look at words, you've got to look at other sorts of metadata from the ecosystem. For digit recognition, you do sort of more advanced things than just looking at pixels, where you look at things like edges and loops and things like that. Try to do things that are invariant to rotation and scale and all of that stuff the vision folks think about. You can add these as sources of information by just adding variables into your Naive Bayes model, but we'll also talk in the next few classes about ways to add these more flexibly, and also ways to induce these. All right, I'm going to stop there for today, and as you go, please come up and grab some more candy. Thank you. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180920_Markov_Decision_Processes_MDPs_Part_22.txt | PROFESSOR: OK. Let's get started. Today, we're going to talk a little bit more about MDPs, Markov decision processes. But before we do that, I have a quick update on Mini-Contest 1. How many of you here participated in Mini-Contest 1? OK. What? Was there a question? Well, you're in good company. There are 200 participating teams of about 250 students, so thank you all for participating. There will be a sequence of these culminating in a final contest, which I think is pretty awesome. And today, I'm going to give you just a little bit of a view into what some of the top scoring teams did in Mini-Contest 1. Before I do that, I always love to see what names people come up with. So you guys just like type some random name, but like, we're actually looking there to see all those cool names. So we've got a pretty good assortment of cool names. Almost every year, there is a Waka Waka Waka, and as soon as I see that, I cannot then get that sound out of my head for the whole rest of the semester. So thanks, Waka Waka Waka for getting that in my head. MorePointsPlease, you'll have to report whether or not the name actually got you more points. There's this bully expression which reminds me a lot of the requirements for the course, except with professor names put in. And then, thank you whoever decided to stress test our string processing with this little emoji guy. Some bugs were discovered. STUDENT: That one is actually written in [INAUDIBLE]. I recognize it. PROFESSOR: Yeah, well thank you for stress testing our system in any case. On to the meat of it, which is, in third place, we have team Winnie the Pooh. Is Winnie the Pooh here? Are Winnie the Pooh here? OK. That is Phillip and Winnie, the Pooh, perhaps. Congratulations in third place with score 1,193. You'll notice these are all very tightly clustered in score. Their bot used BFS to find the nearest food, and re-planned whenever another agent ate the dot. So I'm going to play this here, so probably, those of you doing the contest are used to seeing these at a much higher rate, but watch all of these Pac-Men collaborate. Notice they're not sort of like-- sometimes there will be like one sad Pac-Man in some solutions, that follows another Pac-Man but never gets a dot. But these ones seem to coordinate pretty well. Good job. [APPLAUSE] Good job, Winnie the Pooh. In second place, we have team Jason L. Are you here, Jason L? Congratulations. Score of 1,200. Bot description, lots of caching of pre-computed solutions to reduce redundant computation, re-planning as in the previous one. But also, something that we saw a lot in many of the agent submissions on the leaderboard, which was a sort of prioritizing different directions, some kind of a diversity so that the Pac-Men would spread out. So let's watch this go. There they go. Can you guys see this? Yes, you can. OK. Those poor dots in the corner, they're always last. OK. Congratulations, Jason L. [APPLAUSE] And in first place, team [? Yusheng. ?] Are you here, [? Yusheng? ?] All right. There's also Ryan. Are you here, Ryan? Excellent. OK, well, I'm going to call you guys team [? Yusheng ?] Ryan. You had a score of 1,201, barely edging out the second place team by one point. What did you do? I have no idea. What did you do? Do you want to say really quickly? STUDENT: Yeah. We modified the [INAUDIBLE] in three parts. The first part is [INAUDIBLE]. 
PROFESSOR: Cool. So again, this theme of reducing computation and also kind of giving the agents a way to specialize, which is always important in multi-agent systems. So let's see it. Let's see if you can see the difference of the one point. All right, there they go. In future contests, it's going to be team versus team, and then it gets intense. All right, those last dots. All the Pac-Men headed that way. All right. Congratulations, [? Yusheng ?] and Ryan. [APPLAUSE] Good job, everybody who participated, and here's our final leaderboard. You can see those top three are really, really closely clustered, but really there's just a whole bunch of teams that had really strong submissions. So congratulations to everybody there. A couple others to eyeball so you can get a sense of the ideas that kept coming up in some of the top submissions, a lot of pre-caching to make computation fit into the time limit. A lot of re-planning, a lot of prioritizing keeping agents apart. Some kind of diversity function, dividing food pellets up, kind of splitting goals, that kind of thing. So all of these ideas, very successful. So congratulations to everybody who participated, and I look forward to seeing Mini-Contest 2. So congratulations to everybody. [APPLAUSE] Any questions on Mini-Contest 1? All right, MDPs for real this time. All right, we're going to keep talking about MDPs. And today, we're going to talk about methods that focus on the policies that we generate in MDPs so that we can start to solve them a little bit more efficiently, and also understand a little bit what's going on inside algorithms like value iteration when we run them. So I can't say this enough. Grid world is both a really important running example because it's going to show up in your homework, it's going to show up in your projects, it's going to be all over lecture, it's probably going to show up in exams. At the same time, it is just one MDP. Most MDPs do not involve robots in a grid. They do not involve walls, and they do not involve the actions north, south, east, and west. But this one does. It's basically a maze, the agent's in a grid, and there are walls blocking the path and then there are various exits. The actions are north, south, east, and west and there's noise. So usually, when you take an action, it does what you expect, and sometimes it does something different. We talked last time about the details. 10% of the time, you sort of go 90 degrees to the left of what you expected. 10% of the time, you go 90 degrees to the right of what you expected. And if there's a wall in your way, you stay put. This MDP has rewards that we talk about as a living reward, which is a possibly 0, tiny little reward that you get step by step every time even though the game hasn't ended, and then a large reward at the end when you take the exit action from a terminal square. And here, the big rewards are the plus 1 and the minus 1, and the living reward is some value, maybe like minus 0.01. That terminology of a living reward, that's grid world terminology. In a generic MDP, there's just a reward function, and every step, you get a reward. Maybe it's 0, maybe it's not. It's just some reward, R. The goal, in general for MDPs, is to maximize the sum of rewards. And last time, we talked about this idea that rewards further out in the future maybe should be discounted, either for algorithmic convenience or because it actually reflects that rewards further out in time are worth less to the agent. 
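(As a rough sketch of those ingredients in code: the 80/10/10 noise model, living reward, and discount are the grid world conventions just described; the representation itself is an assumption for illustration, not the course's project code.)

LEFT_OF  = {"north": "west", "west": "south", "south": "east", "east": "north"}
RIGHT_OF = {"north": "east", "east": "south", "south": "west", "west": "north"}

def transitions(state, action, move, noise=0.2):
    # T(s, a, s') for grid world: with probability 1 - noise you go
    # where you intended; the rest splits between slipping 90 degrees
    # left and right. move(state, direction) applies one step and
    # stays put when a wall is in the way.
    return [
        (move(state, action), 1.0 - noise),
        (move(state, LEFT_OF[action]), noise / 2),
        (move(state, RIGHT_OF[action]), noise / 2),
    ]

LIVING_REWARD = -0.01  # tiny per-step reward while the game is on
GAMMA = 0.9            # a reward one step later is worth 0.9 times as much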
To recap, MDPs in general, not just grid world-- even though grid world will often be our running example-- MDPs are nondeterministic search processes. So like search, there's a set of states, S, and it's fully observed so you know what state you're in. Later on in the course, you may not even know that. But right now, there's a set of states, S. You know what state you're in. There's a set of actions, and you know what actions are available to you. What's different from search is there's now a transition function. Instead of actions having a successor, meaning you take this action and you end up in this state, there is now a distribution over successors given by the transition function. So whenever you see T of S, A, S prime, you think, that's the probability that if I'm in state S and I take action A, the result will be S prime. Now, it's nondeterministic in the sense that from S, A, I don't know what S prime I'm going to land on. But it is known in the sense that I know what probability each outcome has, and that's important. And that's going to become even more important towards the end of today, and especially next week. There's then a bunch of rewards, and for every S, A, S prime, you get a reward. So in a state S, taking action A, you'll get a reward. That reward, in general, is going to depend not just on S and A, but also on what happens. So if your action isn't successful, you may not get the same reward as if it is. And then there's a start state. We also talked about certain quantities that are going to show up over and over again in formulating and solving MDPs. They're also going to show up over and over again when we talk about reinforcement learning, so it's good to get really clear on what they are today. So these are quantities that we can define mathematically, and then we can produce algorithms for computing them either incrementally or from each other, as we'll do today. So the important one that we talk about when we think about taking an action is the policy. A policy is a map from states to actions. So I give you a state, you tell me what to do. That's the policy. Search had a plan, and you could have a plan because you knew what was going to happen. Everything was deterministic. Now we have policies. Whatever state you end up in, it tells you what to do. It might be an explicit policy that lists the actions like we've seen in grid world. It might be an implicit policy that requires some computation like running expectimax. In addition to policies, which map from states to actions, we had a notion of utility. The MDP has a notion of reward, so that function R there is a reward. Every time step, you get a reward. It might be big, it might be small, it might be 0, but every time step, you get what's called an instantaneous reward. A utility is the sum of all of those rewards, or the sum of a discounted series of those rewards. So it's a possibly discounted sum of rewards. That's the utility. All the agents we have are maximizing their expected utility, and utilities here are the sum of discounted rewards. We talked about values. This is important, and I'll expand this in the next slide. Values are a function from states to numbers. So in that sense, they're a lot like a utility. What is a value of a state? It is the expected utility that you will get from that state. And you say, well, doesn't the utility sort of depend on a bunch of things? Doesn't it depend on what I do? Yes, but when we talk about the value, it's the value under acting optimally. 
You might not know what that is, but that's the mathematical definition of value. But then you say, also, won't the rewards I get from a state depend not only on the policy I act according to, but also on what happens? Yes, and the value will be an average over everything that might happen, where that average is over the various outcomes of the actions, and we'll see that. And in an expectimax, that's what the chance nodes do. There's also Q values which, I think, are the least intuitive of these quantities. A Q value is the expected future utility, not from being in a state, but from being in a Q state, which is a chance node in an expectimax tree. So whenever you think about an MDP, I think this is just the most useful diagram you can have in your head. In general, you're at some state S, and that's where you are right now. And when you're in a state S, you get to choose an action A. So any time you're thinking about these actions, you're never going to pick one at random. You're going to pick the best one. You're always going to be maximizing over A, unless you're told a specific A to evaluate. OK, so we max over A. When I'm in a state and an action, I get to a chance node. This represents having committed to the action. I'm in the state. It's too late to wish I was in another state. I've picked an action. It's too late to pick another action. But I'm not sure what S prime is going to happen yet, because I don't know which of the possible outcomes is going to obtain. So there's this range of S primes you could end up with, and in general, we will average over those. And so that basic kind of thought that you're in a state, you'll choose an action, and then some S prime will result according to the transition function and giving you a reward for that time step, that's the basic cycle of an MDP. And then once you get to S prime, there's going to be some future. And in general in these algorithms, we will plug in some quantity as a placeholder for the future, because if we expanded the whole thing out, it would be this like big nested equation that would be really difficult to handle. So we have these optimal quantities that we are going to be writing down expressions that either define them in terms of each other or define procedures which compute them. These quantities may all start to blur together. If they haven't blurred together already, they may blur together today when we start talking about variations of them. It's important to keep them-- especially their definitions, it's important to keep them separate. So let's pull up the grid world, and we'll look and see what values, Q values, and policies look like. And now here, you can see there are stars. Whenever I talk about values, if I want to be extra clear that I'm talking about the value of the state, meaning the average utility, the expected utility from that state, if I want to be clear that I mean under optimal choice of action, I'll put a star. So V star is the optimal values, Q star is the optimal Q values, and pi star-- there are many policies pi. Some tell you to do wise things, and some tell you to do unwise things. Pi star indicates an optimal policy. All right. Let's bring up the demo. All right, so what does this demo do? This demo is basically your project three. And what this has done is it has run value iteration, the algorithm from last time which I'll recap quickly today. It's run value iteration for 100 iterations, which is basically enough to converge on this MDP. 
So what's going on, this is a grid world instance with a plus 1 and a minus 1 reward at the end in those two upper right squares. And then in addition, there is a discount factor. Here, let me move this so you can see all the details. OK, so the discount here is 0.9. That means every step the reward is pushed into the future, it's going to be worth 0.9 times as much. And so if it's going to take you 10 steps to get to that 1.0, it's going to be worth less to you. And then, there's also a noise here. And in this case, I think there's not a living reward. So there's a noise of 0.2 as before. So what are these numbers? These numbers represent values. So here, that's easy. If you're in this square here, the only possible future you have is the one where you get a 1, and then the game ends. What does this 0.85 mean? Well, there's a lot of things that can happen. So let's say the policy is the one that's indicated here with the arrows, which is the optimal policy. If I act, most of the time, I go into the exit, and I get that discounted 1. Except, 20% of the time, something different happens. Sometimes I'm still in this square, and then I go into the exit. Sometimes I slip to the hazard square that says 0.57 here and so on. And so all of those possible futures, each has a weight. Because I know what actions I'm going to take, I can figure out that average. That's what that value is, it's the average utility you will get under optimal play, which in that case, is the policy that's shown. Let me bring that back for a second. Those are the values of the states. And remember, states also had four actions for the states that aren't exits, and each of those actions corresponds to a Q state. So these are the Q values for those states. And if you remember from this-- I don't know whether you can see my cursor well here, but this one that was 0.85 that's just to the left of the 1.0 square, you can now see that value 0.85 that came from optimal play from that, is associated with the action of going east. There are other actions that belong to that state, so there are three other Q states from that state, and those all have lower values. So Q values, some are going to be high, some are going to be low. In the same way that some states are good, and some are bad, and you should probably avoid the bad ones, some Q states are good and some are bad, and you should probably avoid the bad ones. OK. The difference is you have more direct control over what Q states you pick because you get to pick A directly. You don't get to pick S except by arranging a sequence of actions to get into that state. OK. All right. OK. Last time, we talked about the Bellman equations. Actually, there's a whole bunch of things that can be called Bellman equations. Bellman equations are basically any equations that write down these quantities from MDPs, values, Q values, policies in terms of others of those quantities, usually with a one-step look ahead. And they usually look like a little tiny fragment of an expectimax calculation. It basically boils down to this. If you want to do optimal things-- because we've been talking about V star, V star, V star, that's the optimal value from a state. How the heck am I supposed to calculate how many points I'm going to get playing optimally if I don't even know what the policy is? The whole point of these algorithms is to find optimal policies in most cases. 
So what the Bellman equations do is they break down that notion of optimality, which is this nebulous thing, into a one step mutual recursion that lets you nail down a property that optimal values would have into a system of equations you can then solve. And they look like this. They basically say, whatever the optimal value is, it's going to look like doing the right thing where I plug in for the future other optimal values. Of course, I don't know them, but you know, that'll be a problem for the algorithm, not for the math. So that definition of optimal utility that we have via sort of this expectimax computation gives a one step look ahead relationship. We talked about this last time, so I'll go through this relatively quickly, but I think it's good to have it fresh for today because you're going to see variations of these equations. So what is the value of being in state S? What's that? You think, what I really want to know is what action should I take? OK, we'll get to that. I promise. What is the value for state S? So V star, the optimal value in state S, is what I will get from state S on average if I play optimally. Well, we know what that is. That's a max over these chance nodes that are right underneath. Well, what's the value of those chance nodes? Well they have values. Their values are Q values. Q of S comma A. And so we get this relationship between values and Q values, which is really simple. It says the optimum score from a state is going to be the score of the best Q state leaving that state. What does that mean? That means you maximize over all the actions, and you pick the best Q underneath. Now of course, that doesn't really help you identify what V star is because you don't know what Q star is either. So you need to define Q in terms of something. Well, you'll do that in terms of the next layer of the tree, because the next layer of the tree is, again, max nodes. These are values, but they're values of different states. So we can get this relation of states up at the top to states at the bottom, and that relation of states at sort of some time step to states one layer deeper in the tree, that's what the Bellman equations say. So what is Q star? Well, this one is probably worth writing out quickly. So Q star-- so the value, the expectimax value of a chance node is going to be the average over all the possible outcomes of that action from that state. So we're going to have to average over all the outcomes. Each outcome has a weight. That weight is T of S, A, S prime. That's the conditional probability of S prime given S and A. Now, for each S prime, I need to know what score I'm going to get if I land in that S prime. Well, right that instant, we're going to get a reward, R of S, A, S prime, and then I'm going to land in S prime. What's going to happen then? Well, of course, I will play optimally. What does that mean? I have no idea, but I have a symbol for it. And the symbol for it is V star of S prime. OK, and there's a gamma in there to discount things in the future. So what this does, if you inline them-- let me make my marginal handwriting go away and be replaced with beautiful [INAUDIBLE]. So we can define V in terms of Q. That is the recursion where you write the value of a max node in terms of its child chance nodes. You can write Q in terms of V. That's where you say that a chance node is the average of its children in expectimax. And if you inline that, you get the standard form of the Bellman equation. You'll notice the Q's are gone. 
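(Written out in standard notation, these are just the relations the professor spells verbally, nothing beyond them:)

V^*(s) = \max_a Q^*(s, a)

Q^*(s, a) = \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V^*(s') \right]

and inlining Q^* gives the standard form:

V^*(s) = \max_a \sum_{s'} T(s, a, s') \left[ R(s, a, s') + \gamma V^*(s') \right]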
That's the variable that you inline. So this here is the standard form. What does it say? It says, if I want to compute optimal values, which like, a minute ago, I wasn't really even sure what they were. Now I have an expression for them. It says the optimal value has the property that it's the best of the actions that you can take from that state where each action has a value that is an instantaneous reward plus a discounted optimal future, blended together by your averaging function. So that's the Bellman equation for optimal values. It relates V stars to V stars. And you say, does that help me? Because I'm trying to find V stars, and if I need V stars to get V stars, how am I ahead? But this is just how systems of equations work. This characterizes the optimal values. It is not an algorithm for computing it. But we're a step ahead of where we were before. We have a system of equations we're trying to solve, and that's a precise thing because now, I have techniques for solving systems of equations. OK. Any questions on that? That's the core bit. OK, so we have this algorithm. Value iteration. I'm not going to go through an example, but I want to show how it relates to the equation I just showed. So Bellman equations characterize the optimal values. They say, V star has this property that it's sort of equal to this expectimax fragment with other V stars plugged in for the future. So we're defining V star in terms of V star, right? That is a system of equations. Can I solve this one? Well, it's not the easiest system of equations because of that max. Without that max, it would be a linear system. But with that max, it's kind of hard. Value iteration, which we talked about last time and ran an example of, computes these values, and it computes them using an update that looks exactly like the Bellman equation and is called a Bellman update. So the equation has the same quantities on the left and right. The update does not. So in the update, what we do is we say, well, I don't know how to solve that system of equations directly, but if I imagine I had an approximation V K to the values of all the states-- maybe it's a bad approximation. Maybe it's 0. In fact, V 0 will be 0. If I had an approximation to the values of all the states, I could get a better approximation, or at least a new approximation to the values of all the states, by running from each state, a one step, one ply expectimax search, plugging in my old approximation as the future value. So here, what I would do is I would say, well, I want to know the value of this, so I'm going to run expectimax. I'm going to max over the children. For each child, I'm going to average over its children. And then, when I have to put in something that represents the whole future, I'm going to plug in my old estimate. Right? And we also had these subscripts K. K represented not just the iteration of the algorithm, but how many rewards were taken into account in that computation, so V 0 takes into account 0 rewards. You produce V 1, which is equivalent to computing over a depth 1 tree. Depth 2, depth 3, depth 4. And as you run this iteration over and over again, you get successive approximations, and I told you but did not prove to you that they will converge. OK. Any questions on that? Yep. STUDENT: So are we going depth one, depth two, and depth three, or [INAUDIBLE]. PROFESSOR: Yeah. So the question is, are you going sort of down the tree kind of like expectimax? This is sort of like a lot of dynamic programs, everything's been flipped around. 
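(As a minimal sketch of that bottom-up update, not the course's project code: it assumes states is iterable, actions(s) returns the legal actions, T(s, a) returns (next state, probability) pairs, and R(s, a, s2) is the reward.)

def value_iteration_step(V, states, actions, T, R, gamma):
    # One Bellman update: V_{k+1}(s) = max_a sum_{s'} T(s,a,s') [R(s,a,s') + gamma V_k(s')].
    new_V = {}
    for s in states:
        acts = actions(s)
        if not acts:  # terminal state: nothing left to collect
            new_V[s] = 0.0
            continue
        new_V[s] = max(
            sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))
            for a in acts
        )
    return new_V

def value_iteration(states, actions, T, R, gamma, iterations=100):
    V = {s: 0.0 for s in states}  # V_0: zero rewards taken into account
    for _ in range(iterations):
        V = value_iteration_step(V, states, actions, T, R, gamma)
    return V

Each call builds V K plus 1 from V K without mutating it, which is exactly the flipped-around dynamic program being described here.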
We compute V 0 which represents all of the possible depth 0 trees. Right? V 0 of this state, V 0 of this state, and so on, and you compute all the V 0's. And so now I know what any depth 0 expectimax will be, which is 0. Then I compute all the V 1's, which is all the depth 1 trees. And then I compute all the V 2's. Now, V 2 trees would start to get expensive, except I'm just going to plug in my V 1 answers after one layer. So each layer of this is like one ply of expectimax from every state, and it'll just be grabbing things from my cache from the previous approximation. So as I go, I am getting the values from deeper and deeper trees. But it sort of feels like you start at the bottom and work your way up. OK? Good question. All right. So value iteration. Last time, we talked about these time limited values. I'm not going to get into that again. But from here, I can actually just look at this and say that was a system of equations, and I am solving it using a fixed point solution method. I take values for my variables, I push them from one side of the equation to the other, and then I do it again and again and again, and if I find a fixed point, I'm solved. Now of course, with fixed point methods, we don't know it's going to converge, but this is a fixed point method and I'm claiming to you it will converge. OK? How do we know it will converge? I just so happen to have a slide on it. OK, so I'm going to sketch a proof. All right. So if you're totally clear on value iteration, great. If you're still trying to kind of figure out, what is this update, all you really need to know is that the V K's, each a vector of values for each state, form a sequence of approximations to the values which may or may not converge. OK, and the first one is going to be 0. Moreover, V K represents the result of running expectimax over a depth K tree or equivalently the expected utility for K steps starting in a given state. OK. That's what you need to have in order to follow the sketch here. So how do we know that these vectors of V K are going to converge, provided they are indeed computing these depth K computations? Case one, the actual tree of the MDP has a maximum depth. Well then once I get to that depth, I actually have untruncated exact values, and I'm done. But that's a little weird. That only works for MDPs that actually, when you unroll them from every single state, they unroll into a tree that terminates within M steps every single time. That usually only happens if you actually have a timer in your MDP and you have some time limited state space. Case two, this is the general case. And in this case, in order to show that it's going to converge, we assume that the discount gamma is less than 1. And the argument goes something like this. Well, what does V K compute? Well for any given state, V K computes the result of running expectimax to depth K. So K time steps into the future. V K plus 1 from each state looks at K plus 1 steps into the future, so the tree is one layer deeper, but they're otherwise identical. OK. So we think, well, how different can it be if you take a tree of depth K and another tree of depth K plus 1 that's equivalent for the top K layers? How different can they possibly be? Well, we can think of this one on the left as actually being depth K plus 1, because we can pad out the bottom with what value? STUDENT: 0. PROFESSOR: 0. OK? So the trees are the same, except the bottom layers are different. On the left, there's the value 0 on the bottom layer. On the right, what's the value? Who knows? 
It's whatever the rewards in the MDP are, but at worst, it's sort of R min, some lower bound on the rewards, and at most, it's R max. So there's a range, there's a bound. OK? So at worst, it's one. At best, it's the other. Now, up here, things are undiscounted. Down here, things are discounted pretty heavily. Down here, things are discounted by gamma to the K. So all those rewards down there, they're pretty small when you add them together with the stuff up above. OK. And so when I look and I say, what pops out of this calculation? Well there's this max and the average in the max, but all of those things that I'm maxing and averaging and maxing, in one case, have a 0 at the end, and in the other case, have one of these other terms at the end. So the difference between them is at most bounded by the difference between R min and R max here, discounted by gamma to the K. So these two values of the root can't be that different, because they're the same until the bottom and that bottom is heavily discounted. So as K increases, that difference between the values gets pushed further and further, which means it has more and more layers of gamma, and therefore, the difference between successive rounds of this algorithm's approximations is going to shrink in this way. OK? That's a sketch. That's not all the details, but that's the basic idea. It's a contraction argument. OK. All right. That was louder than I expected. Sorry. Policy methods. Let's talk about policy methods here. So what have we talked about so far? We talked about important quantities in MDPs. We talked a lot about what the value of a state is and how I can compute it in terms of values of other states, which then gave us the value iteration algorithm. Except this is weird in a couple of ways. One, I don't think any of you woke up this morning thinking, I really want to know how to find the value of a state. In the same way that when you run minimax, you don't really care about the minimax value of the root. You're running minimax to get the action at the root, so we've got to like anchor this back to policies, because we're trying to figure out how to act. The other thing is, value iteration, as we saw last time, can be pretty slow, and so we're going to talk about better methods to do that. So we're going to talk about methods that look at policies, evaluating policies, improving policies, and extracting policies. And then we'll be in a position to move on to reinforcement learning. There's a question. STUDENT: Yeah. What's the proof if, like, when gamma's 1? PROFESSOR: What's the proof if gamma's 1? If gamma's 1, they may not converge, because you can easily have MDPs where the values diverge. Even the values themselves may not be finite, so all kinds of things. Now there are other cases that you can imagine, like if you can guarantee an absorbing state or something, it becomes analogous to the fixed depth. It's a great question. Yeah, it's a very important point. When gamma's 1, you don't really have these convergence guarantees. All right, policy methods. First one is policy evaluation. So up until now, we've been talking about calculating quantities that are optimal quantities. What is the value of this state if I act optimally? OK, policy evaluation's simpler. In policy evaluation, someone has given you a policy. It may be good, it may be bad, it may be optimal, it may be terrible. All you want to know is, how good is this policy? Tell me for each state what my score will be if I do the thing written on this map. 
OK, and the answer might be, all right, it's bad, or it's really good. How are computations different if you have a fixed policy? Well it turns out they're much, much easier. And the short reason why they're easier is all those maxes go away. You don't have to think about what's the best action. Someone has told you the action. All you have to think about is the different possible outcomes, and that just forms some big linear system that's a lot easier to work with. So in expectimax, or a general MDP when you're computing an optimal quantity, which is what expectimax does, you form a computation tree that considers all the possible actions so that you can max over them and thereby choose the best one. OK. So what do you do? Well you take your state, you max over all the actions, and then for each of those possible actions, you compute the value of that action, which is the Q value. OK? That's what happens when you're trying to do optimal things. This is the computation of V star of S. That's what it looks like. And then, of course, it continues down. What if you had some fixed policy? Well, now you don't have to act optimally anymore. You just have to act the way pi tells you. You just have to follow pi. So if pi were telling you what to do, we wouldn't be trying to compute V star of S, the score or utility under optimal action, which is hard because we don't know what the optimal action is. Now we're in this much easier world where all we have to do is figure out what our average score will be if we do pi. We know what pi is. It's given to us as input, so we only have to evaluate this one policy. And that means that when you're at the equivalent of a max node here, and you're computing the value of a state, it's a value according to pi. There's only one edge going out of it. There's only one action. Pi is telling you to do pi of S, so you do it. Now of course, there's still the same sum you have to think about over all the possible outcomes. The policy won't tell you what will happen. It will just tell you what to do. So it's the max nodes that get simpler. All right? So let's think about utilities for a fixed policy. In some ways, these, I think, are easier to think about than the optimal ones. So another basic operation with MDPs is to compute the utility of a state, not under optimal play but under a fixed, presumably non-optimal, and general policy. So let's define the utility of a state under a fixed policy pi. V pi of S is going to be the expected total discounted rewards starting in that state and following pi. That's again, given by this tree, except now there are no max nodes anymore. There's just pi nodes and then chance nodes, so we can compute that out with the same kind of recursive relation. So I can say, well I don't know what V pi of a state S is, but I can look at my fragment and I can say, luckily, I don't have to max over A anymore, but I still need to average over what's going to happen when I do pi of S. So I whip out my trusty average. I average over the possible S primes that could happen if I did pi of S from state S, and I take an average. Well the probability that S prime will happen if I take pi of S as my action is that transition function. And then what score will I get? Well, I'll get a reward right then. It'll be the reward corresponding to being in state S, taking pi of S, and then landing in this S prime, and I'm going to consider them all. Plus, remember the utility is the current reward plus discounted future rewards. 
So I discount my future, and then I plug in my future rewards. My future rewards are from state S prime, and the future value is not the optimal value from S prime, but presuming I continue to do what pi says, V pi of S prime. So this looks a lot like the equation before, except you'll notice the max over A has been replaced by A equals pi of S, so it's a little easier. There it is. And this is actually-- if you look at this now, this is now a linear system of equations. So in the back of your head, you just think, even if I totally space this algorithm I'm about to learn, I can always just solve linear equations by sticking them in MATLAB or something. All right, so let's think of an example. Here is one of our robots in a grid world, and this robot has been handed the always go right policy. This is a policy. Is it a good policy? It's a pretty bad policy. How likely are you to actually make it across the bridge? Pretty unlikely. You have to like-- the robot's going to continually try to throw itself off the bridge, and like, maybe it fails over and over again and gets to the other side. This is a bad policy, OK? Here's another policy. This one's a presumably optimal one of always go forward. OK, so here are two policies under the same MDP. Each state is going to have a value for the first policy and each state is going to have a value for the second policy. In general, the values are going to be higher for the second policy because you accumulate more rewards. But every policy has a value function, the values are just sometimes bad. OK. From these scenarios that I showed you here with the different policies, here are the actual values of those policies. So of course, the exact numbers depend on what we associate with falling off the cliff. Here, it's minus 10. Getting to the edge is 100, and then what's the discount? What's the living reward? So don't worry about the specific numbers, but for some reasonable setting of this MDP, you can see that always go right has a really, really bad-- so V of always go right is really bad unless you happen to be in the exit square, because you have no choice but to get your reward. It's like, kind of not too bad from this state, because there's a decent enough chance that you'll fail to throw yourself off the cliff, so you get a couple points. OK? Always go forward, what's that V? It also has an identically shaped value function. Each square has a value, except now the values are much better, because for most states, your expected discounted rewards will be higher. It's a better policy. So one way to find a good policy is to enumerate all the policies, evaluate them all, and pick the one where the numbers are the highest. It's not a good algorithm, but the point here is you can think of the policy as a thing you search over, and we're going to start doing that right now. Any questions? All right, policy evaluation. One trick we do over and over again with these Bellman systems of equations, or should I say systems of Bellman equations, is we take these equality equations that we don't know how to solve, we turn the equalities into updates, and then, provided certain conditions hold, we know that those updates will converge to the right fixed points. So how do we calculate V's? Well, one idea is we can turn these recursive equations into updates. It's just like value iteration, except it's easier. We start off by saying, if you have 0 time steps left, my first approximation is every state gets 0 rewards. That's easy. 
But then I say, what will I get on average after K plus 1 steps of following pi from state S, and I'll do this for every state. I say well, I can figure out what happens next. The next thing is going to be I'm going to take action pi of S. And then, well, the chance node kicks in. So S prime will be my next state. I don't know which one, so I have to average over all the states that can happen from state S and action pi of S. So I average over all the S primes. For each one of these, I will get a reward of R of S, pi of S, S prime, and then a discounted future. Well what's the future look like? Well, I have K plus 1 steps. I just took a step. That means my future only has K steps. It's of state S prime, and it's a future from following pi for K steps, not a starred future, which would be optimal action. So here you go. Magic [INAUDIBLE] appear. OK. So here, now, you have an update. If this arrow were an equation, and the K plus 1 and K were gone, this would be the Bellman equation. I take the equation, I turn it into an update. It's now a fixed point solution method that represents a dynamic program for solving this. You would look at this, and you say, that looks like value iteration, except instead of maxing, I just take the action I'm stuck with. That's right. It's got a name. It's called policy evaluation, but it is just value iteration where you don't do a max. OK, so you run this, and you run this, and you run this, and you'll eventually get a vector where, for each state, you have a value that tells you your average score from that state. This only makes sense when the number of states is manageable, and the reason for that is your efficiency here is S squared per iteration. So I have to do this thing S times per iteration. I have to visit each state and compute its new approximation. And then for each state, I need to loop over all of the possible outcomes of the action I've been told to take. Right? So that's another factor of S. And you think, wait, but each state probably doesn't lead to every possible other state. That's right. Most MDPs are not fully connected, and so sometimes the branching factor here is much less than S. This is better than value iteration because value iteration also had a max in here, and that max gave rise to another factor of A, which is much more expensive. OK. And again, as we talked about before, if you didn't-- now that the maxes are gone, there's nothing nasty here. It's a system of linear equations and we know how to solve those. OK, so now we have-- what was that? What just happened? Policy evaluation, input a policy, mapping from states to actions. Output a vector of values. Not optimal values, pi values. If pi happened to be optimal, they'd be optimal values. But in general, you've just evaluated some completely arbitrary policy pi. Now it's time for the opposite. Policy evaluation takes a policy and produces values. Now we're going to take values and produce a policy. So this is like, you sit down, you're going to play the grandmaster at chess, and you have a secret. You, through some magical means, have access to the value of every configuration of the chessboard. Great. Does this help you? How do you turn values into knowing what action to take? Because you sit down, and you're like, checkmate. Right? That's not really how the game works, right? You need to actually look at the board and pick a move. So how are we going to turn scores of states into moves? And so you think, well that's maybe not too hard. 
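(Before the extraction discussion continues, here is a minimal sketch of the evaluation update just described, under the same interface assumptions as the value iteration sketch earlier; the terminal-state handling is an assumption for illustration.)

def policy_evaluation(pi, states, T, R, gamma, iterations=100):
    # V^pi update: V_{k+1}(s) = sum_{s'} T(s, pi(s), s') [R(s, pi(s), s') + gamma V_k(s')].
    # No max over actions: pi (a dict from state to action) fixes the action.
    V = {s: 0.0 for s in states}
    for _ in range(iterations):
        new_V = {}
        for s in states:
            if s not in pi:  # terminal state: nothing left to collect
                new_V[s] = 0.0
                continue
            a = pi[s]
            new_V[s] = sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))
        V = new_V
    return V

Because the max is gone, you could equally well solve the linear system directly (for example with numpy.linalg.solve) instead of iterating; the iterative form is shown because it mirrors the update on the slide.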
I'll do a look ahead, I'll see what I can do, and then look at the values. That's basically the idea, but let's dig in a little more, and we'll see something that's actually very deep that will show up next week as well with reinforcement learning. How are we going to compute actions from values? OK, let's imagine somebody gives you optimal values, which is a big deal, right? These optimal values may be intractable to compute. You have optimal values. How are you going to extract a policy from them? Or equivalently, what policy do these values imply? Well here's a little grid world. And in this grid world, there are values on every state. And these are the optimal values that have been computed through value iteration. So let's think, how should we act? Well, let's imagine you were in this interesting state here, the 0.89. OK? And shown on this slide is the actual optimal policy. So in this case, what are you supposed to do from here? You're supposed to go west and do the shimmy thing. So this is a setting of the grid world where the right thing is to do the shimmy thing. But if you just looked at the values and you didn't see that arrow, what would you do? You'd look at it, you'd be like, 0.89. I guess that's OK. What should I do? Well I'd like that 0.98 please, except you don't have an action that gets you to that square. You've got north, south, east, and west, and they do a variety of things with noise, right? And so, even though you might wish you were in a state, you don't get to do that. You just need to decide what action is best. So you think, well, how good is north? Like, is it going to get me to the 0.98, or is it going to drop me in the pit, or what? And so in order to figure out how to act, you need to unroll all of your actions far enough in the expectimax tree that you can plug in these magical optimal values that you've been given. So acting's actually sort of a pain in the butt, and it looks like this. We need to say, all right, I need to know the optimal action for S. I have the optimal values, but unfortunately, I don't have the optimal actions yet. So, let's start doing an expectimax until I find a place to put these optimal values that I've been so luckily given. So what I'll do is, I'm going to have to do some computation for every A, because I'm going to have to look at all my actions and see which one of these actions actually achieves this value that I know to be the value of this state. Right? And so what I'll do is, I'll look at all the actions A, and for each one, I'll say, well, OK, for that action, some S prime is going to happen. For that S prime, I'm going to get a reward. The reward's going to correspond to being in state S, taking action A, and landing in S prime, plus a future value, which luckily I know for every S prime. Can't forget my discount. I can't forget the fact that S prime was not deterministic so I have to average over all the possible S primes. So I have to do this computation here. What is this computation here? That's a chance node. That's a Q value, right? I have to compute that again, but I'll plug in V star. OK? Now what do I do with A? I'm going to compute each of these Q values, and I'm going to say, all right, north was the best. I'm going to do north. That's almost a max over A, but a max over A would be a number. It'd be like 7.6. I don't want to know that 7.6 is the answer. I want to know that north is the action which has the maximum value. 
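(In code, that one step of look ahead is a Q value computation followed by picking the best action; a sketch under the same interface assumptions as before.)

def q_from_values(V, s, a, T, R, gamma):
    # Q(s, a) against the given values: average over outcomes of
    # immediate reward plus discounted future value.
    return sum(p * (R(s, a, s2) + gamma * V[s2]) for s2, p in T(s, a))

def extract_policy(V, states, actions, T, R, gamma):
    # One step look ahead: in each non-terminal state, pick the action
    # whose Q value is biggest -- the action itself, not the number.
    return {
        s: max(actions(s), key=lambda a: q_from_values(V, s, a, T, R, gamma))
        for s in states if actions(s)
    }

Python's max with a key function returns the winning action rather than its value, which is the operation the lecture names next.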
The way we write that, which many of you have probably seen in some other context, is arg max. So I say, I want the arg max over A, the A which gives you the maximum value of this unrolled chance node computation. So that's sort of a pain. Even though I have the optimal values, I still need to do a layer of expectimax to figure out what actions give rise to them. Choosing actions from values is annoying. OK? All right, there it is. This thing where I give you values and you extract a policy from them, it is called policy extraction and it gets the policy that's implied by those values. If those values are the optimal values, it will extract for you the optimal policy. If those values are some other values, it'll extract some other policy, but it'll extract a policy that's sort of driven by those values as a one step look ahead. All right. On the other hand, these Q states, which are weird and that's why we don't have a name for them in common terminology, they're actually really nice for this purpose. Because if instead of giving you the values, somebody gave you the optimal Q values, it would be really, really, really easy to select actions. Because if you were in this state here, where we were trying to figure out how do we get that 0.89, well it's really easy. That 0.89 lives on the Q state corresponding to that square and west. That square and north has a different value. It's 0.76. So if you have the Q values, you can just look around at all the actions, and you know their Q values have been computed for you already. So how should you act? It's trivial. You take an arg max of the Q values surrounding you. The moral of the story is that actions are way easier to select from Q values than values. This observation is basically what unlocked modern reinforcement learning, and we'll talk about that next week. OK? OK, so, let's see. We'll take a two minute break now, then we'll talk about an algorithm that combines these two ideas into something called policy iteration, and we'll get our first taste of reinforcement learning. OK? So two minutes. There's a request to go back to the previous slide. I will do that. By the way, these slides are always up before lecture if you want them. Actually, if you really click around, the lecture is up before lecture, because we have past lectures recorded. OK. All right. Policy iteration. Once you have policy evaluation, which takes a policy and produces values, and you have policy extraction, which takes values and figures out what policy they imply, policy iteration's actually a really simple algorithm. You just alternate those two. So why do we even do it? Well, there are some problems with value iteration. Let's take a look. Let's take a look at value iteration happening here. All right, so this is going to be our favorite grid world, and every time I hit the button, I'm going to get a round of value iteration. And on this grid world, it's like super fast. OK? So what you'll remember is-- you can think in the back of your head, two things are happening. One is we are doing an iterative algorithm that will eventually converge to the true optimal values. Great. It's also the case that if I run this for K iterations, say 7 iterations, I will have sort of the values that represent the MDP if it were truncated after 7 rewards. So after 0 rewards, my approximation is 0. And then, I update. Some of them turned non-0. And I update and I update and I update and I update and I update. And then, for a while, things are changing. 
But you'll notice now that I'm up to iteration 19, and maybe some numbers will change, but the arrows are pretty much done. So if I do this for a long time, with enough degrees of precision, some numbers are still changing, some infinite series are still slowly accumulating, but there's just not that many futures from any given state that are that long. So they add up to value, but they don't actually add up to a lot of change in the policy. This is a very common thing. So value iteration has basically got a major problem, which is that S squared A is really slow. That means you have to visit every state, and sometimes that's just a deal breaker. We'll talk about approximate methods starting next week. But even if you did that, you would then visit each state and each successor to that state, which we write as S squared, but usually it's not as bad as S squared. And then, for each one, you have to look at every possible action, so it's slow. However, the max of each state almost always doesn't change. So if you saw, I went like 100 iterations into value iteration, and the numbers were changing. So those chance nodes that compute those sums were giving different results, but the max, the one that decides which child is the one that actually supports optimal behavior, they weren't changing. The policy was fixed. So for every action other than the one that we had already landed on, that computation was wasted. Also something you saw, which is very related, is that the policy has often converged long before the values. And once the policy converges, every branch of that expectimax tree that doesn't correspond to an optimal action is wasted computation, so you're doing a factor of A of wasted computation. OK, so how can we fix that? These are in there so you can see it yourself. OK, let's do policy iteration as an alternative to this. So an alternative approach for optimal values looks like this. Step one, you're going to do a policy evaluation. You will compute the utilities, the values, not for the optimal policy, but just for some policy. On the plus side, this is going to be pretty fast, because policy evaluation is a factor of A faster than value iteration. The downside will be you have values for the wrong policy. Life's full of trade offs. Step two is you will improve your policy. Now that you have values, not of the optimal policy but of a random policy, you will do a one step look ahead improvement round where you actually consider all the actions again, and you extract a one step look ahead policy against those values. OK, that is just as slow as value iteration, as an iteration of value iteration. It is an iteration of value iteration. You can repeat those until the policy converges. That's it. So it's policy iteration. It's still optimal, because step two alone repeated would be optimal. It would be value iteration, but it can be much faster. You evaluate the policy you've got for a while, and then every now and then, you go back in, and you consider the other actions, and then you start evaluating this policy for a while. OK, so let's write that out in math. But it really is just a synthesis of these others, so these are going to look exactly like the equations from before; that is a feature, not a bug. So evaluation, we fix a policy pi, presumably a bad policy, and we find its values. And we iterate the following until it converges. OK, what is this equation?
This is the equation that says the values are an average of the results according to pi, where I plug in future pi values as my truncation function. Great. I run this for a while, I don't have to max over actions. I'm just going to follow pi, and now I know the values of all the states for pi. Now I do improvement, I do one step look ahead. This is the one that does all the work. I say, I want my new policy, I want policy i plus 1, pi sub i plus 1 of s. Well now, I take an arg max. From S, I consider all the actions once again, just like in value iteration or expectimax. And for each action, well, I do that same thing where I average all the possible outcomes and plug in a truncation function. The truncation function here came from the previous round of the policy I just evaluated. Now you think, am I going to get the same policy pi out? You're not, not in general. Because although you're plugging in values from V pi, you're plugging them in inside an admittedly small expectimax tree. And remember when we talked about minimax? We had these evaluation functions for a chess position. We bury it deep into a minimax tree or an expectimax tree. And even though that evaluation function isn't very good, if you bury it under enough layers of look ahead, it starts mattering less and less whether that approximation is correct. And so this approximation V pi is being buried under a layer of look ahead and a gamma, so it's being discounted. And so this process will improve your policy, and then you go back and forth. That's policy iteration. Any questions? Let me make the red go away. Any questions about this? OK. A very, very common state to be in right now is some mix of symbol shock-- like so many V's, so many pi's, so many Q's, and all possible configurations-- and the feeling that you have just seen the same equation like 17 times with minor variations. You have just seen the same equation 17 times with minor variations. It all comes down to just one layer of expectimax, either relating a max node to a chance node, a chance node to a max node, that's V's to Q's or Q's to V's, or maybe V's to V's. You can write out a Bellman equation for Q to Q if you want. It's all the same thing with just kind of different starting points and ending points in that expectimax tree. The other difference is whether you max over your actions, or you just take action pi. And that's the difference between computing pi values, the evaluation of policy pi, or optimal values. And so really, it is the same core of a one step look ahead expectimax, starting in different places, ending in different places, with different assumptions about optimality. So let's compare. All right. We have value iteration, we have policy iteration. They compute the exact same thing. They take an MDP. They then compute, for each state, the optimal value. That's what value iteration and policy iteration do. In value iteration, each iteration you consider both all the values and also the policy, because that place where you maxed over the A's, if you remember which one was the biggest, that's the policy. You don't track that policy, but when you take that max, you're recomputing it every time. In policy iteration, in contrast, you usually keep your policy fixed and do a bunch of tracking of value changes under that policy, and every now and then, let the policy consider other actions. If the policy's changing all the time, you're wasting your time. If the policy is changing rarely, you save a lot of time. And that's the trade off here.
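For reference, the two update rules just described, reconstructed in the notation the course uses elsewhere (the reconstruction and indexing conventions are mine, so treat this as a transcription of the spoken math). Evaluation iterates, for a fixed policy $\pi_i$, until the values converge:

$$V^{\pi_i}_{k+1}(s) \leftarrow \sum_{s'} T\big(s, \pi_i(s), s'\big)\,\big[\,R\big(s, \pi_i(s), s'\big) + \gamma\, V^{\pi_i}_k(s')\,\big]$$

Improvement then does the one step look ahead against those values:

$$\pi_{i+1}(s) = \arg\max_a \sum_{s'} T(s, a, s')\,\big[\,R(s, a, s') + \gamma\, V^{\pi_i}(s')\,\big]$$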
Usually this is faster. OK. Here's the summary, and then we're going to branch into a little reinforcement learning. Actually, before I summarize, any questions? Yep? STUDENT: [INAUDIBLE]. PROFESSOR: So for the policy update, how many rounds of that do we have to do? This is actually-- it's a great question. When you improve your values for a fixed policy, you might do this a lot. You might do this, you know, until convergence. You might do this 100x. When you do a one step look ahead, you do this once. You do a one step look ahead. STUDENT: [INAUDIBLE]. PROFESSOR: And then you go back and you do 100 times with this new policy. So there's a cycle here, which is, take your policy, evaluate it. That means you run these faster updates until convergence. STUDENT: [INAUDIBLE]. PROFESSOR: Yep, and then once you get a new policy, you go up and you evaluate it. And so you're spending most of your iterations evaluating policies, and every 100th iteration, you actually update the policy. STUDENT: [INAUDIBLE]. PROFESSOR: So the question is, does this take a lot of time? It can. It can, but in general, this is a factor of A faster. And you don't actually-- one difference in reality, this is actually a great point. One difference is that when you do this in practice, you don't start the evaluation over at 0, you start it at your old values. So if the policy hasn't changed much, you're pretty close to begin with. There are ways to speed this up. In general, it is faster. You can think of it just as value iteration where sometimes, instead of looping over the actions, you just recycle the last round's preferred action, and then it's easier to see why it would be faster. It's a good question. Any others? Yeah. STUDENT: [INAUDIBLE] PROFESSOR: Yeah, it's a good question. In general, whether it's value iteration or policy iteration, when you iterate these things, when do you stop? You have to have some notion of convergence, and that's generally based on the size of the changes, and there's different criteria. You can also do fixed iterations. But when you've got this view of an embedded optimization, the question is, how much polishing do you want to do of V pi if you're just going to change pi? And so, in general with these methods, you don't want to over optimize intermediate quantities that are just going to be discarded and turned into other quantities. And so, in general, it is not the case that you start the evaluation step at 0, and it's also not the case that you have a strict convergence. You might just run it five times for every one improvement step. These are all questions for which the answer is, there are trade offs, and there's no hard and fast answer. Basically, as long as you-- you don't even have to visit every state every time. As long as you visit every state and every action from that state infinitely often, you'll eventually converge to the right thing. These methods are very robust to being sort of juggled around in the orders in which you do limited or complete, simplified or full Bellman updates. OK. All right. Summary, and then we're going to talk about reinforcement learning a little bit. What if you have an MDP and you would like to compute optimal values? Use value iteration or policy iteration. OK? If you want to compute values for a particular policy, you use policy evaluation. It's faster.
If you have a policy that you wish to turn-- if you have values and you wish to turn them into a policy, use policy extraction, which is just a one step look ahead where you plug those values in. If those values are really good, that one step ahead policy might be really good. If those values are 0, you just did a level one expectimax. Don't expect much out of your policy, but maybe it's already better than moving randomly. OK. You look at these and you say, these are all the same. Like I said before, these are basically all variations of the Bellman updates, and they're all just one step ahead fragments. That point where you realize they're actually all the same and there's sort of a core piece that you then pin down in various ways, that is both a point of high confusion conceptually and also a point where, once you get through that, everything starts to be a little easier to remember because the arbitrariness starts to be determined by the purpose. All right, reinforcement learning. Let's play slots. OK, this is CS188 slots. Imagine you are a robot. We will pick actions for this robot. And there are two slot machines, blue and red. The blue slot machine, every time you pull the lever, gives you a dollar. It's a pretty good slot machine. Real slot machines aren't like this. I would love this slot machine. OK. The other slot machine, when you pull the lever, it either gives you $0 or $2. OK? Now, what should we do? Well, of course, we're going to formulate it as an MDP. OK, except it's a really, really simple MDP. It's even simpler than what's on the slide, so if anything, the thing on the slide over-complicates it. You have an MDP, in which your actions are you can pull the blue lever and get your dollar, or you can pull the red lever and have a little bit of excitement and get either $0 or $2. In this particular formulation, I've split it up into two states. There's really only one state. There's really just-- I'm in the state, what do I do next? But if you actually look at the MDP formalism, the reward I get depends on whether I win or lose, and so S prime needs to be different if I win or lose. So here's, if anything, an over-complication of this MDP. And if you look at it, you see that from either state, the blue action takes you-- you're a winner. You are an instant winner for a dollar. And the red action, from either state, 75% chance of giving you $2, 25% chance of giving you $0. All right, let's avoid infinite rewards. So there will be no discount, because that would just make this messy. There are 100 time steps, so the answers aren't all infinity, and we know the MDP. We know both states are going to have the same value, so we can sort of collapse that notion. Let's look at this MDP and think really hard and decide what is the optimal policy. OK, well what's going to happen if I choose the policy play blue? That's a policy. Let's evaluate it. The states are the same, so what is the value of the policy play blue, presuming that's going to be 100 time steps? How much money will I make if I always play blue 100 times? STUDENT: $100. PROFESSOR: $100. OK, how about play red? That's a policy. Let's evaluate that policy. What am I going to get? Well you look at it and you're like, I don't know what you're going to get because there's a slot machine. Right? And remember the value is the expected discounted reward, so it's the expected reward. So I'm going to go 100 steps, and on an expectation, an average, what am I going to get?
I don't know what I'll actually get, but on average, what am I going to get? STUDENT: $150. PROFESSOR: $150. OK. Here we are. You just did offline planning. We did not play slots. You looked at the MDP. You thought and you thought and you thought deep and profound thoughts, and you came to the realization that play red has value $150, play blue has value $100. What is the optimal policy for this MDP? Play red. All right, great. OK, you knew the quantities of the MDP. You determined values and policies, all of that just thinking offline using math, and the input values from the MDP. Great. Let's play. OK. I hope you solved the MDP correctly, because it's real fake money now. OK? So we're going to actually play. What should I play? I want to be optimal. STUDENT: Red. PROFESSOR: Red. OK, cool. $2. OK, but I thought I was just going to get $1.50. OK, that's an expectation, the actual thing that happens is going to be a sample. OK? $2, now what? What [INAUDIBLE] should I play? Red, $2. Now what? Red. Now? Red. Red, red, red, red, red, red. I can't stop. OK, we just played 10 times. How'd we do? We got $12. On average, what will we get? On average, we would have gotten $15, so we were a little unlucky, but not catastrophically unlucky. What we did offline when we thought about the averages and compared them and did math, that was solving an MDP. This is actually playing the real fake game. OK, we actually got samples back. They may or may not adhere to the average. The average is just an average. OK, and that was actually playing the game. Important distinction even though, yes, they were both just on PowerPoint. OK? Are you ready? Rules are going to change. You walk into the new and improved CS188 casino and the rules have changed. There is still the $1 slot machine. There is now the $0 or $2 slot machine just like before, except you don't know its payoff probability. You don't know how likely you are to win. So it's the same MDP in structure, but I no longer know the probability that I'll get the $2. Maybe I'll always get the $2. Maybe I'll never get the $2. I don't know. OK, so I just took something away from you. You don't know the MDP anymore. You know that there is an MDP-- that is a useful formalism and a useful way to think about the world-- but you don't know what the MDP is. You don't know the parameters. OK. Everything is different. You are now in a totally different world. Time to play, right? You cannot think deep thoughts now and figure out the value of the red policy. Right? You don't have the necessary information. How are you going to get the necessary information to figure out how to act optimally? We have to actually act. So let's play. Which lever should I pull? STUDENT: Red. PROFESSOR: Red? Who wants red? Who wants blue? All right, we'll go with red. OK, red. I got a $0. Must be a dud, right? It's just a sample. OK, who wants red, who wants blue? Who wants red? Who wants blue? OK, $0. Who wants red? Who wants blue? Nobody is wanting blue, but there are fewer and fewer red hands. OK, red. Red. OK. We played four times. Who wants red? All right, yeah. They're starting to come back. Red's seeming a little better because we finally got some money out of it. OK, we'll play red. Who wants red? Who wants blue? OK, I'm going to play red a bunch. Red, red, red, red, red. OK. All right. Now what should happen? We're going to play again. Let's play number 11. Who wants red? OK, we've got some die hard red fans. Who wants blue? Everybody wants blue, so I'll play blue.
What's going to happen when I play blue? I'm going to get a dollar. What are you going to want after that? Like, more blue because nothing's really changed. So one interesting thing is, once you switch to blue, there's really not a great reason to switch back on the surface here. OK, but what just happened? Let's take a step back. What just happened? You did not take an MDP and solve it, because I didn't give you the MDP. What did you do? You interacted with the real fake world, and then you took those observations, and you used them to figure out a little bit more information about the MDP. And as you gathered information about the MDP, you started to have different opinions of what you should do. And eventually, you decided that red appears to be sort of a loser. Although, in fact, it could be amazing and you were just unlucky. But from your experience, you know? All right. That was not planning. That first time when you calculated averages and then you just went for it and you pulled the red lever forever, that was offline planning. This was learning. You did reinforcement learning. There was an MDP, but you couldn't solve it with computation because you didn't know the parameters. You needed to actually act to figure out the parameters by seeing samples of the behavior of the system. You also saw basically every idea that is kind of core to reinforcement learning right there, even with that simple case. So one important idea in reinforcement learning is exploration. You have to try unknown things in order to get information. So red was, in fact, not a very good slot machine. But you didn't know that, and you had to try it. And everything that's unknown, the only way to figure out what those parameters are is to actually try it. That's called exploration. When you take an action and your payment is not like slot machine dollars coming in, what do you get paid in when you pull that red lever? You got paid in knowledge. You got paid in experience. And that experience helps you make better decisions in the future. So exploration is you have to sometimes do things for the experience rather than for the yield. Exploitation, on the other hand, is eventually, you have learned all you care to know about that red slot machine, and you're done. You're like, OK, enough exploration, it's time to pull the blue lever. It's time to exploit the knowledge I have in order to get return, right? You also discovered the concept of regret. OK, this does not mean exactly what it means informally. Regret is the idea that even if you learn intelligently and you do an optimal job given your uncertainty of trying things out, you will not do as well as if you had actually known the MDP to begin with. Regret is the difference between what you experience and the best that you could possibly have gotten sort of in retrospect. You also ran into the idea of sampling, because there's chance. You can't just try the red thing once and be like, oh, it's a $2, it's a 100% payoff. You have to do things over and over again, and that has consequences. When you do reinforcement learning, you've got to keep trying things over and over and over again. And the more complicated they are, maybe the more times you have to try them. And trying things in the real world-- this isn't simulation. It's like you actually like send that helicopter up in the air, it crashes, and you buy a new one. You might have to do that a lot. Right? Experience can cost. OK, and sampling means you have to try things repeatedly. Also difficulty. 
This is like the simplest MDP you could possibly imagine, and solving it in the case of the known MDP was trivial. You didn't need value iteration. You didn't need anything from the past few lectures. You just needed a description of the problem and your basic mathematical intuitions, and you were done. As soon as I take away that probability, suddenly it's hard. How many times should I pull that lever? Should I switch back? Should I ever switch back? Suddenly, these questions that were very easy are very hard, simply because learning is much harder. It's much harder to learn an MDP than it is to solve a known one. Any questions on any of that? OK, next time, reinforcement learning. We will think about how you should act when there are MDPs, but you don't know any of the parameters, and the only way to learn what's going on is by interacting with the world. So we will start that next lecture. |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20180906_Constraint_Satisfaction_Problems_CSPs_Part_22.txt | PROFESSOR: OK, let's get started. So today, we're going-- I'm loud again. All right. I'll try to control my enthusiasm, but it's hard, because we're going to talk about CSPs again. So today, we're going to continue talking about CSPs, and, in particular, we're going to talk about ways to solve them efficiently that extend and go beyond what we did in the last lecture. And we're also going to talk about a related but more general topic of local search methods that apply not only to CSPs but also to the other kinds of search problems that we've seen so far. So let's do a quick recap. We're going to do more solutions-- solving of CSPs, so let's remind ourselves what a constraint satisfaction problem is. Well, first of all, what are the ingredients? First of all, there are variables. The variables usually represent some quantity or abstraction that we are trying to reason about. And so in our running example of map coloring, the variables might be countries on a map. Each of those variables has a domain, which is the set of values that it might take on. And for map coloring, that was red, green, and blue. This is a great running example. It illustrates all kinds of things, but it's really important to remember that almost all CSPs are not map coloring, and almost all domains are not red, green, and blue, and almost all constraints are not inequality constraints. So you're going to see a lot of these in this example, but CSPs can have arbitrary domains-- arbitrary information hidden inside those constraints. So it's the variables, the domains, and the constraints. And when we talk about constraints in a CSP, they come in two flavors in terms of how we specify them. There's implicit constraints, and an implicit constraint is basically a little snippet of code that you run in order to tell you whether or not the variables in its input have an OK assignment, because remember, constraints look at one or more variables and say thumbs up to that assignment or thumbs down, meaning it breaks one of the rules. Each constraint is a rule. So an implicit constraint is a snippet of code that you have to execute. That's great, because it's general. But it's bad in a couple of ways, too. You actually have to run them to see what's going to happen, so there's a lot of static analysis and pre-computation that you can't do, and it can also be slower, because you often have to call out to these things in practice. The other form of constraints are explicit constraints, and in an explicit constraint, you actually literally enumerate for, say, these two variables that are being constrained. Here are the tuples that you're willing to allow. And so this is Country A, and this is Country B. I could have an implicit piece of code that looks at the two values, runs equality, negates that, and then returns whatever. An explicit constraint would say, for A, comma, B, the legal tuples are red, comma, green, green, comma, blue, and so on, leaving out the illegal ones. That's how you specify these, and in these constraint graphs, you kind of-- the line in the constraint graph is going to indicate the presence of a constraint between two variables. But that information of what's buried inside-- you have to look that up in either an implicit or an explicit constraint.
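To make the implicit-versus-explicit distinction concrete, here is a minimal sketch in Python -- the names and representation are made up for illustration, not the course's project code:

```python
# Implicit constraint: a snippet of code you run to judge an assignment.
def not_equal(value_a, value_b):
    """Thumbs up (True) if the two countries' colors differ."""
    return value_a != value_b

# Explicit constraint: literally enumerate the allowed tuples for (A, B).
COLORS = ["red", "green", "blue"]
allowed_pairs = {(a, b) for a in COLORS for b in COLORS if a != b}

assert not_equal("red", "green")              # implicit: call the code
assert ("red", "green") in allowed_pairs      # explicit: look up the tuple
assert ("red", "red") not in allowed_pairs    # the illegal pair is left out
```

The implicit version is a function you have to execute; the explicit version is a data structure you can inspect and precompute against, at the cost of enumerating everything.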
And the same idea is going to come back later with Bayes nets, where the graph structure will tell us what the dependencies are between the variables. But to actually know what the nature of those conditional probabilities are, we're going to have to look under the hood and see something that's not present in the graph. When we talk about constraints, they come in different arities. So a unary constraint is really just a restricted domain. It says this country can only be red or green, for whatever reason. There are binary constraints. This is different from a binary domain. A variable with a binary domain is either true or false or some other pair of values, but a binary constraint is a constraint that lives on top of two variables. And there are higher order constraints. You could have a three-arity constraint, a four-arity constraint, and, in general, higher-arity constraints. Some of the algorithms work with them, and some of them do not. What's the goal of a CSP? In this class, the goal is to find a solution. So when I present you with a CSP and say, please solve this, what I'm asking for is for you to produce an assignment of values from the domains to the variables that satisfies all of the constraints. So we're looking at: find any solution. There are other things you could imagine doing with a CSP, like finding all the solutions. That might be bad news, because there might be exponentially many of them. Find me the best solution. Maybe there are some weights involved. Tell me whether or not there's any solution in which this node is green. These are all questions that you could ask of a CSP, but the algorithms we're talking about today and last class don't. They're just, give me a solution. As soon as you find one, stop. We had a basic algorithm, which we're going to extend, and then find an alternative to today, called backtracking search. And we did a bunch of demos of these. And basically, backtracking search takes a CSP, and it takes a search formulation of that CSP, where the state of the search problem is a partial assignment. So at the top of that search tree is the empty assignment, where no variables have any values. And then as you go down, the successor function in that search problem takes a variable and adds a value. And so it extends the assignment, and then at the bottom of the tree are all the complete assignments in which every variable is assigned. So you can imagine there's this tree, and up here is the empty assignment, and down here are all the complete assignments. And of course, this tree doesn't look like this. It looks like this or something. It gets exponentially big exponentially fast. So although the actual implementation that we talk about is a recursive implementation, this is still depth first search, and it's going to behaviorally act just like depth first search from last week, where you could imagine that there is a fringe, and you explore the deepest thing. That fringe isn't embodied in this code as an actual queue. It's embodied in sort of the call trace, but it does the exact same thing. So what was this algorithm that we had? Well, if you have a complete assignment, great. You return it. Otherwise, you pick a variable. That's the next variable you're going to assign as you work your way down the tree. And then you're going to loop over the values. So you pick a value. If that doesn't work out, and you recurse back, you're going to pick the next value.
So you're going to have both the opportunity to choose what variable is next and the opportunity to choose what order to try the values in. And if the value is consistent, meaning adding it to the assignment does not yet break any constraints, then you're going to recurse and extend and extend. If it doesn't work, you're going to backtrack. So this is our basic backtracking search. It's not very fast. In general, CSPs-- solving CSPs is an NP-hard endeavor. And that means whatever we do today, whatever cool algorithms we come up with, there's going to be some CSPs that are just going to be, to the best of our knowledge, really, really hard to solve. It is an NP-hard problem, but there are often instances which have certain properties that allow us to solve them faster. So on top of that backtracking search, last time, we talked about several general purpose ideas to improve things and to solve the CSP faster. So this is different than, for example, A* search, where the heuristic was specific to a single problem. I have my problem. I come up with a heuristic using my own creativity to figure out what might be a good lower bound on the costs. These are general purpose ideas that will work for some CSPs but not others. And there are three classes. We talked about two of them last time. We're going to talk about the third one today. The first class of ways to speed up a CSP is by what's called filtering. And that's what we talked about last class. So we want-- when we make an assignment, it may be that things look good so far. I haven't violated a constraint so far, but in some sense, this whole subtree of the search is doomed, because there's an inevitable failure that's a consequence of the decisions I've already made. Filtering did that sort of propagation and look-ahead to check to see whether there were consequences of the assignment that I just made. And we looked at forward checking, which isn't very good, and enforcing arc consistency, which is much better as a filtering mechanism. It's still not perfect, because these are still NP-hard problems. But it's a lot better than forward checking, though it comes with a computational cost. So that's filtering. Every time I make an assignment, I look ahead to see if there's any sort of doom on the horizon, and I backtrack if there is. Another way of speeding up CSPs in general were ordering methods, so asking questions like, which variable should I assign next as I do my recursive search? And there's an important distinction here, because you are eventually going to have to do every variable. You can't luck out and not have to assign some of the variables. And since you're going to have to do them all, there's going to be easy ones and hard ones, where we can make that formal by talking about how big their remaining domains are. You might as well do the difficult ones first. This is called fail fast ordering, and this is so you work on the tricky parts of the CSP early so that when you backtrack, you're not buried deep in the tree. You don't have to backtrack across exponential stuff and have to redo a whole lot of work. You want to do your backtracking early, locally in a sort of tightly coupled way so that you figure things out, and then move on to another part of the tree. So that was which variable should be next. Minimum remaining values is a proxy for the hard parts of the problem, and that's where you want to go first. When you pick a variable, you sort of, like, charge right into the danger.
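Before moving on to value ordering, here is the plain backtracking skeleton that all of these improvements hook into -- a minimal sketch, assuming a `csp` object with `variables`, `domains`, and an `is_consistent` check (those names are mine for illustration, not the course's project code):

```python
def backtracking_search(csp, assignment=None):
    """Depth-first search over partial assignments; returns one solution or None."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(csp.variables):
        return assignment  # complete assignment: we're done
    # Pick the next unassigned variable. (MRV would instead pick the one
    # with the smallest remaining domain -- charge into the danger.)
    var = next(v for v in csp.variables if v not in assignment)
    for value in csp.domains[var]:  # value ordering goes here (e.g. LCV)
        if csp.is_consistent(var, value, assignment):  # no constraint broken yet
            assignment[var] = value
            result = backtracking_search(csp, assignment)
            if result is not None:
                return result
            del assignment[var]  # undo and try the next value
    return None  # every value failed: backtrack to the caller
```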
But once you've picked a variable, you have to decide what order you're going to try the values. And this is a very, very different story, because you don't actually have to try all the values. If the first one you try works out, great. You don't have to try the rest. So we want to be sort of pessimistic when we pick a variable, but we want to be optimistic when we pick a value, so we're going to pick values that appear to be most promising. How the heck do you operationalize that? One way to operationalize that is what's called the least constraining value. It's a little tricky even to do this, and there are more complicated things people have come up with. But the idea here is you tentatively assign a value. You run some filtering, and you check to see whether elements are vanishing out of domains left and right. And if so, this is a constraining value. But if this assignment doesn't really impact anything else and leaves most of the domains intact, this is probably a value that's likely to work out. Not all the time-- none of these are guaranteed to speed things up. But these are often very, very effective in practice, especially when you can combine them. The last thing that we're going to talk about today is how to exploit problem structure. That is how to look at the graph structure of the constraint satisfaction problem and detect whether either there's a special purpose algorithm that might run faster than the general purpose ones or maybe some way to improve the graph structure. And we'll talk about these kinds of techniques today. In order to do what we need to do today, we're going to have to extend our discussion about arc consistency. So we're going to start with arc consistency, which you will remember is this idea that you look at a pair of variables. It's one variable at the tail of the arc. That's the trunk of the car. And then it points to another variable. You are the arc consistency police. You will pull over that arc. You will check out all the values in the tail, and you'll delete them, if necessary. And we'll do a couple more examples of this. That was the notion of making an arc consistent, and we used this to build forward checking, and we also used this to do filtering based on enforcing arc consistency of an entire graph. We're going to do a third thing today. So let's first remind ourselves mechanically exactly what it is to enforce consistency of an arc versus a graph, because this distinction is going to be very important when we talk about tree-structured and-- tree-structured algorithms and consistency of higher orders. So let's do a quick example. You'll remember we had this running example of the Australia coloring graph. Here we are. We are in our backtracking search. All kinds of filtering still sit inside backtracking searches. The thing's still NP-hard. You're still, in the worst case, going to have to check everything. It's just after every step in the backtracking search, you filter some stuff in the same way that at every step in A* search, you run that heuristic. You might have to run the heuristic on every node in the state space. You might not actually gain anything, but you might gain quite a lot. So here we are in the middle of our backtracking search. So right now, this is the state we're in, where we've assigned Western Australia to red, Queensland to green, and remember, adjacent countries can't have the same color in this example.
You will also see on the right here that we have an assignment to WA and Q-- red and green-- but we also have filtering running on the other variables. So the Northern Territory, New South Wales, Victoria, and South Australia, they're all unassigned. But some of them have had domain values crossed off by previous filtering. So what we're going to do now is we're going to go around and visit arcs, making them consistent. So you're like the arc-consistency police. You're going to clean up any violations you find. And we're going to see what the consequences are. So for example, we might start with this arc. And remember, when we look at an arc, it's very myopic. You look at these two variables. And even worse, you look at them directionally. And you only look at from, in this case, V, pointing to New South Wales. You say, in the tail, in V, is there anything here that is sort of doomed, guaranteed to fail? Meaning there is no choice at New South Wales that can be assigned as an extension without violating a constraint. So we go through, and we say, red at V, that's fine, because I'll choose blue at NSW. Green at V, that's fine. I can choose whatever I want at New South Wales. Blue at V, well, that's fine, because I'll choose red. So in this case, this arc is already consistent. I can look at other arcs. So here, SA to NSW, they also share a border. So I can look at SA and say, well, there's only one thing in its trunk, which is blue. Is there an extension to NSW that is legal? Yeah, red. You say, what about blue? Well, blue is really about the other direction. So in this direction, the arc is consistent. Everything is fine. But if we visit the other direction, we might get a different answer. And so here, when we look at NSW pointing to SA, this arc is not consistent. And the reason it's not consistent is because if I pick red at NSW, I can extend that by choosing blue-- my only choice-- at SA. But if I pick blue, I'm toast, because blue and blue conflict. And so the way I make this inconsistent arc consistent is I delete stuff from the tail until it's happy. And in this case, I delete that blue. And I can continue this. So remember, this arc was nice and consistent before. But that argument of its consistency was based on the presence of blue at NSW. Blue's gone. That means some of my reasoning from before needs to be rereasoned. And so in particular, red used to be OK at V because I could pick blue at NSW. That's no longer OK. And I'm going to need to eject red here. Now, the arc is consistent again. For any choice at V, there is a legal extension at NSW. So here's the interesting case if you remember the example from before. The interesting case is between SA and NT. Neither has been assigned, yet we know from existing filtering that, if anything, they have to be blue. But they're adjacent. They can't both be blue. This arc is inconsistent. The only way to make it consistent is to start pulling things out of its trunk. I can delete that blue. But as soon as I do that, I have an empty domain. And as soon as you have an empty domain, you know that the CSP has no solution, and you have to backtrack. Why? Because as we filter, we delete values. And we have a sort of outer bound of what might be legal at that node. If you delete something, it's definitely illegal given the current assignment. But if something is present, it might still be illegal. You just can't really tell yet. But as soon as something is empty, there's no secret fourth color. We're going to backtrack.
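The mechanical step the walkthrough keeps applying -- pull the arc over, throw doomed values out of the trunk -- looks roughly like this in code (a hedged sketch: the `csp.domains` / `csp.satisfies` interface is an assumption for illustration, and the queue loop below is the repeated-visitation pattern about to be recapped):

```python
from collections import deque

def remove_inconsistent_values(csp, tail, head):
    """Make the arc tail -> head consistent by deleting doomed tail values."""
    removed = False
    for x in list(csp.domains[tail]):
        # x is doomed if no y in the head's domain extends it legally.
        if not any(csp.satisfies(tail, x, head, y) for y in csp.domains[head]):
            csp.domains[tail].remove(x)
            removed = True
    return removed

def enforce_arc_consistency(csp, arcs):
    """Visit arcs to exhaustion (the AC3 pattern).
    Returns False as soon as some domain empties -- time to backtrack."""
    queue = deque(arcs)  # all directed arcs (tail, head)
    while queue:
        tail, head = queue.popleft()
        if remove_inconsistent_values(csp, tail, head):
            if not csp.domains[tail]:
                return False  # empty domain: no secret fourth color
            # tail lost values, so arcs pointing into tail must be rechecked.
            queue.extend((t, h) for (t, h) in arcs if h == tail)
    return True
```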
OK, so that's consistency of an arc. An arc is either consistent or not. And there's an algorithm-- you can call it remove-inconsistent-values-- that makes it consistent. If you do a bunch of arcs-- you basically do them to exhaustion-- that is the algorithm AC3 for enforcing arc consistency and making the whole graph arc-consistent. And as we saw here, you have to visit arcs over and over again. That's actually OK, because every time you visit it, it's because its head has fewer values in its domain than it did before. And so you can't visit it an indefinite number of times. You can only visit it a size-of-domain number of times. So arc-consistency detects failure earlier than forward checking. If x loses a value, all the arcs incoming to x need to be rechecked. That can trigger cascades of repeated visitation of arcs. But that's OK, because it will all terminate. Another important thing to remember is after you do all this filtering, you're going to do an assignment. And then you start the filtering again. So this isn't something you run once. This is something you run once for every single node in your search tree. We talked about limitations. We're going to address some of these limitations today, though, in two very different ways. After enforcing arc-consistency, meaning taking your graph, deleting things as necessary until all the arcs are consistent, you can be in a bunch of different states. You can have one solution left, and you can greedily assign, and everything will be great. You can have multiple solutions left, for example, like the top graph here, where there's two solutions here. One of them has to be blue. The other is green. It can go either way. You can also have no solutions left and not know it. We didn't talk about how to solve this yesterday, but we're going to talk about how to solve it today or at least a way to think about it. And in this case, every arc is fine. But all together, the three arcs aren't OK. You need to look at triples. Arc-consistency is only about pairs, and so you can't detect this problem. That's actually OK, because there's still a backtracking search which will detect this problem instantly as soon as you assign one of the nodes. That's why arc-consistency still needs a backtracking search. OK, that's arc-consistency. Any questions on that? Yes. STUDENT: [INAUDIBLE] PROFESSOR: Yeah, so why does arc-consistency sometimes detect failure earlier than forward checking? Remember, failure is when you see the consequence of your existing assignment is that some node you know has no legal assignments. The more arcs you check, the more likely you are to detect such a configuration. And in particular, in arc-consistency, you sort of chain the information throughout the graph, and you can detect remote failures. Forward checking only detects failures that are right in front of you. And in fact, you were going to find them in the next step anyway, at least under certain orderings. So forward checking doesn't do a lot. It's sort of the minimum amount of filtering you need to be able to flash your filtering badge and say, I do filtering, which is important, because a lot of things, like minimum remaining values, require filtered domains to even be defined. Arc-consistency's better. Let's talk about k-consistency. So arc-consistency is a really powerful notion, which is basically the notion that for any two nodes in a graph, if you assign one, you can extend to the second. What happens at the third node? All bets are off.
But at least for every pair of nodes, you're OK. We talked about arc-consistency as being like the CSP police pulling you over and making some modification. Let's imagine, instead of being pulled over by this little guy, you get pulled over by mega Robocop here. That's the basic idea with k-consistency, where instead of just enforcing that all arcs are following the rules, we're going to enforce that all triples or quads are also following the rules. This is an expensive thing to do, but it is also powerful. And there's a trade-off between computation in the filtering and the amount of backtracking you're going to have to do. And there's no way to know in advance for sure for an arbitrary graph whether that trade-off will be in your favor. So k-consistency is a little bit of a mathy concept. I'm going to go over this now. We're going to talk about, in general, how to think about this kind of consistency. And then we're going to talk about specific algorithms that work for specific graph structures. OK, so the smallest kind of consistency is so silly it's not worth talking about. It's 1-consistency. It's sometimes called node-consistency. This says that every node's domain has at least one value that meets that node's unary constraints. This basically just means you enforce unary constraints. And when you get a CSP, you can easily just enforce the unary constraints right off the bat once and be done with it. So 1-consistency, we don't have to think too hard about. 2-consistency-- this is actually what arc-consistency is, but I'm going to state it in a particularly weird way, so that we can extend it to higher orders. Arc-consistency phrased this way says, for any pair of nodes-- so I grab two nodes in my graph-- if you have a consistent-- meaning does not violate constraints-- assignment to one, it can be extended to the other. So that means, for any assignment to the tail, there is an assignment to the head that extends that tail assignment without violating a constraint. And that's just what we talked about all along here, which says, for any assignment in the tail, there's an extension to the head that doesn't violate a constraint. If you have that property, your graph is arc consistent. And then we had an algorithm for making it so. K-consistency is a generalization of this. It says, for any k nodes-- so instead of picking up two, which is an arc, a pair, I pick up three or five or 30-- k-consistency says that if I have k nodes, and I manage to assign k minus 1 of them without breaking any constraints, that there is guaranteed to be an extension to that kth node that does not violate any constraints. It says, if you can get to k minus 1, you can get to k. Arc-consistency says, if you can get one assigned, you can get two assigned. We can talk about higher orders. As k goes up, this gets more and more expensive. Because you start talking about looking at not just pairs of domains-- triples, quads, arbitrary numbers of domains. This gets exponential in k very fast. So higher k are more expensive. In this class, you only really need to know about k equals 2, which is arc-consistency. But the general idea of the higher orders is important. So when I talk about k-consistency, it's sort of mathematically a little weird. Because it assumes that you can get to k minus 1, but who says you actually can? There is a stronger notion called strong k-consistency, which includes all the lower orders as a package deal. So say I tell you something is strong 5-consistent.
That also means it's 1-consistent, 2-consistent, 3-consistent, and 4-consistent. So here's a claim. And then we'll talk about how this is useful or not. The claim is that strong n-consistency-- where you have n nodes in your graph-- means you can solve without backtracking. Why is that true? Well, it's strong n-consistency, which means if I pick up a node, we're 1-consistent. That means there's a value that I can assign that node sitting in its domain that will not violate constraints, which just means the unary constraint. Good, lock it in, OK? We're not going to backtrack. Pick up a second node. Well, there's an arc from the first node to the second node. And because that arc is 2-consistent, arc-consistent, no matter what I picked at node number 1, there is a legal extension to node number 2. So pick one. Lock it in. Pick up the third node. Because it's 3-consistent, any assignment to the first two that has not violated a constraint can be extended to the third. So if you happen to have this, by induction, you can show that you can solve the entire thing without backtracking. This isn't super useful. I mean, it would be great if we didn't have to backtrack. But why is this not super useful? How do you know this is not super useful? Well, you know this problem you're solving is NP-hard, right? This is an AI class. Everything is NP-hard, right? So since you can solve it very quickly without backtracking, establishing strong n-consistency must have been really hard, and it is, in general. But this basic idea that if you have the right kind of consistency, you could just kind of go forward, extending without worrying about messing up your assignments so far is actually at the core of an algorithm that we can run. It just doesn't work for arbitrary graphs. And we'll talk about that really soon. So 1-consistency is free. 2-consistency is arc-consistency. And we have an algorithm for that. N-consistency is intractable, but it would be awesome if we could get it. So there's this whole middle ground in between that you could imagine doing. In particular, 3-consistency has a name. It's called path-consistency. You imagine three nodes form a line, form a path. By the time you get to 4-consistency, things are just getting really expensive. Any questions about any of that before we talk about structure, which will let us actually exploit these ideas to come up with algorithms that are guaranteed to be efficient? Yep. STUDENT: [INAUDIBLE] PROFESSOR: Are there examples of graphs that are k-consistent but not strongly k-consistent? Yes, there are. However, one thing that's important to kind of think through is, what does arc-consistency do? You just have your CSP. It's not arc-consistent, and you go and you wave your arc-consistency wand, and you make it arc-consistent. What has changed? Values have been deleted from domains at nodes. So some domains that were fully populated, a bunch of them blink out. I was going to make an Avengers joke, but that may be a spoiler still. What about 3-consistency? That's really tricky. So when I enforce 3-consistency, what it says is that any two things can be extended to a third. And that means that what you leave in your wake when you wave your 3-consistency wand is actually not unary shrinking-- not unary constraints. You leave binary constraints. You introduce all kinds of constraints to the graph. When we talk about higher order k-consistencies, you actually have trouble drawing these examples.
So it's a little hard for me to throw in on the PowerPoint. Good office hours question though. Let's talk about structure. So you are the CSP detective. And it is your job to solve this giant CSP case. And you look at your nodes, and there's a whole bunch of them, and they're connected in this web of constraints. But then you see something. And so in this particular cartoon, you see there's the big boss in the center. And you have this intuition that we should be able to exploit this. Maybe you go after the one in the center. Maybe you start at the edges and work up. And this is the basic idea behind structure. It's that just looking at the structure of the graph gives you ways to make the problem simpler if you can recognize certain syntactic patterns inside the CSP's graph itself. So let's start with an extreme case of exploiting structure. So remember the Australia graph here. We keep forgetting about poor Tasmania. It's an island. But that's actually good for map coloring, right? Because it's an island, you can color it whatever you want. That's because it's an independent sub-problem. And in general, if you're doing map coloring, the different continents or however things are separated on your map don't interact. And that means they're independent problems. You can solve them separately, and they don't interact. And there's no constraints between them. Independent sub-problems are easily identified from the graph, because you can look for connected components. How do you find that? You start at some node. You do a search, and you see what's reachable CS-70 style. So you can find the connected components of a graph. You can solve them independently. This is incredibly powerful. If you have a big graph made out of a bunch of small, independent components, it can make all the difference. So imagine we have some graph of n variables. So let's draw this. Here's this graph of n. But it turns out, you can break this thing into a bunch of little problems. We're going to hit the limit of my artistic ability. You can break this into a bunch of little problems where each one is of size c. So instead of one giant thing of size n, you have a bunch of problems of size c. How many of them do you have? You have n divided by c of them. Looks a little bit like a danish. How much work are we going to have to do to solve this thing? Well, CSPs, we have a lot of cool algorithms. But in the end, you basically have to be prepared to enumerate all of the assignments and check them all one by one. That's what that search is going to do if there's no solutions. It's going to sweep through the whole thing. It's going to check everything. You're going to backtrack till millennia pass, and then it's going to say no solutions. So these things are pretty slow when there's no solutions if there's not some way to detect that easily. So in the worst case, well, in the full problem, we would have to say, I'm going to assign all n variables each of their d values. It's d to the n. It's exponential here. And as n grows, it's really bad news. But in this Danish problem here, we only have little problems of d to the c. We just have a lot of those problems. And so if you run the numbers, this is the difference between, say, n is 80, and d is 2, and c is 20. And you make some assumptions about how many assignments you can churn through per second. That's like billions of years for the whole problem. That's not even a very big problem. Billions of years is a long time.
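To see where those numbers come from -- a back-of-the-envelope with the figures just quoted, and assuming, say, ten million assignments checked per second (my assumption; the lecture only says "some assumptions"):

$$d^{\,n} = 2^{80} \approx 1.2 \times 10^{24} \text{ assignments} \approx \text{billions of years,} \qquad \text{versus} \qquad \frac{n}{c}\, d^{\,c} = 4 \cdot 2^{20} \approx 4.2 \times 10^{6} \text{ assignments} \approx 0.4 \text{ seconds.}$$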
But it's just less than a second under the same assumptions if you can divide it up into a bunch of independent problems. So independence is great. You solve the problem separately, divide, and conquer. How useful is this on a scale of 1 to 10? 0.1. Why? You will never in your life run into a CSP that has independent problems. Because the whole point of posing a CSP is to say, here's a bunch of variables that interact. Figure their interactions out. Often, in the formulation of the problem, you've already broken up the independent sub-problems. But it's a good mental exercise. And I guess, in map coloring, it could really happen. And you could just not know it, because you haven't done the analysis. But it's a good exercise to realize that the structure of the graph could shine light on ways to just solve it much, much faster. All right, let's look at some problem structures we can actually use. One of the main ones is you can look at your graph and notice, not that it's in pieces-- that almost never happens-- but that it's sort of not highly connected. So an extreme case of that is that it's a tree structure. So you look at your graph, and you notice that there's a bunch of constraints. But the way they connect up the variables forms a tree. So here is a simple example of a 6-node CSP that forms a tree. And there's a theorem, which we're going to show by giving an algorithm, that says that if the constraint graph has no loops, then the CSP can be solved in polynomial time, in particular, linear in the number of variables. That's way better than exponential in the number of variables. And again, there's the small term where it's quadratic in the size of the domains. That's much better than the d to the n that you get in general. And it's not crazy to think you'll find graphs which are tree-structured or close to tree-structured. And we'll talk in a minute about how to take something that's almost a tree and tree-ify it, arborize it. I don't actually know what the word is. But you can do it, and then things are efficient. So this property is also going to apply when we talk about Bayes nets and probabilistic reasoning. It's sort of a deep example of how you can have syntactic or structural restrictions on your problem which change the complexity of doing reasoning within that problem even across different ways of posing the problem itself. Here's the algorithm for tree-structured CSPs. We'll see how fast it is, and we'll see if it works. And if it's fast and it works, we will have proven our theorem. So step 1, order. So you have your tree. You give it an order. That means you pick an arbitrary node to be the root. Because the CSPs aren't directed to begin with. You pick a node to be the root. You pick the tree up by the node. You shake it like this. Pick it up by the ankle. This induces an ordering on everything else. That is the first node, and everything else follows along a topological ordering of the tree. There's not just one ordering. But it doesn't matter which one you pick. Just pick one. Linearize the thing. So here's a linearization of this graph. Notice everything's gone directed, because it's now been linearized, and I'm talking about the ordering here. So the underlying CSP is not ordered. But this directed linearization is. And it starts at A. I could have started it anywhere. The same algorithm would work just fine. So let's do an example. Let's imagine this was, again, map coloring. And so all the constraints that you see are inequality constraints.
Remember in general, they won't be. And let's assume, perhaps because of unary constraints, that the colors that are shown here are the only ones that are allowed. And so we can take those domains, and we can draw them underneath each of the nodes. So here, A has to be blue or red. B has to be green or blue, and so on. OK, so that's ordering. Here's the algorithm. Once you've ordered it, we do a backwards pass. So we start at F, and we go leftward. And for each node in this pass, we are going to make the arc pointing to that node consistent. So let's do F. We're going to do F. That means the arc that points to F is D to F. So we're looking at this arc right here. And we want to make it consistent. Maybe I should pick a color that's not one of my magic colors, like purple. And so I look at D to F. I pull it over. I look in the trunk. The trunk is at D. And I'm going to cross off anything in D which cannot be extended to F without violating a constraint. Red's OK, because I'll pick blue. Green's OK. Blue is not. And so I will eliminate that, because that's what it means to run this remove-inconsistent-values. Next is E. I look at the arc that points to E-- that's D to E-- and I make it consistent. Is it consistent already? Look at it. Red works. I can pick anything I want. Green works, because I can pick blue. So that one's already consistent. Now I'll go to D. Incoming to D is the B to D arc. Is there anything that has to go, or is it already consistent? Already consistent. Now we do C. B to C-- is there anything in there that needs to go? Now B to D left B untouched. But B to C will not, because if you assign green at B, you don't have an extension to C. So we'll delete that there. So we processed C's incoming arc. Now we'll process B's incoming arc. That's the last one. We look at A to B. And we realize that blue just does not have a future here. And now, we've done this pass. Who knows what has happened? I executed the algorithm as specified. I have removed inconsistent values going from right to left. Now I kept saying things like the arc pointing to F. How do we know there's not seven arcs pointing to F? Let me add some arcs. How do I know there's not nodes somewhere pointing to F? Yep. STUDENT: [INAUDIBLE] PROFESSOR: Yeah, it's a tree. So this is the first time we've used the tree property, as we know that there's going to be one arc pointing to each thing. All right, now it's time to do step two. We've pulled over everything. We've pulled everything out of the trunk. Our arc-consistency phase is done. And now, we get to do the fun part, which is assigning forward. So we're going to start at A, and we're just going to go on an assignment spree. We're going to pick stuff that doesn't violate our choices to date. So let's do it. For A, let's pick one of the remaining values. Easy choice. B, we have to pick our remaining value. How do I know there even is one at B? I picked something at A. Maybe I picked the wrong thing at A. No worries, because A to B is arc-consistent. That arc is consistent. And because A to B is a consistent arc, whatever I pick at A, there's something that's going to work out at B. Doesn't mean everything's going to work out. But something's going to work out, and I'm going to pick one of the things that works out. Could I be messing up my whole future? Maybe. Let's not stress out about that yet. All right, time to pick C. I look at C, and I say, is it guaranteed that I can take my existing assignment and extend it to C? Well, C's parent is B.
I picked something at B. And B to C was consistent, which means whatever I pick at B, there is an extension to C. In this case, it's green. OK, finally, there's one where there's actually a choice. So it's time to assign D. D has two values left. Will they all work out? We don't know. But we know we made an assignment at B. And we know, because B to D was consistent, something at D is a legal extension. In fact, two things at D are legal extensions. Which one do I choose? Whatever you want. Who wants red? Who wants green? Well, that's surprisingly lopsided for green. Go, green. OK, so we're going to pick green. Oh, you're making it hard for me. That's why you want green. So now, we go to D to E. And thank you, class. You made it hard. So when I look at D to E, there are two values at E. One of them doesn't actually work. But D to E was consistent, which means even if I pick green at D, something is going to work. And in this case, the something is blue. And then when I get to F, I only have one choice. And because of consistency, it's fine. So it sure looked like, in this case, I could go from left to right doing assignments. And I wouldn't have to backtrack. It doesn't mean I can choose arbitrarily. There are still things here that won't work. It's just that as long as I consistently extend one step at a time, it seemed to work out. And that's going to be true in general. Run time on this-- I go step by step right to left, and then step by step left to right. It's linear in n. There's no complication in n. And then where does the d squared come from? That comes from the fact that whenever I visit an arc, I have to look at all the things in the head and all the things in the tail, and check them against the constraint. And all those pairs, there's d squared of them. Every time you see a d squared in CSPs, it's usually that you're doing some checking of a cross-product of a domain. So there we are. Let's prove it. Let's prove that this works. So the claim is, after the backward pass-- so remember, we enforced the consistency of these arcs in a very particular pattern. It wasn't like arc-consistency for the whole graph, where we do it, and we do it again and again and again and again and again. We just did one pass. The claim is that after you do that, all root-to-leaf arcs are consistent. Why is that? Well, each arc was made consistent at one point. We know that because we visited each one. And when we visited it, it was consistent when we were done. It is entirely possible that after that point, we screwed things up. So for example, let's take B to D here. We visited this arc when we processed D. We made it consistent. It was consistent for that brief moment in time. And then we did some computation. But all the computation we did was on this side. So what could make this arc that was once consistent, what could ruin it? What could ruin the beautiful consistency of B pointing to D? Well, consistency says, for everything in B, there's an extension to D. So deleting stuff from B is not going to make anything worse. That's just going to make our life easier. The problem comes if we delete stuff out of D. But we won't, because we're headed in the other direction. So provided we do them in this order, it will be consistent when you visit the arc. And that consistency will never be messed up, because we will never delete anything else from D. Claim two: if root-to-leaf arcs are consistent-- and we just saw that they are-- the forward assignment will not backtrack.
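Before the proof, here is what that two-pass procedure might look like in code. This is a minimal sketch, not the course's reference implementation; it assumes the tree is given as a list of variables in topological order, a parent map, per-variable domains, and a predicate constraint(parent_value, child_value) that is True when a pair of values is allowed.

```python
# A minimal sketch of the two-pass tree-CSP solver described above.
def solve_tree_csp(order, parent, domains, constraint):
    domains = {v: list(vals) for v, vals in domains.items()}  # copy; we prune in place

    # Backward pass: rightmost node first, make the arc parent(x) -> x
    # consistent by crossing off parent values with no legal extension to x.
    for x in reversed(order[1:]):
        p = parent[x]
        domains[p] = [pv for pv in domains[p]
                      if any(constraint(pv, xv) for xv in domains[x])]
        if not domains[p]:
            return None  # a domain emptied out, so no solution exists

    if not domains[order[0]]:
        return None

    # Forward pass: assign the root, then give each node any value consistent
    # with its already-assigned parent; arc consistency guarantees one exists.
    assignment = {order[0]: domains[order[0]][0]}
    for x in order[1:]:
        assignment[x] = next(xv for xv in domains[x]
                             if constraint(assignment[parent[x]], xv))
    return assignment

# The A-F example, with domains reconstructed from the walkthrough above.
order = ["A", "B", "C", "D", "E", "F"]
parent = {"B": "A", "C": "B", "D": "B", "E": "D", "F": "D"}
domains = {"A": ["blue", "red"], "B": ["green", "blue"], "C": ["green"],
           "D": ["red", "green", "blue"], "E": ["green", "blue"], "F": ["blue"]}
print(solve_tree_csp(order, parent, domains, lambda a, b: a != b))
```

On the lecture's example this prints a valid coloring, for instance A red, B blue, C green, D red, E green, F blue. The backward loop is where the d squared comes from: each arc compares up to d parent values against d child values.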
Well, we're going to have to do an induction on position. And I'm not going to do it, but let's look at why it's true. So we assign something at A. We know we can extend it to B. Whatever we assign to B, we know we can extend it to C here. And that's because, whenever we go to some node-- let's say we go to F here-- D to F is consistent. Provided we've made it all the way up to D and E, whatever we did at D has a consistent extension to F, and therefore, we can assign to F. That's the core of the induction. But how could this be? Why doesn't this work on an arbitrary graph? Where did I use the fact that this is a tree? Because all I really need to know to know that F is going to be OK is that whatever I assign to D, because of the consistency of D to F, whatever I assign to D extends to F. Let's say it wasn't a tree. It's not a tree anymore. F has two parents. It's still fine, right? Whatever I assign to D, there's an extension to F. That doesn't break the constraints. And whatever I assigned at C, there's an extension to F. So I'm good, right? STUDENT: [INAUDIBLE] PROFESSOR: Yes, that's exactly right. Let me repeat that. I'm going to repeat it and modify it. But it is exactly right. So yes, the consistency of these arcs tells me that whatever I assigned at D has an extension to F. Whatever I assigned at C has an extension to F. But there's no guarantee that they have an extension in common. So this consistency only lets me go without backtracking if there's only one parent. Because if I had two parents, they might not have a joint extension. A joint extension would be 3-consistency for two parents, and we don't have that. So for a tree, you only need arc-consistency. If you have more parents than that, you would need higher order consistency, and we don't have that. So we just said why that is. And we'll see the same thing with Bayes nets. There's an equivalent algorithm for tree-structured Bayes nets. All right, so we talked about two kinds of structures that are helpful. There are independent sub-problems, which are extremely helpful and correspondingly rare. There are tree-structured CSPs, which are not that rare, and they're still efficient. But most of the time, you don't have a tree either. You've got something that's tree-ish. And so most of exploiting structure is looking at the graph you have and figuring out if there's a good way to turn it into one of the efficient patterns. So we'll talk about two ways to do this. One way to deal with things that are nearly tree-structured is to make them tree-structured. And we do that by deleting nodes until the thing is tree-structured. How do we delete nodes? If we could delete nodes, we'd just start deleting nodes. So there's an algorithm called cutset conditioning. Here's how it works. Conditioning in a CSP is when you instantiate a variable. So in this case, I can take my full CSP on the left. I can instantiate South Australia to some value. It now has a particular value. As soon as I give it a value, I can then consider its effect on its neighbors' domains. So maybe I assign it red. And that means there is going to be an impact on all of its neighbors as a consequence. But once I have assigned SA, and I've seen its impact, the remaining problem looks like this and is now tree-structured. Is it the same problem? No, because I just assigned SA to red. In the original problem, I didn't know what to assign SA to.
So what we can do is we can pick a node, and we can assign it, not once, but in every possible way and then solve the residual graph. And any of those solutions concatenated with the cutset assignment is a solution to the original problem. Why is this helpful? Well, let's imagine you don't just have SA here. But you've got a cutset of size c. There are going to be c nodes that we're going to delete from the graph by instantiating them. Well, when you instantiate them, you have to be prepared to instantiate them in all ways. Because you don't know which is going to be the one that leads to a solution. So if I want to delete c nodes, I have to do d to the c instantiations and solve d to the c residual graphs. Is that bad? It kind of depends on d and c. But mostly, it's OK if c is small. OK, so cutsets that are small can give you a fast algorithm if the residual graph, for example, is a tree or is independent. And so you can look at your graph, and you can delete nodes until it's either independent or a tree or something you know how to handle efficiently. That problem is efficient, but you have to solve it over and over again, once for every value of the cutset. You actually don't have to do every value. Because if the first value you try works, then you don't have to do the rest. OK, so this is fast for small c. And so that means you ask questions, like, great, can I find the smallest cutset that will turn my graph into a tree? That's NP-hard, right? This is an AI class. Everything we do is NP-hard. But if you happen to have a cutset that's small that achieves this, you get a speedup. So here's the algorithm. You actually already know the algorithm. But let's lay it out. You choose a cutset. In this case, a good cutset choice would be SA, because when you delete it, the residual graph is a tree. It's, in fact, a chain. You instantiate the cutset in all possible ways. So instead of having one problem, you now have three problems or, in general, d to the c problems. But if the residuals are really efficient, that's OK. Because you compute the residual graphs. And then you solve them crazy fast with your special algorithm, like with a tree-structured algorithm. So you have more problems. But each one is simple. That's cutset conditioning. Quiz-- I know I told you it was NP-hard in general. But find the smallest cutset for the graph below. All right, how about this? Take a two-minute break. And when you're back, we'll begin with two letters. OK, smallest cutset. I guess I need a quadratic clicker or something like that. Anybody want to throw something out? I heard some ABs. Who thinks AB? All right. That's at least the majority opinion. That's also correct. OK, good. So if you delete AB, what's left? Well, remember, A's got a constraint against G. So something will happen to G when you instantiate A. It'll delete some values at G and so on. But yeah, once you delete A and B, you're left with a nice tree-structured graph. It's kind of a trick question. I guess the smallest cutset is the empty set. But that is the smallest cutset that gives you a tree. So thanks for doing the pragmatics. All right, I'm going to tell you something a little more advanced, which we're not going to go into in as much depth, nor will you be responsible for it in the material, that takes a little further this idea of how you can take something that's almost a tree and make it a tree. So cutset conditioning is a way. It's delete nodes until you have a tree left. That is one way.
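In code, cutset conditioning is just a loop around the fast tree solver. Here is a minimal sketch, reusing solve_tree_csp from the earlier sketch and assuming all constraints are inequalities (as in map coloring); cutset is a list of variables whose removal leaves a tree, order and parent describe only the residual tree, and neighbors(v) returns the variables sharing a constraint with v.

```python
# A minimal sketch of cutset conditioning under the assumptions stated above.
from itertools import product

def solve_with_cutset(cutset, domains, order, parent, neighbors):
    for values in product(*(domains[v] for v in cutset)):   # d^c instantiations
        cut = dict(zip(cutset, values))
        # Skip instantiations that violate a constraint inside the cutset itself.
        if any(cut[u] == cut[w] for u in cutset
               for w in neighbors(u) if w in cut):
            continue
        # Conditioning: prune each residual domain against the instantiated cutset.
        residual = {v: [x for x in domains[v]
                        if all(x != cut[u] for u in neighbors(v) if u in cut)]
                    for v in order}
        solution = solve_tree_csp(order, parent, residual, lambda a, b: a != b)
        if solution is not None:          # first instantiation that works wins,
            return {**cut, **solution}    # so we don't try the remaining ones
    return None
```

The loop body runs at most d to the c times, and each iteration is a linear-time tree solve, which is why this is a win exactly when the cutset is small.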
But another way to do it is to do grouping, to look at your graph and say, it's almost a tree. Maybe there's nothing I want to delete. But I can decompose it into something that is tree-structured over larger variables. So the idea here is that we're going to create a tree-structured graph. But it's not going to be over the same variables. They're going to be mega variables. And the variables here are going to be little clumps or cliques of variables in the original graph. So each mega variable is going to encode part of the original CSP. And the outer CSP is going to be something that coordinates the sub-problems. So for example, here is, again, our favorite example of map coloring in Australia. One thing I could notice is if I'm going to solve that whole graph with all of its inequality constraints, I also will have a solution to the smaller CSP, which just describes what's going on at the WA, NT, and SA nodes. This is a little fragment of the graph. So a solution to the whole thing is a solution to this. Another thing that I have to solve to solve the whole thing is the NT-SA-Q region of the graph and so on. So I can break this graph into little pieces and say, instead of solving the whole graph, I'll just solve the little pieces. Great. I have independent sub-problems. I lied to you. They're all over the place. It's all tractable. So what's the problem with solving these independently? Let's call this one A and this one B. What's the problem with solving A and B separately? I'll give you A. I'll give you B. Yeah. STUDENT: [INAUDIBLE] PROFESSOR: Exactly. To repeat that back, if you solve them independently, the solutions to the sub-problems may not actually be coherent. So you might, in problem A, color the Northern Territory red and, in problem B, color it green. So you can't just solve them separately. You need to solve them separately subject to some constraints. It's going to get meta, right? So the constraints are going to say, do these sub-problems, but make sure that any variables they have in common are assigned in the same way. So here's mega variable one, which represents this part of the problem, and mega variable two represents the next, and so on. They're going to be nodes. These nodes have domains. Except what are the domains of a mega variable? It's the assignments to that mega variable, the legal ones. Well, what is that? That's all the legal assignments to that piece of the CSP. It's going to be triples which are OK. It's going to be an enumeration of the solutions to the sub-problem. And then you go to M2, and it's going to have its solutions enumerated. And then the same for M3 and M4. These are the mega nodes. And then there's going to be constraints between them, which say they have to agree on shared variables. And that means that if they were explicit, they would say things like, (WA equals red, SA equals green, NT equals blue) paired with (NT equals blue, SA equals green, Q equals whatever). That is OK, because it's consistent. And then if you solve this, you'll have a solution to the whole thing. Because each little piece is happy. And they're all coherent, which means the whole thing is happy all at once, and you have a solution. I more or less implied that this mega graph is now going to be a tree. But it's only going to be a tree if you set it up right. And we're not going to talk about the exact details. There's a property the pieces have to have so that it will be a tree and that consistency is actually guaranteed.
And it's called the running intersection property. And in particular, it has to be the case that, say, if Victoria appears here and it appears here, it appears everywhere in between. So it can't change its mind partway through. But provided you set this up in the right way, this mega problem can be efficiently solved. But it's efficiently solvable over variables that are very, very big. So again, there's no free lunch. But you can sort of push your lunch around the plate, as some of us like to do. All right, something totally different now. All of the things that we've talked about so far are ways of solving CSPs that basically boil down to search. And the search got smarter and smarter and smarter about looking ahead in that search-- that's what filtering was-- and about structuring the exploration. That's what ordering was and so on. Iterative improvement is the first time we're going to see a randomized algorithm. And it works in a very, very different way and actually generalizes to local search methods in general. So this iterative improvement algorithm for CSPs is our first example of a local search. These methods work with complete states rather than partial assignments. In backtracking search, you build up partial assignments, and then at the end, if you've survived that long, you have a complete assignment. An iterative algorithm starts with a complete state. For example, here's a 2-node graph coloring problem with an inequality constraint where they have both been colored red. It is a complete assignment. It is not a legal one. But that's OK. Local search methods don't necessarily have legal assignments, but they are complete. That's different than what we used to do, which was legal but not necessarily complete. Now, they're complete, but not necessarily legal. If we want to apply this kind of algorithm to a CSP, it works like this. You take an assignment with unsatisfied constraints. Instead of a successor function, which gives you the next state, you're going to have operators, which reassign variable values. So you change the state in some way. It goes from one complete state to another. So I might go to this state. This state's a lot better. It's complete and it's legal. And these methods have no fringe. You live on the edge. There is no backup plan. Normally, when you do search, there's a whole backup plan of all of these things you're going to try if your current plan A doesn't work out. That backup plan, that queue, the fringe can get exponentially large. Here, it does not exist. You've got your current thing, and you're going to keep tweaking it until it works or you give up. Here's the algorithm. While you have not solved the problem-- in this case, it's a CSP. So there's a notion of solution. You're going to pick a variable. And here, you're going to select something that's conflicted. Don't mess with something that's already working. So you pick a variable that's already conflicted, meaning it participates in a violated constraint. Then you're going to reassign its value. There's something called the minimum conflicts heuristic, which says choose a value that violates the fewest constraints. So get rid of as many conflicts as you can. Basically, this means hill climbing on the total number of violated constraints. So for 4-queens, for example-- or in any queens problem-- you would have the queens just lying on the board in a bad state, where everybody is threatening everybody else or whatever the random state you start with is.
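Here is a minimal sketch of that loop for n-queens, with one queen per column; the helper names are illustrative, not from the course code.

```python
# A minimal sketch of min-conflicts for n-queens: board[c] is the row of the
# queen in column c, so there is one queen per column by construction.
import random

def num_attacks(board, col, row):
    # How many other queens would attack a queen placed at (col, row)?
    return sum(1 for c in range(len(board))
               if c != col and (board[c] == row or
                                abs(board[c] - row) == abs(c - col)))

def min_conflicts(n, max_steps=100_000):
    board = [random.randrange(n) for _ in range(n)]   # random complete assignment
    for _ in range(max_steps):
        conflicted = [c for c in range(n) if num_attacks(board, c, board[c]) > 0]
        if not conflicted:
            return board                              # goal test: no attacks
        col = random.choice(conflicted)               # pick a conflicted variable
        # Min-conflicts heuristic: reassign it to a least-conflicting row.
        board[col] = min(range(n), key=lambda r: num_attacks(board, col, r))
    return None                                       # gave up; no guarantees

print(min_conflicts(8))
```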
And the operators might be, pick a queen and move it within that column or within that row. The goal test would be that there are no attacks. And then the function you would use to count the number of constraints violated might be the number of attacks. So that's it. That's the whole algorithm. Let's see it run twice. Here it is running with n queens. There it is. OK, it's five queens now. Here is the initial state. Each of these queens is conflicted. So I'm going to pick a conflicted variable. Not the least conflicted one, not the most conflicted one, just pick one at random. I'll pick this one. And then you pick a minimum-conflicts value, which in this case is to reassign it over here. This is my new state. It's a local search procedure. This is my only state. I'm going to do it again. I'm going to look at my successors informally. But I'm going to look at the ways I can modify this state. And I'm going to pick a conflicted variable, and I'm going to put it in a place that minimizes its conflicts. Then I'm going to pick a conflicted variable. I'm going to put it in a place that minimizes conflicts. I'm going to do it again. And actually, I broke something. But that's OK, because I have no memory. I'm going to keep doing this until-- and now I've solved it. That's it. This was our map coloring example on the big problem. Remember, we did filtering to make sure we didn't mess anything up? You see all the domains. Let's switch to iterative improvement. Boom, complete assignment. It's terrible. Look at all those greens. It's not a good assignment, but it's complete. What am I going to do now instead of assigning new things? I'm going to pick a node that participates in a conflict. And I'm going to assign it in a way that minimizes its conflicts. So here's the next step. OK, I re-assign that one. Our conflict disappeared. Step 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, done. That's it. This works really well sometimes. Yeah. STUDENT: Do you risk introducing a new conflict problem? PROFESSOR: Do you risk introducing a new conflict? Yep. Can this thing run forever? Yep. Will it give you an optimal solution if there are weights? No. Do you have any guarantee whatsoever? No. But it's very fast very often. So how do you feel about that? Yeah, it's OK to have complex feelings about that. All right, in practice, these things are really fast. Min-conflicts, sometimes it's amazing. So given a random initial state, you can solve-- remember we were like, we can solve n-queens of size 5, of size 20, of size 300 with these other methods. You can solve basically n-queens of size anything with this method in basically constant time. Because you randomize it and most of the board is fine. And you fix a couple of problems, and you're done. That's great. So far, it's looking pretty good. There's going to be some bad news, right? OK, let's hear some more news. It also seems to be true that for almost any randomly generated CSP, you can solve it instantly. Here's an important quantity-- the ratio of number of constraints to number of variables, broadly speaking. If there's a lot of variables but almost no constraints, this ratio is small. And would you describe this problem as easy or hard? Tons of variables, no constraints. It's easy, right? Compared to the variables, there's barely any constraints at all. You just assign whatever you want. And therefore, iterative improvement works great. That's over here. What happens over here? Over here, there's just tons of constraints. The variables have constraints all over them. Is this easy or hard?
Think. It's a trick question. So the answer must be-- easy. Why is this easy? Well, if you have a solution, and there's constraints everywhere, there's probably not a lot of wiggle room. And it turns out that you can chase your solution down really quickly. Now of course, there's this middle ground here, where there's just the right balance of constraints and variables, where it's hard to find solutions, but it's kind of hard to tell if you're on the wrong track. And there's this spike in how long you have to run these iterative improvement algorithms. So here's a map. There's this whole world of places where CSPs will be solved instantly. And then there's this bad area of hard problems, where the critical ratio is like the right balance, where it's not over-constrained and not under-constrained. Is that good news? Where are you? Where are the problems that you see in daily life? Right there. That's it. So it turns out that for the hard problems we actually want to solve, very often, you are at this bad ratio here. And queens is a little weird, because it's pretty under-constrained. OK, in summary, and then we're going to do a quick intro to local search. In summary, CSPs are a special kind of search problem. The states are partial assignments, and the goal test is defined by constraints. Because the goal test can be broken into pieces, we could smear that goal all the way through the backtracking search. And that gave us a faster way of using something like DFS to find solutions. We sped it up by being smart about the order that we attacked the variables in the problem. We sped it up by filtering and looking at the consequences of our assignments before we made the next assignment. And then we sped it up by looking for or creating special structures within our CSP in order to have specialized algorithms or better performance from our existing algorithms. We also have an iterative algorithm called min-conflicts that is very effective in practice for many cases, though you have almost no guarantees. That's CSPs. We're going to talk about something very, very related that uses basically the same ideas. We're going to talk about local search here. You can think about local search like this: you're a robot. You're in a mountain range, and you're trying to find the top, because you're an optimizer. You're searching. You're trying to find the best solution here. So you want to go up there to that golden summit. But what does it look like to you? Well, you're a robot. And you can see about this far ahead of you. So you go uphill, and you can see about this far ahead of you. And so you go uphill, and you go uphill, and you go uphill. And you get to here. And then what happens? You look around and everything's downhill. You do your little victory dance. Because as far as you can tell, you've just won. You say, what about this thing up here? What about that much better solution? You have no idea that's there. In local search, you have an operating point. You look around you and things either look like there's an uphill or there isn't. And if there's uphill, you go uphill. Iterative improvement was an example of that. More generally, to phrase that, tree search keeps a queue of unexplored alternatives. And that's what ensures its completeness.
The reason why things like breadth-first search and A*, and uniform cost search, and all of that are complete is because if what you're doing right now doesn't work out, you've got this possibly exponentially large list of other things you're going to try one by one until finally, something works. And in the worst case, you're going to try everything. So it's going to be complete. These algorithms tend to be complete. But they're going to be pretty slow, because you're going to try everything in the worst case. Local search, you start with a single option. You try to make it better. And when you can't make it better, you stop. Then what happens? You're done. Is it a good solution? Who knows. You're just done. So the new successor notion is that you make local changes to an existing state. And this is usually much faster and more memory efficient. But you have no guarantees, no completeness, no optimality. We talked about the general idea. Start wherever, move to the best neighboring state. If nothing is better than your current state, then you stop. We already talked about why it's bad. It's not complete. It's not optimal. But what's good about it is it's really easy to apply in a lot of problems. It's really fast to come up with a solution. It may be a bad solution. But maybe that's OK. Depends on your problem. It will at least be a better solution than the ones around it. So you can feel relatively good. So that's hill climbing. Here is basically a cartoon diagram of your life when you hill climb. Why is it a cartoon? In general, any search space is very large and very high dimensional. High dimensional spaces are very hard to visualize. They have all kinds of corners and high dimensional effects. And they definitely do not look like line drawings on a PowerPoint slide. But here's a line drawing on a PowerPoint slide. So schematically, you might be searching. Here's your current state right here. And you look around, and your operators might be nudge to the left or nudge to the right. Let me find another color. So you can go downhill. You can go uphill. And so, at that current state, you'll go uphill. And you'll go uphill again and uphill again in hill climbing. And in general, you will stop when you get to a point that looks like this, where everything to your left and right or whatever operator you have is downhill. That's called a local maximum. You've almost certainly heard this terminology used informally. Up here is the global maximum. What is the sign that you have a local maximum, instead of a global maximum? No sign. They look exactly the same. Only from looking from the outside or knowing something special or comparing two maxima can you tell that this is not, in fact-- it's like you're climbing Mt. Everest. You get to here, and you think you're done. You're not done. You barely even started. But you can't tell from your local environment. Global maximum, local maximum, you can sometimes see these flat situations-- either a flat local optimum or a shoulder-- where things have leveled off. But they're not going to level off forever. But you can't tell. This thing could go off to infinity as far as you know. This is what it looks like to be hill climbing. Here's a quiz. All right, starting from x, where do you end up? b, OK. I agree. Starting from y-- where is y? There's y. Where do you end up? d. That's sort of a bummer, because you feel like you should at least roll. That's not a thing.
That's a metaphor that you might get from some combination of the two-dimensional picture and physics. There are notions of momentum, where you keep trying things and you keep moving in directions that have been successful. So I shouldn't say there's no such thing. But in general, with simple hill climbing, you do just stop whenever you can't see which direction is uphill. Starting from z-- that's here-- where do you end up? e. And how do you feel about that? Well, you feel about the same as when you stop at b. But if you had multiple restarts, and one stopped at b and one stopped at e, you can certainly tell them apart. And so that's why when people do local search, they often have many restarts running in parallel. A couple of other ideas are out there. Hill climbing is super powerful. Restarts on top of hill climbing, where you start in a bunch of different places and you just swarm, that's pretty powerful. There are a couple of other ideas that I'm just going to cover very briefly. One is simulated annealing, which you may have heard of. Simulated annealing is basically local search. It's designed for problems that look like this, where you're here, and you look around you. And hill climbing would say, to my left is downhill, to my right is downhill, I'm done. Simulated annealing is like hill climbing search with a lot of caffeine. So it's going to bounce around. And sometimes it's going to bounce out of these local optima. That's the core idea. In particular, the algorithm goes like this. You start with an initial state. And then you do the following forever. You're never actually done. You just decide to shut it off at some point. You get a temperature schedule. What's a temperature? A temperature is just a physics analogy. And there's some mathematics to back it up. But it's a physics analogy that has to do with how much you're bouncing around. And over time, you bounce around less. If your temperature hits 0, you just stop. Otherwise, you look at a successor. So instead of picking the best successor, you just pick one at random. And you say, you, am I going to do it, or am I going to not do it? And then you look at the change in value, and you look at the temperature. And so you compute the change in energy. And if it's better, you just do it. Go in that random direction. But even if it's worse, you do it with some probability. And the probability you do it has to do with how much worse it is. The worse it is, the less likely you are going to do it. And the temperature-- the hotter it is, the more likely you are to do it. And so at the beginning, you're just moving around randomly. If the temperature is really high, you're just totally moving randomly. If the temperature is very low, you're doing hill climbing. And in between, you're hill climbing with some caffeine. And so you can often bounce your way out of little local optima. And that can be effective. Yeah. STUDENT: [INAUDIBLE] PROFESSOR: Oh, what is annealing about this? That has to do with a physics analogy of metal cooling. I won't push the analogy on this or on genetic algorithms or neural nets or anything like that. They're techniques. They have math. Their names often come from some analogy to something else that I, in general, will not push too far. Amazingly, this is one of the few search processes that has a guarantee. It has a theoretical guarantee of optimality. So it basically says that if you decrease your temperature slowly enough, you will eventually converge to the optimal state. Why is that?
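Before getting to why, here is a minimal sketch of the annealing loop just described; the value, neighbor, and schedule functions are placeholders you would supply per problem, not anything specified in the course.

```python
# A minimal sketch of the simulated-annealing loop just described.
import math
import random

def simulated_annealing(start, value, random_neighbor, schedule):
    current, t = start, 0
    while True:
        T = schedule(t)                      # temperature for this step
        if T <= 0:
            return current                   # schedule says we're done
        nxt = random_neighbor(current)       # pick a successor at random
        delta = value(nxt) - value(current)  # change in "energy"; higher is better
        # Uphill moves are always taken; downhill moves are taken with
        # probability e^(delta/T), so worse moves and colder temperatures
        # both make acceptance less likely.
        if delta > 0 or random.random() < math.exp(delta / T):
            current = nxt
        t += 1

# Toy usage: maximize -(x - 3)^2 over the integers with geometric cooling.
best = simulated_annealing(
    start=50,
    value=lambda x: -(x - 3) ** 2,
    random_neighbor=lambda x: x + random.choice([-1, 1]),
    schedule=lambda t: 0 if t > 5000 else 100 * (0.99 ** t))
print(best)  # usually lands at or near 3
```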
Well, it's a really interesting guarantee, right? Why is that? It says, you're bouncing around, and you maybe bounce less and less over time. What do you do when you're bouncing around? You get to this good part of the state space, and you bounce around. And eventually, you bounce out of it, and you bounce into some less good part. And you bounce back to the good part. And you bounce back to the less good part. And the higher the hill is, the more time you spend there in the limit of infinity. You'll spend a lot of time there. You'll spend less time on the smaller hill. And the taller the hill, the longer you're going to be there. So if you run this thing forever, you can actually turn this into a guarantee of optimality. But it's not magic. In reality, that guarantee does not hold on any actual finite time scale. And basically, it boils down to the more downhill steps you have to take to escape a local optimum, the less likely you are to do them all in a row. And so you're pretty much going to be stuck unless it's just one or two downhill steps and then you've escaped. So people think really hard not just about how to jitter around, but how to create what are called ridge operators, which let you jump around the space in better ways. One example of a very extreme ridge operator-- to give you an example of what it would mean to do more than just jittering around-- would be something like a genetic algorithm. There's an evolutionary metaphor here that, again, I won't push too far. It's really just a method that uses the natural selection metaphor and has basically two parts. You keep a bunch of candidates, like a bunch of restarts, and they have fitness. So you select some of them. And some of them duplicate because they're good. And some of them didn't get selected. And so they're off. They're gone. And then of the pairs that are left, you do something called crossover. This is the important part. You take partial hypotheses and you recombine them. So what would that mean for real? Well, it might mean something like this. I have two pretty good n-queens candidates. Neither is actually a solution. But they both have a small number of conflicts. So I'm going to slice my board down the middle, and I'm going to take part of a and part of b and slam them together. I feel like it's trying to tell me something. All right, let's restart now. Just kidding. So why does this make sense? Well, if you formulated the problem just right, your left half and your right half have the right number of queens. And maybe one's good on the left, and one's good on the right. And when you smash them together, they'll work. Or maybe not. Maybe you'll get the worst of both. And you'll get some terrible thing with lots of conflicts. And so you do still need to do a bunch of searches, a bunch of restarts, and a bunch of rounds of these. But this is an example of not just nudging your thing locally, but taking entirely different ways of traversing the space, which is why I bring it up. OK, that's it for today. Next time, you and Pieter will talk about adversarial search, which is how you think about planning forward in computation when not just you, but also other agents who are trying to ruin your day, are taking action. [AUDIO OUT] |
UC_Berkeley_CS_188_Introduction_to_Artificial_Intelligence_Fall_2018 | COMPSCI_188_20181018_Bayes_Nets_Sampling.txt | [SIDE CONVERSATION] PROFESSOR: OK, let's get started. So we've been talking about Bayes nets, which are representations of probabilistic models over a domain of variables. And in particular, we've been talking about how to answer questions about various quantities, queries, given evidence, and how to do that in as efficient a way as possible. Today, we're going to talk about an entirely new kind of approach for doing inference in Bayes nets, which involves sampling. So mostly, we're going to take a huge step back, forget everything we know about variable elimination, everything we know about actually constructing joint distributions, and instead, talk about a completely different way to do inference in the distribution represented by a Bayes net. So let's remember what a Bayes net actually represents. So it's a representation of a probabilistic model over some domain of variables. You have a directed acyclic graph. You have one node per random variable, and living inside each node is a conditional probability table that specifies the probability of all the outcomes of that node, given an assignment to the parents. This encodes a joint distribution. And we've seen this before a bunch of times in recent lectures. But today, we're really going to need to remember what things you have to multiply together to actually get the probability of an event in this joint distribution. So a Bayes net implicitly-- explicitly, it's a bunch of little local conditional probabilities. And implicitly, it encodes a joint distribution over all those variables that's given by the following formula. So if you give me a complete assignment to all the variables x1 through xn, the probability under the Bayes net is the product of all the conditional probabilities for each of those x sub i given the parents. Now, where do these all come from? Each node of the Bayes net representing a variable has those tables buried in it, OK? What we wanted to do with Bayes nets was answer questions, like, hey, what's the probability of this variable given some piece of evidence we observe? What's the probability of this disease given these tests? What's the probability that my battery is dead given that this light is blinking in my car? What's the probability of this insurance parameter, or whatever it is, given the known quantities? And for large Bayes nets, it wasn't practical to construct the whole joint distribution because it's exponentially large. And even if you had it sitting there in memory, it wouldn't be practical to answer queries because most queries involve a small number of variables, which means you would have to sum out over all the various combinations of settings of the other variables, and that would take exponential time. So we have this algorithm variable elimination, which involved alternating between inflating little bits of the Bayes net and summing out variables you don't want, where we sum things out aggressively before the factors that we build have a chance to get exponentially large. How big do they get? Well, in general, if there are d elements in the domain, and there are k variables in a factor, you're going to have factors of size d to the k. That means k can't get too big. What happens in an arbitrary Bayes net? Sometimes there aren't any orderings that are going to give you only small factors. And it can still be very expensive. The ordering can matter a lot.
We saw cases where there were networks that had good orderings and bad orderings. There are also networks that only have good orderings. And there are also networks that have no good orderings. Sometimes, there's a good ordering and a bad ordering, but it's really hard to find the good ones. But in general, variable elimination was better than inference by enumeration, even though in the worst case, everything was exponential in the size of the Bayes net. And we saw why that has to be: because Bayes nets can compactly encode satisfiability instances. So we have this algorithm variable elimination, which, basically, is going to add a ton of entries of the joint distribution together in a, hopefully, efficient way by moving some of the sums in the expression. But it might still be slow. Today, we're going to do something entirely different. Today, we're going to do an algorithm that's really, really fast. But the answer might be totally wrong. And this is different than how we normally approach problems in this class. Normally, we have an exact answer. And we're going to find it. And we're going to try to make it as efficient as possible, either in a way that we can show the worst case computational complexity isn't too bad, or maybe the worst case complexity is bad, but it's still, in practice, feasible for lots of problems. Today, we're going to talk about sampling-based methods, which you can run as quickly as you want. But if you don't run them long enough, your answer may not be very accurate. And so here there's a trade-off between compute and accuracy, and that's not the normal trade-off we have. So what is sampling? There are actually two entirely different reasons you might sample. So sampling is a lot like doing repeated simulation. In fact, we've already done some sampling in this class. We did sampling when we did reinforcement learning, where we were like, hey, what's the value of this state? Or what does this action do? And so we'd try it, and then we tried again. And we tried again, and again, and again, and again. And if we tried it enough times, those probabilities would start to converge to the right values. That use of sampling is learning. So in that case, you don't actually know the distribution. Like, we didn't know the distribution of the slot machine. And we had to keep playing the slot machine, so that we could learn how often it pays off. So that's a case of learning. You don't know the various probabilities. But you're going to interact with the real world to discover them. That's learning. We're going to use sampling today for a very different purpose, which in a lot of ways is a little counterintuitive. Today, we're going to use it for inference. And that means we're going to go to our Bayes net, in which we know all of the probabilities. How do we get them? Maybe they were handed to us. Maybe we actually got them from learning. But right now, there's a Bayes net right in front of us. And it tells us that the probability of this variable, given the settings of these other variables, is 0.37. OK, so we actually know all the probabilities, but we're going to sample anyway. When you sample in a network that you already know, it looks like simulation. You walk along the network, and you say, hmm, well, if this happened, let's see what would happen at this variable. And you flip some coins. And you get a sample out that is a sort of probabilistic simulation.
And in this case, the reason you're getting a sample is not because you don't actually know or are not able to compute the underlying probabilities. It's because sampling turns out to be faster than computing the right answer through brute force. So here's the basic idea in sampling, regardless of whether you're doing it for learning purposes or for speed purposes. The basic idea is we're going to draw n samples, where n might be very large, from some sampling distribution S. And we get to define the sampling distribution. We'll see several different examples today of different sampling distributions that are given by the different algorithms we're going to cover for sampling. But there's some sampling distribution that hopefully we can characterize. You're going to draw n samples from that. From those samples, which are all events that look like outcomes of S, you're going to compute an approximate posterior probability. Whatever query you're trying to answer, you'll compute it over your samples. And that's efficient because it scales in terms of time with the number of samples you take. If you want it to be more accurate, you take more samples. And now, you might think, there can't really be a free lunch there. There's got to be some trade-off between accuracy and time. And if I have a really tricky distribution, maybe some queries need a lot of samples in order to get accurate. There are going to be trade-offs like that. But you can always stop it and get the wrong answer. What you want to show is that as n grows, as n goes to infinity, this procedure is going to converge to the true probability you actually want it to compute. And so every time we introduce a sampling algorithm, we need to argue that, if I draw a bunch of samples from this, I'll actually get the right answer. Even though, of course, if I draw a small number of samples, it's the luck of the draw. All right. So we're going to have an atomic building block when we do sampling. And that is, we need to be able to sample from the big distributions that Bayes nets encode. And there's going to be algorithms for that where you walk along the network going variable by variable in an appropriate way. But before we can do that, we need to understand how to sample from a single multinomial distribution. So let's say, for example, we have this distribution here. There's a random variable c. It's a color. It's red, green, or blue. And I have right in front of me the probabilities. Now, remember, I'm not doing learning. I know the probabilities. I know that red happens 60% of the time. But I'm going to draw a sample from this distribution. So I'm going to flip some kind of coin. And I'm going to basically flip-- conceptually, I'm going to flip a three-sided coin that says, red, green, and blue, where the red side comes up 60% of the time, and the green side comes up 10% of the time, and so on. But how am I going to get this coin? How am I going to actually get this single sample? Well, what I'm going to do is I'm going to manufacture a sample over this distribution from a primitive sampling distribution over, say, the real interval from 0 to 1. So for example, if you call random, you'll get a uniformly distributed number between 0 and 1. And then step two is to take that sample over the 0, 1 interval and convert it into a sample over the event space for c. How do we do that? We divide up that real line into subintervals where the size of each subinterval is equal to the probability of the outcome.
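Here is a minimal sketch of that primitive; the function name and the list-of-pairs representation are just for illustration.

```python
# Convert one uniform draw on [0, 1) into a sample from a discrete distribution.
import random

def sample_from(distribution):
    """distribution: list of (outcome, probability) pairs summing to 1."""
    u = random.random()              # uniform on [0, 1)
    cumulative = 0.0
    for outcome, p in distribution:
        cumulative += p              # each outcome owns a subinterval of width p
        if u < cumulative:
            return outcome
    return distribution[-1][0]       # guard against floating-point round-off

print(sample_from([("red", 0.6), ("green", 0.1), ("blue", 0.3)]))
```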
So for example, I might grab my number between 0 and 1 and say, well, if it's between 0 and 0.6, I'll call that an instance of red. And if it's between 0.6 and 0.7, I'll call that an instance of green. And if it's between 0.7 and 1.0, I'll call that an instance of blue. OK, so I could do that. Let's say my call to random returns 0.83. I would look, and I would say, well, that's not in the first bucket. That's not in the second bucket. That's in the third bucket. So I'm going to say, I just sampled the value blue. Now, why would I ever want to do that? I could draw a bunch of samples. If I sampled eight times, I might get five reds, two blues, and one green. Do those seem plausible given these probabilities? They're plausible. Why would I ever want to do this? Well, I basically wouldn't, right? I already know the probability of red. If I draw 10 samples, maybe this time I get seven reds instead of six. That's really just introducing variance into the process. So for this one dimensional single variable, I wouldn't actually have much use for sampling from this known distribution. However, in a Bayes net, there is a use for sampling from the entire network because listing all the outcomes is too expensive, even if I can create them given infinite time. All right. So here's what we're going to do today now that we have this building block of being able to sample from any given single multinomial distribution. So we'll think of that as a primitive now. We're going to look at four different algorithms for drawing samples, and think about what they do, and what we can understand about them. Yes. Questions before we start? STUDENT: [INAUDIBLE] PROFESSOR: Wow, that's the first time I've ever been asked to turn my mic up. How's that? All right. Any other questions? Any other volume requests? OK, so we're going to do some prior sampling. So I like to think of samples as snapshots, assignments to random variables that come along-- come across a conveyor belt. And in the prior sampling algorithm, we're just going to draw samples from the distribution of interest. So this is the basic building block. Here's a little Bayes net. This is a Bayes net that says, half the time the sky is cloudy, that's P of c. And then there's a sprinkler on my lawn, and there's a dependence between cloudy and the sprinkler. So what does this look like? If it's cloudy, the sprinkler is usually off. If it's not cloudy, half the time the sprinkler is on. There's also a dependence between cloudy and rain. So when it's cloudy, it usually rains. And when it's not cloudy, it usually doesn't. And then, in addition-- sometimes it's cloudy, sometimes there's a sprinkler on, sometimes there's rain-- sometimes my grass is wet, OK? So how does that work? Basically, the sprinkler can make my grass wet, and so can the rain. And so there is a conditional probability table here that describes how w responds to s and r that basically says, if it's either s or r, then it's probably wet grass. But if there is neither rain nor sprinkler, there's probably not wet grass. OK, so everybody get that Bayes net in your head because we're going to use this Bayes net over and over again. It's an interesting Bayes net because the variables separate. But then they come back together at the bottom. So if I look at this, what can I do? What do I know how to do with this Bayes net? Given time, I know how to create the whole joint distribution. So I could do that right now.
I could ask you, hey, what's the probability that it's cloudy, there's a sprinkler, it's rainy, but the grass is dry? We could compute that right now by multiplying a bunch of conditional probabilities together. We're going to do something similar to that. Instead of computing an entry of the joint distribution, we are going to create an event which is an assignment to these four variables. But we're going to create it by walking along the Bayes net. So we'll go to cloudy. And we'll say, well, sometimes it's cloudy, and sometimes it's not. My samples will come out both ways. Right now, it's time to pick a single sample. So I'm going to flip a coin. And here, it's going to be a 50/50 coin. And maybe it turns out it's plus cloudy. So green here will be plus, and red will be minus. All right. Time to go to the next variable because I don't have a complete assignment yet. So I'll go to the sprinkler. I've already picked plus c. So now, I'm going to pick a value for s. It's probably going to be minus s. But there's a 10% chance that when I flip this 90:10 coin, it'll come out plus s. So let's flip the coin. OK, it came out minus. I go to the next variable, rain. And it looks like I have an 80% chance of getting a sample of plus r, and a 20% chance of minus r. It comes out plus r. And then I go down here to the wet grass. There's no sprinkler, but there is rain. And so the probability of wet grass is 90%. I flip another 90% coin, and I get plus wet grass. So I walked along this network. If I did this again, and I flipped those coins again at each node, next time, I'm presumably going to get a different sample, and then a different sample, and then a different sample. Each time I get a sample, I write it down, OK? So my first sample was plus c, minus s, plus r, plus w. And I can do that again, and again, and again. I can draw as many samples as I want. We could ask some questions like, OK, that first sample, how often is that going to happen? I mean, will it always come out that way? Certainly not. What's the probability that I will get the sample plus c, minus s, plus r, plus w? Well, the probability that I'm going to get the plus c part, that's going to be true of half the samples. And then of those samples where I got a plus c, the probability that I'm going to get a minus s, that's going to be true of 90% of those samples. And then when I go to plus r, it's going to be 80% of the samples, and then 90% of the samples. And if I multiply those together-- 0.5 times 0.9 times 0.8 times 0.9, which comes to about 0.32-- I can determine the probability of the sample, which is just the probability of this event in the distribution described by the Bayes net. So when I draw samples according to this procedure of walking node by node, and just flipping the coin the way it says in that node, I end up with samples coming out in a way that exactly matches the probabilities in the joint distribution, OK? Does that make sense? You might wonder, why the heck would I do this, right? Because I can compute those probabilities. But that is the probability of each sample type. So here's the prior sampling algorithm. If you're going to draw n samples-- sorry. You can draw as many samples as you want. For each sample, you do the following. You visit each node, sampling a value for that node given the parents. And so you say, what happens if I visit a node before a parent? You can't do that, right? So you have to do this in a topological ordering, where you don't sample something until all its parents are sampled. How do I know there's such an order?
It's an acyclic graph, OK? After you sample all the nodes, you return it. That's your sample. How many samples do you want to draw? That's up to you. So we already talked about this process generating samples with a certain probability. So let's parse this expression. This is a sampling distribution. S is a sampling distribution. It's like P, except when I write it down, it might or might not actually be the same as the true probability P. But here S is a sampling distribution. The subscript PS is for prior sampling. And the probability that I'll get the sequence x1 through xn as the assignment from the sample, the probability I'll get that snapshot, is the product of the probability of each piece given its parents. And that, as we said, is just the probability of that same exact event under the joint distribution. So we found a way of drawing events from the full joint distribution with probability P. That's not too hard. So what do we know? Let's say we take some event, like x1 through xn, and we draw some samples. We might not get any samples of that event. We might get a lot. Count the number of samples you get of a certain event. One thing that should seem very plausible is that as the number of samples goes to infinity, the fraction of samples that come out as that event approaches the sampling probability, which we know from the form of the sampling process is going to be the probability in the network. What this means is that the sampling procedure is consistent. You may have noticed that in every unit, we use the word consistent. And it always means something different. This is what consistency means here, OK? So we say a sampling procedure is consistent if the ratio of the samples in the limit goes towards the actual distribution that we're trying to sample from. You might look at this and be like, well, how could it be inconsistent? Well, here's an inconsistent sampling procedure. What if I just went and picked each node 50:50 independently? I would still get samples. They just wouldn't come out according to a distribution that had anything to do with the one I'm trying to model. All right. Let's do some prior sampling. So here's this little network. We run that procedure where we visit each node and out pops sample 1, and then out pops sample 2, and sample 3, and sample 4, and sample 5. We've got five samples. Now, normally, we would have, maybe, millions of samples. We've got five. What can we do with these samples? Well, these samples represent a distribution. And anything we would do with an actual distribution, we can do with these samples, using counts instead of probabilities. So let's say I would like to know something in this Bayes net-- which, remember, deep down represents a joint distribution over c, s, r, and w. I could, for example, compute the probability of w. How would I do that? Variable elimination. So you say, I hate variable elimination. I haven't reviewed those lecture slides yet. I'm not going to do that. How else could I do it? Well, I could look in these samples. And I could say, well, in these samples, sometimes w comes out plus, and sometimes it comes out minus. And in these samples, what's the probability of plus w? It's 4/5 or 80%. So I could write-- in these samples, I could write the probability that plus w is 0.8. Great. Do you think that's actually the probability in the network? I mean, I have no idea, but probably not, right?
This is just an empirical probability of this quantity of interest, not computed in the network, but computed over samples which were drawn consistently with the network. So if instead of five samples, I had five billion, this 0.8 would be about right. As it is, hey, it's a number, OK? So we get counts, and we can normalize any events over those counts to get the probability of that event under our samples. This approaches the true distribution as we take more samples. And we can estimate anything else that we like. So let's do some practice. Can we compute the conditional probability over the variable c, given that the grass is wet? So how likely is it to be cloudy, given that the grass is wet? Let's do it. All right. Well, when I look at these samples, I say, I care about the case where the grass is wet. So this sample here, this sample doesn't do much for me because the grass is not wet there. So I look at the other four samples. And I say, how often is c true? Looks like three out of four. So there's my answer: 3/4. Is it right? Probably not. All right. Probability of cloudy, given rainy and wet grass. Man. OK, probability of cloudy, given rainy and wet grass. All right. That one's not rainy and wet grass. That one's not rainy and wet grass. So of the rainy, wet grass samples, of which there are three, how often is it cloudy? All the time. In all of those samples, it was cloudy. Now, in this network, is the probability of cloudy, given plus r plus w, 1? Probably not. But if I drew enough samples, this would start to approach the correct probability. Here's an interesting case. What's the probability of cloudy, given minus r and minus w? So I look through my samples. And I say, how many samples do I have for this condition minus r minus w? Uh-oh. I have no samples for that. So I can't even answer this question yet. And this is a real problem because, in prior sampling, I just pull samples off according to the distribution. Sometimes they match my evidence. Sometimes they don't. And if I have a question about some variable like c, conditioned on some rare evidence, I've just got to keep pulling samples off the conveyor belt until I get some that match my evidence. I might never get any that match the evidence if it's rare enough. So that's a real problem with the sampling method, OK? But we can-- it's fast. We can stop at any time. And, of course, the drawback is that it may be very inaccurate, OK? Any questions on prior sampling? Yep. STUDENT: So if we never get the evidence, what do we do? PROFESSOR: What do you do if you never get the evidence? So in general, you want to set up your sampling procedures so that you're seeing the conditioning environment. You're seeing the evidence. And if your evidence is very rare, prior sampling or, actually, in this case, rejection sampling-- it's going to turn out that's what we were doing there-- is a very poor algorithm. So you've got two choices. You can run for a long time, or you can have a better sampling algorithm. I mean, you could make a decision in your code that if there is ever a zero in the denominator, you do something reasonable. But the bottom line is any choice you make is going to be an engineering level fix. Mathematically, you just don't know very much, OK? So let's talk about rejection sampling. So the metaphor of prior sampling is events are coming off the conveyor belt distributed according to the joint probability that we have in our domain. And as they come off, we tally things up.
So let's talk about rejection sampling. The metaphor of prior sampling is that events are coming off the conveyor belt distributed according to the joint probability that we have in our domain, and as they come off, we tally things up. In rejection sampling, we go into the sampling procedure with some condition that we care about, like plus r, plus s, or something like that. And so as the samples come off our conveyor belt, we take a look at them. And if they're samples we can use, meaning they match our evidence, we keep them. And if they're samples we can't use, we reject them. We actually sort of already did that when we computed some of those queries on the last slide, where we were crossing things out, saying these samples don't match my evidence condition, so they're not relevant to this query. So in rejection sampling, samples come off, and you reject them if they don't match the evidence. So let's look at the difference. If we'd like to know, say, in this network here, the marginal probability of c-- in fact, it's sitting there in the network, so we wouldn't have to compute anything at all. But let's just say we did it with sampling. OK, we could do prior sampling. We would get samples like this. We don't actually have to keep the samples around. We could tally for each sample as we draw it. We could be like, oh, you're a plus c sample. You're another plus c sample. You're a minus c sample. You're a plus c sample. And we could keep tallies of the event we care about as the samples stream by. In the end, we would normalize to get a probability. We could then discard the samples. They're gone. We don't need them anymore. Whoops. If we want to know the probability of cloudy, given that the sprinkler is on, you can do the same thing. These samples come by, and you tally how many come up plus c versus minus c because that's your query variable of interest. The difference is, everything that comes off the conveyor belt that doesn't match plus s, you just throw it in the bin. You don't tally anything because it's not relevant to your evidence condition. This is called rejection sampling. And this is consistent for conditional probabilities. It's the exact same argument as why prior sampling is consistent with the entire distribution, OK? So again, from these samples, we can answer any question. We could say, what's the probability of plus s, given plus c? And we'd be like, OK, this sample, this sample, and this sample are plus c. I would discard the rest as they pass. And as I go, I would say, OK, there's a vote for minus s, a vote for plus s, and a vote for minus s. And I would say, minus s, 2/3; plus s, 1/3. OK, that's rejection sampling. Here's the algorithm, though we've now done it twice. Your input to this algorithm is the evidence that you have instantiated. So I know the values of this test and this test. Or this light in my car is blinking. So I know the evidence. Then I go through the network variable by variable. And I sample values for each variable. If they happen to come up in a way that matches my evidence, great, I return the sample. And if they don't, I just reject the sample. And I start on a new sample. OK, so let's think about what that means. That means we're walking through the network. We get to the evidence variable. And there's this moment. There's this moment where I'm wondering, is it going to come up in a way that matches my evidence or not? And that moment determines whether or not we throw out the sample or keep it, OK? Remember, the problem is we might end up throwing out a lot of samples.
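Continuing the same hypothetical toy encoding from the sketch above, rejection sampling is just prior sampling plus a filter on the evidence:

```python
def rejection_sample(evidence, n):
    """Draw n prior samples; keep only those consistent with the evidence.

    evidence is a dict like {"S": True}; returns the surviving samples.
    """
    kept = []
    for _ in range(n):
        s = prior_sample()
        if all(s[var] == val for var, val in evidence.items()):
            kept.append(s)   # matches the evidence: keep and tally it
        # otherwise: reject, i.e. just throw the sample away
    return kept

# P(+c | +s): count plus-c among the surviving samples and normalize.
# (If nothing survives, this divides by zero -- exactly the rare-evidence
# problem discussed above.)
kept = rejection_sample({"S": True}, 100_000)
print(sum(s["C"] for s in kept) / len(kept))
```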
So we're going to go back to that moment when we weren't sure whether or not the evidence variable would come up in a way that matched the evidence. We're going to fix that. We're going to fix that by doing something called likelihood weighting. So if we look at the caricature here, imagine that your evidence is that the shape coming off the conveyor belt is blue. And you want to know the distribution over shape, given color blue. Basically, we're going to take every blue thing that comes off and throw it into one of the shape bins. And that's going to be our distribution. So let me just write that down. Whoops. Let me just write that down. So we're trying to compute the probability distribution over shape. OK, I just changed models here. We're not in the sprinkler network anymore. Probability distribution over shape, given that c equals blue. So what do we do? Colored shapes come off the conveyor belt, and if one is blue, we throw it into one of the shape bins. This one's going in this bin. They go into the bins. In the end, I'm going to count up how many things fell into each bin. When I get something that's red, it doesn't match my evidence of blueness, so I throw it out. In likelihood weighting, we don't have the problem of throwing out samples because we make them all blue. That is, when we're drawing samples from the network, we don't risk the sample not matching the evidence at the evidence nodes. We fix it to equal the evidence at the evidence nodes. So there's a big plus. We don't have to throw out any samples. Every shape comes out blue. But there's a problem. What's the problem? Let's go see if we can see the problem. OK, so the problem with rejection sampling: if your evidence is unlikely, you reject lots of samples. Actually, how many do you reject? So let's say we're trying to compute some query, P of Q, given some evidence variables e. So I'm getting samples out of my Bayes net using prior sampling plus rejection. Each sample pops off, and I look at it, and I say, do you match my evidence e? No, I throw it out. Do you match my evidence e? Nope, I throw it out. Do you match? Yes, I'll keep this sample. What if your evidence is, like, this test result, and this test result, and this test result, and this test result? How many samples am I going to keep? I'm only going to keep the ones that pop off the conveyor belt miraculously matching all my evidence variables. What fraction of samples from the full distribution match my evidence? Well, P of e, right? That evidence has some probability. And I'm drawing things from the network according to the full joint distribution. And so things come out matching my evidence with probability P of e. Well, if my evidence probability is 0.3, that means I keep 30% of my samples. If my evidence probability is one in a billion, it means I only keep one in a billion of the samples. And that's super inefficient because I spend all my time rejecting everything. And that's what's going to happen because your evidence is, like, some weird things you've observed. Next time you do it, you're going to observe some other weird things. So the marginal probability of your evidence is almost always pretty small. And that means with rejection sampling, you're almost always rejecting almost everything, so that's not great. And in particular, as you sample, you're not exploiting the fact that you know the evidence.
You draw your sample, and then you're like, oh, man, it didn't come up like the evidence. Oh, it didn't come up like the evidence again. It's like, well, you actually know the evidence. Maybe we should work this into the process. OK, so we do. We already did this, but here's an illustration. If I want to know the probability of a shape, given blue, and I draw all of these samples off the conveyor belt, I throw out everything except that one blue sphere, because nothing else happened to come off blue. So here's the idea. We're going to rig the system. We're going to fix all the evidence variables as we walk through the network. And we're not going to sample their values. We're just going to fix them to the evidence. And that means that the sample distribution can't be consistent anymore because suddenly everything is coming off blue. And if everything comes off the conveyor belt matching our evidence, it's not the same distribution. I mean, if nothing else, the probability of the evidence used to be P of e, and now, what is it? Now, it's 1. Everything matches the evidence, so we've broken something. Let's go dig into the algorithm and see where we can fix what we broke. But it does suggest a solution, which is to say that each of these samples that comes off is going to have a weight. And we'll figure out what that weight is. It's going to correspond to the probability of the evidence. And what that means is we're now going to go through, and we're going to pull things off the conveyor belt. And they're all going to be blue. And so everything comes off blue. But they're not all going to come off with the same weight. And in particular, samples where it probably wouldn't have matched the evidence are going to come off with a very small weight. And samples where, actually, plausibly, it could have matched the evidence, they're going to come off with a little larger weight. OK, so let's do an instance of likelihood weighting, OK? So here's my network. What I did before is I started at the top, and I sampled each node. And sometimes things came off as plus sprinkler, plus wet grass, and sometimes they didn't. But now, every day is a sprinkler, wet grass day. So let's draw a sample. I go to node c, and with probability 0.5, I'm going to get plus cloudy. I get plus cloudy. OK, now I go to r. And-- let's see, it's plus cloudy. So with probability 0.8, I'm going to get plus r here, and with probability 0.2, I'm going to get minus r. Nope, we're not doing that one yet. OK, so we have plus c. We're going to go to the sprinkler. Now, given that this sample has cloudy, what I would normally do is flip a coin at sprinkler, and it would come up as plus s 10% of the time. And it would come out red, minus s, 90% of the time. So right here, imagine I haven't actually flipped the coin yet. Chances are, I'm going to get minus s. I'm going to get a red node here. But my evidence says green. OK, so what happens? I'm going to stop. There's this moment in prior sampling or rejection sampling where I don't know what's going to happen. But I know that if I got to this point again, and again, and again, how likely would I be to get the evidence of plus s? Well, basically 10% of the time that I get to this point, the next node that I sample, when I sample sprinkler, will come out matching the evidence. So for every 10 samples that get here, 9 of them are going to get rejected, but 1 will survive.
And that means, if you imagine we split into some multiverse of parallel executions here, instead of flipping the coin and risking a 90% chance of getting minus sprinkler, I'm going to take the universes where I actually get plus s. And I'm going to pretend I'm going into those universes, which means this sample only really represents 10% of the samples that make it to this point. So when I go through, I'm going to rig this node. And every time I rig a node, I'm going to multiply the weight of the sample, which begins at 1, by the probability that the thing I rigged would have actually happened. All right. So now I go over to rain, and I say, well, this isn't evidence. So I'm just going to flip a coin and see where it falls. Maybe it will be plus r. Maybe it will be minus r. It's plus r, OK? Now, I get to wet grass. And when I get to wet grass, I look here, and I say, I could flip a coin and see whether I get plus or minus w. But instead, I'm just going to fix it to plus w because that's my evidence. But not every single time I get to this point would it come up my way. It would 99% of the time. And so this sample is going to get another weight factor of 0.99. And so what's going to pop off here is that you'll get samples, and they'll have weights on them. Now, why do these samples have weights? These samples have weights because the nodes that I rigged only had a certain probability of actually having the chips fall the way I needed them to. And that probability needs to be captured somewhere. And it's captured in the weight of the sample. So here's the algorithm for likelihood weighting. And then we'll look at its properties. The input, again, just like rejection sampling, is the instantiation of your evidence. But now, instead of rejecting samples that don't match the evidence, I'm going to rig every sample to match my evidence. So I'm going to go along node to node, and if it's not an evidence node, I'm going to flip a coin according to the probabilities based on the parents, and take whatever comes out. That's the normal thing. But when I get to an evidence variable, I am going to assign the value in my evidence to that node. And I'm going to multiply the probability of that having happened in an organic way into the weight of the sample, OK? What comes off now is a sample that has a weight attached to it. And so you can imagine, I pull a blue pyramid off of the conveyor belt, except it's now got the number 0.7 attached to it. And not every sample is going to have the same weight. Indeed, samples often have very, very low weight because you forced a lot of evidence nodes to be something that they probably wouldn't have been, and then the weights are, in general, very small. OK, so that's likelihood weighting.
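Here's a minimal sketch of that loop, again on the hypothetical toy encoding used in the sketches above (NETWORK, ORDER, and the helper names are made up for illustration):

```python
def likelihood_weighted_sample(evidence):
    """One pass down the network: sample non-evidence nodes, rig evidence
    nodes, and accumulate the weight of the rigged choices."""
    sample, weight = {}, 1.0
    for var in ORDER:
        parents, table = NETWORK[var]
        p_true = table[tuple(sample[p] for p in parents)]
        if var in evidence:
            sample[var] = evidence[var]          # rig it to the evidence
            weight *= p_true if evidence[var] else (1.0 - p_true)
        else:
            sample[var] = random.random() < p_true   # ordinary coin flip
    return sample, weight

# P(+c | +s, +w): tally weights instead of raw counts, then normalize.
num = den = 0.0
for _ in range(100_000):
    s, w = likelihood_weighted_sample({"S": True, "W": True})
    den += w
    if s["C"]:
        num += w
print(num / den)
```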
Likelihood weighting: we take our network. We start at the top. We flip a coin for each non-evidence variable, and it lands however it lands. As a result, if I draw a sample from a Bayes net in which some nodes were actually sampled, taking on whatever value the random number came up with, and other nodes are rigged, what's the probability that I'm going to get, say, plus c, plus s, plus r, plus w? Well, the probability that I'm going to get plus c is P of plus c. The probability that I'm going to get r is P of r, given c. And so the sampling probability-- the probability that we're going to get this sample, which includes my evidence every time-- is going to be a product of the probabilities of the nodes that I actually did sample. But unlike before, there are no terms in this product for the nodes that I didn't sample. The nodes where I just rigged values to match the evidence-- those conditional probabilities are not part of this product. And if I want probabilities to match my joint distribution, I have to have every single term, one for each variable. However, samples have weights. And the weight on a sample, with z the sampled variables and e the fixed evidence variables, is all the other terms of the joint probability. It's the probability of each evidence value that I fixed, given its parents. And so, either in the sampling distribution or in the weights, every term appears exactly once. And that means that if I take the weighted sampling distribution here, I get all of the terms of the Bayes net appearing on one side or the other. So what does that mean? It means we have a bunch of samples with weights. And when I tally things up, each sample counts in proportion to its weight, OK? All right. Any questions about that? We'll do an overview. And then we'll look at something a little bit different. OK, so let's think about likelihood weighting. What was the idea in likelihood weighting? The idea is that if I have certain evidence values-- certain variables whose value I know for my query, like, I know this test came out negative, or I know this light is on in the car-- it seems silly to generate samples that don't match my evidence because you're just going to reject them. So think about a big network which has a bunch of variables and a bunch of connections between those variables, OK? And let's say we've got some evidence. We'll make it blue. Let's say this node and this node-- we happen to know their values. If we know that, say, this one takes on value plus and this one takes on value minus, it seems a little crazy to generate samples that don't match that evidence. Why would we do that? We're just going to throw them out. So in likelihood weighting, we would fix these values. And then as we go down the network, the whole rest of the sample plays out in a way that's consistent with the evidence. This is sort of like hypothetical simulation. Hey, imagine-- which network? Imagine there was a burglary, but not an earthquake. What would happen? Let's simulate. So I fix burglary. I fix earthquake, because why not? We only care about those samples. And then I let the network play out according to its probabilities. That's just simulation. And likelihood weighting works really well when your evidence is at the top, because then everything else that falls out of the network after that conditions on that evidence naturally. However, let's say we have a similar network, with a bunch of variables and arrows between them. So imagine it's the same network. And now, let's say, all our evidence is at the bottom. So we know John called and Mary called. And I'd like to know the probability of an earthquake. Well, now this isn't quite so good, because we start simulating, and we're like, hmm, no burglary and no earthquake, no alarm. Oh, John called. Mary called.
That sample, is that going to have high weight or low weight? Right, it's pretty typical up top because there's usually not a burglary. There's usually not an earthquake. The alarm usually doesn't go off. But then, instead of nobody calling me, which would be the typical way the simulation would unfold, I'm like, no, pretend they called. The sample will come off as negative, negative, negative, but they both called. It will come off with a very small weight because that's probably not what would have happened if there had been no alarm. And then you're going to have lots of samples rolling off the conveyor belt that represent no burglary, no earthquake, no burglary, no earthquake, no burglary, no earthquake. They all have John and Mary calling because that's how the procedure works. But they all have a very small weight because that's probably not what would have happened. And so as a result, everything upstream of your evidence doesn't take on values which are typical of scenarios where that evidence has that value. So in likelihood weighting, everything's sort of downstream. You simulate the consequences of the evidence. But likelihood weighting doesn't do a very good job of simulating the causes of the evidence. You're just sort of going along simulating a normal day, and then, boom, there's your evidence. It's probably pretty inconsistent with what actually has been sampled. And so your samples roll off with a very small weight. So likelihood weighting is good. We take the evidence into account, at least to a certain degree. If nothing else, all the samples do match the evidence, so we don't completely reject anything. And that means more of the samples will reflect the state of the world. But as we just talked about, likelihood weighting doesn't solve all our problems because the evidence influences the choice of things that are downstream, but not upstream. What we'd really like to do is be able to say, I know this variable, and I know this variable; so somehow fill out the rest of the network for me in a way that takes that into account. Like, John and Mary are calling, so set that earthquake variable in a way that is typical, given that evidence. How do we do that? We don't have a method for that right now. We only know how to sample downward, and usually there won't be an earthquake or a burglary if we do that. So we're going to take a couple-minute break. And then we're going to talk about another technique called Gibbs sampling, which has its own issues, which we'll get into, but fixes this problem of wanting your samples to take into account the evidence, regardless of where the evidence is placed in the network, OK? So a couple-minute break, and then we'll pick up with Gibbs sampling. [SIDE CONVERSATION] That's weird. [WHISPERING] OK, I changed my mind. We're not going to Gibbs sampling yet. We're going to do some examples. All right. OK, so first example, we're going to bring back the rain and traffic network. So rain, traffic-- we need some probabilities. Something has to live under rain, so we'll say P of R. So let's say, plus R, minus R. What's the probability of rain going to be? Well, let's make the probability of rain 0.3, the probability of no rain 0.7. We'll pretend we're not in our perpetual drought, OK? Now, what lives under T? Under T lives the conditional probability of traffic, given rain. And so there are two probabilities.
There's plus traffic, minus traffic when there is rain. And then also, when there is no rain, there is plus traffic, minus traffic. So let's think of some numbers. When it's raining, let's say there is 90% traffic, 10% no traffic. And when there's no rain-- let's say it's 50:50. All right, so we have a Bayes net. What can I do in this Bayes net? I can compute elements of the joint distribution. So, probability of rain and traffic: the probability of rain is 0.3, and when there's rain, the probability of traffic is 0.9. So it's 0.3 times 0.9. OK, that's a thing I can do. I could run variable elimination or inference by enumeration. And I could answer questions. Like, I could answer, what's the distribution over T? Well, we know what the answer looks like. There's some chance of plus T, and there's some chance of minus T. And there are some numbers, but I don't know what they are. But I could do some computation, OK? We're not going to do that. Today is sampling day. Instead, what I could do is draw some samples from this network, and use those samples to compute some quantities. So let's compute the probability of plus T in this network according to samples. Well, I sort of need to be able to call Python's random in order to get fair samples, so I'm just going to make stuff up. So what's my first sample going to be? Well, what's the most likely sample? The most likely sample, which might not happen first, is that there is no rain, OK? So I flip a coin, and 70% of the time it comes up no rain. So my first sample, let's pretend it's no rain. And then at T, I need to pick a value for traffic. So in the no-rain case, it's 50:50 whether it will be plus or minus. What would you like? CS188 dot random, go. STUDENT: Plus traffic. PROFESSOR: Plus traffic. You want traffic. All right. Well, good thing you're in the Bay Area, OK? All right. So then we draw another one. All right. Another one, minus r. This one maybe comes out minus traffic. OK, this one, now there's rain. And now there is traffic. And now there's rain again. And now there's traffic, OK? And I could keep doing this, all right? So of these four samples that I've just drawn, what is the probability of traffic in these samples? STUDENT: 3/4. PROFESSOR: 3/4. Is that the probability in the actual underlying distribution? Probably not. But if instead of 4, it were 4,000, or 4 million, I'd probably get pretty close. That's prior sampling. Prior sampling: I sample each node in the network until I have a complete outcome. I write it down. I write the next one down, and the next one. And then I answer questions from those samples-- questions about marginals, like the distribution over T, that have no evidence. That's prior sampling. However, if I want to know the probability of rain, given traffic, only three of these samples matter. This one matters, this one matters, and this one matters. But that second sample represents the strange world where there's no traffic. And if I want the probability of rain, given traffic, those outcomes don't matter to me. So, on to rejection sampling. I just answered the distribution over T using prior sampling. And I got plus T, minus T-- what was it? 3/4, 1/4. OK, that was with prior sampling. Now I'm going to use rejection sampling to answer the distribution over-- what did I say it was going to be?-- R, given plus T. All right. I've got three samples that are relevant because one was rejected. That's not bad.
I only rejected one. That's because my evidence is plus T, which is pretty common in the samples. And it's pretty common in the samples because it's pretty common in the underlying model, OK? So rejection sampling-- reject, OK? Now, in the remaining samples, what do I get? Probability of R: plus R or minus R. Plus R is 2/3, and minus R is 1/3, because that's the proportion in the samples that survived the rejection process, meaning the samples that matched my evidence. Now, normally it doesn't work that well. Was everybody clear on how I drew the samples? We didn't do likelihood weighting, right? I just drew samples, rejected the ones that don't match my evidence, and computed quantities on the ones that are left. Draw a lot of samples, and you'll get something like the right answer. OK, any questions on that? OK. What about likelihood weighting? Let's do that again. Oh, I guess if it's gone, we can just do it on the same slide. All right. Let's do this again, except now the network is fire causes alarm. And up here, I need the probability of a fire. And it's either going to be yes fire or no fire. What's the probability of fire? Let's call it 0.01, probability of no fire 0.99. All right. Now, we need to know the probability of alarm. Hmm. I still want my other example. I'm sorry I deleted it. We're going to compare and contrast. There's rain causes traffic. And remember, we decided the probability of rain was-- plus rain, minus rain-- 0.3, 0.7. Now, way over here, we have the probability of traffic, given rain. There's the plus R case. There's the minus R case-- plus T, minus T; plus T, minus T. Traffic is 50:50 without rain, but it's 90:10 with rain. OK, so there's our rain and traffic case. So we didn't do likelihood weighting there, but maybe we should. Let's do likelihood weighting. So remember, we have to have a query. So let's say we want to compute the probability of rain, given plus traffic. All right. So let's compute it. Now, we need to draw some samples. And in this box, I'm going to draw some samples. So what we did before is we drew something like, oh, plus rain, plus T; minus rain, plus T; minus rain, minus T; minus rain, plus T. And then we said, OK, well, that's fine because this one gets crossed out because it doesn't match my evidence. Then I compute the probability of R over the remaining samples. And in this case, I would get 1/3 probability of rain given traffic, OK? That's what we did before. Let's do that again, but with likelihood weighting. Why do we do this with likelihood weighting? Because we don't like throwing out samples, especially if we're throwing out 99% of our samples. So let's do some likelihood weighting, all right? In likelihood weighting, we say, all right, we know that T is supposed to take on this value, plus. So we're going to flip a coin at R and see what we get. So let's say we get plus R. Now, normally, we'd flip a coin, and T would come up plus 90% of the time. We're not going to flip the coin. We're just going to take plus T. And we write down the weight, 0.9. That thing I did-- in this whole multiverse of samples, 10% of them just got shaved off because they didn't match what actually happened. So let's do another one. Minus R-- let's say that's the sample at R. Now, normally, I would flip a coin at T. And that coin would come up plus or minus 50:50. But you know what?
I don't want to waste time rejecting the sample. So I'm going to force the sample to be plus T. And you say, wait a minute. It had a 50% chance of not being plus T. And so of that beautiful whole sample that I had, half of it is gone, because half of it happened in the parallel reality where the coin came up the wrong way. So this sample has weight 0.5. Let's do another one. Let's say the next sample is plus R, plus T. That will come out with weight 0.9. So now, when I look at these three samples, nothing's been rejected. But one of them didn't make it through quite so well as the others, because it doesn't match the evidence very well. So it has a smaller weight. So now, when I ask you for the distribution over R, given T, all the samples are relevant. But they don't all have the same weight. So what is the mass on plus R? For plus R-- that's this one and this one, for a total of 1.8. So for plus R, I have 1.8. And for minus R, I just have this one sample, and it doesn't even have as high a weight, so 0.5. So as unweighted samples, 2/3 of them are plus R. But the actual weighted answer here, under likelihood weighting, is that the probability of plus R is 1.8 divided by 2.3, which is about 0.78. So you get a different answer, because the weights affect how much each sample contributes. And samples that came off the assembly line matching the evidence really well-- like plus R, where probably you were going to get traffic anyway-- have high weights. And the ones that came off the assembly line not really looking like your evidence, but, what the heck, we assigned the evidence anyway-- they have smaller weight, because they represent a much smaller chance that you would have gotten the evidence had you let the sampling occur naturally, OK? Questions on that before we compare and contrast with fire and alarm? Yep. STUDENT: All right, so the thing we calculated is the marginal probability of R? Or is it-- PROFESSOR: Down here? STUDENT: Yeah. PROFESSOR: Here, in likelihood weighting, you are always computing the conditional probability of an event, given the evidence that you have fixed, because every sample looks like your evidence. STUDENT: We never calculated the marginal probability? PROFESSOR: Right-- actually, you can't calculate the marginal probability of R from this, because you've messed with your samples. They're all evidence-matching samples. Now, I could run likelihood weighting with no evidence, but then it reduces to prior sampling, OK? All right. So that's the basic idea: you go through, and instead of just throwing out samples that don't match your evidence, you make them match your evidence. And some will escape with reasonably high weights, and others will just get their weights reduced because they don't match the evidence very well. So instead of being rejected when you don't match the evidence, you get a smaller weight. And so you have a little bit more signal coming through than if you had just rejected everything. That's sort of the success case for likelihood weighting.
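As a sanity check on these small-sample estimates, here's the exact arithmetic for the rain/traffic numbers above, in a quick hypothetical script (not from the lecture):

```python
# Rain/traffic network from above: P(+r) = 0.3, P(+t|+r) = 0.9, P(+t|-r) = 0.5.
p_r, p_t_given_r, p_t_given_not_r = 0.3, 0.9, 0.5

p_t = p_r * p_t_given_r + (1 - p_r) * p_t_given_not_r   # 0.27 + 0.35 = 0.62
p_r_given_t = p_r * p_t_given_r / p_t                    # 0.27 / 0.62 ~ 0.435
print(p_t, p_r_given_t)
```

So the exact answers are P(+t) = 0.62 and P(+r | +t) roughly 0.44; the handful-of-samples estimates above (3/4 for traffic, 2/3 unweighted, 0.78 weighted) scatter around these and would converge with many more samples.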
Before we do Gibbs sampling, I want to motivate why we're doing Gibbs sampling. We're doing Gibbs sampling because of the failure case of likelihood weighting. The failure case of likelihood weighting is like fire causes alarm. Fire happens only 1% of the time. And what does the alarm do? Well, we have to say what the alarm does in the fire cases and in the no-fire cases. In the fire cases, I need to say the alarm can go off, or it can fail to go off. Let's say 90% of the time it goes off, and 10% of the time it doesn't work. And when there's no fire, sometimes the alarm goes off anyway, and sometimes it doesn't. Let's say 99% of the time it doesn't go off, and 1% of the time you get a false alarm. So here's a domain. This domain seems fine. Let's compute some samples. First, what would happen if we did rejection sampling? So first we need a query. How about the probability of fire, or the distribution over fire, given that my alarm went off. How am I going to compute this? Well, I need samples that say plus A. So I could do rejection sampling. Let's do some rejection sampling. OK, CS188 dot random, what's the value for F? Remember, it's minus 99% of the time. Minus, all right. Now, given that it's minus, we need to pick a value for A. This is rejection sampling, so I'm not going to rig A. I'm just going to cross my fingers and hope I get the right thing. What am I probably going to get? I'm probably going to get that the alarm did not go off. OK, well, this sample's getting rejected because the alarm didn't go off. Let's try another one. What's going to happen at F? Minus. What's going to happen at A? Minus. OK, reject. Minus F, minus A, reject. Minus F, minus A, reject. OK, here's a fire, and the alarm doesn't go off-- reject. So we're going to create a whole lot of samples that get rejected. Well, that's no fun. Rejection sampling seems super annoying in this case. Why? Because, remember, we reject things unless they happen to match the evidence. And the evidence here is rare because the thing that causes it is rare, OK? So let's erase all these samples because they are not helpful. Let's do it again, because we thought we had the solution. We thought maybe the solution is to just rig the alarm: the alarm's going off, plus A, OK? Now, we're not going to do rejection sampling. We're going to do likelihood weighting. I promise every sample will come off with the alarm going off. So let's do it. We're going to flip a coin at F, and we're going to get minus, because you basically always get minus, right? 99% of the time you get minus-- but, hey, plus A. But there's a weight, because if I picked minus F, the chance that I actually would have, for reals, gotten plus A is only 0.01. So this one only has weight 0.01. Like, it's there, but by its weight, it's basically already a dead sample. OK, so what's the next one? Minus F. Well, let's rig plus A. You do that for a long time, and you get a whole bunch of samples that look just like that. And you say, all right, well, on the plus side, these are coming out matching my evidence. On the minus side, the samples that I really need are the ones that say plus F, because those are the samples that are going to come through with high weight. And that makes sense, because when the alarm goes off, there's probably a fire. OK, maybe someday, a plus F sample will pop off. And of course, it will be a plus A sample as well, because I'm rigging A. Now, what's the weight on this? Well, this plus F won't happen very often, because P of plus F is small. But given that it does happen, the probability of plus A is 90%. And so this sample will have weight 0.9. So let's say I end up with 20 of the first kind and 1 of these.
So: we've got 20 minus F, plus A samples and 1 plus F, plus A sample, and we've got to compute our probability distribution over F, given plus A. There are 20 minus F samples, but in aggregate they only have weight 0.2. There's only one plus F sample, but it has weight 0.9. That's not a probability distribution yet; I have to normalize by the total weight, dividing both by 1.1. But when I do that, I do actually end up with the probability of fire being high, given alarm, even though it took a while for me to actually get a sample that looked like that, because my evidence was rare. So likelihood weighting will do the right thing in the end. But let's say I do this again, except the fire probability is now 0.000001, OK? It'll still work, except now I'm going to get millions of the minus F's. And then finally, finally, finally, maybe, I'll get a sample that actually looks like what really happens with a plus A. And when that appears, it will have such high weight that it'll proportionally do the right thing. But I'm going to waste a lot of time generating stuff upstream of the rare evidence that isn't typical for that evidence, OK? This is why we need something like Gibbs sampling, all right? Does that make sense? So let's do Gibbs sampling really quickly. In Gibbs sampling, we don't walk through the network from top to bottom, get a sample, and reset. Instead-- it's sort of like iterative improvement for CSPs-- we're going to start with a complete assignment. And we're going to tweak it a little bit. The end result is going to be that we take into account evidence upstream and downstream, but there's going to be a price, OK? So here's the procedure. I'll say it in words first. We're going to start with a complete, full instantiation-- an assignment to all variables. For example, random, or drawn as a prior sample, whatever you want; it doesn't matter, as long as it is consistent with your evidence. You're then going to walk through the variables one at a time, round robin, and leave the evidence fixed. But for every non-evidence variable, you will resample just that variable, conditioned on all the rest staying fixed. We'll walk through an example of that, but you do this for a long time. So you change this variable, and this variable, and this variable, and this variable. And you get this sequence of assignments. If you repeat this infinitely many times, the samples that come off are going to come from the correct distribution, where the correct distribution is the probability distribution over all of the non-evidence variables, conditioned on the evidence variables. It's sort of an amazing property. And the basic idea here is that you're resampling everything, but you're leaving the evidence fixed. And as you resample things, the evidence can influence the things you're resampling through the other variables, in all directions, OK? So let's do an example. All right. So remember, this is going to be the network where cloudy causes sprinkler and rain, both of which cause wet grass.
And I'd like to know the probability of sprinkler, given rain. So what I will do is fix my evidence. So r has been locked to plus r, just like in likelihood weighting. So here's my Bayes net. r has been locked because it's evidence. Now, what would I do in likelihood weighting? I'd walk along the other three variables, flipping coins, assigning each variable given its parents according to my coin flips, except for plus r, which I'd just rig. And I'd get a weight out of that. Here, we do something different. We initialize the other variables-- call random, whatever. And then we're going to repeat the following process. We've got a full assignment. We're going to choose a non-evidence variable. So let's say we choose s. I keep the whole rest of my assignment, and I only resample this one variable. Well, I know how to sample a variable given its parents. But here, I'm sampling a variable given its parents and its child. And in fact, I'm sampling it given everything else in the network. So we're going to have to think through the math on that. So we're going to resample x from P of x, given all other variables. You say, that's in my Bayes net. That is not in your Bayes net. Your Bayes net has P of x, given its parents. This is P of x, given everything, so we're going to have to compute that. So we're going to draw a sample from P of s, given plus c, minus w, plus r-- the whole rest of the network. And so what do we get? We get plus s, great. We're going to pick the next variable. Maybe that'll be c. We're going to sample a new value of c, given all of the other assigned variables. Maybe we get plus c, great. Then we go to w, and we resample w, given all of its neighbors. And maybe we get minus w. And we keep doing this. And we get this chain of samples. First question: when I did prior sampling, each sample walks through the network, and then we reset and walk through the network again. So I take sample one, and I take sample two. Are sample one and sample two going to have correlations between them, given the network? No, they're just totally independent walks through the network, right? How about here? Think about this sample and this sample. Are they going to be correlated? Imagine you have a big network. You have a huge network. Everything's assigned. You pick variable 712, and you flip a coin. And in the end, what does the network look like compared to before you flipped that variable? It looks exactly the same, except maybe in one place, OK? So you are no longer drawing independent samples, all right? The samples are now highly correlated. We're going to have to deal with that, OK? But you've got this sequence of samples. Each one is a slight variation on the sample that came before it. And if we grab a sample, and then we run this thing for a long time, and we grab another sample, and then we go round robin for a while, and we grab another sample-- if we wait long enough between samples, then the correlations reduce. And now we're grabbing samples from the joint distribution over all non-evidence variables, conditioned on the evidence variables, OK? So that's great. We now have this Markov chain we run. And we get sample, sample, sample, sample, sample. And if we grab them too close together, they just look exactly like each other. But that's actually OK, as long as you run it long enough. That's great. That will give us the ability to draw samples that condition on our evidence, upstream and downstream.
But we need to do computation to get from one state in the chain to the next. And in particular, we need to be able to compute the probability of the currently flipping variable, given the whole rest of the network. Well, that sounds bad, right? Isn't this what got us into trouble-- trying to compute conditional probabilities of things, given other things? It's not so bad. And the reason it's not-- so, wow, that's a lot of math. OK, let's walk through it. The reason it's not so bad is because the probability distribution over s, conditioned on some assignment to the rest, will look this way for any network, OK? It's a variable conditioned on an assignment to everything else. Well, that's going to be a ratio. The numerator is just an entry of my Bayes net-- a joint probability of all the variables together. And the denominator is the exact same thing, but summed over, in this case, s. So I can rewrite that like this. That's the probability of one entry of the joint distribution, divided by the sum of a bunch of similar entries in the joint distribution that only differ on the value of s. So if you expand that out-- from here to here, P of s, c, r, w is a product of local conditional probabilities, because that's the definition of a joint probability in a Bayes net. But then I look at this, and I'm like, huh, this numerator came from the Bayes net formula. The denominator came from the Bayes net formula. And if we look, we see that quite a lot of those terms have nothing to do with s. Like, P of plus c is in the numerator and the denominator, right? And P of plus r, given plus c, is in the numerator and the denominator. And in general, almost everything in the network is in the numerator and the denominator in the exact same way, because it doesn't mention this one variable that's changing. And so everything cancels out. And you're left with something really simple. You're left with the following algorithm: when you want to resample a variable in this network, you look at all the terms in the network which mention that variable. These are exactly the things you would join together in variable elimination. You take all the terms that mention it. You instantiate them with the current assignment. And then you normalize over all the different values of s, and that's it. And the whole rest of the network doesn't matter, because it's going to cancel out-- because its contribution to the likelihood function is the same regardless of what you set s to, all right? So almost everything cancels out. You're left with only the conditional probabilities involving s. And in general, only the terms mentioning the resampled variable need to be considered. And so it's efficient to resample a single variable. It's not trivial. It's not sitting in the Bayes net the way it was for prior or rejection sampling. But it's efficient. And you get this nice algorithm that lets you take into account evidence in both directions. There are a couple of things I didn't specify yet. I didn't talk about how many samples you need so that your error bars get small. That's the thing you would see in a statistics class. I didn't talk about how far you have to run this Markov chain in Gibbs sampling in order to escape your initial conditions. And we didn't talk about why it is that, if you run Gibbs sampling, upstream and downstream evidence both get conditioned on.
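Here's a minimal sketch of that round-robin procedure on the same hypothetical toy encoding used in the earlier sketches, resampling each non-evidence variable from only the terms that mention it. This is a sketch under those assumptions, not the lecturer's reference implementation:

```python
def gibbs_resample(var, state):
    """P(var | everything else): multiply only the CPT terms that mention var,
    once for var=True and once for var=False, then normalize and redraw."""
    scores = {}
    for value in (True, False):
        state[var] = value
        score = 1.0
        for other in ORDER:
            parents, table = NETWORK[other]
            if other == var or var in parents:      # terms mentioning var
                p_true = table[tuple(state[p] for p in parents)]
                score *= p_true if state[other] else (1.0 - p_true)
        scores[value] = score
    p_true = scores[True] / (scores[True] + scores[False])
    state[var] = random.random() < p_true

def gibbs(evidence, n_steps):
    # Start from a full assignment consistent with the evidence.
    state = prior_sample()
    state.update(evidence)
    history = []
    for _ in range(n_steps):
        for var in ORDER:
            if var not in evidence:                  # evidence stays locked
                gibbs_resample(var, state)
        history.append(dict(state))                  # highly correlated samples
    return history

# P(+s | +r): tally over the chain (burn-in and thinning omitted for brevity).
chain = gibbs({"R": True}, 20_000)
print(sum(s["S"] for s in chain) / len(chain))
```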
But it should feel plausible: you lock down your evidence, and then, each time you resample a node, you push a little bit of information up, and down, and around the network. So it should make sense that it all gets smeared around the network. But we haven't proven anything about it. OK, so in summary, we have the following algorithms. We have prior sampling. This is where you walk down the Bayes net from the top, so that you can use the conditional probabilities of nodes given just their parents, because that's easy. You walk down the network, and you draw a sample. That's great for computing a query P of Q without evidence. If you have evidence, you do the exact same thing-- rejection sampling. You walk down the network, but you throw out all of the samples that don't end up matching your evidence. It's a very simple algorithm. But it might be very slow, because you're going to reject almost everything-- a 1-minus-P-of-evidence fraction of your samples. We had likelihood weighting, which is also used to answer P of Q, given some evidence. Here, every sample is relevant, but sometimes they all have such low weight that you didn't really gain much. But sometimes, if the evidence is high enough in the network, your samples actually are all much better. And this can be much faster to converge than rejection sampling. And finally, we have Gibbs sampling, which is a very different style of sampling, where you compute a sequence of samples. These have the property that they take into account evidence upstream and downstream in a naturally unified way. But there were a lot more questions about the non-independence of samples and about the actual math you have to do in order to resample a single variable, conditioned on the rest of the network. There's a lot more that we're not going to cover. There's actually not a ton more to say about rejection sampling and likelihood weighting, and what little there is to say, we'll say when we see particle filtering with hidden Markov models. But on Gibbs sampling, there's a lot more that we're not going to talk about here. It does produce samples from the probability of the query, given the evidence-- but only under certain conditions on how long you run it, how you space your samples, and so on, depending on what you're doing. Gibbs sampling is a special case of something you might have heard of, called Markov chain Monte Carlo. There's a more general category called Metropolis-Hastings samplers, which you may have heard about. They're very famous. Gibbs sampling is a special case of that, but it has its own flaws that are addressed by Metropolis-Hastings. And you can read about these things. In fact, any time you see the words Monte Carlo, that usually means sampling. So you could take a look at that stuff. We're not going to cover it here. OK, that's it for today. We will see you next time. We will talk about taking everything we've done with Bayes nets, merging it with what we were doing with action selection earlier, and start thinking about associating utilities with knowledge and the value of information. [SIDE CONVERSATION] Yeah, of course. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L07_Introduction_to_Nuclear_and_Particle_Physics_Units.txt | MARKUS KLUTE: All right, hi. So we continue the discussion in 8.701 on units. So this unit is on units. Why this is an important discussion is because it is very convenient, and it simplifies life quite a bit, if one does not use SI units in the discussion of decays or cross-sections in particle physics. It's also important to avoid carrying around large exponents. Given you have a conversation, you want to talk about things which are of order 1 instead of 10 to the minus 28. One example is the introduction of a new unit for cross-sections, which describe an area, and that unit is the barn. We talk about cross-sections of barns, or femtobarns, or picobarns, and one barn is defined as 10 to the minus 28 square meters. Physics processes at high energies-- for example, the ones we discuss at the Large Hadron Collider-- are typically of the order of a picobarn, 10 to the minus 12 barns, or a femtobarn, 10 to the minus 15 barns. So there's an interesting story to barns and why this unit was introduced. The unit came out of the Manhattan Project. The idea of the scientists was to confuse potential spies about what the cross-sections for nuclear processes are. And so they introduced this unit of a barn when trying to characterize nuclear collisions-- maybe an accelerator shooting something at a target. And one barn is a cross-section where it's really, really hard to miss-- so a big cross-section. In this context, the shed was also introduced. This is not very popular today anymore. And it turns out that this idea of confusing the readers of papers or of discussions turned into a new standard. So we talk about barns-- and picobarns and femtobarns, specifically-- quite frequently. So this is just one example. This is not really changing the units, but just avoiding carrying around exponents. But we also, in particle physics and in nuclear physics, use a system called natural units. This system is based on fundamental constants of quantum mechanics and special relativity. So the idea here is that we replace kilograms, meters, and seconds by h-bar, the unit of action in quantum mechanics; c, the speed of light; and GeV, where a GeV is approximately the mass of a proton. Then you do this transformation, and with the option of setting h-bar and c to 1, you find that energies are expressed in GeV, momentum is expressed in GeV, and mass is expressed in GeV. That means when you talk about relativistic equations, E equals m c squared and all those things, m, E, and also the momentum have the same unit. That simplifies things quite a bit. Time has a unit of 1 over GeV, length has a unit of 1 over GeV, and area has a unit of 1 over GeV squared. So that's the simplification. You might think that you lose information by setting fundamental constants to 1, but you actually do not, because you carry with you, in your equations, the dimension of the problem. If you want to do a quick exercise here, I invite you to calculate the charge radius of the proton, which is 4.1 per GeV, and convert this back to SI units. Again, it seems like we lost information here, but just from the dimensional analysis, you can figure out what the answer is. And the hint here is that h-bar c is equal to 0.197 GeV femtometers.
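As a sketch of that dimensional analysis, here's a small hypothetical script (not part of the lecture) working the exercise:

```python
# Convert the proton charge radius r = 4.1 GeV^-1 back to SI units,
# using the hint hbar * c = 0.197 GeV * fm.
hbar_c = 0.197             # GeV * femtometers
r_natural = 4.1            # in 1/GeV
r_fm = r_natural * hbar_c  # (1/GeV) * (GeV * fm) = fm  -> about 0.81 fm
r_m = r_fm * 1e-15         # 1 femtometer = 1e-15 meters
print(f"r = {r_fm:.2f} fm = {r_m:.2e} m")
```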
So you should already know the answer from previous discussions in the lecture, but the calculation is rather straightforward. On top of this, it's useful to use Heaviside-Lorentz units and combine them with the natural units we discussed. So what we do in addition here is set the permittivity of free space to 1, and also the permeability of free space to 1-- so epsilon0 and mu0 to 1. When you do that, you basically tie the electric charge to the strength of QED. And so alpha, the dimensionless fine structure constant, 1 over 137, becomes e squared, the electric charge squared, over 4 pi. So this is also a very convenient convention. We'll use those natural units as we go through the class. In some examples, we'll use SI units; in others, natural units. This will always be clear from the problem we're looking at. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L32_Feynman_Calculus_Fermis_Golden_Rule.txt | MARKUS KLUTE: Welcome back to 8.701. So we continue our discussion of the Feynman Calculus, talking now about Fermi's Golden Rule. The heart of calculating decay rates and cross sections is Fermi's Golden Rule. It simply tells you how we can use the calculation of amplitudes and the available phase space to make an assessment of decay rates and cross sections. The amplitude M holds all the dynamical information. And we have seen that we can calculate the amplitude by evaluating Feynman diagrams directly, using the Feynman rules. The available phase space is a kinematic factor, and it depends on the masses, the energies, and the momenta of the particles involved. And then, again, Fermi's Golden Rule simply says that the transition rates-- decay rates and cross sections-- are given by the product of the phase space and the square of the amplitude. What does this look like? If you look at the Golden Rule for decays, here we suppose we have one particle decaying into a second, third, fourth, up to an n-th particle. It says that the decay rate is given by the matrix element-- the amplitude-- squared, and a term which is the phase space factor. There's also a factor S here in front. This S accounts for the fact that the same particle might occur multiple times in the final state, and we have to make sure that we don't double count; this double counting has to be corrected for. If all final state particles are different, this extra factor is 1. We'll look at this some more later. Now, at first glance, this looks rather complicated. But if you try to assess what those individual terms mean, you will see that it's very accessible. So when we try to calculate rates from this golden rule, we have to integrate over all outgoing particle four-momenta. But we have three kinematical constraints. The first one is that the outgoing particles have to be on the mass shell. We talked about this issue with virtual particles before. Simply, the energy of each particle has to follow this condition. This is enforced by a delta function: if the argument is 0, the delta function contributes; if the argument is non-zero, it returns 0. So this first part here in our Fermi's Golden Rule simply accounts for the fact that outgoing particles have to be on the mass shell. Outgoing particles also have to have positive energies. And this explains our second factor here: this factor is the Heaviside step function, which is simply 0 for negative values and 1 for positive values. And the last constraint means that energy and momentum have to be conserved. So the first particle minus the second, third, and so on-- for the energy and for each of the three components of the momenta-- has to be 0 for this to return 1. So again, another delta function. There are also factors of pi. And the simple rule here is that for each delta function, you have to account for a factor of 2 pi in your expression. So this basically explains everything we see here on this slide. So now we can calculate, and I recommend having a look at Griffiths chapter 6 for this. If you look at two-particle decays-- one particle decaying into two particles-- the standard results are sketched below.
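For reference, here is a sketch of the closed forms this discussion arrives at, in natural units with h-bar and c set to 1, following Griffiths chapter 6 (in SI-like conventions, extra factors of h-bar and c appear):

```latex
% Two-body decay 1 -> 2 + 3, with |\vec p| the outgoing momentum
% in the rest frame of the decaying particle of mass m_1:
\Gamma \;=\; \frac{S\,|\vec p\,|}{8\pi\, m_1^{2}}\;|\mathcal{M}|^{2}

% 2 -> 2 scattering in the center-of-mass frame, with incoming momentum
% \vec p_i and outgoing momentum \vec p_f:
\frac{d\sigma}{d\Omega} \;=\;
\left(\frac{1}{8\pi}\right)^{\!2}
\frac{S\,|\mathcal{M}|^{2}}{(E_1+E_2)^{2}}\,
\frac{|\vec p_f|}{|\vec p_i|}
```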
Because of all the delta functions and the Heaviside functions, this simplifies quite tremendously. The equation reduces directly to a phase space factor involving the momentum of the outgoing particle, times the matrix element squared. Again, you have this statistical factor S to account for the fact that there might be identical particles, and you want to keep track of that. For scattering, the equation looks almost the same. You have almost the same phase space factor, and the matrix element. Again, the transition rate is given, as Fermi's Golden Rule tells us, by the matrix element squared and the phase space factor. The overall factors, as we will see later, are slightly different. But the point here is that we have a way to assess cross sections. We'll see this later in more detail. For two-body scattering in the center of mass frame, you know that the initial momenta of particle 1 and particle 2 have to have the same magnitude. The outgoing momenta also have to have the same magnitude as each other, but they don't necessarily have to be the same as the incoming ones. In any case, the differential cross-section can be calculated quite straightforwardly. Again, there's the matrix element squared; you have the final state momenta and the initial state momenta, and you divide by the sum of the energies squared, together with an extra factor here. So what we have seen here-- I didn't explain how we got to all of it-- is that from this Golden Rule we can assess rates, and we have seen how we can calculate the phase space factor. In the next lecture, we will see how we can calculate the matrix element itself. And we'll start doing this by using a toy experiment, or toy model, such that the discussion and the algebra simplify quite a bit. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L83_Neutrino_Physics_Mixing.txt | MARKUS KLUTE: Welcome back to 8.701. We have seen in the previous video how neutrinos can acquire mass. When they have mass, their weak eigenstates are not equal to their mass eigenstates, so we observe the same mixing as we have seen in the quark sector. So let's review this a little bit. Starting from just two neutrino generations, we can write the flavor eigenstates as a mixing of mass eigenstates. If we do this, it's a simple matrix: you find that there is one angle used for the rotation of the mass eigenstates into the flavor eigenstates. All right. So we can, at time t equal 0, write our muon neutrino as a combination of the 1 and 2 mass eigenstates. If we then have this neutrino evolve in time, we see that the relative contribution of the 1 and 2 mass eigenstates actually changes. So if we do that, obviously you find some time evolution. If we then ask ourselves what the probability is that, starting from a muon neutrino, we actually find an electron neutrino in an interaction, through this mixing of mass eigenstates we can calculate this probability just by squaring the amplitudes. If you do this-- you just use this part here-- we find that there is a cosine of E2 minus E1 term. All right. Good. So let's analyze this a little bit further. We know that the masses need to be small, so one thing we can also do here is a Taylor expansion of our energy, and then just rewrite the term. If you then analyze it some more, you find that the oscillation probability simply depends on the mass difference squared, the distance L the neutrino has traveled from time 0, and the energy of the neutrino. So this is fantastic, because now, by studying the probability for a neutrino to change its flavor, we can infer the mass differences of the two states. I should add here that in this formula, the length is given in meters, the energy in MeV, and the mass difference in eV; otherwise the numerical factor doesn't work out. So again, we have seen, if you start from a two-neutrino flavor model, that the experimental parameters of interest are the distance L from the neutrino source-- the place where we generate a neutrino of a specific flavor-- to the detector, where we actually observe the flavor of the neutrino, and the energy of the neutrino. And then the appearance or disappearance of a muon neutrino, for example, if we start from a beam of muon neutrinos, is a function of the distance from the source. And this is shown here for neutrinos of a specific energy. So you can try to measure the disappearance of muon neutrinos, or you can try to find the appearance of electron neutrinos, in this specific two-neutrino model. All right. So what we will find later is that we want to look for disappearance and appearance of neutrinos of specific flavors in order to probe mass differences. Instead of doing this for two generations-- you already know how this goes-- you can do this in three generations, and you find that the unitary matrix has three angles, three rotations, and one complex phase. And this looks very much the same as in the quark sector. The big difference is that the values of those parameters are quite different. For the quarks, we have seen it's dominated by the diagonal.
And then we have seen, for example, in the Wolfenstein parameterization, that we can do an expansion of the matrix and see terms which are of order lambda, lambda squared, and lambda cubed. Here, in the lepton sector, the situation seems to be quite different. We have a later lecture where we look at the parameters and their numerical values, but what you see here is that it's more like democracy between the individual values. The question is, do we have sensitivity to the complex phase? We can only have that sensitivity if the value of this matrix element is non-zero. And this has been observed already-- so that's good news for further neutrino studies. So in general, you can write the oscillation from one flavor state to another flavor state using this rotation matrix we have seen, and with that measure the individual components of the matrix. |
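As a numeric illustration of the two-flavor oscillation probability discussed above, here is a minimal sketch. It assumes the standard convention P = sin^2(2 theta) sin^2(1.27 dm^2 L / E), where the factor 1.27 works for L in km with E in GeV, or equivalently L in m with E in MeV. The parameter values below are illustrative placeholders, not inputs quoted in this lecture.

```python
import numpy as np

def oscillation_probability(sin2_2theta, dm2_eV2, L_km, E_GeV):
    """Two-flavor appearance probability P(nu_mu -> nu_e)."""
    return sin2_2theta * np.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# Hypothetical atmospheric-scale parameters (for illustration only):
print(oscillation_probability(sin2_2theta=1.0, dm2_eV2=2.5e-3,
                              L_km=295.0, E_GeV=0.6))
```

Scanning this probability as a function of L/E is exactly the disappearance/appearance strategy described in the lecture.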
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L55_QCD_Asymptotic_Freedom.txt | MARKUS KLUTE: Welcome back to 8.701. So in this lecture, we want to talk about asymptotic freedom, about confinement, and also about the running of the strength of the strong force. In the recitation, we already talked about vacuum polarization in QED and how it relates to QCD, so here we're just going to remind ourselves about what has been discussed there. Loop contributions in QED make the effective charge a function of the momentum transfer q, so the coupling strength increases with larger values of q squared. At leading order, you have to consider this diagram, and you find that there is a correction coming from this kind of diagram; m here is the mass of the particle involved in this correction. But as you know, in perturbation theory, you have to consider all possible diagrams, and this is being done here. So if you consider those higher-order diagrams, you can rewrite the running of the QED coupling as the coupling at q squared equal 0, divided by 1 minus this contribution here. There's a couple of things to note. There's a 1-minus factor here, and then there's also the definition, or the fixed point, of the coupling, which is taken at q squared equal 0. That's possible in QED, but it will not be possible in QCD, because at momentum transfer 0 the coupling is going to be infinite-- it's not well-defined, and we cannot use perturbation theory at this value either. OK, good. In QCD, we have not just one diagram, like the one we just saw before, but we also have to consider the gluon self-coupling diagrams. So we have contributions of this sort. I'll spare you the calculation, but you find that the gluon contribution has an opposite effect: it's producing some sort of anti-screening, or camouflaging, of the color charge. And so you find, if you calculate this in a very similar way as we did just before for QED, this kind of correction. Here, we have to define, or fix, the strength of the coupling at a specific scale. And you see that you have this contribution here. There's a plus here, and this factor, which is 11 times the number of colors involved minus 2 times the number of flavors involved. Those numbers are 3 and 6, so 11 times n is larger than 2 times f, and this term becomes positive. Therefore the coupling decreases with q squared. All right, since it decreases with q squared, at very, very high q squared the coupling approaches 0. And that's the origin of asymptotic freedom: in the limit of very large q squared, color-charged particles become free. That's also the reason why at very high energies we can make calculations in QCD using the Feynman calculus, or perturbation theory. In the other direction, if you go to very low q squared, those methods and tools are not applicable anymore, and quarks and gluons are actually confined. You cannot have a colored object standing by itself-- you cannot have a free gluon or a free quark. So let's look at this running a little bit more. We have this annoying alpha s at a fixed reference q squared. You can get rid of this by redefining the parameter here and using this lambda parameter-- lambda QCD, as it's often called, or lambda color, as it's called here. When we do this, we can rewrite the equations, and we find that there is no dependence anymore on the reference scale.
And we don't have to fix this reference point. We still have the dependence on 11n minus 2f, and then this log dependence on q squared over this specific scale. So the strength of the coupling is, with this definition, defined for any value of q squared. Finding this lambda QCD is kind of complicated, because if you go to very low q squared, calculations and experiments cannot easily be compared anymore. But we find that lambda QCD is on the order of 100 to 500 MeV. OK, here are experimental measurements of alpha s. The strength of the strong interaction has been measured in many experiments. We can measure it, for example, in decays of tau leptons where hadrons are being produced, and therefore you have sensitivity to alpha s. You can use bound states, or deep inelastic scattering experiments, where PDF fits can be used in order to constrain alpha s. We can use e-plus e-minus physics and look at the distribution of jets: we saw that the additional radiation of a gluon is sensitive to this vertex of a quark radiating a gluon, and therefore to the strength of QCD. You find that those experiments are all in reasonable agreement. This line here indicates the average value at a specific scale, and typically, when people compare measurements, they use the scale of the Z boson mass, about 90 GeV, in order to compare the values. And you can see here the running of alpha s, the running of the strong coupling, as a function of q, and you see the behavior of asymptotic freedom, meaning that the coupling becomes small at very large values of q and large at small values of q. All right, so that's it for asymptotic freedom and the running of alpha s. We have a little bit more discussion of QCD before we enter the next chapter. |
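For reference, the one-loop running just described can be written compactly-- a sketch in the Griffiths-style convention, with n the number of colors and f the number of flavors, valid for |q^2| much larger than Lambda^2:

```latex
\alpha_s(|q^2|) = \frac{12\pi}{(11n - 2f)\,\ln\!\big(|q^2|/\Lambda_{\mathrm{QCD}}^2\big)}
```

With n = 3 and f = 6, the coefficient 11n - 2f = 21 is positive, which is exactly the statement that the coupling falls as q squared grows.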
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L08_Introduction_to_Nuclear_and_Particle_Physics_Relativistic_Kinematics.txt | MARKUS KLUTE: Welcome back to 8.701. So in this section, we'll talk about relativistic kinematics. Let me start by saying that one of my favorite classes here at MIT is a class called 8.20, Special Relativity, where we teach students about special relativity, of course, but also about Einstein and paradoxes. And in that class, there's a component on particle physics, which has to do with using relativistic kinematics in order to understand how to create antimatter, how to collide beams, and how we can analyze decays. And in this introductory section, we're going to do a very similar thing. I trust that you all had some sort of introduction to special relativity, some of you maybe to general relativity. What we want to do here is review this content very briefly, but then use it in a number of examples. So in particle physics and nuclear physics, we often deal with particles that travel close to the speed of light. The photon travels at the speed of light. We typically define the velocity as v/c in natural units: beta is the velocity, and gamma is defined by 1 over the square root of 1 minus the velocity squared. Beta is always smaller than or equal to 1-- strictly smaller for a massive particle-- and gamma is always greater than or equal to 1. The total energy of a particle with non-zero mass is then given by gamma times m c squared, and the momentum is given by gamma times m v, or gamma times m beta. The total energy squared of a single particle is given by energy squared equal momentum squared plus mass squared. If you consider a particle with 0 mass, you see that the energy and the momentum are equal. If you consider a particle at rest, meaning the momentum is 0, you see that the energy is equal to the mass: you get Einstein's famous formula, E equal m c squared-- the equivalence between energy and mass. But we want to fully understand and control our Lorentz transformations. Here is shown, for example, a boost, or transformation, in the x-direction. You see that energy and momentum transform like time and space. I really encourage you to review this in more general cases, but you can always, when you have a boost in one direction, do a rotation first and get to this more simplified case. So here is the first example I would like you to actually go through. For the Lorentz transformation here, I decided to use the z-direction, just to change things up a little bit, and the velocity of the boosted frame is vb. We want to calculate the quantity energy squared minus momentum squared in the transformed frame. And what you will find, if you actually do the calculation-- the solutions are in the backup slides-- is that this quantity doesn't change under the Lorentz transformation. It is invariant, and we'll talk about an invariant mass in this context. So now, in particle physics, we often have the case that we are not considering just one particle that we want to describe and measure, but multiple particles which are involved in a reaction. So we can look at the total energy, just the sum of the energies of all particles, and the total momentum, the sum of the momenta of all particles. And those two quantities are always conserved. They are not invariant.
So be aware of the distinction between conserved properties and invariant properties. Invariant here means we perform a transformation, like the Lorentz transformation, and the property doesn't change. Conserved here means we have a reaction, and in that reaction the property does not change. Those are two different, distinct things. So now you can look at the invariant property-- which is also conserved in a collision-- which is this mass term, or mass-squared term. We'll define this total mass squared as the total energy squared minus the total momentum squared. And then you can consider the two cases of a laboratory frame and the so-called center-of-mass frame. In the laboratory frame, you have a particle. It's moving when we observe it, and then it decays into, in this example, three daughter particles. In the center-of-mass frame in this example, we put ourselves into the rest frame of the particle we are interested in, and in that frame, three particles emerge, and we can describe the three particles. The momenta of the three daughter particles are not going to be the same in the two frames. But because this total mass is an invariant property, it's the same in both frames, and it's equal to the mass of the parent particle we are interested in. So when you measure the energy and momentum of the daughter particles, you can infer, in any frame, the mass of the parent particle by calculating the total mass. And so you can infer from those measurements the identity of the mother particle. And that's, for example, how we discovered the Higgs boson: we measure the Higgs boson decay into a pair of photons, and then we calculate the invariant mass of those two photons in our laboratory frame. And that mass, then, is equal to the Higgs mass. So now here we want to compare, or look into, those two cases a little bit more. The first case is one where we have a particle 1 colliding with a particle 2, where particle 2 is at rest; particle 1 has a certain energy E1. This is called a fixed-target experiment-- the second particle is fixed, the first one is colliding. The second example is the one where you have two particles, both with energy, and we bring them to collision. Often, the two particles are of the same kind, like two protons, or an electron and a positron, and the energies of the beams are the same. But this doesn't have to be the case. Later in the class, we'll look at heavy-ion collisions-- the collision of heavy ions, like lead, with protons, for example-- and here the masses are different, and the energies of the particles can be different. All right. And here's another exercise now. We want to actually create a Z boson, which has a mass of about 91 GeV-- note I dropped the c squared here. And you want to produce this particle by colliding a positron with an electron. This happened at LEP at CERN in the late '80s and '90s. The center-of-mass energy, often called square root of s, has to be equal to 91 GeV; that's the energy we need in order to produce this particle. The mass of the electron and the positron is 511 keV, or 0.511 MeV. So the energy needed per beam is 45.5 GeV. That was the setup at LEP, where you have two beams colliding, so the center-of-mass energy is given approximately by the sum of the energies of the two beams.
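It helps to write the two cases side by side-- a sketch using the invariant s, the square of the total four-momentum:

```latex
s = (E_1 + E_2)^2 - (\vec{p}_1 + \vec{p}_2)^2 =
\begin{cases}
\big(2E_{\mathrm{beam}}\big)^2 & \text{symmetric collider, } E \gg m,\\[4pt]
m_1^2 + m_2^2 + 2E_1 m_2 & \text{fixed target, particle 2 at rest.}
\end{cases}
```

In the collider case, the available energy grows linearly with the beam energy; in the fixed-target case, it grows only with the square root of the beam energy, which is the punchline of the next exercise.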
So now, imagine somebody would have proposed a fixed-target experiment, where you have stationary electrons-- for example, electrons in atoms, just a gas of some sort-- and then you produce positrons in a beam, accelerate them, and bring them to collision. The question now is, how large does the energy of this positron beam have to be in order to produce a Z boson? Again, this is something I would like you to actually explore and write down; solutions for this example are also in the backup. So now, you see, there is a number of interesting examples coming just from E equal m c squared, and from being able to use Lorentz transformations. Here I give you a set of examples, and you should work on them in your own time. Maybe we'll touch on them in recitation. The first one is rather straightforward. Again, we are talking about LEP at CERN. After the Z bosons were produced, LEP was trying to go to higher energy to find some new physics, some new particle, for example the Higgs boson. The Higgs boson might be produced by a process which is called the Higgs-Strahlung process; we will look at this later. You have an electron and a positron colliding into a virtual Z boson-- that is, a Z boson which is heavier than 91 GeV; we'll see later how that's possible. The virtual Z boson can then radiate a Higgs boson. That's why it's called Higgs-Strahlung-- Strahlung is the German word for radiation. The electrons and positrons were accelerated to 100 GeV each, for a center-of-mass energy of 200 GeV. What was the gamma factor for those electrons? Another question which is quite exciting is, how much energy do you need in order to split a deuteron, the bound state of a proton and a neutron? It's an important particle in the evolution of our universe, in the sense that, in order to generate elements of higher mass or higher proton number, the deuteron is a rather important step. And so, just by knowing the mass of the proton, the mass of the neutron, and the mass of the deuteron, you can calculate the binding energy between those particles. We'll talk a lot about models for calculating binding energies when we talk about nuclear physics. But here, just from the kinematics and from E equal m c squared, you can calculate how much energy sits in this bound, or compound, state. From atomic physics, you might remember or know that excited particles can emit photons. So now you have a particle: it de-excites and radiates a photon. What happens now to the photon? Imagine this happening in a big gas or in some solid state. Can the photon be reabsorbed by the same medium, or even by the same particle? It's not a trivial question: what are the conditions under which this can happen? So, for example, imagine you have a gas with an excited particle in it, and it emits a photon. Now the photon sees the rest of the gas. Can the rest of the gas absorb the photon? Interesting question-- it's not trivial. Another interesting question, I think, is when you're trying to produce new forms of matter. You just produced a Z boson, but you can also produce antiprotons. So what is the minimal energy of a proton in a fixed-target experiment-- again, you have a target of protons in some form, you shoot a proton against this target, and you try to produce an antiproton.
So that means that in this collision, you have two protons in the initial state, and you have to have a proton, a proton, another proton, and an antiproton in your final state. How much energy is needed for the proton beam in order to succeed with this collision? (My counting is incorrect here-- this should be exercise 5, but OK, fine.) Decays: assume a pion decays at rest. A pion is a meson, a compound state of an up quark and a down quark, and it might decay into an electron and a positron. Whatever the dynamics is in this decay, if you just look at the kinematics, how fast are the decay products? In order to calculate that, you need the pion mass and the electron mass we just discussed-- and the positron has the same mass. So how fast are the electron and positron coming out of a pion decay? Assume that the pion is at rest; you can use momentum conservation and calculate the speed of the electron and positron. Next, another one of those minimal-energy proton collision exercises, a very similar setup, but here we try to produce a proton, a neutron, and a pion out of a proton-proton collision. And then the last one is the so-called Compton effect, where you have a photon which scatters off an electron target. You have an incoming photon, and the electron is at rest. You then look at the scattered photon angle and the scattered electron angle, and in that collision the energy of the photon is going to change. The energy of the photon is h times nu, or h c over lambda, with lambda the wavelength. And so the question is, how does the wavelength of the photon change in this kind of collision? So those are just examples of how you can use relativistic kinematics in order to calculate very important aspects of collisions in particle physics without any understanding, at this point, of the underlying dynamics, the underlying forces, the underlying conservation laws, and so on. Later in this class, we'll discuss the likelihood of a pion decaying into an electron and a positron, and why that is actually not that likely, and also collision rates and lifetimes of particles. But here we are just looking at the kinematics of those processes, calculating how much energy is involved and what the momenta of the resulting particles are. So I'll stop here. If you scroll down on the slides, you'll find solutions to two of the problems-- a numeric sketch of two others follows below-- and we'll discuss them in recitation. |
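As promised, here is a rough numeric sketch of two of these exercises-- the approach only, not the worked solutions from the backup slides; the masses are standard rounded values:

```python
m_e = 0.000511  # electron mass in GeV
m_p = 0.938     # proton mass in GeV
m_Z = 91.2      # Z boson mass in GeV

# Fixed-target Z production: s = m_Z^2 requires
# 2*E*m_e + 2*m_e^2 = m_Z^2, i.e. E ~ m_Z^2 / (2*m_e).
E_fixed = (m_Z**2 - 2 * m_e**2) / (2 * m_e)
print(f"positron beam on fixed target: {E_fixed:.2e} GeV")  # ~8e6 GeV

# Antiproton production threshold, p + p(at rest) -> p p p pbar:
# sqrt(s) must reach 4*m_p, so E_beam = (16 - 2)*m_p^2 / (2*m_p) = 7*m_p.
E_pbar = 7 * m_p
print(f"proton beam for antiproton production: {E_pbar:.2f} GeV")
```

The first number-- millions of GeV, versus 45.5 GeV per beam at LEP-- is the whole argument for colliders over fixed targets at high energy.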
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L72_Higgs_Physics_Fermion_Masses.txt | MARKUS KLUTE: Welcome back to 8.701. In the previous lecture, we have seen how the gauge bosons-- the W and the Z boson-- acquire mass through the Higgs mechanism, while the photon remains massless. We introduced a new field, a new complex doublet field, the Higgs field, which then broke the symmetry through its vacuum expectation value. And then, through the coupling to the gauge bosons, they acquired mass. Right. But we also have to find a solution for the fermions. You cannot simply add a fermion mass term to the Lagrangian-- that would violate, or break, gauge invariance. So how do we do this? We do this in a very similar way-- even easier. But before we look into how this is done, let's have a look at the masses themselves. The spectrum is spectacular. The top quark is our heaviest known fermion; it has a mass of about 172 GeV. The tau has a mass of 1.7 GeV. The muon is an order of magnitude lighter, with 0.1 GeV. And for the electron, we have to go down to 0.511 MeV. OK? And we haven't even tried to understand-- we were not even able to measure, actually-- the masses of the neutrinos. We will talk about neutrinos in one of the following lectures. So here you have six orders of magnitude, and you have to go much further down in order to find the neutrinos on this mass scale. So we have to have a mass-giving mechanism which allows this broad spectrum of masses to occur. And the very simple, ad hoc mechanism which was introduced into the standard model is one where the particle simply interacts with the Higgs field. So we have our Higgs field here. Let's say we have a left-handed particle coming in, and the interaction with the Higgs field turns it into a right-handed particle. This is a little bit simplified, but what we do here is simply introduce terms into the Lagrangian which do nothing else than turn our left-handed particles, via the interaction with the Higgs field, into right-handed ones, and the other way around. And we have to do this for up-type particles and for down-type particles. So here is another view of this. The strength here is the mass of the particle over the vacuum expectation value. This number here, this lambda, is the so-called Yukawa coupling, and those Yukawa couplings change from fermion to fermion. Each fermion comes with its own Yukawa coupling-- it's basically a free parameter in our theory, in the standard model. So instead of talking about the masses as free parameters, we talk about the couplings to the Higgs field as free parameters. But they are one and the same. All right. So this was rather straightforward. It's a simple coupling: you introduce this term ad hoc, and then hope for the best that it's actually realized in nature. And you'll see later that this is indeed the case for some of the fermions. |
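In formulas, the mass term generated this way can be sketched as follows (with v approximately 246 GeV the vacuum expectation value, a standard value not quoted explicitly in this lecture):

```latex
m_f = \frac{\lambda_f\, v}{\sqrt{2}}
\qquad\Longleftrightarrow\qquad
\lambda_f = \frac{\sqrt{2}\, m_f}{v}
```

So the top quark has a Yukawa coupling of about 1, while the electron's is about 3 x 10^-6-- the six orders of magnitude in the mass spectrum translate directly into the Yukawa couplings.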
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L104_Instrumentation_Accelerators.txt | MARKUS KLUTE: Welcome back to 8.701. So in this section of our discussion of instrumentation, we talk about accelerators, and I'll do this in a little bit of a historic way, showing you some of the developments over the last 100 years. We use electromagnetic fields in order to accelerate charged particles, and so the developments in understanding electromagnetism led to the technological development of accelerators and the availability of devices which can be used in order to accelerate or manipulate particles. This goes back to Maxwell and Hertz discovering electromagnetic waves, and to J.J. Thomson, who was able to use cathode rays and the classical Lorentz force in order to understand electromagnetic fields. If we study particle accelerators, we can see three different historic lines-- direct voltage accelerators, resonant accelerators, and transformer accelerators-- and we'll go through those three subjects one by one. The energy limits of our accelerators are typically given by the maximum possible voltage available. When you think about accelerators used for colliding beams, it's not just the energy which is the limiting factor: you also need a sufficient number of particles to be accelerated and brought to collision, and those particles need to be in some sort of beam which is narrow, such that collisions are possible. Requirements change depending on whether you have colliding beams or fixed-target experiments, whether you study lepton collisions or hadron collisions, and whether you're using secondary particles as the source of what you're going to learn. But the concepts are very comparable across the individual fields of study. So let's start with the Van de Graaff accelerator. You basically have to be able to create large voltages, and the Van de Graaff accelerator does this by mechanically transporting and separating charge to build up a large voltage. You can typically get up to megavolts and tens of megavolts, but you need to make sure that you don't enter a breakdown regime, and so it depends on what kind of insulating materials you use. If you are using insulating gas under certain environmental conditions, you get up to 17.5 megavolts, which then can be used in order to accelerate. In tandem accelerators, we can use the electric field twice, by flipping the sign of the particle's charge at the terminal. Large potential differences were a field of study in the 1920s and '30s, and it's noteworthy that this was a dangerous field: Brasch and Lange used the potential from lightning in the Swiss Alps, but this was fatal for Lange, who was killed by a lightning strike. So then you need to think about how you can make large voltages available, and how you can make them available for acceleration. Cockcroft-Walton accelerators use a cascade generator in order to reuse the voltage and create a larger potential for acceleration. The Marx generator is conceptually very, very similar. Those are still used today, even though these developments are 80 or 90 years old.
If you go to a Physics Today article from 2003, you see that a machine like this is being used in order to create the initial confinement for fusion experiments, and the total power is rather fascinating: it's 4 terawatts, which can be released in 100 nanoseconds from a large number of those Marx generators. In modern particle physics accelerators, we use resonant acceleration quite a lot, and the idea is really like it's shown in this picture here: the charged particles are kind of being accelerated on a wave. Basically, they're surfing on electromagnetic waves. And this can be seen here: you just place them correctly at a specific point in your waveform, and then they can be accelerated over some distance. The key point here is that you have a proper, correct phase relation to the accelerating voltage, and the setup can vary. Historically, there were various attempts to do this in an optimal way. The first one is the cyclotron, which has a static magnetic field. Your particles are injected, and in each turn, your particle gets a kick here and a kick here. And because its velocity is increasing, the radius in this fixed magnetic field is increasing as well. So after some time, the particle is accelerated and leaves the cyclotron at a specific velocity. Focusing here is important: you don't want to just inject particles and have them spray all over the place. Focusing and having the right optics for the particles is a key part of the work needed. There's a number of techniques-- I don't want to go into too much detail-- but what you typically see is that if you compress the beam in one direction, it defocuses in the other direction. And so quadrupole magnets can be used in order to get focusing, and fringe fields and edge fields are used in order to make sure that the beam itself stays in a compact form. You can also use techniques where you have focusing first in one direction and then in the other direction, so that the resulting beam can be made smaller. Accelerators are not just important today in particle physics; they have also found their place in medical applications, in tumor therapy specifically. And the history here is very long-- you see those initial ideas or methods where neutrons are used in order to treat tumors. We haven't discussed this in nuclear physics too much, but when you use ions as the form of radiation, you can really pinpoint where the energy is deposited. The so-called Bragg peak is used in order to control precisely, in a three-dimensional way, where the energy of the ions is being deposited. This is in contrast to radiation therapy with photons, which basically irradiates a larger volume of tissue and destroys not only the cancerous cells but also ones which are still healthy. Any given large hospital nowadays has a small accelerator, and so there are thousands and thousands of those available, and the work and the maintenance of those is [INAUDIBLE]. But continuing the discussion of accelerator concepts, these kinds of racetrack accelerators are quite interesting. You have your particles being injected, and, like in the cyclotron, they're getting larger and larger kicks, and then, at some point, they can be extracted in order to do experiments. Again, this technology is not new anymore.
MAMI is an accelerator at the University of Mainz, and the next generation of accelerators at this facility is using a similar technology. The question is what kind of conditions you have to fulfill in order to keep the particles in place, and here, because the particle moves along, it takes more time in each turn to make one circulation. So what you want to make sure is that your phase and the acceleration stay in sync: you want to make sure that the particle, when it comes around again, gets another kick and is not decelerated. And that explains why in those machines the particles are bunched-- they come in little blocks of particles instead of being a continuous stream-- and you can work out the conditions necessary to fulfill the requirement that the particles are being continuously accelerated. Then there's, over the history, a number of technologies which try to make use of the fact that when you accelerate, the velocity of the particles increases. Initially, at a specific time in one section, the particle gets a kick; the next time, it's already faster and travels further. So in a linear accelerator structure, you just increase the length of each accelerating section in order to make sure that you can again give the particles the necessary kick. Nowadays we use cavities-- superconducting cavities-- in order to make them energy efficient and to have large gradients. The general idea is, again, that you place your particle in here, at such a phase of your electromagnetic field that it always gets a kick, instead of being on the other side and being decelerated. The Alvarez linac is a very similar concept; the advantage here is that you only have one power input, which you then couple in, and the walls of the machine don't dissipate as much energy. Then next, once you have an accelerator structure in place, you also want to make sure that the structure itself doesn't interfere with the beam, and that the power transfer is as efficient as possible, such that you get more bang for your buck. All right, I mentioned it a few times already: when you have an electromagnetic wave of this form and you put your particle here, the question is what happens to particles which have a slightly higher energy versus particles which have a slightly lower energy. And it turns out that this kind of wave has a self-focusing structure, in the sense that particles which are a little bit behind get a slightly larger kick, and particles which are a little bit in front get a slightly smaller kick, which means that overall you focus your bunch-- in energy and in space, you focus the bunch. You can go one step further using RF quadrupoles. Again, no details given here: you use this unit in order to further squeeze the beam and reduce its footprint in energy and space-- the phase space your particles are occupying. All right, the next level here is to use a betatron, and the idea of the betatron is that you change the magnetic field as you go. Instead of changing the size of the structure, you change the magnetic field, so you can use the same structure in order to confine your beams. And the next level beyond this is that you don't just use one magnet-- you use many magnets, and this is done in synchrotrons, modern synchrotrons.
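A handy relation for the synchrotron discussion that follows-- a standard rule of thumb, not derived in the lecture: a singly charged particle of momentum p on an orbit of bending radius rho in a magnetic field B satisfies

```latex
p\,[\mathrm{GeV}] \approx 0.3 \; B\,[\mathrm{T}] \; \rho\,[\mathrm{m}]
```

This is just the Lorentz force balancing the centripetal force, with the units folded into the factor 0.3.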
So here, again, there's a long line of history, but the point is that when you use the same orbit for the particles as you accelerate them, your magnet structures can become much, much smaller. You have many small magnets instead of one large magnet, and only one or a few accelerating sections are needed. The particle passes by here and is accelerated, and then you have magnets all along the ring which are able to change their field strength. Again, if you have constant field strength, the radius changes; but if you modify the field strength according to the momentum of the particle, you can keep the particles in the same circular structure. More words on the focusing: if you shape a magnet such that there's a gradient in the field depending on the position, you can use that fact to have particles which are further out bent more inwards, and particles which are further in bent more outwards. And you can do this not just once, but twice, by changing the orientation of the gradient-- that's called strong focusing. Synchrotrons have limitations, and there are two which are rather important. The first one: the radius of the synchrotron is determined by the momentum of the particle and your magnetic field. So at some point, you run into technical limitations on the magnetic field strength. Modern superconducting magnets get up to on the order of 8 or 9 tesla; we are able to produce accelerator-grade magnets up to 14 tesla using superconducting materials. And so, with a fixed size of your tunnel, of your ring, that limits the amount of momentum or energy you can give to your particle. For the LHC, we accelerate protons up to 7 TeV in one beam, and we use a magnetic field of 8.4 tesla. That's really the maximum the machine can actually deliver, and we haven't actually demonstrated that we can get up to this point-- right now the proton energy is 6.5 TeV per beam. A previous concept, whose construction had started in Texas in the United States, was the so-called SSC, with a much, much larger, 87-kilometer tunnel. The idea was to get protons to energies of 20 TeV with magnets of 6.8 tesla. OK, this is one limitation, the size of your tunnel. The second one is that when you bend a charged particle with a magnetic field, it radiates, and it radiates proportionally to energy over mass to the fourth power, which means that at some point you are limited by the power you have to invest in the beam just to keep it at a specific energy. So this is E over m to the fourth power, and this also gives you a clue as to why we actually accelerate protons in the LHC and not electrons: the electron mass is too low, and this goes to the fourth power here. The synchrotron radiation of electrons in the LHC tunnel is just a limiting factor-- at some point, you don't have enough power anymore to give to the electrons in order to keep accelerating them. All right, so as I was saying initially, you need focusing, you need acceleration, and you need all kinds of additional components in order to make good colliding beams, and so it's much easier to just dump a beam into a fixed target and study what's coming out. So colliding beams as a source for particle physics experiments came a little bit later. First concepts came to fruition in Frascati in the 1960s, and then we had SPEAR, an electron-positron collider.
Its center-of-mass energy of 4 GeV then led to the discovery of the J/Psi. And then later, we had a 5 GeV electron-positron collider, which was then designed for 8 GeV. This then continued, as you know, to machines like [INAUDIBLE], which had a center-of-mass energy of up to 110 GeV, and the LHC, with a design center-of-mass energy of 14 TeV. So, the collider elements-- what do you need? You need to inject particles, and so you have to have a source of electrons and a source of positrons. Producing positrons in large numbers is just technologically much more complicated than producing electrons; in order to get sufficient positrons, you need to actually produce them with some kind of accelerator structure as well. They then need to be focused and cooled down, in order to have them as a beam that can be injected into the structure. As additional components, you need your RF generators; you need magnets to bend the beam in and out; you need magnets in order to bend the beam around, so we need bending magnets; and you need focusing magnets to keep the beam in orbit. Then, before you bring the beams to collision, you want to further focus them, such that you have more actual particle collisions available. And then you have interaction points, where you place your experiments. What's being done in such a facility is that you inject, you accelerate, and then you store the beam, to make sure that you can fully exploit the structure-- those rings are called storage rings. You avoid the one-shot kind of operation you have, for example, in a linear machine. Sometimes you do this in separate rings, sometimes in the same ring; it depends on the design of the machine. The challenges are, if you have particles traveling around for hours, that they shouldn't interact with the gas, for example, in your accelerator structure. This is handled by a very, very good vacuum, in order not to lose the beam while you have it stored. The fields of your accelerator structures need to be very stable, and they have to be stable for many, many hours. That places requirements, for example, on your electric grid, such that instabilities in the voltage don't end up changing the actual field strength of your magnets as you go along. Yeah, so again, I already mentioned this, the further development of those machines. In the '80s at CERN, stochastic cooling was used for the first time in an antiproton machine, and that machine led to the discovery of the W and the Z bosons and the first really deep study of the weak interaction at the scale of the weak interaction. At the Tevatron close to Chicago, at the Fermi National Accelerator Laboratory, protons and antiprotons were brought to collision. And LEP started in the late 1980s in the same tunnel where we find the LHC today. At DESY in Hamburg, HERA was used to collide electrons with protons, and those results led to our understanding of the structure of the proton. So you see the reason why I show you this history here: the progress we made in particle physics, at each of the energy frontiers, was very much tied to the progress in accelerator structures. And rightfully so, Simon van der Meer received the Nobel Prize in physics as an accelerator physicist for the discovery of the W and Z bosons. He was working on the machine and made the discovery possible together with Carlo Rubbia.
And if you look at the history of particle accelerators, it's interesting to see how we may move the energy frontier forward. What you see in this plot here is the available energy of the machines, and what you see here is a line-- this plot is called the Livingston plot. Note the logarithmic scale: for a very long time there was this steady exponential increase in available energy. But this line is now turning over-- we're already in 2020, and we haven't made further progress here. So focus here on this. However, there are interesting discussions ongoing about the next machine and being able to probe even higher energies. The way this is proposed for proton-proton collisions is just making a structure similar to the LHC, but about four times larger-- about 100 kilometers in circumference, compared to the 27 kilometers of the LHC today. And you can compare this with the hadron machines, but also with the electron machines; for the electron machines, this trend is not as pronounced as for the proton machines. But basically, the highest-energy electron-positron collider, LEP, was decommissioned at the end of the last century, and we are thinking about the next machine. Sometimes you see those acronyms: a linear machine, the International Linear Collider, or the Future Circular Collider with electron-positron collisions, which would be hosted in the very same tunnel as the hadron machine, called FCC-hh, with its 100-kilometer circumference. |
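As a quick numeric sketch of the two synchrotron limitations discussed above-- using the p ≈ 0.3 B rho rule of thumb and the (E/m)^4 scaling of synchrotron radiation, with the LHC numbers quoted in the lecture:

```python
# Bending limit: radius needed for 7 TeV protons in an 8.4 T field.
p_GeV = 7000.0
B_T = 8.4
rho_m = p_GeV / (0.3 * B_T)  # from p[GeV] ~ 0.3 * B[T] * rho[m]
print(f"required bending radius: {rho_m:.0f} m")  # ~2.8 km of dipole arc

# Radiation limit: loss per turn scales as (E/m)^4 at fixed radius,
# so compare electrons and protons of the same energy.
m_e_GeV, m_p_GeV = 0.000511, 0.938
ratio = (m_p_GeV / m_e_GeV) ** 4
print(f"electrons radiate ~{ratio:.1e} times more than protons")
```

The second number, about 10^13, is why the LHC tunnel holds a proton machine, while its predecessor in the same tunnel, LEP, was the practical ceiling for a circular electron machine.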
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L91_Nuclear_Physics_Introduction.txt | [SQUEAKING] [RUSTLING] [CLICKING] MARKUS KLUTE: Welcome back to 8.701. We're starting a new chapter now, Chapter 9, on nuclear physics. And this video is the first introduction to the topic, where I'm explaining some of the terminology and some of the concepts. We'll dive into much more detail as we go on. So given an atom, you can specify the number of neutrons, the number of protons, and the number of electrons, which is equal to the number of protons for neutral atoms. Atoms of the same element have the same atomic number, Z, but they're not all the same: isotopes of the same element have different numbers of neutrons. So we can have uranium with a varying number of neutrons. You typically write an isotope by specifying the mass number, the number of protons, and the number of neutrons, but that information is redundant, so typically we simplify this by just writing things like uranium-238, and that specifies a specific isotope of uranium. When talking about different nuclei, we sometimes refer to them as nuclides: an atom or nucleus with a specific number of neutrons and a specific number of protons. Isobars are nuclides with the same mass number-- the same sum of protons and neutrons-- but with varying individual numbers of protons and neutrons. An isotone is a nuclide with the same number of neutrons but a varying number of protons. And an isomer is the same nuclide but in a different energy state-- an excited state; so we can excite nuclides, just like their component particles. The nuclear radius can typically be extracted from the mass number of the nuclide: if we simply think of adding little balls together, the radius scales with A to the 1/3, where A is the number of nucleons in the nuclide. There are many isotopes, many nuclides, and so you can look at all of them, if you want, or a subset of them, in nuclear charts like the one given here, where we plot here the number of protons and here the number of neutrons. We'll look at many more of those charts later. Here's another representation of the very same thing: you see a nuclear chart again, and here, what's plotted in red are the stable nuclei. We will see that nuclei can decay, and we'll understand why they decay and in what form they decay. It's a core part of this chapter: understanding how nuclei can decay, and what we can learn about them by studying their decays. One way to look at it, for example: here I plot Z over A, so the number of protons over the sum of the numbers of protons and neutrons, A = Z + N. And you see that most of the stable nuclei, with the exception of the ones with very small mass number, have fewer protons than neutrons-- so there's an excess of neutrons. This can also be seen here: the stable nuclei typically lie on or below this line where Z is equal to N. Radioactive decays can be characterized, typically, by a parent nuclide and a daughter nuclide. Radioactive decay is a process in which an unstable nucleus spontaneously loses energy by emitting ionizing particles and radiation. The decay and the loss of energy result in an atom of one type-- the parent nuclide-- transforming into another type of atom, the daughter nuclide. We have already looked at decay rates in the context of particle physics interactions, and we can define this very similarly here: the decay rate, or, as it is sometimes called, the decay constant.
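In formulas-- a sketch of the standard relations implied here:

```latex
\frac{dN}{dt} = -\lambda N
\;\;\Rightarrow\;\;
N(t) = N_0\, e^{-\lambda t},
\qquad
R \approx r_0\, A^{1/3}
```

with lambda the decay constant and, for the radius scaling mentioned above, r_0 on the order of 1.2 fm (a commonly quoted value, not given explicitly in this lecture).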
And then, as we did before, we can define the mean lifetime, or the half-life, of a parent nuclide. So this is it for the introduction. In the next lecture, we'll start looking at the energy which is used to bind the nuclei-- the protons and neutrons-- together, and how we can understand this with an empirical model. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L42_QED_Dirac_Equation_Solutions.txt | MARKUS KLUTE: Welcome back to 8.701. So we'll continue the discussion on QED. In the last video, we looked at wave equations and we discussed the Dirac equation. Now we want to look at solutions to the Dirac equation. All right, so remember that the overall goal is to find a description of spin-half particles, which we can then use in our matrix element calculations in order to get to cross sections or decay rates of particles. An ad hoc, or natural, choice for a solution would be a wave function which is the product of a spinor, which depends on energy and momentum, and an exponential. So we have a free plane wave as a trial solution for our free particle. We have to make sure that this wave function satisfies the Dirac equation, as shown here. Since the spinor depends only on energy and momentum, it's rather simple to write down the derivatives, because they act only on the exponential. So we can do this here, and we find those expressions here for the four components. We can then rewrite this by putting those derivatives back into the Dirac equation. What we find is this simplified form for the spinor-- note that this does not depend on derivatives anymore, so this is a rather simple form. And then we can study what happens if we have a particle at rest. The Dirac equation further simplifies, here to E times gamma 0 times u-- that's our spinor-- equal m times u. Since gamma 0 is diagonal, we can immediately find the eigenstates of this equation. We find four different eigenstates, and they are orthogonal. And you find that they look very similar: N here is just the normalization factor, which is the same for all four, and we find those four different solutions here-- two with a negative sign in the exponent and two with a positive sign in the exponent. Now, this is for particles at rest, fine. We can interpret those solutions as positive and negative energy states of a spin-half particle-- a particle with two spin states. But now we want to see what happens if you have a particle which is not at rest. One way to approach this is to just apply a Lorentz transformation and see how the solutions transform. But it is even easier to look directly at the Dirac equation for the spinor. So we start again from our equation here-- we just write this down-- and then we rewrite the equation using the gamma matrices until we find those factors: px times gamma 1, py times gamma 2, pz times gamma 3. We can then write this using the Pauli matrices in this form. OK, so what this gives us is this coupled set of equations. Here we rewrite our spinor as a two-component vector, uA and uB, and you find the coupling between those two if you look at this set of equations. Great. But this is rather cumbersome and complicated. However, now we can try to find the solutions, or try to find eigenstates of the equation. We know that the solutions are of this form here-- that's how we started. If you then try to find a specific state, you can start from the simplest ansatz, which is uA equal (1, 0). Then just put this in here, and you find for u1 those solutions here. And then you turn this around: for uB, you take (1, 0) and you find the other solution.
So, similarly to the solutions at rest, we find here that there are four different spinors which are independent. You can interpret them, again, as the positive and negative energy states. If you then, for example, want to make sure that this is all consistent, you want to see that when the momentum is 0, you come back to the previous solution. And indeed, if the momentum is 0, those components here all become 0, and you find the very same solutions as we had on the previous page. You can also ask yourself what happens if you don't use this idea of positive and negative energy solutions. If you instead try to define all four as positive-energy solutions, rather than two positive and two negative, you find that you can write some as linear combinations of the others, so they're not independent solutions. So in order to have four independent solutions of the Dirac equation, two have to be positive and two have to be negative in energy. All right, so I recommend trying the exercise of playing around with the Pauli matrices and the gamma matrices. If you have not seen this before, it's not easy to follow the algebra, but once you get the hang of it, it's actually not that complicated. In the next lecture, we look at the solutions-- specifically the solutions for antiparticles-- a little bit more, and discuss interpretations of those solutions. |
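To summarize the construction in one line-- a sketch in natural units: the plane-wave ansatz and the momentum-space Dirac equation it leads to are

```latex
\psi(x) = u(E,\vec{p}\,)\, e^{-i\,(Et - \vec{p}\cdot\vec{x})},
\qquad
\big(\gamma^\mu p_\mu - m\big)\, u(p) = 0
```

with four independent spinor solutions: two spin states with E = +sqrt(p^2 + m^2) and two with E = -sqrt(p^2 + m^2).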
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L52_QCD_Elastic_ElectronPositron_Scattering.txt | Welcome back to 8.701. Our second part of the chapter on QCD is on elastic electron-proton scattering. Elastic electron-proton scattering in general has a long history, going back to the 1960s. Very famous examples are the MIT-SLAC, or SLAC-MIT, experiments, which led to the Nobel Prize in physics for Jerry Friedman, Henry Kendall, and Richard Taylor in 1990. And the most recent and highest-energy electron-proton experiments were conducted at DESY in Hamburg, at the so-called HERA ring. We'll come back to those experiments; they are famous not just for elastic electron-proton scattering, but for inelastic and deep inelastic electron-proton scattering, which is going to be the subject of future lectures. What we are trying to do here-- you might still wonder why this is the topic of a QCD lecture-- is study the structure of the proton. And the way to look at this is that in these Feynman diagrams, we're using this photon here to probe the structure of the proton. We can start from this scattering process, which we already calculated and for which we derived the Mott scattering cross-section formula, but we really want to reconsider the proton now. In elastic scattering, we are not destroying the proton-- we leave the proton intact in the scattering process. We do know that the proton is not a point-like particle. So if we want to study the proton, we have to take into account that it's built out of constituents and that it has an extension; it's not point-like anymore. And one way to do this is by analyzing the Fourier transform of the charge density function. Remember, the photon couples to charged particles, and so when we use the photon to probe a proton, it probes the charge distribution inside the proton. So we build the Fourier transform of the charge distribution, and then we can extend the cross-section from the point-like cross-section via this Fourier transform of the charge distribution. Great. In electron-proton scattering, there is another point: it's not just the extension of the proton, but also the fact that the proton carries recoil. And so we need to use two form factors in order to describe the cross-section fully. So let's have a look at the amplitude. Again, we can start from where we left off with electron-muon scattering, and write down our matrix element, now for the proton. What we do here is modify the vertex of the proton, and the modification is parameterized in those two factors-- we describe the amplitude with two form factors. One just looks like a modification of our spin-1/2 particle vertex-- you remember there's this gamma factor-- and then this is going to be a number. The other one is a little bit more complicated: there's the Pauli matrix here and another number, and it's normalized by the mass of the proton. All right. So this is just a parameterization-- we haven't done much physics here; we have just parameterized the distribution. If we then use this, we can, as I just described, calculate the cross-section again, using this very same parameterization, and get to this formula here in the laboratory frame.
This looks rather complicated, but if you go back to our form factor definitions and set this one to 0 and this one to 1-- and the same here-- we should get back to our Mott scattering result, which we had before. So we have really extended the discussion to extended objects, considering the charge distribution and also the recoil of the proton. This is great. Historically, this is not a new idea-- this has been done for generations-- but the parameterization was done slightly differently. One introduces linear combinations of those form factors, which are typically referred to as the electric and the magnetic form factors, GE and GM. This formula here-- and it's just algebra going from the previous formula to this one-- is called the Rosenbluth cross-section formula. All right? So if you find this in particle physics or nuclear physics booklets, this is what is meant by it. The only thing we did here is extend the Mott scattering formula to extended objects, to extended charge distributions-- the Rosenbluth formula. But we have done this via a Fourier transform of the charge distribution. So if we now measure the cross-section, we can infer the charge distribution of the proton, and with that the radius-- the charge radius-- of the proton. This has been done, and we find that the RMS-- root mean square-- charge radius of the proton is 0.81 femtometers. And this measured charge radius is still today a hot topic in particle physics, because the measured distributions do not quite agree with the theory predictions. You see this here, parameterized for this value of the proton charge radius: the theory and the global fits to the data don't quite agree. You would have to go to slightly higher values of the proton radius in order to have theory agree with the experiments. So I'll leave it here. We split this lecture in two parts, the elastic and the inelastic scattering. The next lecture will be on inelastic scattering, where we will break the proton apart and learn about the structure of the proton. |
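The connection between the measured form factor and the quoted charge radius can be sketched as follows-- in the static, low-q-squared approximation, for a normalized charge density rho:

```latex
G_E(\vec{q}^{\,2}) \approx \int e^{\,i\vec{q}\cdot\vec{x}}\, \rho(\vec{x}\,)\, d^3x
= 1 - \frac{\vec{q}^{\,2}}{6}\,\langle r^2\rangle + \cdots
```

So the slope of the electric form factor at q squared equal 0 measures the mean square charge radius-- this is the quantity behind the 0.81 femtometer number above.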
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L21_Symmetries_Introduction.txt | [SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: Welcome back to 8.701. In the second chapter, we will discuss symmetries and the importance of symmetries in physics in general, but also especially in particle physics and nuclear physics. So we start with a short introductory video, and then we'll move on to more details as we go along. The importance of symmetries cannot be overstated in physics. And there are two aspects which are important. The first one is that symmetries and conservation laws go hand in hand, as described by Noether's theorem. To express the theorem in an informal way, you can say that if a system has a continuous symmetry property, then there are corresponding properties whose values do not change with time, meaning that they are conserved. You can express this more formally and say: to every differentiable symmetry generated by a local action, there corresponds a conserved current. And we're going to look at those actions and currents as we go along. The second aspect, beyond the fact that there are conservation laws, is that you can understand physics experiments and nature if you know that the physics has an underlying symmetry, without fully understanding the physics or the mathematical background needed to do the calculation in detail. So knowing that there is an underlying symmetry can help in expressing or understanding the physics behavior of experiments. A few historic remarks on Emmy Noether. Emmy Noether was born in Germany in the 1880s in Erlangen, where she grew up and also studied mathematics at the University of Erlangen. After getting her degree, she worked for a full seven years at the university in the math department and received zero pay-- not just because of the currency being used there, but because at that time women didn't really have a prominent role in academia, and so there was no job for her to take. But her talents and her qualifications were recognized in the mathematical world at the time, specifically in the center of the mathematical world, which was in Goettingen. Hilbert basically discovered her and asked her to come to Goettingen in order to do her habilitation, which she received in Goettingen in 1919. She then stayed in Goettingen until the situation in Europe deteriorated in the 1930s. She was born Jewish and couldn't stay in Goettingen beyond the year 1933, and then had to emigrate to the United States, where she worked at Bryn Mawr College and also with Princeton. Her work-- you see here her habilitation thesis, in German, "Invariante Variationsprobleme" ("Invariant Variational Problems")-- was highly regarded. And she had a lot of influence and impact on various strands of mathematics and physics. Unfortunately, she passed away when she was only about 50 years old: she was diagnosed with cancer and, after surgery, her temperature rose and she died a few days later. To come back to symmetries and conservation laws: every symmetry of nature yields a conservation law. That is what Noether's theorem tells you. And you can reverse this and say that every conservation law in physics reflects an underlying symmetry. An example of this is the fact that the laws of physics are invariant under time translation, meaning that physics is the same yesterday, the same tomorrow, and it's going to be the same next week. 
And out of this, we can deduce energy conservation. Similarly, translation in space results in momentum conservation, and rotations result in conservation of angular momentum. And then, a little bit harder to grasp-- but we will see this in more detail-- internal symmetries can also lead to conservation laws. Gauge transformations lead to the conservation of charge. So there are internal symmetries as well. Before we dive into more detail, a few things. First, in many cases, symmetry operations can be expressed via matrices or groups. And there are a few rules or operations which are rather important and define a symmetry group. The first one is that any set of symmetry operations has to have an identity, meaning there has to be an operation which doesn't do anything to an element of the group. There has to be closure, meaning that if you apply a first transformation and then a second, the resulting transformation is, again, part of the set of transformations. There has to be an inverse, meaning that if you rotate in one direction, you can rotate back. And there is associativity, meaning that if you have a rotation acting on two other rotations, you can regroup and follow what's shown in this equation here. It is not guaranteed that you can reverse the order of elements of your group or symmetry operations. You can classify groups accordingly, however: those whose elements commute are called abelian groups, and those whose elements do not are non-abelian. All right, so with this first video we have introduced symmetries. And now we'll dive into more detail, understanding continuous symmetries and also discrete symmetries, and what we can learn from them. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L34_Feynman_Calculus_HigherOrder_Diagrams.txt | MARKUS KLUTE: Welcome back to 8.701. In the previous lectures, we studied how to read Feynman diagrams and how to calculate amplitudes and phase space. And we were able to use these, via Fermi's golden rule, in order to calculate lifetimes and scattering cross-sections. We exercised this with a toy theory and simple examples. But we focused on leading-order, or tree-level, diagrams. So in this lecture, I just want to introduce some features of higher-order diagrams, which are rather important. We started off with our toy theory, where we have a primitive vertex at which three particles interact. The strength of the interaction, the coupling, we just label as g. And then we can use this primitive vertex in order to build up scattering processes. Here we want to consider the process of two particles A going to two particles B. And they do this by exchanging a particle C. This is the lowest-order diagram, sometimes called the leading-order or the tree-level diagram. So what do higher-order diagrams look like? Here's the first example. And this example is one where one of the legs involved-- one of the particles involved-- has a correction to its own mass and energy. This is the so-called self-energy diagram. And if you do the counting correctly, you find that there are five of those diagrams. Shown here, we have a correction to particle A. But you could also have a correction to particle C, or to the B particles in the outgoing legs, and obviously also to this particle here. The second form of diagrams is one where you correct the vertex involved. So here there are two diagrams correcting each of the vertices, as shown here. So instead of directly interacting in the primitive vertex, you have this interaction here with those two additional vertices. This changes intrinsically how the strength of the interaction looks to the outgoing legs. And then there is the form of diagrams which we discussed in the context of CP violation, so-called box diagrams, where you just go around in a box-- that's why they're called box diagrams. And here, too, you change the strength of the interaction involved. So those are three classes of higher-order diagrams. And we will see much more of those when we talk about QED, the weak interaction, or QCD later on. The couplings and particles involved determine how these diagrams modify the resulting features of the interaction. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L103_Instrumentation_Calorimetry.txt | MARKUS KLUTE: Welcome back to 8.701. So in this section, we talk about calorimetry. In contrast to the discussion of tracking detectors, here what we're trying to do is measure the energy of the particles. And we do this by basically destroying them. The underlying concept is rather straightforward: we have a particle, and we put a piece of material in front of it such that it slams into it, and the energy deposited by the particle is the measurement we try to undertake. So in nuclear and particle physics, that is exactly what we refer to as calorimetry: the detection of particles, and the measurement of their properties, through total absorption in a block of matter. The common, central feature is that the measurement is destructive. So again, in tracking detectors we try to minimally disturb the particle, and in calorimetry we try to destroy it. The exception to this might be a muon, which at high energies might deposit only a small fraction of its energy in the calorimeter, or a neutrino, which just flies through without having any interaction. But the purpose is really to measure energies by destroying the particle. And it's widely used in all kinds of areas of particle and nuclear physics: neutrino experiments, proton decay experiments, cosmic ray detectors, collider experiments, and so on. In collider experiments specifically, the idea is to build the detector such that it completely surrounds the interaction region, so that you don't lose energy from the collision just passing through an uninstrumented region. The detection mechanism can vary quite a bit. We use scintillators a lot. We use silicon in some modern detectors. We use ionization, we use Cherenkov detection, and we sometimes use cryogenic detectors, which are very sensitive to very small energy depositions and can be quite useful-- they are used, for example, in dark matter experiments or in neutrinoless double-beta decay experiments. Conceptually, you can differentiate between homogeneous calorimeters and sampling calorimeters. In homogeneous calorimeters, basically the entire absorber material is the same as the detector material. An example of this is lead glass, which is often used. So what you do in the calorimeter is induce electromagnetic and nuclear showers, and the energy of the incoming particle is converted into photons. And then what you need is a photodetector, which measures the number of photons coming out of your detector material. For this to work, the detector needs to be transparent. Alternatively, one can use sampling detectors, where heavy material is used in order to induce a shower, interleaved with detection material in order to count, again, the number of photons. Homogeneous calorimeters typically have very good energy resolution. And the reason for this is that nothing gets lost: everything is measured in the absorber, which is the detector. But that leads to some limitations. For example, the granularity of the detector is typically limited, and there's no longitudinal information about the shower development-- you basically have one block. For example, the lead-glass block shown here, or the tungsten block shown here, is used for the measurement. So you produce photons, and then the photons need to be measured. And that's done with photodetectors. 
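Since the measurement ultimately comes down to counting photons, here is a minimal sketch of why that counting drives the resolution; the light yield below is a hypothetical number of my choosing, just for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Photon counting is (approximately) Poisson-distributed, so the relative
# spread of the measured signal falls like 1/sqrt(N) ~ 1/sqrt(E).
PHOTONS_PER_GEV = 1000  # hypothetical light yield
for E in [1.0, 10.0, 100.0]:  # GeV
    counts = rng.poisson(PHOTONS_PER_GEV * E, size=100_000)
    rel = counts.std() / counts.mean()
    print(f"E = {E:6.1f} GeV : sigma_E/E ~ {rel:.4f} "
          f"(expect {1 / np.sqrt(PHOTONS_PER_GEV * E):.4f})")
```

This 1-over-square-root-of-E behavior is the stochastic term that appears in the resolution discussion below.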
For these photodetectors, the range of requirements is quite big. Sometimes you want to be able to measure every single photon, so the quantum efficiency needs to be quite high. In other detectors, you need to be able to put the photodetector in a high-radiation environment, and that changes the requirements. The main types available are the old-fashioned photomultiplier tubes, PMTs, which have actually become quite sophisticated; gas-based photodetectors; solid-state detectors, which are quite popular, so-called SiPMs, silicon photomultipliers; and hybrid modules of those. So the energy resolution in a calorimeter depends on a number of things. As I was saying, one measures the number of particles produced in a shower, and that's just a counting experiment. The uncertainty of that scales with the square root of the number of particles produced or measured-- so here we have a square root of N term. The relative energy measurement therefore has an error which goes with 1 over the square root of the energy. And then there are more contributions. There are constant terms, which come from inhomogeneities: elements where there's just no detector, no equipment, in the direction of the particle. These can be overlap regions, or regions where two detector modules are glued together-- you don't measure there, and that leads to a constant term in the energy resolution. And when you translate the electromagnetic signals into an electronic signal, noise can be induced, and that noise leads to a term which goes with 1 over the energy. So, classically, you have those three components to the energy resolution: one goes with 1 over the square root of the energy, one is constant, and one goes with 1 over the energy. And when you design a detector, you want to design it such that these components are optimized for the most important physics you want to do with the detector. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L96_Nuclear_Physics_Gamma_Decay.txt | MARKUS KLUTE: Welcome back to 8.701. We continue the discussion of nuclear decays with gamma decays. We have seen that we can understand nuclear stability or instability, and we discussed alpha decays and beta decays. Now, after the discussion of the shell model, it is apparent that transitions between various nuclear states can be accomplished via the emission of a photon, a gamma ray. Gamma decays are specifically important in decay chains following an alpha decay or a beta decay, where the remnant, the daughter nucleus, is left over in an excited state. The de-excitation then follows with the emission of a photon. Practical consequences of this are, for example: fission processes, where a significant amount of energy can be released with photons; radiotherapy, where we try to remove or kill cancer cells with gamma rays; and medical imaging, which works this way. And in general, you can use the emission of those photons to deduce the spin and the parity of excited states. If you go back very early in this lecture, we discussed the Wu experiment; there, too, we used gamma rays in order to deduce the spin and the parity of the states involved. So, nuclear spectroscopy-- we haven't discussed the detectors yet, but what's shown here in this picture are two characteristic gamma-ray spectra, for the cases of cobalt and cesium. If you just focus on the blue line, for example, you see this peak here; this corresponds to the energy of a transition. But then photons, when they're emitted, can undergo Compton scattering, and through Compton scattering they can lose some of their energy-- so you see this tail here. And in this tail you see additional peaks. Those additional peaks come from the fact that a photon can produce an electron-positron pair, and you see one or two such peaks in cases where one of the electron or positron, or both of them, have escaped detection. So whenever you look at a nuclear decay, you find spectra of this sort: the Compton scattering contribution, which obviously depends on the surrounding material and its composition, and also these single- and double-escape peaks, quite characteristic of the material you're looking at. From nuclear spectroscopy you can therefore learn about the sample composition, the element composition of the probe. An interesting effect is the Mössbauer effect. Here, again, let me remind you of the discussion of special relativity. We looked at the energy of an emitted and an absorbed photon. Because in this emission process there has to be momentum balance, there is a recoil on the leftover nucleus, so it starts moving. And because it's moving, we have to apply a Doppler correction to the energy, meaning that the emitted photon energy is not equal to the energy needed in order to excite the nuclear state again. This leads to those energy spectra with their natural widths, and only in this overlap region here can you reabsorb. Now, the Mössbauer effect is a special variation of what I just described. In cases where the nucleus is part of a lattice, the lattice can absorb the recoil energy. This leads to a situation of a very, very heavy object absorbing the recoil. And in those cases, you can have resonant effects, meaning that the emission and absorption lines overlap quite strongly. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L81_Neutrino_Physics_In_the_Standard_Model.txt | MARKUS KLUTE: Welcome back to 8.701. In this chapter, we will talk about neutrinos. And we'll start the discussion with a relatively simple introduction: how does a neutrino look in the standard model, and how does it interact? We have discussed the neutrino already quite a bit, so this is more or less a summary. In the standard model, the neutrino is massless. And it interacts via the weak interaction: it interacts with W bosons and with Z bosons, and specifically, it does not interact with photons or gluons. If we look at the Lagrangian, or try to write down the currents, we find that there is a charged current via the W, and there is a neutral current via the Z boson. It's quite interesting to think about those two currents a little bit more. In the case of a charged current, for example, with an incoming neutrino, we can determine the flavor, the kind of neutrino we have, by detecting the flavor of the lepton. So if, for example, we identify an electron in the interaction, the initial neutrino was an electron neutrino. For the neutral current, when we have some sort of interaction happening, we cannot identify the neutrino directly; hence, we cannot find the flavor of the neutrino. You can just measure the sum over all flavors of neutrinos in the neutral current. In the standard model, neutrinos have three flavors: they come in electron flavor, muon flavor, or tau flavor. Neutrinos are left-handed; anti-neutrinos are right-handed. So that's the story. That's how neutrinos are characterized in the standard model. In the standard model, in the framework we set up, we can calculate scattering cross-sections. And here, we are looking at neutrino-nucleus scattering. Again, we can split this up into the charged-current and neutral-current discussion, but they go very much in parallel. So we have elastic scattering-- or, in the case of the charged current, we talk about quasi-elastic scattering. There we have an incoming neutrino, let's say a muon neutrino, hitting a neutron, producing a muon and a proton. It's called quasi-elastic because we do not break up the target, but we change its kind: we change from a neutron to a proton, in this case. For the elastic scattering, the neutron just stays intact. We can also have nuclear resonance production, where we hit the nucleus and excite a resonance, which can then emit, for example, a neutral or a charged pion. That is also possible in the neutral-current exchange. And then we have deep-inelastic scattering, where we hit a nucleus or nucleon so hard that we start breaking it up. In this case, we scatter off a quark, and we produce a new quark flavor in the charged-current interaction, and the same quark in the neutral-current interaction. So this is no different from the stories we had before. The intriguing part about studying neutrino scattering is that we know the weak interaction is the dominant force in the process, while if we use photons to interact, or we scatter electrons, we can have a mixture of weak and electromagnetic interactions. Another important takeaway from this slide is that, when we calculate the cross-section, we find a linear increase of the cross-section as a function of the energy. 
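As a rough numerical illustration of that linear rise (the coefficient below is an approximate textbook value for charged-current neutrino-nucleon scattering in the deep-inelastic regime, not a number from this lecture):

```python
# sigma_CC ~ sigma0 * E, with sigma0 ~ 0.7e-38 cm^2 per GeV for nu_mu
# on nucleons (approximate; anti-neutrinos are roughly a factor 2 smaller).
SIGMA0 = 0.7e-38  # cm^2 / GeV, illustrative value
for E in [1, 10, 100, 1000]:  # GeV
    print(f"E = {E:5d} GeV : sigma_CC ~ {SIGMA0 * E:.1e} cm^2")
```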
So while cross-sections for neutrinos are small, at higher energies the cross-sections scale linearly. As we discussed this for neutrino scattering on nuclei and nucleons, we can also look at neutrino scattering on electrons directly. There are a lot of electrons in the matter around us, and a neutrino passing through this matter can interact with electrons, too. It can do this in a charged-current interaction-- for example, a muon neutrino scattering off an electron, producing a muon and an electron neutrino-- and in a neutral-current interaction, where the electron and the neutrino keep their identities. Also here, you see that the total cross-section scales with the energy. Good. This was a first introduction to neutrino physics. We'll go into more detail in the following presentations. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L35_Feynman_Calculus_Divergency.txt | MARKUS KLUTE: Welcome back to 8.701. In the previous video, we talked about higher-order diagrams, and we looked at how we can classify those contributions to the total matrix element. We didn't do any of the calculations-- we didn't calculate the Feynman diagrams themselves-- and I'm not actually planning to do this in the lecture. What we want to do here is investigate one very important feature of having those higher-order corrections. So let's look at this higher-order diagram here, one of the specific ones where we have a self-energy correction to the propagator C. We find this loop here in the middle. And if you were to calculate the amplitude-- and we have all the tools at hand to actually do the calculation-- you find this term here. Very good. So let's investigate. The first part is this volume element here, which we can rewrite as q cubed times dq times the integral over all the angles. So we find this q cubed. If you look at the denominator of this fraction here, we find a q squared times q squared. And if you go to very large values of q, that's the only thing which remains. So we have an integral of 1 over q to the fourth power times q to the third power-- that is, an integral of dq over q-- and we have to integrate this from 0 to infinity. If you do this, you know that this results in a logarithmic term, and if you evaluate it at infinity, we find that it diverges. So the result of the integral is infinity. That is a real problem. If you calculate a scattering process, the result had better not be infinite: the cross-section shouldn't be infinite, the lifetime shouldn't be 0. So that is a real problem. And it actually caused this entire theory to not make much progress for quite some time, because you were not actually able to calculate anything. The solution is to introduce a cutoff. So what happens if we don't extend the integration to infinity, but only up to some scale? You introduce this additional factor here in the integral, and you just calculate the integral up to a cutoff scale M. Then you have an additional term which you in principle have to evaluate from M to infinity, and you find that this additional piece is still infinite. You can evaluate all the other parts. And it turns out, if you are smart about introducing the cutoff, the theory and the calculations still remain sensible, meaning that they behave fine under Lorentz transformations, and all the physics intuition we have is fine. You just have the issue that there is still a contribution to this integral which is infinite. It turns out now, almost by miracle, that you can redefine, rescale, or renormalize the physical objects in your calculation such that it appears that there is a correction to your masses or a correction to your couplings. What you find then is that the physical value is the bare mass or the bare coupling constant plus some correction. There's still the problem that those corrections at infinite scale are infinite. However, when we do experiments, we are performing them at a specific scale. And so this problem-- that things get out of hand if you go to really high scales-- is actually not a real problem when you compare the theoretical prediction with the experiment. There's an interesting feature here. 
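To give a feeling for this scale dependence, here is a small sketch using the standard one-loop running formulas (keeping only the electron loop for QED and ignoring flavor thresholds for QCD, so the numbers are only indicative, not lecture values):

```python
import math

def alpha_qed(Q, alpha0=1/137.036, me=0.000511):
    # One-loop QED running from the electron mass scale upward (GeV units).
    return alpha0 / (1 - (alpha0 / (3 * math.pi)) * math.log(Q**2 / me**2))

def alpha_s(Q, alpha_mz=0.118, mz=91.19, nf=5):
    # One-loop QCD running; asymptotic freedom makes it fall with Q.
    b0 = (33 - 2 * nf) / (12 * math.pi)
    return alpha_mz / (1 + alpha_mz * b0 * math.log(Q**2 / mz**2))

for Q in [1.0, 91.19, 1000.0]:  # GeV
    print(f"Q = {Q:7.2f} GeV : 1/alpha_QED ~ {1/alpha_qed(Q):6.1f}, "
          f"alpha_s ~ {alpha_s(Q):.3f}")
```

The electromagnetic coupling creeps up with energy while the strong coupling falls, which is exactly the running behavior discussed next.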
When you actually look at the running, the evolution of the coupling-- which is shown here as a function of energy, on a logarithmic energy scale-- you can do this for the electromagnetic, the weak, and the strong interactions. Note that what is plotted here is the inverse of the coupling. They all run: they all have to be evaluated at a specific scale. But at very high mass scales, they don't all appear to converge at the same spot. It is interesting to note that if I introduce new particles along the scale here-- note that this is 10 to the 10 GeV-- these new particles will change the behavior of the running of the couplings. The energy behavior of the coupling changes if I introduce new particles. And you can already understand this, because I would introduce new diagrams which contribute, and those result in a change in the running behavior of the couplings. So one of the ideas for new physics, which we might discuss in the very last lecture of this class, is that by introducing new particles along the way, you are actually able to unify all of the couplings involved-- here, electromagnetic, weak, and strong-- at one specific scale, and then have a combined, unified theory describing all of the physics we discuss in nuclear and particle physics. That would be great. That is new physics, and we don't know if it is realized. However, what is realized in our calculations is that the physical masses and couplings we observe are evaluated at a specific scale, and they do run as a function of scale, as shown in this plot. We will look specifically at the running of those three interactions-- the electromagnetic, the weak, and the strong. And we can also observe this when we study the masses involved: they have to be evaluated, or are evaluated in experiments, at a specific scale. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L93_Nuclear_Physics_Stability.txt | MARKUS KLUTE: Welcome back to 8.701. In this video, we talk about stability and, the opposite of stability, the decays of nuclei. OK, so let's look at this diagram first, which shows the number of protons against the number of neutrons. And you find that the stable elements sit in this so-called valley of stability. So there's a certain set of nuclei which are stable. This valley follows directly out of the discussion of the binding energy, where you can calculate the optimal Z value such that the mass is minimal. It's about A/2 in this area here at low masses, and about 40% of A in the higher part, at high values of the mass number. We already discussed that the binding energy per nucleon has this form here, with a maximum around the mass of iron, which is somewhere here, at A in the fifties. OK, so it's energetically favorable for some heavy, unstable nuclei to break apart in so-called fission processes, and it's energetically favorable for light nuclei to fuse together, releasing energy. We'll talk about the applications of those processes later on. But for now, we want to look at a radioactive nuclide, which can decay in various ways. The first, most prominent decays we want to discuss here-- and we'll follow up a little later-- are the alpha decay, in which a mother nucleus splits up into a daughter nucleus and helium, which is an alpha particle; and the emission of electrons or positrons, or the capture of electrons. It is also possible, but rather rare, that a nucleus just spits out a proton or a neutron. All right, so let's look at the alpha decay first. As I said, we start from a rather large, heavy nucleus, and it seems possible for it to spit out helium, an alpha particle. The first view we want to have here is of the potential in which the alpha particle sits. The alpha particle sees a really deep well from the nuclear potential, and it also sees this barrier here from the Coulomb potential. So for the alpha particle to be emitted, it needs to get through this Coulomb barrier. And you can calculate the likelihood using quantum mechanics-- the quantum tunneling probability-- in order to figure out how stable an individual nucleus is. Here, I just wanted to show you something. This plot shows the lifetime of unstable nuclei, for various examples, as a function of the energy. And what you see here is that it's very strongly energy-dependent-- this is a logarithmic plot. The lifetime is rather short when the emitted particle has a lot of energy. And that can be explained by this plot here very easily: the energy of the emitted particle depends on where it sits in this potential. Particles which sit at very high values here will have a high likelihood to tunnel through this barrier, and therefore a short lifetime; hence, such a particle has a lot of energy after it has been emitted. So we see lifetimes in the range of 10 nanoseconds to 10 to the 17 years, for some examples-- there's this huge variation depending on where the alpha particle sits in the potential. We can have a discussion of the energetics involved, and we use our very same formula for the binding energy. 
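The strong energy dependence just described is often summarized by a Geiger-Nuttall-type relation. Here is a rough sketch of it; the two coefficients below are a crude two-point fit of mine to U-238 and Po-212, not values from the lecture:

```python
import math

# log10(t_half / s) ~ a * Z_daughter / sqrt(Q_alpha / MeV) + b
a, b = 1.46, -46.5  # illustrative fit coefficients
for name, Z, Q in [("U-238", 90, 4.27), ("Po-212", 82, 8.95)]:
    print(f"{name}: log10(t_half / s) ~ {a * Z / math.sqrt(Q) + b:.1f}")
# Roughly doubling the alpha energy shifts the lifetime by more than
# twenty orders of magnitude, matching the huge spread on the plot.
```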
So you write down this binding-energy formula here, including your helium term, and then just figure out the contributions of the individual terms. And since we experimentally observe alpha decays only for heavy nuclei, in this discussion we can assume that Z is approximately 0.4 times the mass number. The energy of the emitted alpha particle, for this decay to be able to occur, has to be positive. And so we find this to be possible for A values starting from about 150; experimentally, you observe it to start happening at about 200. All right, the next topic we want to discuss is beta decay. This is shown in this diagram here, starting from carbon-14 decays, for example. Carbon-14-- you will see this in the next recitation-- is a very useful probe in order to date living things, and to date when they stopped living. You will discuss why and how you can actually do this. But for now, carbon-14 can decay via beta decay, and what you find is nitrogen, an electron, and an anti-neutrino. This is a beta decay, or beta-minus decay. And using our particle physics discussion, we can easily understand this: the neutron is transformed into a proton via the electroweak process with a W; the electron comes out, the anti-neutrino comes out. Similarly, we can look at carbon-10 here, which decays into boron plus a neutrino and a positron-- here we have a W plus in the decay. So now we can again use our binding energy in order to understand this. What we want to do here, for constant mass number, is plot the binding energy for the individual nuclei. If you do this for odd A-- those are the ones where the pairing term doesn't contribute-- you find this nice single parabola, and you can find beta decays at each of those steps here. For A even, you have the question of whether Z and N are odd, or Z and N are even. So you have two parabolas here, and you find the beta decays jumping between one and the other. And that's interesting: you find those decay chains where, based on where you start in the chain, you have the possibility to go back and forth. And because of the even and odd pattern, the lifetime of the decay can vary quite tremendously between those individual states. Last but not least, electron capture: if you have a very massive nucleus, an atom whose electrons-- think of a cloud around it-- can come very, very close, some of them can be captured into the nucleus. You see this here, with the time direction going up: the proton captures an electron via the weak interaction, becomes a neutron, and emits a neutrino. An example is the electron capture of krypton into bromine. All right, so we start to get some sort of understanding of nuclear decays. We find this valley of stability here. We find, over a large range on the side with lower Z, beta-minus decays; on the side with higher Z, we find beta-plus decays; and for very heavy nuclei, we find alpha decays. Proton emission is seen at the very boundary here, and nuclear fission processes, where the nucleus spontaneously breaks up, we haven't discussed in detail-- but you can think about them very similarly to the alpha decay. As you probably already saw from the discussion of the beta decay, it's possible to have rather long decay chains. So you start from a radioactive nucleus, which then decays, and decays, and decays, in those kinds of chains. 
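As a quick check on the valley-of-stability picture above, the semi-empirical mass formula gives a standard closed form for the most stable Z at a given A (the coefficients are the usual textbook ones, quoted from memory rather than from this lecture):

```python
# Most stable charge for mass number A from the semi-empirical mass formula:
#   Z(A) ~ A / (1.98 + 0.0155 * A^(2/3))
def z_stable(A):
    return A / (1.98 + 0.0155 * A ** (2.0 / 3.0))

for A in [14, 56, 140, 238]:
    print(f"A = {A:3d} : most stable Z ~ {z_stable(A):.1f}")
# Reproduces Z ~ A/2 for light nuclei and Z ~ 0.4*A for heavy ones.
```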
Shown here are two examples-- the thorium chain and the uranium chain-- creating all kinds of other elements along the way. On the uranium chain, it's very interesting to note that uranium is part of the Earth's crust. If you build a house and the foundation is concrete, you probably have some uranium in there, which then, in this decay chain, generates radon. And therefore, if you build a house with concrete, you want to have a measure to get rid of the radon which is just floating around. All right, so this is it for now. We continue the discussion with a more detailed look at how those decay processes are possible, and what we can learn from them. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L43_QED_Antiparticles.txt | MARKUS KLUTE: Welcome back to 8.701. In the last two videos, we looked at the Dirac equation and at solutions of the Dirac equation. And in the last lecture we found that, along with positive-energy states, we had those negative-energy states. Since we cannot simply drop or disregard them, we do have to find a physical interpretation for these negative-energy solutions. The first one which was put forward is the one where you think about the negative-energy states as all being populated-- and that is the vacuum. The vacuum is basically a sea full of negative-energy states which are all populated. So if you have a positive-energy state, and there's an electron sitting in this energy state here, the electron, because of the Pauli exclusion principle, cannot fall down into a negative-energy state. But you are able to kick electrons out of the sea-- for example, to excite them out of a negative-energy state with a photon-- and you get an electron out. This process then leads to the creation of a positron and electron pair from a photon: pair production. It can also explain annihilation, where there's an empty negative-energy state into which the electron just falls, creating a photon. So while the interpretation is useful, and it explains pair production and annihilation processes, it fails to explain what this vacuum, this sea of negative-energy states, even is. A more useful interpretation is the one put forward by Feynman and Stueckelberg, which came out of the discussion of quantum field theory. And we already discussed this interpretation when we looked at Feynman diagrams. So have a look at this Feynman diagram here, where you have an electron with positive energy and an electron with negative energy, producing a photon whose energy, in the symmetric configuration, is twice the energy of each of the electrons before. And you interpret the negative-energy solution of the electron as the electron moving backward in time. This is equivalent to a positron with positive energy and an electron with positive energy, where the positron and the electron move forward in time. Again, in both cases, you see the energy of the photon is the sum of the energies of those two particles. All right. So this is a very short discussion. And we will see later on how we use the spinors for antiparticles together with the spinors for particles in order to calculate matrix elements. And so we move forward with our discussion of Feynman rules, this time with spin-1/2 particles included. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L410_QED_Noethers_Theorem.txt | MARKUS KLUTE: Welcome back to 8.701. In this lecture, I'd like to make a first connection between particle physics and the Lagrangian formalism. In classical mechanics, you have seen that you can write down the Lagrangian using the kinetic and the potential energy of a particle, and from that derive the equations of motion. In quantum field theory, you can translate this idea and work with Lagrangian densities. It's beyond the scope of this class to do all the mechanics of this. We'll revisit this topic later in the class, when we introduce the Higgs mechanism, for example, and we'll be a little bit more systematic then. I'm introducing the topic now because it allows you to answer one of the homework questions. So you can just follow this lecture, and then you should be able to answer the first question of the second p-set. All right, so you just have to trust me at this point that you can write the Lagrangian density for a Dirac field this way. One exercise would be to use this and show that from this Lagrangian you can derive the Dirac equation for a spinor field. But that's not what we're trying to do here. We're trying to see what the effect is of this Lagrangian density being invariant, unchanged, under a global symmetry. So we are able to rotate our spinor field by a global phase, we will see that the Lagrangian doesn't change, and we will work out the consequence of this. This is exercising Noether's theorem: there's an overarching global symmetry, and out of the symmetry follows a conserved property-- in this case, the current. We can express the symmetry with an infinitesimal phase transformation, as shown here, for our fields and for our adjoint fields. For the field and the derivatives, you just have to do the math, and we find those expressions, which we can then put back into our Lagrangian. First of all, we write the change of our Lagrangian in this way. And as we have just seen, the Lagrangian is invariant under this transformation, and therefore the change is going to be 0. We then use this information in the equations and find this very complicated-looking set of equations. OK, so now we get this, and then we can rewrite the terms-- this is already with a vision of what we would like to find later. If we now look at the terms involving the derivative with respect to d-mu of our spinor, we can express this equation as shown here. And with that, we find the next equation. I only show this for the spinor, not for the adjoint spinor, which looks exactly the same. You find, however, that this part here looks like the Euler-Lagrange equation, and this part needs to be 0. So we only have to worry about this part of the equation, and the same for the adjoint field. This then leaves this equation here, where we have i epsilon times a derivative of this part of the equation. And something like this you have seen before: it's our continuity equation. We discussed this in one of the last recitation sessions, and it leads us to conserved currents. Let's go one step further. If we now identify this part as our current, we can then use the partial derivatives of our initial Lagrangian-- we're just calculating those terms here. And we find that our conserved current is given by the adjoint spinor times gamma mu times the spinor. 
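In compact notation-- this is just a restatement of the steps above, in standard conventions-- the result reads:

```latex
% Global U(1) phase invariance of the Dirac Lagrangian and its Noether current:
\mathcal{L} = \bar{\psi}\left(i\gamma^{\mu}\partial_{\mu} - m\right)\psi ,
\qquad
\psi \;\to\; e^{i\alpha}\psi
\;\;\Rightarrow\;\;
j^{\mu} = \bar{\psi}\,\gamma^{\mu}\,\psi ,
\qquad
\partial_{\mu} j^{\mu} = 0 .
```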
And so what we have just seen-- and this current is conserved, so its derivative is 0-- is that we have a Lagrangian density with a global symmetry, and out of that we find that the current is conserved. So this is all I wanted to show here. In the homework set now, we start from a different Lagrangian. The one here is our Lagrangian for a massive spin-1/2 particle, which satisfies the Dirac equation. In the homework, we are looking at a scalar particle, a massive scalar particle. But the exercise is very much the same. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L56_QCD_Hadron_Collider.txt | MARKUS KLUTE: Welcome back to 8.701. In this lecture, we want to look at hadron colliders very briefly, as an introduction to this topic, but also as the conclusion of the discussion we had of QCD. Historically speaking, hadron colliders have been the tool for probing the energy frontier. That has to do with accelerator technology: the fact that heavy particles emit less synchrotron radiation allows them to be collided at higher energies in circular colliders. This plot here shows, as a function of year, the energy of the constituents-- that is, the energy of the elementary particles used in the interaction, the quarks and gluons taking part in the interaction. And you see that hadron colliders, here in red, typically have an edge over lepton colliders in terms of the maximum collision energy. So the energy frontier is usually set by the hadron colliders as compared to the lepton colliders. This plot is called a Livingston plot, and this one is a little bit old: we are now in 2020, we haven't built this machine yet, we might build a lepton collider at 250 GeV in 10 or 15 years from now, and the LHC has been running stably at this energy here. But the point of this lecture is more to discuss how we can make a cross-section calculation for proton-proton collisions. We have already seen how we can make the cross-section calculation where we have, let's say, an initial quark and an anti-quark colliding with the exchange of a photon. We haven't seen how we can do this with a Z boson, but we'll see that next week. This process is called Drell-Yan production. And we're looking at the decay of either the virtual photon or the Z boson into a pair of leptons-- electrons or muons. So we can calculate this, and we call this cross-section the hard-scattering cross-section-- the cross-section of this hard scattering process. But in order to calculate the full cross-section, we need to know the momentum distribution and the abundance of the initial quarks and anti-quarks. We do this using the parton distribution functions, as we discussed them before-- they are labeled here with those q's. We have to integrate them over all possible momenta, and we have to sum over the quark species inside the proton. For this process, we don't have to consider the gluon, because at leading order we cannot produce a lepton pair from gluons; at higher orders, we also have to consider the gluon density in this discussion. And then the momentum of the parton collision has to be equal to the center-of-mass energy considered in the parton cross-section. This technique is called factorization: we factorize the hard scattering process from the structure of the proton in the cross-section calculation. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L62_Weak_Interactions_Electroweak_Unification.txt | MARKUS KLUTE: Welcome back to 8.701. In this section, we look at electroweak unification. The aim is to combine the weak and the electromagnetic interactions. The issues we can see here are, first, that the strengths of the interactions are very different. This can be mitigated by the fact that we have heavy gauge bosons involved-- we have seen that heavy particles used as mediators change the apparent strength of the interaction. So this might not be a big issue. The second problem is that the structure of the coupling is very different. We have seen for QED that there is a vector coupling, and for the weak interaction a vector-minus-axial coupling: this 1 minus gamma 5 is the vector minus axial, V minus A, coupling. One way to mitigate this problem is to simply absorb this 1 minus gamma 5 term into the definition of the particle spinors. And I have to warn you, this is a little misleading, and I think it also led to some of the confusion we had in the class before. What we are doing here is simply: we take our spinor, and we project out, with the 1 minus gamma 5 term, the left-handed component of this spinor. This is just a projection, and the definition of it. We can do this for antiparticles, as well as for the right-handed components. Good. So now we can look at the current again. We look at this weak current, which you have seen we can write as our anti-neutrino spinor here, times gamma mu, times one-half of 1 minus gamma 5, times the electron spinor. Now, if we define our particles with these projections, we find that this simplifies quite a bit, because we now find a current which can be simply written as a vector current. So we mitigated this quite nicely. So what happens now to our electromagnetic interaction? We have an electron coupling to a photon. We can project out a right-handed and a left-handed component, and then we have to add them together again. When we do this, you find the current splits into the current corresponding to the left-handed particle and the current corresponding to the right-handed particle. There's no mixed term here, because of the way the gamma matrices and gamma 5 matrices multiply. So that's nice. This also explains why the helicity is not changed in QED: you basically see this from the algebra involved in those equations. Good. So far, we haven't done anything-- we have just changed the notation. So we can go one step further, and we use the concepts we introduced when we talked about the strong isospin. Since we can nicely describe those currents of those particles, we can see if we can write a neutrino and an electron as parts of a doublet. When we do this, we rewrite the currents, the positively charged and the negatively charged current, simply in terms of the left-handed components of those doublets. We introduce new matrices here, tau plus and tau minus, and they're simply combinations of tau 1 and tau 2, which are, in fact, the Pauli matrices. This is just a relabelling as well-- there's a lot of relabelling going on, not to confuse you. But we have simply written this current as a positively charged and a negatively charged weak current, where we rotate a neutrino into an electron, or an electron into a neutrino, using the weak interaction. Great. So now we can also write down the third component of this current. 
And when we write down the third component of this current using tau 3 here, we find something which looks like a neutral current. So this is something like a neutral current, where we have a neutrino coupling to a neutrino, and a left-handed electron coupling to a left-handed electron, at the vertex where the interaction is going on. This is not quite the full story yet. Let me remind you about the definition we also used for isospin, which is the Gell-Mann-Nishijima equation, which connected the electric charge to the isospin and the strangeness of the particle. And we do the very same thing: we have an isospin component and a so-called hypercharge component-- similar to the strangeness we had before-- and Q is the electric charge of the particle involved. With this, we can now define a hypercharge current, which is given as 2 times the electromagnetic current minus 2 times the third component of the weak isospin current. And now we find interesting effects here: there's a new component which also couples to right-handed particles. The missing part-- some of you might have seen this already-- is that the neutral current in the upper equation didn't have a contribution from right-handed particles. And since there are right-handed electrons which couple to the Z boson, there needed to be this kind of additional term. So now we have a current which includes the right-handed particles as well. That's great. We can generalize this by writing those doublets for all particles we know. There's an additional caveat here, which we haven't talked about too much: we have to consider the fact that the mass eigenstates are not really the same eigenstates which participate in the weak interaction. Right now we can ignore this; we'll come back to this question later. And then we can write the three components of our isospin current, and our hypercharge current as well. Note that this EM current here is our electromagnetic current. Good. So now we rewrote this, and we find a fairly consistent picture: there is a charged current, and there is a neutral current, in the weak interaction. But what we actually wanted to do is combine the weak interaction and the electromagnetic interaction. So let's look at this again and start over. We have an isospin current here, which couples to the three components of the isospin triplet-- this is a W1, W2, W3 triplet. And then we have the singlet here, which couples to the hypercharge. Very good. Now, if I try to identify components which we already know, the first thing we have to do is make sure that we find our W plus and W minus bosons again. And they're simply linear combinations of W1 and W2. The next thing I have to do is find my electromagnetic interaction. And we can do that by writing this A-- this is the photon-- as a linear combination of the third component of our isospin triplet and our singlet. And what you see here is that there's actually mixing going on: we basically rotate those fields with the weak mixing angle theta W, which we already introduced. So we find that the photon can be made out of a mixing of the third component of the isospin triplet and the singlet component B mu. And similarly, we find the Z boson as the other state in this mixing. 
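Written out compactly-- this is just the standard form of the rotation described above, not anything new-- the two neutral states are:

```latex
% Mixing of the neutral fields with the weak mixing angle:
A_{\mu} = B_{\mu}\cos\theta_{W} + W^{3}_{\mu}\sin\theta_{W} ,
\qquad
Z_{\mu} = -\,B_{\mu}\sin\theta_{W} + W^{3}_{\mu}\cos\theta_{W} .
```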
The way we find this mixing angle is through the couplings we already know: we find that g times sine theta W is equal to g prime times cosine theta W, and that's equal to the electromagnetic coupling. And then, similarly, we find a solution for g Z. So what we have seen now is that apparently we are able to combine the weak and the electromagnetic interactions by introducing the weak isospin and by mixing the isospin triplet components with the singlet component. And so we find a picture which is consistent, with the W plus, the W minus, the photon, and the Z boson. So that's very nice. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L47_QED_Casimirs_Trick.txt | MARKUS KLUTE: Welcome back to 8.701. The name of this section is Casimir's trick. What we're actually going to do is learn how to deal with spin information in the calculation of our matrix elements. So what is it we're trying to do? The first problem we might have is that we have polarized particles. We have here again our example of electron-muon scattering. And if you assume that the electron and the muons are polarized, you will find, as we discussed in the previous lecture, that our matrix element is proportional to the adjoint spinor of particle 1 times some sort of gamma matrix times the spinor, and the polarization of the photon involved in the propagator is included here as well. Good. Now, in order to get a number for M, we actually have to be explicit about the spin states of the external particles. And you can do this-- you can just write it down. However, in experiments, we are often interested in the scattering of unpolarized particles: even if you have a way to polarize a beam, your polarization is not going to be perfect. So in your calculation, you want to average or sum over the available spins of your particles. How are we going to do this? We're trying to calculate the spin-averaged amplitude. Why averaging? We want to average over the polarizations of the incoming particles, again, because we don't know what the polarization states are. And we want to sum over the polarizations of all final-state particles, simply because each polarization state is a possible outcome of the interaction, and we have to sum over all possible outcomes. Good. So how do we calculate the spin-averaged amplitude? Again, we start from our matrix element, where we have the adjoint spinor, a gamma matrix, and the spinor. If you then calculate the square of this matrix element, we find this solution here. Great. And now we're just doing a few tricks. First of all, we can write our adjoint u-bar 1 as u 1 dagger times gamma 0, and then just continue this rewriting to find this part of the solution, which looks a little bit simpler, where we now have the conjugated gamma matrix instead of the adjoint expression given here. So we want to evaluate what this u-bar 1 gamma u 2 times u-bar 2 gamma-bar u 1 is. Good. So this was just matrix algebra and working with gamma matrices so far. Now we use the completeness relation-- where I probably haven't told you yet what this syntax here is: p-slash is equal to gamma mu p mu. That's just a way to simplify writing down the equations. If you use this so-called Feynman slash, you can also rewrite the Dirac equation in the simple form i del-slash minus m, acting on psi, equal to 0. That's now our Dirac equation-- remember, there was a gamma mu in here. All right. We have to sum over the spin states of all those particles. If you start with the sum over particle 2, we can rewrite this here: we use the completeness relation, and we find this Q equal to this part here. So now we have this equation, which looks a little bit simpler, because we just have our incoming and outgoing spinors for particle 1 given here. All right, now we're looking at this part of the equation. And again, we're just doing a little bit of matrix algebra here. 
And you find that this is equal to Q times u 1 u-bar 1, summed over equal indices i-i, which is simply a sum over the same indices-- which is the same as taking the trace of this matrix. All right, so if you just put all of this together, you find that calculating the sum over spins of matrix elements is the same as taking the trace of a matrix. And our final result then gives you this expression. So, summarizing this part: we have a matrix element, and we saw that matrix elements of this form are proportional to this expression. When we calculate the spin-averaged matrix element squared, that's equal to the trace over the particles involved, using the completeness relation. There's an additional factor of 1/2 here, which comes from the averaging over the initial spins. This assumes exactly one of u1 and u2 corresponds to an initial particle, which gives the factor 1/2; if both are initial particles, like in an annihilation, then the factor is 1/4. And if neither is in the initial state, then the factor is 1, which you would have for pair production. Good. So now we have simplified the calculation of our matrix elements-- putting in the specific spin states and the specific polarization vectors-- to the calculation of traces of matrices. So now we can look at what this means. This is what Casimir's trick really is about: summing over spins reduces to taking traces of matrices. If you have antiparticles, the completeness relation uses p-slash minus m, and then you go ahead in the same way. So all you need to know now is how to compute these traces. First, some general remarks on traces. We have two matrices, A and B. If you want to calculate the trace of A plus B, that's equal to the trace of A plus the trace of B. If you have a multiplicative scalar factor alpha, you can just take it out of the calculation of the trace. Inside a trace, two matrices commute: the trace of A times B is equal to the trace of B times A. And you can use this in order to show more complicated relationships. Good. We have already started playing around with the gamma matrices. Here are a few more identities which might be of use when you calculate traces and matrix elements overall. The first one is g mu nu times g mu nu is equal to 4. The anti-commutator relations we already discussed and have shown in one of our recitations. And if you have three gamma matrices contracted, you can rewrite them as minus 2 times the matrix which is in the middle here. I'm not going through this in much detail, but I encourage you to follow it as an additional exercise-- we haven't had the opportunity yet to really play with those. You can use those tricks, for example the anti-commutator relation, to calculate the traces. For example, the trace of gamma mu gamma nu is 4 times g mu nu. What you do here is basically use the anti-commutator, put it in here; the trace of the identity-- a 4-by-4 identity matrix-- is 4, and so you get the trace of gamma mu gamma nu equal to 4 times g mu nu, and so on. Later on-- we haven't discussed gamma 5 so much yet-- gamma 5 is defined as i times gamma 0, gamma 1, gamma 2, gamma 3. We will see it when we discuss the weak interaction, where this very special gamma matrix plays a prominent role. And here are just some pieces of information on traces involving gamma 5: the trace of gamma 5 is 0, and the trace of gamma 5 times one other gamma matrix is zero. 
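These identities are easy to verify numerically. Here is a small check of my own, using the explicit Dirac representation of the gamma matrices (not part of the lecture):

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

g0 = np.block([[I2, Z2], [Z2, -I2]]).astype(complex)
gammas = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in (sx, sy, sz)]
g5 = 1j * gammas[0] @ gammas[1] @ gammas[2] @ gammas[3]
metric = np.diag([1, -1, -1, -1])

for mu in range(4):
    for nu in range(4):
        # Tr(gamma^mu gamma^nu) = 4 g^{mu nu}
        assert np.isclose(np.trace(gammas[mu] @ gammas[nu]), 4 * metric[mu, nu])
        # Traces of gamma5 with fewer than four gamma matrices vanish
        assert np.isclose(np.trace(g5 @ gammas[mu] @ gammas[nu]), 0)
assert np.isclose(np.trace(g5), 0)
print("Trace identities verified.")
```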
And that's also true with one more gamma matrix: the trace of gamma-5 times two gamma matrices is zero as well, which you can verify when you just try to calculate it from the relations above. One last piece on the traces of gamma matrices is shown here: only with four or more gamma matrices together with gamma-5 can you find a non-zero trace. Here is the example, where you have the product of four gamma matrices with gamma-5. That is equal to 4i times the totally antisymmetric tensor, which is defined as minus 1 for even permutations of 0123, plus 1 for odd permutations, and 0 if any two indices are the same. All right, again, I'm not showing you this in too much detail, but I encourage you to play around with it. You will be asked in the next homework to do some of these calculations. |
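For readers following along, the trace machinery quoted above can be checked symbolically. Below is a minimal sketch using sympy's gamma-matrix module; the index names are arbitrary, and this is an editor's illustration of the identities, not part of the original lecture materials.

```python
# Symbolic check of the gamma-matrix trace identities quoted above.
from sympy.physics.hep.gamma_matrices import GammaMatrix as G, LorentzIndex, gamma_trace
from sympy.tensor.tensor import tensor_indices

mu, nu, rho, sigma = tensor_indices("mu nu rho sigma", LorentzIndex)

# Tr(gamma^mu gamma^nu) = 4 g^{mu nu}
print(gamma_trace(G(mu) * G(nu)))

# The trace of an odd number of gamma matrices vanishes.
print(gamma_trace(G(mu) * G(nu) * G(rho)))

# Tr(gamma^mu gamma^nu gamma^rho gamma^sigma) =
#   4 * (g^{mu nu} g^{rho sigma} - g^{mu rho} g^{nu sigma} + g^{mu sigma} g^{nu rho})
print(gamma_trace(G(mu) * G(nu) * G(rho) * G(sigma)))
```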
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L82_Neutrino_Physics_Mass.txt | MARKUS KLUTE: Welcome back to 8.701. In this section we want to look at how we can theoretically describe the masses of neutrinos. There is not just one way to do this, and the jury is still out on which one is actually realized in nature -- or perhaps both are realized in nature. Mass terms can be constructed by introducing so-called sterile neutrinos. The first way, shown here in the Lagrangian, is very familiar to you: a left-handed component of a particle couples via the Higgs boson to a right-handed component. Now, if we have a right-handed neutrino, that neutrino does not interact via the weak interaction, and it doesn't interact via any other interaction we know. Hence this right-handed particle is a sterile neutrino: it does not interact with the rest of the standard model in any known way other than through the Higgs field. The second way, the second mechanism, uses Majorana particles -- particles which are their own antiparticles -- and we'll look at how this is implemented. So again, the first term, the Dirac term, is generated after electroweak symmetry breaking from Yukawa interactions. We have seen the very same thing for our charged leptons. What we see here is that the lepton number is conserved -- before and after the interaction we have the same number of leptons -- but the lepton flavor is not conserved in this interaction. We can rewrite this: we identify the sterile neutrino as the right-handed component of the spinor, as I mentioned already, and we couple the weak-doublet components just as you would expect them to appear. The second term, the Majorana mass term, is interesting, as we introduce another singlet into the standard model. This can then appear as a bare mass term, with some consequences. Here the term involves two right-handed neutrino fields, and those break lepton number. So if those neutrinos are realized in nature, we should observe lepton-number-violating processes -- and the search for this specific kind of neutrino proceeds through searches for lepton-number-violating processes. We can rewrite this part of the Lagrangian, this mass term, using the matrix shown here. Let's see how this unfolds. If the Majorana mass is now much larger than the electroweak scale, you can diagonalize the mass matrix, and it leads to three light neutrinos -- the three light neutrinos you would expect -- and one potential, or maybe multiple potential, heavy neutrinos. If you then rewrite the mass term, you find for the light ones a mass which goes as 1 over the scale of the heavy neutrino. That is a nice motivation for this kind of physics, as it automatically makes the light neutrino masses very small, just as we observe them to be in nature. And the mass of the heavy neutrino is proportional to the Majorana mass scale itself. This mechanism is called see-saw because it automatically moves the mass scales of those two kinds of neutrinos, the heavy ones and the light ones, apart: you may not have observed the heavy ones because they are very, very heavy, and the light ones are light precisely because their mass is proportional to 1 over the heavy mass scale.
However, if the Majorana mass scale is not much higher than the electroweak scale, the low-energy spectrum contains additional light states: you have not just the three light neutrinos, but additional light states which mix with those three light neutrinos. And that is an interesting area to look for these particles, as they would lead to small deviations in measured electroweak precision properties, and they might lead to some interesting decays in nuclear physics -- we'll come to those specifics later. So we have seen in this lecture two different ways to generate neutrino masses: one through Yukawa interactions, the same as the interaction of the charged fermions with the Higgs field, and one through the see-saw mechanism, introducing Majorana neutrinos. |
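As a compact summary of the see-saw just described: for one generation the two mass terms can be arranged in a matrix, as sketched below. The illustrative numbers in the comment are typical textbook values, not numbers quoted in the lecture.

```latex
% One-generation see-saw sketch
M_\nu = \begin{pmatrix} 0 & m_D \\ m_D & M_R \end{pmatrix},
\qquad M_R \gg m_D
\quad\Longrightarrow\quad
m_{\text{light}} \simeq \frac{m_D^2}{M_R}, \qquad
m_{\text{heavy}} \simeq M_R .
% Example: m_D ~ 100 GeV (electroweak scale) and M_R ~ 10^{14} GeV
% give m_light ~ 0.1 eV, in the range suggested by oscillation data.
```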
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L02_Introduction_to_Nuclear_and_Particle_Physics_Course_Organization.txt | MARKUS KLUTE: Hello, everybody. This set of slides is on the organization of this course; it's part of the syllabus. I'll explain how the course is structured, how you participate, how you'll be graded, and I'll give you some details on where to find the proper pointers. To get started: this class will be taught in an inverted-classroom, or flipped-classroom, setting, and it's going to be online-only, so we will not have in-person opportunities to discuss. There's a second video where I briefly talk about the strategy of the inverted classroom and its benefits. What you typically know as lectures is organized entirely as short videos in which I discuss concepts or methods. The videos also include a few short questions for your own self-evaluation, and we'll pick those questions up later in the recitation sessions. So when we meet Tuesdays and Thursdays, we'll have time to answer your questions, go through those self-evaluation questions, and hopefully have a good discussion. The recitations are also used for another component, which I'll explain on a later slide: your presentation of a specific paper. Part of your homework is to find a paper, discuss with me whether or not it fits into the class schedule, and then, together with a partner, lead a short discussion of it. We'll use the Tuesday and Thursday sessions for this purpose. In addition, there's going to be an office hour on Friday -- one hour where I'm simply connected to a Zoom meeting; you can log in at any time, and we'll discuss whatever you want to talk about. For that, please contribute to the Doodle poll, which I posted during the first class, and then we'll find the best possible time for everybody involved. Your evaluation in this course will be made up 50% of homework: six PSets will be posted, one will be forgiven, so I'll count five PSets, and each allows you to accumulate 8 points. The paper presentation I was talking about is worth 20 points. We're talking about a 20-minute presentation of a paper -- really a summary of a paper of your choice -- together with 10 minutes of Q&A. Again, this is done in groups of two, so you have to split up the work of preparing and presenting, and also of responding to the questions. And then there will be two short oral exams. These are 20-minute slots -- the actual exam is only going to be 15 minutes -- with the teaching staff, Justin and myself, where we ask you a couple of questions and just make sure that you are on top of the content we discussed in the weeks prior. The first one deals with the particle physics content, and the second one with the nuclear physics and the experimental methods. The grading scheme will not be worse than what I've given you here. The grade divides are at 85% between A and B, 70% between B and C, and 60% between C and D; below 50% earns you an F. I don't think anybody will end up in that last regime as long as they participate. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L11_Fermions_Bosons_and_Fields_Quantum_Field_and_Matter.txt | [SQUEAKING] [RUSTLING] [CLICKING] MARKUS KLUTE: Welcome back to 8.701. In this second chapter -- chapter number 1 -- we start talking about quarks' and leptons' interactions and fields. And we start with a very general discussion of quantum fields and matter. You all know what we mean by particles and forces in the classical sense. However, we now need to see how they connect with quantum fields, and how this helps us to consider matter and forces in a very similar way. The modern view of the basic way that particles come to exist is in terms of quantized fields, which is an extension of the quantum mechanics you know and love, where you quantized particles. These fields have quantum equations for their field amplitudes which are basically like the quantum simple harmonic oscillator -- but there are an infinite number of them, one for every possible frequency of wave in the field. This means the amplitude of the wave at each frequency is quantized in integer steps, just like in a simple harmonic oscillator. And this is what we see as a particle: the first excitation gives one particle of that frequency; a further excitation of the amplitude at the same frequency corresponds to two particles; et cetera. Hence the concept of a quantum field, unlike normal quantum mechanics, allows an arbitrary and changeable number of particles to exist. This is necessary, as you will see later, so that we can create and annihilate particles in reactions and decays. And the standard wavefunctions correspond to excitations of a particular frequency and amplitude in the field. Now let's consider a few cases. Imagine you have two particles -- two fermions, for example -- let's say two electrons. And you consider the wavefunction. Quantum field theory actually says that there's only one electron quantum field for the whole universe, and every electron which exists is due to an excitation of that field. Hence all electrons are identical in the quantum-mechanical sense, as they all arise from the same field. The theory then dictates a particular property of the resulting wavefunctions -- namely, their symmetry under the exchange of these particles. The exchange symmetry depends on whether the particle is a fermion, which means it has spin 1/2 or 3/2 or 5/2, et cetera, or a boson, which means it has spin 0, 1, or 2, and so on. For any identical fermions, like electrons, quantum field theory says that their wavefunction must obey the property of antisymmetry. This means that when we write an overall wavefunction and we exchange the two particles, we pick up a minus sign. This property holds not just for electrons, but for all fermions -- that's all matter particles, as we saw last week. It also holds for composite particles: a composite spin-1/2 particle is subject to the same antisymmetry. This property of exchange antisymmetry leads to a well-known principle -- namely, the Pauli principle, which says that you cannot have two electrons in the same state. If you swap two identical fermions in the same state, the wavefunction has to stay the same and at the same time pick up a minus sign, which is a contradiction unless the wavefunction vanishes. So this doesn't work, and therefore two electrons, or two fermions, cannot be in the same state. This is very general.
Constructing a total wavefunction for two fermions is not that hard: we can simply do it by the construction shown here. An important additional note to take here is that an antiparticle, such as a positron, is not identical to the corresponding particle, the electron. If you move on to bosons, boson exchange is symmetric, meaning that if you exchange two photons, you find the identical wavefunction. Constructing a two-boson total wavefunction, you do this by adding the two product functions together; this is, by definition, symmetric. Let's look now at exchange particles. Again, you have a very good idea of the classical picture of how forces are transmitted. The modern picture of how a force acts, after quantization, is by emission and absorption of a particle. That is shown in this diagram here, where, let's say, you have an electron and a second electron. They see each other -- and they see each other by emitting and absorbing photons. You see this here: this electron comes along, emitting a photon; the other electron absorbs it. And by this exchange of emission and absorption of photons, those two electrons repel each other. You can think about this like two ships shooting cannonballs at each other, if you want. But you also have to consider that there are not just repelling forces but also attracting ones: we could have replaced one electron with a positron, and the negative and the positive charge would attract each other. This is it for this short piece -- it's basically an intro into the intro of the intro. I hope you enjoyed it. All of these concepts we will go into in more detail; this is really just the starting point. And in the next lecture, you'll see how we can actually understand aspects of this diagram here, which we call the Feynman diagram. |
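The constructions referred to on the slides can be written out explicitly. A minimal sketch for two one-particle states psi_a and psi_b:

```latex
% Exchange-symmetric and antisymmetric two-particle wavefunctions
\psi_{\pm}(1,2) = \frac{1}{\sqrt{2}}
  \left[ \psi_a(1)\,\psi_b(2) \pm \psi_a(2)\,\psi_b(1) \right],
\qquad
\psi_{\pm}(2,1) = \pm\,\psi_{\pm}(1,2).
% Fermions take the minus sign: for a = b the wavefunction vanishes,
% which is the Pauli principle. Bosons take the plus sign.
```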
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L22_Symmetries_Flavor_Symmetry.txt | MARKUS KLUTE: Hello. With this recording I'd like to introduce the topic of flavor symmetry -- what we mean by that. When the neutron was discovered, it was noted that the mass of the neutron is very close to the mass of the proton. So it seems like those two particles are somehow related, even though the electric charge is different: the proton is charged, the neutron is neutral. And you can see here that the masses are really very, very close -- a bit over 1 MeV, or about 0.1%, difference in mass. So Heisenberg proposed, in the 1930s, to regard them as two states of the same particle. They are really so similar that you could think they are basically the same, just a rotation from one into the other. And that's exactly what he did, considering them as one particle, the nucleon, where the proton is described as the up state of a doublet and the neutron as the down state -- similar to the up quark and down quark, or the electron and neutrino, later on; those particles were not known at the time. So he introduced a new concept, so-called isospin, or strong isospin, where he does exactly this: he labels the proton "up" and the neutron "down." So far we haven't done anything but introduce new labels for the particles -- new particles, at the time. But now assume that the strong force is invariant under rotations in this isospin space, meaning that when you rotate the neutron into a proton and vice versa, the strong force is unchanged. It then follows directly that isospin is conserved in all strong interactions. That is the real conclusion from introducing those new labels: isospin is conserved in strong interactions. So this was proposed in the 1930s. Again: we notice a symmetry in nature, and from that symmetry a conservation law follows. And we can deduce physics -- cross-sections, or ratios of cross-sections -- from it, without understanding, in this case, QCD, the strong interaction. This is very fascinating. You can now apply this concept to other particles, for example the pion. The pion has an isospin of 1, and there are three pions, three states: the pi plus, the pi 0, and the pi minus. In general, the multiplicity of the particles -- as you see with the neutron and the proton, or with the pi plus, pi 0, and pi minus -- is 2 times the isospin plus 1. Isospin equal to 1 means three particles in the representation. So far, so good. Later, this concept was extended to other new particles. Many new particles were produced at the emerging accelerators and experiments, and people tried to classify them by isospin. Gell-Mann and Nishijima empirically observed a relation which holds, this equation here: if you assign the maximum value of I3, the third component of the isospin, to the member of the multiplet with the highest charge -- in the previous example the proton, or the pi plus -- then the charge of a particle follows from the isospin, the baryon number, and the strangeness. We looked at baryon number and strangeness before.
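Written out, the empirical relation just described, the Gell-Mann-Nishijima formula, reads as follows; the checks in the comment use the standard quantum-number assignments.

```latex
% Gell-Mann--Nishijima relation
Q = I_3 + \frac{B + S}{2}
% Checks: proton  (I_3 = +1/2, B = 1, S = 0)  ->  Q = +1
%         neutron (I_3 = -1/2, B = 1, S = 0)  ->  Q =  0
%         pi+     (I_3 = +1,   B = 0, S = 0)  ->  Q = +1
```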
As a reminder, strangeness counts the strange quarks in the baryon or meson, and the baryon number is simply the number of baryons. So if you just look at this for the pion case: we had isospin equal to 1, baryon number equal to 0, and strangeness equal to 0, from which it follows that the maximum charge in the multiplet is 1 -- the charge of the positively charged pion. So far so good; this was empirically observed. But once the quark model was developed, in the 1960s and '70s, you could deduce this equation directly from the assignment of isospin to the quarks, which is rather fascinating. Again, we don't understand the physics fully, but just from the symmetry, empirically, you can deduce information about physical systems. However, if you try to extend this idea of isospin to the complete quark model, you find that the symmetry starts to be broken. It already starts to be broken slightly when you include strangeness, strange quarks; but it's badly broken when you include charm, bottom, and top. And the reason can be seen here. The up quark and the down quark -- the particles making up pions, the neutron, and the proton -- are very close in mass, and even if you include the strange quark, the difference in mass is not very large. So the symmetry approximately holds: the particles really look like different states of the same particle. But when you introduce the heavier quarks, charm and bottom, you find that the mass differences are so large that the symmetry is broken. So this concept starts failing because of the large mass differences -- because the symmetry is broken. All right. From here we now go to discrete symmetries. And again, from the observation of those symmetries, we can deduce physics without fully understanding the underlying physics. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L71_Higgs_Physics_Higgs_Mechanism.txt | [SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: Welcome back to 8.701. In this lecture, we talk about the Higgs mechanism. As you might know, the Higgs boson was discovered in 2012 by the LHC experiments, but the theoretical discovery of the Higgs boson happened much, much earlier than that. In the mid-1960s, Peter Higgs and a few others proposed a mechanism which gives rise to the masses of the gauge bosons, the W and the Z boson. And the Higgs field can then also be used to give masses to the fermions. So let's have a look at this, starting with a simple observation. When we write down our Lagrangian for a simple spin-1 gauge field, like the photon, we demand local gauge invariance for this equation: we can do a local gauge transformation of our fields, and the physics -- meaning the description by the Lagrangian -- should be invariant under this transformation. The problem, however, is that if you want a spin-1 gauge field which is massive, you have to have terms in your Lagrangian like this one here, a mass term for your field. And in general, this is not possible without breaking gauge invariance, which is a guiding principle of our theory. So this is a real bummer. You are at a stopping point: you have a beautiful theory which describes all the interactions, but one important characteristic, the masses of the particles, is missing. But you are able to fix this -- not by adding an explicit mass term, but by breaking the symmetry, breaking the local gauge symmetry. There are various ways to do this, and one of them is spontaneous symmetry breaking. So what is spontaneous symmetry breaking? Imagine you have a rotational symmetry, a symmetry like this pen here. By applying some force on top, the pen bends, and it bends in one specific direction; that breaks the rotational symmetry. Another way to look at this is to just let the pen drop -- let it go to its ground state, the lowest possible energy state. It will land somewhere on the table, and by doing so it breaks the symmetry, and it does this spontaneously. Let's look at spontaneous symmetry breaking in a toy model first. What we do here is just add a complex scalar field and a corresponding potential for this field. The potential is shown here, and it can take two qualitatively different forms. The first form is just this parabola here, the case where mu squared, this term, is greater than 0. In that case there is a unique minimum, at 0: the vacuum respects the symmetry, and our gauge field remains massless. But what happens if, through this potential, we have a breaking of the symmetry? In this case here, the vacuum itself, the lowest energy state, breaks the symmetry: you move away from the 0 point, and you break the symmetry. The minimum is at v over square root of 2, where v is the vacuum expectation value of the field, and you can then simply rewrite the field by expanding it around its minimum. Doing so, you find two real fields here, this chi and this h. The h is already pointing towards the Higgs boson, sitting on top of the vacuum expectation value.
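For reference, the toy-model potential and the expansion around its minimum described here can be written as follows -- a sketch in common textbook conventions; normalization factors vary between books.

```latex
% Abelian toy model: complex scalar with a Mexican-hat potential
V(\phi) = \mu^2\,\phi^{*}\phi + \lambda\,(\phi^{*}\phi)^2,
\qquad \mu^2 < 0,\ \lambda > 0,
% minimum away from the origin:
|\phi|_{\min} = \frac{v}{\sqrt{2}}, \qquad v = \sqrt{-\mu^2/\lambda},
% expansion around the minimum in terms of two real fields h and chi:
\phi(x) = \frac{1}{\sqrt{2}}\,\bigl(v + h(x) + i\,\chi(x)\bigr).
```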
Now, if you put this expansion back into your Lagrangian -- this is shown here, and also on the next slide -- you can start to identify terms which look like mass terms for your particles. The first one is here, which can be identified as a mass term for our gauge field. The mass is e times v: the strength of the coupling of this gauge field times the value of the vacuum expectation value. So this is interesting: we used this new scalar field to spontaneously break the symmetry, and then a mass term appears which is proportional to the strength of the coupling and the vacuum expectation value. The mass is generated through the spontaneous symmetry breaking and the coupling to the field. You also find a mass term for the h field here. This is not yet the standard-model Higgs boson -- it's just a field which looks like it -- and its mass term is here; remember that mu squared is less than 0, so this mass comes out real and positive. And this chi, the so-called Goldstone boson -- its mass is 0. But then we have these terms left over here, which we cannot interpret very well. It's possible to remove them by choosing a specific gauge: we do a gauge transformation, just relabelling things, and then the new Lagrangian is independent of this field. Just as a reminder on Goldstone bosons: you find Goldstone bosons in many places in physics. And Jeffrey Goldstone is a retired faculty member at MIT, so in the spring or next summer you might see him walking along the corridors. So we find our new Lagrangian, which has our mass terms here, a term for the Higgs field, and our potential for the Higgs field. The specific gauge we just decided to use is the so-called unitary gauge, and it's important to note that the Lagrangian now contains only physical particles. The chi, the Goldstone boson, is gone, and the lingo we sometimes use here is that the Goldstone boson has been eaten by the physical gauge boson. The way it has been eaten is through the longitudinal polarization of that boson -- it's the equivalent of saying that the boson has acquired mass. So the take-away on spontaneous symmetry breaking is this: spontaneous breaking of a U(1) gauge symmetry by a non-zero vacuum expectation value of a complex scalar results in a massive gauge boson and one real, massive scalar field. We created mass, but as a side product we also have an additional field, and that field itself has a mass term -- it's massive. The second scalar we had just disappeared: the Goldstone boson has been eaten by the longitudinal component of the gauge field itself. All right. That was the simplified toy model; let's look at the standard model. Here we have to generalize from U(1) to SU(2), or SU(N), gauge groups. The scalar field is now an N-dimensional fundamental representation of that group -- for the standard model, that will be SU(2). The gauge fields live in the (N squared minus 1)-dimensional adjoint representation, like our photon, for example, or our unmixed W boson fields. And the Lagrangian looks very similar to the one we had before, with our potential: again we have this mu squared term, we also have a lambda term here, and we require local gauge invariance again. OK. So now, for the standard model -- SU(2) cross U(1) gauge groups -- we introduce a complex scalar field, an SU(2) doublet: it has two complex components, so there is a total of four real degrees of freedom.
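The standard-model field content just described, written out as a sketch (conventions differ between textbooks):

```latex
% SU(2) Higgs doublet: two complex components = four real degrees of freedom
\Phi = \begin{pmatrix} \phi^{+} \\ \phi^{0} \end{pmatrix},
\qquad
V(\Phi) = \mu^{2}\,\Phi^{\dagger}\Phi + \lambda\,(\Phi^{\dagger}\Phi)^{2},
\quad \mu^{2} < 0 .
```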
We already know that we want mu squared less than 0 in our potential, to allow spontaneous symmetry breaking to occur. The minimum is then at 1 over square root of 2 times (0, v): zero for the upper component, and v, the vacuum expectation value, for the lower component -- that is already a choice. Again, why mu squared less than 0? Because if we had chosen a positive value for mu squared, we wouldn't have spontaneously broken the symmetry. We need a potential which looks like this Mexican hat here. All right. So what happens now to our W and Z bosons? We have discussed electroweak mixing already -- good, that was the first step. Now we will understand where the mass terms actually come from. We have just done the spontaneous symmetry breaking, and now we look at what happens when we couple the Higgs field to the bosons. Again, we write this out, and then we just try to find terms -- it's really a mechanical writing-out of the individual terms -- and you find again terms which have the vacuum expectation value here and the couplings here: the coupling to the SU(2) gauge field, and the coupling to the U(1) field, the field related to the original photon field. The rest is rewriting and identifying terms. If you do this -- and it's a couple of pages of writing, fine -- you find, as before, that the first and second components of our SU(2) gauge field give us the charged bosons, the W plus and the W minus. And the Z boson and the photon are mixtures of the third component and the field B. These are all physical fields. Then we try to identify the mass terms, and you find for the W that the W mass is equal to the coupling strength of the SU(2) group times the vacuum expectation value over 2. And the mass of the Z boson is given by the square root of the sum of the squares of both couplings, times v over 2. If you try to look for a mass term for the photon, you find none, meaning that the photon is massless. And then we can look again at the weak mixing angle, which is now defined directly through the couplings of those two gauge groups. The masses of the W and the Z bosons are related via this weak mixing angle: m_W equals m_Z times cosine theta-W. Those elements we already saw before. Now we find that the masses of the gauge bosons are given by spontaneous symmetry breaking, via the vacuum expectation value and the strength of the coupling of the gauge fields to the Higgs field. So in summary: we started with a complex scalar field, a doublet representation of SU(2) with four degrees of freedom. The Higgs vacuum expectation value breaks the symmetry spontaneously. The W plus, the W minus, and the Z boson acquire mass, and the three Goldstone bosons are each absorbed into the W's and the Z. We also find an additional scalar, the Higgs boson, that remains. That was the understanding in the '60s and '70s, as the standard model was further developed. And then it took us all the way to 2012 to actually find this new scalar particle, the Higgs boson itself. |
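To get a feel for the numbers, here is a short sketch evaluating the tree-level mass relations quoted above. The input values (v of about 246 GeV, sin^2 theta_W of about 0.231, SU(2) coupling g of about 0.65) are typical textbook numbers, not numbers from the lecture, and tree level ignores loop corrections, so the output only roughly matches the measured masses.

```python
import math

# Tree-level electroweak mass relations from the Higgs mechanism:
#   m_W = g * v / 2,   m_Z = m_W / cos(theta_W),   photon massless.
v = 246.22            # Higgs vacuum expectation value in GeV (assumed input)
g = 0.652             # SU(2) coupling strength (assumed input)
sin2_theta_w = 0.231  # weak mixing angle (assumed input)

m_w = g * v / 2.0
m_z = m_w / math.sqrt(1.0 - sin2_theta_w)

print(f"m_W ~ {m_w:.1f} GeV (measured: about 80.4)")
print(f"m_Z ~ {m_z:.1f} GeV (measured: about 91.2)")
```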
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L94_Nuclear_Physics_Nuclear_Force.txt | MARKUS KLUTE: Welcome back to 8.701. We continue our discussion of nuclear physics, and in this lecture we talk about the nuclear force. In the last lecture we saw that nuclei are bound together, and we were able to calculate, using an empirical model, the binding energy of various nuclei. Now the question remains: what is actually binding those nuclei together? If you remember, in the first weeks and months of this course we discussed the various interactions between elementary particles: the electromagnetic interaction, the weak interaction, and the strong interaction. There was no discussion of a nuclear interaction or nuclear force. So what is this? We have seen that the strong force acts between quarks, inside hadrons. For example, we discussed at length the pion, which is made up of an up quark and a down antiquark; those are held, or bound, together via the strong force. And we also looked at the structure of the proton and the structure of the neutron. Now, the nuclear force is the residual interaction between the quarks localized in different hadrons -- the interaction between, say, the quarks in this proton and the quarks in a different nucleon. You can already understand that it will be difficult to have a full understanding of the nuclear force. Why? Because there are many quarks involved, and many protons and neutrons involved in the process. What we will find later is that we can describe this using a mean-field approach, a mean field of the forces between the particles involved. So what is the experimental status? Our understanding of the nuclear force is based on several kinds of experimental information. The first comes from nucleon-nucleon (proton-proton, neutron-neutron, and proton-neutron) scattering experiments; some of those experiments have the benefit of using spin-polarized projectiles -- for example, polarized electrons used to probe the structure of nuclei. Then there are nuclear binding energies -- we've seen those -- and precision measurements of masses, which give us insight especially for the light nuclei. And there is nuclear structure information, such as energy levels, spins, parities, and magnetic and quadrupole moments -- again, especially useful for the light nuclei. More could be named in detail, but conceptually those are the three kinds of information we have. The experimental results indicate that the nuclear force depends on the distance between the interacting nucleons -- this is the radial part: how far apart are the nucleons? -- and also on the spin and angular momentum of the interacting nucleons: there seems to be a spin-orbit and also a tensor part to the nuclear force. It is also interesting to note -- and we'll talk about this more -- that there doesn't seem to be any indication that the nuclear force depends on the type of nucleon, whether it's a proton or a neutron, in the interaction. That's charge independence. Looking at the radial part: the nuclear force is short-range, which means it vanishes for distances larger than about 2 femtometers -- it basically vanishes in this region here. And the nuclear force is strongly repulsive for distances shorter than about 0.5 femtometers, in this region here.
You can understand the repulsiveness from the fact that you cannot compress a nucleon to less than its actual radius -- you cannot compress them further. This is also apparent in the liquid-drop model, where we discussed the volume term: the volume cannot be compressed further. On the other hand, you see the short range of the force in the fact that the binding energy is linear in the mass number. Here are the arguments. The binding energy per nucleon, which is roughly constant, indicates that nucleons in nuclei interact only with their immediate neighbors -- otherwise there would be an A-squared, or A times (A minus 1), term in there. Then there are measurements of the distances at which nuclear reactions start to occur, which are of the order of 1 to 2 femtometers beyond the corresponding radii. And the nuclear densities are only slightly smaller than the nucleon densities, indicating very dense packing -- they are already very densely packed; you cannot push them much closer. On the spin-orbit force: that's an area where we could go into much more detail, but for this introductory class we will not. The scattering of spin-polarized nucleons, or of other particles carrying spin, allows us to establish that the nuclear force has a component which depends on the spin and the angular momentum of the interaction. Now, here's a fun fact about the charge independence of the nuclear force, meaning that it doesn't really depend on whether protons or neutrons are involved. I could ask you: would you have expected a dependence on the charge? And the answer should be no -- we just learned that the nuclear force is a remnant of the strong interaction, and the strong interaction doesn't know about electric charges. So charge independence is exactly what we should expect. Testing the charge independence of the nuclear force requires that electromagnetic effects be eliminated: when you measure aspects of the nuclear force, you have to be aware that there are electromagnetic interactions on top, and you have to try to take them out. That can be done, for example, by comparing the scattering of protons on protons, protons on neutrons, and neutrons on neutrons. If you do the comparison carefully and subtract out the electromagnetic effects, you see that the force is indeed independent of the charge. And we can make use of this: there are experimental techniques which exploit the fact that nuclear forces are charge-independent. What you can do, for example, is study so-called mirror nuclei -- nuclei where you swap N and Z for the same A, so they are mirrors of each other in terms of exchanging protons and neutrons. Examples are helium-3 and tritium. Those then allow you to study these effects in detail. In heavy mirror nuclei, the effects breaking the charge independence of the nuclear force -- the Coulomb effects -- are strong, and the similarity no longer holds; so this works best for light nuclei. One example where you can use this: if one of the mirror partners is radioactive, an unstable nucleus, you can study the properties of the unstable nucleus by looking in detail at its mirror partner.
That's one of the common and interesting ways to study radioactive nuclei, where you cannot simply take them, excite them, and study their properties -- in some cases they simply decay too fast. Here is a table where you can see this effect: a comparison between mirror nuclei, four pairs of them. You see that the binding energy -- the net binding energy after removing the Coulomb term -- is very much the same within each mirror pair. And this diagram shows you excitation energies for two mirror nuclei: you see that the energy levels are pretty much on par between the two mirrored nuclei, without going into any detail. |
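As an illustration of how mirror nuclei are used, here is a rough sketch comparing the measured binding-energy difference of the A = 3 mirror pair with a uniformly-charged-sphere estimate of the Coulomb energy. The binding energies are standard table values; the radius parameter r0 is an assumption, and for such a light system this crude estimate is only expected to agree at the factor-of-two level.

```python
# Mirror pair: tritium (Z=1, N=2) vs helium-3 (Z=2, N=1), same A=3.
# Charge independence says the nuclear part of the binding is the same,
# so the binding-energy difference should be mostly Coulomb.
A = 3
B_h3 = 8.482    # binding energy of 3H in MeV (table value)
B_he3 = 7.718   # binding energy of 3He in MeV (table value)
print(f"measured difference: {B_h3 - B_he3:.3f} MeV")

# Uniformly charged sphere: E_C = (3/5) * (e^2 / 4 pi eps0) * Z(Z-1) / R,
# with e^2 / (4 pi eps0) = 1.44 MeV fm and R = r0 * A**(1/3).
e2 = 1.44               # MeV * fm
r0 = 1.2                # fm (assumed radius parameter)
R = r0 * A ** (1.0 / 3.0)
delta_EC = 0.6 * e2 * (2 * 1 - 1 * 0) / R   # Z(Z-1) = 2 for He-3, 0 for H-3
print(f"uniform-sphere estimate: {delta_EC:.3f} MeV")
```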
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L61_Weak_Interactions_Feynman_Rules.txt | [SQUEAKING] [RUSTLING] [CLICKING] MARKUS KLUTE: Welcome back to 8.701. In this lecture we open a new chapter: the weak interaction. We are adding together, one by one, the components we need in order to describe all elementary particles and their interactions, and here we add the third form of interaction: after QED and QCD, we enter the discussion of the weak interaction. So let's have a look at the standard model. We discussed gluons and QCD, and we saw that gluons couple to themselves -- because they carry the charge of QCD, color charge -- and also to all quarks. We have also discussed the photon, and seen that photons do not couple to themselves, but they couple to all electrically charged elementary particles -- the charged matter particles, the fermions. The photon also couples to the W boson, which carries electric charge. Now, in this next chapter, we want to fully understand the W and the Z boson. We will see that they couple to all matter particles, and we'll also discuss how they couple to themselves, and how the Z boson couples to the W boson. That's the story of this entire chapter, and we'll take it one step at a time. As an introduction, we start with the Feynman rules: having the Feynman rules in place -- the cookbook, the recipe -- for calculating decays and scattering processes is all we need in order to get moving. You can, for example, look at this vertex here, this component of the Feynman diagram. What we need to analyze it is the propagator for the W and Z boson, and the vertex factor. The propagator now looks a little more complicated than for QED and QCD, because the W boson and the Z boson carry mass: we have some additional factors -- the q squared minus M squared denominator, and a q-mu q-nu over M squared piece as well. One interesting fact about this propagator is what happens when q squared is much, much smaller than M squared. Then the momentum-dependent pieces drop out, and we find a propagator which plays the role that 1 over q squared plays in QED -- but it is not a 1 over q squared term; it is a 1 over M squared term, which is constant. We will see that we can describe this limit in the context of the Fermi theory, which is a low-energy approximation of the full theory of the weak interaction. It's an interesting concept, and it extends to our entire understanding of the standard model: it might be that the standard model, with all its packages together, is itself the low-energy approximation of a more complete, more holistic theory, which we can discuss under the concept of a grand unified theory -- maybe there's a symmetry group which embeds the symmetry groups we need for QED, QCD, and the weak interaction. But that's a side remark; we will look at the Fermi theory a little more later. The vertex factor itself, describing the vertex here, is given here for the W boson, and also for the Z boson. It looks a little more complicated than the vertex factors we have seen so far. You notice that there is a parameter associated with the strength of the interaction, and a gamma matrix -- but there is also this term here, which has two components: the one, and the gamma-5 matrix. We have talked about the gamma-5 matrix already.
And we can later identify those two components as individual currents: the coupling to a vector current and to an axial-vector current. This looks even more complicated for the Z boson, because there we have not simply ones, but additional factors. The factor cV is the vector coupling, and it's specific to each fermion -- each fermion has one of those constants. The second member of this set of constants, cA, is for the axial current. And there is a second parameter here, which is the strength of the coupling of the Z boson. At this point you can just take these as given and do all of the calculations. On the next slide, I'll show you the corresponding numbers, the values of those parameters cV and cA. Later we will see how this more complicated structure comes about, and why there is a vector and an axial current in the weak interaction -- but for now, we just take this for granted, as a recipe. So now for the neutral current: we have just seen the vertex factor, and here, for all fermions, we list the values of cV and cA. What you can see is that for the neutrinos the factor is one half, both for cV and for cA. And for the charged leptons and quarks there is a more complicated term here, which includes a new parameter: sine squared theta-W. The angle theta-W is about 28.7 degrees; sine squared theta-W is 0.231. As a little preview: the fact that there is this new parameter, an angle, involved can be explained later by the fact that the physical neutral weak boson is actually the result of a mixing between the original weak gauge field and the field associated with QED. In other words, the Z boson itself is a mixture between the piece which couples to the weak charge of a particle and the piece which couples to its electric charge. And that's why there is a simple factor for the neutrinos, which are electrically neutral, and a more complicated term here for the electrically charged particles -- you see that this piece involves two times the electric charge of the particle. But for now, those are all just constants, recipes to be used. One additional word on the history of the neutral weak current. In the '60s and '70s the standard model was slowly developed -- a little more slowly than we do it in this class. The hypothesis was that there had to be something like a neutral current in there, but it had never been observed in nature. With a bubble chamber, specifically Gargamelle at CERN, one was able to actually see those interactions for the first time; the first pictures were taken in 1973. And this picture here illustrates -- I will expand it in a second -- the interaction of a neutrino coming into the bubble chamber, interacting with an electron, and scattering off, kicking off the electron. So what you see here is an incoming anti-neutrino kicking off an electron. You see the electron here; the neutrino goes off undetected -- it just disappears. And then there are also two photons: one photon here -- let's use a different color -- converting into an electron-positron pair, and the second photon here doing the very same thing. You can see those particles here, and continuing on here as well.
So this is a bubble chamber picture. We'll talk about bubble chambers briefly later in the course as well -- they are extremely important and useful tools for visualizing and measuring particle interactions. All right, so much for the introduction; we'll continue in the next lecture talking about this mixture, the electroweak mixing. |
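Collecting the rules quoted in this lecture in one place -- a sketch in conventions close to Thomson's textbook, which the lecture's notation resembles; other books differ by signs and factors:

```latex
% Massive-boson propagator and its low-q^2 (Fermi) limit
\frac{-i\left(g_{\mu\nu} - q_\mu q_\nu / M^2\right)}{q^2 - M^2}
\;\longrightarrow\;
\frac{i\,g_{\mu\nu}}{M^2}
\quad (q^2 \ll M^2),
\qquad
\frac{G_F}{\sqrt{2}} = \frac{g_W^2}{8\,m_W^2}.
% Charged-current (W) vertex:  -i (g_W / 2\sqrt{2}) \gamma^\mu (1 - \gamma^5)
% Neutral-current (Z) vertex:  -i (g_Z / 2) \gamma^\mu (c_V - c_A \gamma^5),
% with c_V = I_3 - 2 Q \sin^2\theta_W and c_A = I_3 for each fermion.
```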
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L03_Introduction_to_Nuclear_and_Particle_Physics_Teaching_Staff.txt | MARKUS KLUTE: Welcome to this short recording for our 8.701 lecture. With this short discussion, I want to introduce the teaching staff of this course: the instructor, which is myself, Markus Klute, and our TA, Justin. I have been faculty in the physics department since 2009. I received my diploma -- my undergraduate degree -- in Germany, and also my PhD, from the University of Bonn, with research on the OPAL experiment, an experiment at the Large Electron-Positron collider at CERN; on the ATLAS experiment at the Large Hadron Collider, also at CERN; and on the D0 experiment, one of the experiments at the Tevatron at Fermilab, near Chicago. After my PhD, I joined MIT as a postdoc, and later as a research scientist, and I worked on the CDF experiment at the Tevatron and the CMS experiment at the Large Hadron Collider. In 2007, I accepted a faculty position in Germany, where I spent about a year before coming back to MIT. It's no surprise, given this CV, that my interest is in particle physics at the energy frontier. I work on the design, construction, and commissioning of detectors: we made major contributions to the hadronic calorimeter in CMS and also to the data acquisition system. Most recently, I was leading the software and computing project within the CMS experiment. Most exciting is the physics: in 2012, we were able to discover the Higgs boson with the CMS experiment -- and ATLAS had a similar experience. Since then, we have been able to look more deeply, more closely, into how the Higgs boson decays. We were able to show couplings to W and Z bosons, and to photons via loops of top quarks and W bosons. Then we looked into whether or not the Higgs boson couples to fermions like the electron. We were able to show couplings to taus -- the heaviest brothers of the electron -- and, most recently, couplings of the Higgs boson to muons, which are second-generation particles. This exploration of the Higgs boson is really at the center of my research portfolio. We don't just spend our time analyzing the data from the LHC; we also look at whether new machines can teach us important information about the Higgs boson. When I'm not doing research, I have a little family who I like to spend time with. I used to play soccer quite a bit, and also tennis, but when you get older, those kinds of contact interactions are not very good for you anymore -- you get injured quite a bit. So I kept only the running part of those activities, and I picked up running quite a bit. A couple of weeks into the semester, I'll run my first virtual Boston Marathon -- so in some of those videos you might see a little bit of a fighting face in front of you, but I hope everything will go well. Our TA is Justin. Justin is a graduate student in my group, in his second year. He took this very class with Mike Williams last fall, so he should be well prepared to guide you and answer your questions. He received his undergraduate degree from the University of Michigan at Ann Arbor, where he worked on the g-2 experiment -- we'll probably talk about that kind of experiment later in the class as well. Recently he has taken up running too, but he also spends time on rowing and hiking.
Both of us look forward to meeting you in the first class on Tuesday, and we hope we'll have a good time together in 8.701. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L49_QED_Renormalization_and_HigherOrder_QED_Diagrams.txt | MARKUS KLUTE: Welcome back to 8.701. In this short video, we'll talk about the effects of renormalization and higher-order QED diagrams. We have already seen that when we perform the integration of a matrix element over q, infinities appear. Those infinities can be dealt with by introducing a cut-off scale, integrating only up to this cut-off, and, in effect, renormalizing -- redefining -- the masses and couplings involved. The first such effect, the first effective higher order, is the so-called vacuum polarization. You have higher-order contributions which look like this, with an additional loop, which you can think of as the photon being so energetic that it can polarize the vacuum and produce a particle-antiparticle pair. That particle-antiparticle pair then provides a kind of screening of the charge you want to probe. So this particle here emits a photon and wants to probe the electric charge of that particle; the fact that this vacuum polarization is going on screens the charge you actually want to probe. Effectively, you see a charge which is reduced or increased depending on how deeply you probe -- on the momentum transfer involved. So the fine structure constant, alpha -- remember, 1/137 at q squared equal to 0 -- is measured to increase slightly as we go to higher and higher energies. This has been measured in many places and experimentally confirmed -- for example at LEP, where, at the scale of the W mass, the value of alpha has been measured to be about 1/128. And you can do this analytically: you can calculate the size of the effect and plot it. You see here the running of alpha-QED as a function of the energy scale, and you see this increase here. There's another interesting point: as you cross the thresholds for new particles of higher masses -- new fermion-antifermion pairs, the muon and antimuon, the quarks and antiquarks -- you find this kind of stepping behavior in the running. All right, that was the first effect, vacuum polarization. The second effect is very interesting -- probably one of the most famous higher-order processes in QED -- and it has to do with the anomalous magnetic moment. The magnetic moment is defined via the factor g: the moment is g times e over 2m times S, the spin of the particle. Diagrams of the form shown here modify the vertex: instead of having the plain vertex, you have higher orders which modify it. This then leads to a modified magnetic moment of the fermion involved -- in this case an electron, but the same applies to the muon or the tau. This was shown by Schwinger already in 1948, and then experimentally confirmed many times since. The g factor, which is 2 at leading order, is modified to 2 plus alpha over pi at next-to-leading order. We can calculate this to many orders, with mind-blowing precision -- the precision is at the 10 to the minus 12 level now. And experiments have tried to measure it to find new effects: new particles can contribute to the loops and modify the magnetic moment here, and you would have sensitivity to them through precise experimental measurements. Both the electron and the muon g minus 2 have been measured with very high precision.
No new physics has been observed, but small differences between measurement and prediction have caused quite some excitement about further improvements in the g minus 2 measurements, specifically for the muon. Those efforts are ongoing at this point. |
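For a feel for the numbers, here is a small sketch evaluating the leading-order Schwinger correction and the one-loop running of alpha. The running formula below keeps only the electron loop, so it deliberately undershoots the full effect; the 1/128 quoted above includes all charged fermions.

```python
import math

alpha0 = 1.0 / 137.036  # fine-structure constant at q^2 = 0

# Schwinger's 1948 result: a = (g - 2) / 2 = alpha / (2 pi) at lowest order.
a = alpha0 / (2.0 * math.pi)
print(f"a = (g-2)/2 ~ {a:.6f}  ->  g ~ {2.0 * (1.0 + a):.6f}")

# One-loop running from the electron loop alone (a sketch):
#   alpha(Q) = alpha0 / (1 - (alpha0 / 3 pi) * ln(Q^2 / m_e^2))
m_e = 0.000511  # electron mass in GeV
def alpha_running(Q):  # Q in GeV
    return alpha0 / (1.0 - alpha0 / (3.0 * math.pi) * math.log(Q**2 / m_e**2))

print(f"1/alpha at Q = 80 GeV: {1.0 / alpha_running(80.0):.1f} "
      "(all-fermion running gives about 128)")
```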
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L14_Fermions_Bosons_and_Fields_Decays.txt | MARKUS KLUTE: Hello. In the last video we looked at the ranges of forces -- one aspect we wanted to learn about: what kind of interaction happens when you study a specific force; you learn about the force carrier by noting the range over which the force acts. In this video, we talk about decays. In more general terms, when we want to measure the properties of forces, we have basically three concepts at hand which can be experimentally determined. The first is masses of bound states: you might remember from atomic physics that you learn a lot about the electromagnetic interaction by studying, for example, the hydrogen atom, where you have an electron circling the proton, and you can study aspects of the electromagnetic interaction in detail. The second is decay rates of unstable particles, or the width of an unstable particle -- in quantum mechanics, the lifetime of a particle is related to its width -- and that's what we discuss in this video. And lastly, we can look at reaction rates, expressed as cross sections; that's the topic of the video after this. Let's talk about decays. We define the decay rate lambda, and S of t, the probability that a particle survives at least until some time t. The probability at time t plus delta t relates to the probability at time t by the factor 1 minus the decay rate times the time interval delta t. From this we find that the rate of change of the survival probability is proportional to that probability times the decay rate, with a minus sign. If we integrate this, we find that the log of the probability is equal to some constant minus the decay rate lambda times the time. If we now simply require that the particle exists at the initial time -- S of 0 equal to 1 -- the constant vanishes, and we find the very famous exponential decay law, e to the power of minus lambda t. And this is shown in this picture as this exponential. So far, so good. We can now look at this distribution a little more. For example, the average time a particle lives, the lifetime tau, is given by the integral from 0 to infinity -- we integrate t over this distribution to get the average time for the particle -- and that's equal to 1 over the decay rate, 1 over lambda. You can do the algebra yourself to follow this. If you now express the survival probability through the lifetime, you find it is equal to e to the minus t over tau. You might not want to look at one particle, but at an ensemble of particles, and at the time dependence of the number of particles which survive: the number of particles as a function of time is equal to the initial number of particles times the probability that any given particle survives -- again given by this exponential. In nuclear physics, one often talks about the half-life of a particle or a nucleus. That is given, as you would assume, by the time it takes for half of the particles to decay: N at t-one-half is equal to the initial number of particles, N0, over 2. And you find that this half-life is related to the lifetime of the particle by a factor of ln 2, about 0.69 -- roughly 2/3.
This factor sometimes leads to confusion in numerical values when you ask for specific answers in experiments. All right, so there's another aspect of decays, which arises from a fundamental property of quantum mechanics: an unstable state does not have an exact energy. This follows, if you want, from the uncertainty principle. So the state has a width, Gamma, and the width is related -- via Gamma equal to h-bar over tau -- to the decay rate and the lifetime of the particle. Another complication occurs when there are multiple ways for the particle to decay -- for example the Higgs boson, as we see on the next slide, which can decay into multiple final states. Here we define a partial width: the partial width is the width for the particle to decay into one specific mode. The total width of the particle is then given by the sum of the partial widths over all possible ways for the particle to decay. Using this, you can also calculate the likelihood for a particle to decay in a specific way. That's called the branching fraction, and it's given by the partial width divided by the total width -- or the partial decay rate divided by the total decay rate. The total probability for a particle to decay into some mode is 1; therefore the sum of the branching ratios is 1 as well. All right, looking at a specific example -- the Higgs boson is probably my favorite example in this entire class. Shown here are the branching fractions for the various decay modes -- not every possible way the Higgs boson can decay, but the most dominant ones. We will later see, maybe even as an exercise, why the branching ratios behave the way they are shown here. The Higgs boson has been measured with a mass of 125 GeV and a little bit. You see that at this mass the prominent decay mode is into a pair of b quarks, b b-bar. But it's also possible for the Higgs boson to decay into a pair of W bosons; into gluons, via an interesting loop diagram, even though gluons are massless; or into taus, charm quarks, Z bosons, and so on. And as we just showed in a paper which was submitted today to the arXiv, the Higgs boson can also decay into a pair of muons, with a branching ratio of about 2 times 10 to the minus 4. So it's rather rare, but it's possible. |
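A small numerical sketch of the relations above. The three partial widths are made-up illustration values, not measured ones; the point is only the relations Gamma_tot = sum of Gamma_i, BR_i = Gamma_i / Gamma_tot, and tau = hbar / Gamma.

```python
import math

HBAR = 6.582e-22  # MeV * s

# Hypothetical particle with three decay modes (illustrative numbers only):
partial_widths = {"mode A": 2.0, "mode B": 1.5, "mode C": 0.5}  # MeV

gamma_total = sum(partial_widths.values())
tau = HBAR / gamma_total       # mean lifetime
t_half = tau * math.log(2)     # half-life = tau * ln 2

for mode, gamma in partial_widths.items():
    print(f"BR({mode}) = {gamma / gamma_total:.2f}")
print(f"total width = {gamma_total} MeV, "
      f"lifetime = {tau:.2e} s, half-life = {t_half:.2e} s")
```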
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L92_Nuclear_Physics_Binding_Energies.txt | MARKUS KLUTE: Welcome back to 8.701. In this video, we talk about nuclear binding energies. But before we get started on this topic, I would like you to have a look at this table or this diagram, which shows nuclear abundances in our solar system-- how many atoms of the various types are present in our solar system. And you see this super interesting structure. Most of it is hydrogen here. And then there seems to be some sort of an excess of iron. And, you know, this table goes all the way to lead and beyond. There's a little bit of a gap here. So how is this possible? How did these particles get there in the first place? Why are some more frequent than others? A very interesting question, which we will be able to answer at the end of the discussion of nuclear physics. The first starting point is to understand why nuclei are stable in the first place, what holds them together. And that's the discussion of binding energies. So we can very simply write the binding energy down. We just sum up all the ingredients: the mass of the proton plus the mass of the electron, times Z, plus the mass of the neutron times A minus Z, the mass number minus the number of protons. And then we subtract from this the measured mass of the atom itself. What remains is the binding energy. Just for the record, here are the mass of the proton and the mass of the neutron. You see that the neutron is slightly heavier than the proton, and the free neutron decays into the proton. A free proton does not decay. However, inside the nucleus, a proton can also decay. And then we have the mass of the electron. You see there's a factor of almost 2,000 between the mass of an electron and the mass of a neutron. For all practical purposes, we can ignore this, but when it comes to precision measurements, the mass of the electron, which is about 1/2 MeV, becomes relevant. This plot here shows the average binding energy per nucleon as a function of the mass number. And you see that, with the exception of the light elements, this is fairly stable and in the range of 7.5 to 9 MeV. You also see that there seems to be a maximum around iron, which means you gain energy-- you go to a lower-energy, more strongly bound state-- when you move toward iron, in this direction and in this direction. This part is called fission, this part is called fusion. Both processes, because we go to a more energy-preferred state, are possible. They can be used in order to obtain energy from nuclear processes. This diagram here can be parameterized, and in the rest of this video we'll talk about a very popular parameterization of the binding energy. This is semi-empirical. It's called the Weizsäcker formula, because it was proposed by the German physicist von Weizsäcker. Sometimes it's called the semi-empirical mass formula, and sometimes the discussion is summarized in the liquid-drop model. And you'll see why in a second. What you see here is very similar to before: we can calculate the mass, and from that the binding energy, from those first elements here. And this part here is our binding energy. There are 1, 2, 3, 4, 5 terms, which we're discussing now on the next slide. What is shown here is a parameterization, so you can fit the data and get a best estimate for the individual parameters in this equation. All right.
So as the name says, liquid-drop model, we can think, in some sense, about the nucleus as being built out of a soup, a liquid of protons and neutrons which are bound together. The first term which contributes to the binding energy is the so-called volume term. This dominates the binding energy. And it's proportional to the mass number. Remember, the mass number is proportional to the third power of the radius, hence proportional to the volume of the nucleus. And, you know, this contributes with about 16 MeV per nucleon, per proton and neutron. And from this, you can conclude that the nuclear force must be very short range. Why is that? Because in order for the binding energy to depend dominantly on the volume, each individual nucleon can only see its nearest neighbors. So this corresponds to a short-range force whose range is roughly the distance between two nucleons. If any given nucleon were able to see everybody else, we would see a term quadratic in the number of nucleons available. As a result of this, you can calculate the central density, which is about 0.17 nucleons per cubic femtometer, or an average distance between protons and neutrons of 1.8 femtometers. OK. So they are really tightly packed. The size of a proton is about a femtometer. All right. However, the protons and neutrons which are on the surface of this construct see fewer nucleons around them. Therefore, the binding energy needs to be reduced, and it needs to be reduced with the area of the surface of the nucleus. So this term needs to be proportional to r squared, and therefore proportional to A, the mass number, to the power 2/3. OK. Then the protons in the nucleus are electrically charged, so they repel each other. And that also reduces the binding energy. This term is proportional to the number of charges squared, and then you normalize this by the radius-- charge squared over r, a Coulomb term. All right. There are two more terms, which are quite interesting. The first one is sensitive to the asymmetry between the number of neutrons and the number of protons. And it can be explained by the Pauli exclusion principle, which allows only two identical fermions, two neutrons or two protons, to occupy the same energy state. So you basically fill up the energy states. Now, as is shown in this picture here, you reach the lowest energy state if the number of neutrons and the number of protons is actually the same. You have higher energies if there is an asymmetry between those two numbers. So this asymmetry term reduces the binding energy whenever the numbers of neutrons and protons differ. And last but not least, the pairing term, which has a very similar origin. What we are looking at here is that the energy is lower if you have an even number of neutrons or protons. It's higher when we have an odd number. So you can have an odd number for the protons or the neutrons, in which case A is odd. And the worst case, the worst energy state, is achieved by having both the protons and the neutrons odd. That's why this is written in a little bit more complicated way. You have those three different cases-- number of protons and number of neutrons even, A odd, or both Z and N odd, in which case A is even-- just to add to the confusion a little bit. All right. Then you can make a drawing of the binding energy as a function of the mass.
And you see, again, those individual terms: the volume term here, constant; the volume plus surface, reduced; the volume plus surface plus Coulomb, further reduced; and then all terms put together here. And you see again this binding energy parameterization, which we just discussed-- we saw the actual values. There's a maximum around here for iron. |
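To make the liquid-drop discussion concrete, here is a minimal Python sketch of the semi-empirical mass formula with the five terms just described. The coefficient values are typical textbook-like fit values, quoted here only as reasonable assumptions; any serious use should take coefficients from an actual fit.

```python
import numpy as np

# Typical fitted coefficients in MeV (assumed textbook-like values, not an official fit).
A_V, A_S, A_C, A_A, A_P = 15.8, 18.3, 0.714, 23.2, 12.0

def pairing(Z, N):
    """Pairing term: positive for even-even, negative for odd-odd, zero for odd A."""
    A = Z + N
    if Z % 2 == 0 and N % 2 == 0:
        return +A_P / np.sqrt(A)
    if Z % 2 == 1 and N % 2 == 1:
        return -A_P / np.sqrt(A)
    return 0.0

def binding_energy(Z, N):
    """Weizsaecker semi-empirical binding energy in MeV."""
    A = Z + N
    volume = A_V * A                                # bulk term, proportional to the volume
    surface = -A_S * A**(2.0 / 3.0)                 # surface nucleons see fewer neighbors
    coulomb = -A_C * Z * (Z - 1) / A**(1.0 / 3.0)   # proton-proton repulsion, q^2 / r
    asymmetry = -A_A * (N - Z)**2 / A               # Pauli principle penalizes N != Z
    return volume + surface + coulomb + asymmetry + pairing(Z, N)

# Binding energy per nucleon peaks near iron, as in the plot discussed above.
for Z, N, name in [(2, 2, "He-4"), (26, 30, "Fe-56"), (92, 146, "U-238")]:
    A = Z + N
    print(f"{name}: B/A = {binding_energy(Z, N) / A:.2f} MeV")
```

Running this gives roughly 8.8 MeV per nucleon for iron-56 and less for uranium, reproducing the maximum around iron; the model is known to undershoot for very light nuclei like helium-4, consistent with the "exception of the light elements" noted above.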
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L64_Weak_Interactions_Quarks.txt | MARKUS KLUTE: Welcome back to 8.701. In this lecture, we look at the interaction of W bosons with quarks, or the charged weak interaction of quarks. Let's just make a number of observations. Now, we observe that the weak interaction respects the lepton generation, meaning that a W couples to an electron and an electron neutrino, but not to an electron and a muon neutrino. But in the case of the quarks, there is a violation of this. There is a disrespect of the quark generation when it comes to the interaction with Ws. When you investigate these two diagrams here, you find that the W couples to the d quark and the u quark, but it also couples to the s quark and the u quark, all right? In order to encapsulate this, we have to make a correction. And the corrections are typically called cosine theta C and sine theta C. Theta C is the Cabibbo angle, theta Cabibbo. It turns out this angle is rather small, so it is a small correction. We studied the ratio of the partial decay width of the kaon in leptonic decays over the partial decay width of the pion in leptonic decays. We found that there is a form factor for the pion decay and a form factor for the kaon decay. Beyond those form factors, it turns out the ratio carries this additional correction: you find the tangent squared of the Cabibbo angle as part of it. Good. So far, so good. Now we have made an observation. We haven't explained anything yet. We can make one more observation, or discuss one, and that's the decay of neutral kaons to a pair of muons. It turns out that those are not very likely, even though you would expect that the amplitude has a factor here of sine and cosine of Cabibbo, so the amplitude should be on the order of sine theta Cabibbo times cosine theta Cabibbo. When this was studied, the charm quark hadn't been discovered. And the explanation of why this decay is suppressed comes from the fact that there is a second diagram here, where we just replace the u quark in this loop with a c quark. This diagram contributes with a minus sign to the amplitude. And therefore, those two diagrams cancel, right? They have about the same magnitude and an opposite sign. So this was the first indication that there must be a fourth quark contributing to this kind of process. That's the charm quark. Let me now try to understand what's going on here. Why is the W coupling modified? Or why doesn't the full down quark or strange quark state participate in the weak interaction? We can do the following here. We note that the weak interaction eigenstate, the eigenstate which participates in the weak interaction, is not the eigenstate of the particle itself, the so-called mass eigenstate. So we have to write the weak eigenstate as a linear combination of the mass eigenstates. This can be done in this matrix form here, where we simply multiply the mass eigenstates with a matrix, and basically rotate them into the weak eigenstates. So this was proposed by Cabibbo, and it was rather successful. But it didn't incorporate the third-generation particles. And this was done by Kobayashi and Maskawa, who generalized the scheme and proposed the so-called CKM matrix-- CKM for Cabibbo, Kobayashi, and Maskawa.
Because of constraints we'll discuss in one of the recitations, this matrix can be parameterized with only three independent angles and one complex phase as independent parameters. So you can choose different parameterizations to capture that there are only four parameters in this matrix, which has nine components. One way is to think about this matrix as three independent rotations plus this complex phase here. In terms of numerical values, you see that the diagonal elements of this matrix are very close to 1, meaning that the mixing across generations is a small effect. You find that the nearest off-diagonal elements are on the order of 20%, and the next-to-nearest off-diagonal elements are even smaller. OK? This leads us, then, to the discussion that we can use different parameterizations in order to capture this structure. We already discussed the standard parameterization, which you can really think of as three different rotations. The values of those angles are given here, together with the value of this additional phase. Another way to look at this is the so-called Wolfenstein parameterization. And this captures the fact that the matrix looks like an expansion in a small parameter lambda, where lambda is about 0.22. So you find elements of the order of lambda, elements which are of the order of 1 minus a lambda-squared correction, and then elements which are of order lambda squared and lambda to the third power. This captures the matrix, and then there are higher-order corrections to it which are of order lambda to the fourth power. OK? There are constraints on this matrix-- specifically, unitarity constraints. With three generations mixing the three mass eigenstates into the weak eigenstates, unitarity means the total number of particles in this discussion is conserved. This would change if there were, for example, a fourth-generation particle. So the study of the charged weak interaction with quarks helps us to understand whether or not there might be a fourth generation. We'll not go into too much detail here, but also, the complex phase is part of our understanding of CP violation. And we might discuss this in a little bit of a later lecture. Nevertheless, what we can use from here are those unitarity constraints, simply summing over the scalar products of matrix elements. And those sums where the contribution vanishes, those where j and k are not equal, can be represented as a triangle. That's kind of interesting. You can just rewrite this. You just say that those three terms of the sum are equal to 0. Then you normalize by one element-- in this case here, normalize by Vcd Vcb*. This fixes the base of the triangle, and so we have this nice triangle here, which has three angles, alpha, beta, and gamma, and this apex here, at rho and eta. And so this is a nice way to illustrate actual measurements of the elements of the CKM matrix. And without actually explaining how we do these experiments, you can understand that all the measurements have to do with the weak interaction with quarks. That's how we have access to the CKM matrix elements. Sometimes this shows up in the modification of masses or the splitting of mass states, and sometimes it's a direct measurement of a coupling. When you put all of those measurements back together, you can look at this. So we see our triangle here.
We see this point, eta and rho, which is given here in this rho-eta plane. And you see a number of measurements which correspond to elements of this CKM matrix. |
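As a concrete illustration of the Wolfenstein parameterization and the unitarity constraint just described, here is a short Python sketch. The parameter values are rough, commonly quoted magnitudes used only for illustration, not a current global fit.

```python
import numpy as np

# Rough Wolfenstein parameters (illustrative magnitudes, not a current global fit).
lam, A, rho, eta = 0.22, 0.81, 0.14, 0.35

# CKM matrix to O(lambda^3), as in the Wolfenstein expansion discussed above.
V = np.array([
    [1 - lam**2 / 2,                    lam,             A * lam**3 * (rho - 1j * eta)],
    [-lam,                              1 - lam**2 / 2,  A * lam**2],
    [A * lam**3 * (1 - rho - 1j * eta), -A * lam**2,     1.0],
])

# Unitarity check: V V^dagger should be the identity up to neglected O(lambda^4) terms.
print(np.round(V @ V.conj().T, 3))

# The "d-b" unitarity triangle: V_ud V_ub* + V_cd V_cb* + V_td V_tb* = 0.
terms = [V[0, 0] * V[0, 2].conj(),
         V[1, 0] * V[1, 2].conj(),
         V[2, 0] * V[2, 2].conj()]
print("sum of triangle terms:", np.round(sum(terms), 4))  # ~0 up to neglected orders
```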
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L15_Fermions_Bosons_and_Fields_Reactions.txt | MARKUS KLUTE: Hello. Welcome back to 8.701. In this lecture, this little video, we're going to look at reactions and how they relate to cross-sections. So we continue the discussion of how we can relate experimentally determined properties to the forces involved. In the last lecture, we looked at decay rates and the widths of unstable particles. This time, we're going to look at reaction rates expressed as cross-sections. We can start doing this by looking at this simplified picture. The reaction rate is related to the rate of the beam, so how many particles per second are available for the interactions, times the number density of the particles in the target. So you have your target here. And clearly, the number of reactions depends on how dense your target is, on the thickness-- the thicker the target material, the more likely it is for reactions to occur-- and then on the actual physics, the likelihood of a collision to occur. And this likelihood is called a cross-section. And we can think about this cross-section as a geometrical area. All right? So let's look at this a little bit more. We can stay with a very classical model, a model in which we have two billiard balls-- a light one, a small one, with radius r1, and a larger one with radius r2. Clearly, a collision occurs when the impact parameter b here between those two billiard balls is smaller than the sum of the radii. OK? So now we can analyze this reaction a little bit more and look at angular distributions. We find that the differential cross-section is given as a function of sine theta. We can also express this using the azimuth, or as a solid angle here. As a reminder, the solid angle element d Omega is equal to sine theta d theta d phi. All right? For this specific problem here, we talk about an isotropic reaction, because by definition the cross-section per solid angle is independent of theta and phi. The mapping between sine theta and theta is not trivial. That's why you see this shape of the distribution. But d sigma d cosine theta is flat if you look at this as a function of cosine theta. All right. So this is just a classical picture of what we're going to do later in the class, which is using quantum field theory or Feynman rules in order to calculate cross-sections or decay rates. But this classical picture is really something I would like you to keep in mind-- the idea that the likelihood of a collision to occur has units of an area and can be seen as a geometrical cross-section is a very nice picture to keep in mind. And it also helps in estimating orders of magnitude of collision rates. |
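Here is a minimal Python sketch of the classical picture above: the geometric cross-section of two billiard balls and the resulting reaction rate for a thin target. All of the numbers are invented for illustration.

```python
import numpy as np

def geometric_cross_section(r1, r2):
    """Collision occurs when impact parameter b < r1 + r2, so sigma = pi (r1 + r2)^2."""
    return np.pi * (r1 + r2)**2

def reaction_rate(beam_rate, number_density, thickness, sigma):
    """Thin-target rate: R = (particles/s in beam) * n * d * sigma."""
    return beam_rate * number_density * thickness * sigma

# Invented, illustrative numbers: nucleon-sized balls, a thin hydrogen-like target.
r_fm = 1.0e-15                    # ~1 femtometer radius, in meters
sigma = geometric_cross_section(r_fm, r_fm)
print(f"sigma = {sigma:.2e} m^2 = {sigma / 1e-28:.2f} barn")  # 1 barn = 1e-28 m^2

rate = reaction_rate(beam_rate=1e10,        # particles per second in the beam
                     number_density=4e28,   # target particles per m^3 (illustrative)
                     thickness=1e-2,        # 1 cm of target material
                     sigma=sigma)
print(f"reaction rate ~ {rate:.2e} per second")
```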
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L53_QCD_Feynman_Rules_in_QCD.txt | PROFESSOR: Welcome back to 8.701. In this lecture, we want to write down the Feynman rules for QCD. A lot of work happened to get to this point. If you recall, we derived Feynman rules for a toy theory, then we extended this to QED, quantum electrodynamics, introducing and keeping track of the spin of particles. And now in QCD, we have to do one additional step. We have to keep track of the color of particles. So color, the charge of QCD, plays a very important role here. Let's investigate this fundamental process here, this fundamental vertex, which is very similar to a photon being radiated from an electron. Here, we have a gluon being radiated from a quark. And when this happens, the quark changes its color. So here you have a quark with color blue and a quark with color red. The gluon adds a color-- it adds red, but it also takes away the blue. So the gluon itself basically carries two colors. It's bi-colored-- one color and at the same time an anti-color. A leading-order process here has two vertices. And this is a scattering, for example, of two quarks which interact through the exchange of a gluon. All right, so we have three kinds of charges, meaning that the quarks come in red, green, or blue. We have to keep track of this when we write down our amplitudes or matrix elements. And we do this by just introducing a new vector. I call this c here. Very simple-- a three-element vector for those three colors. All right, but how about the gluon itself? QCD is based on a symmetry group which is called SU(3). It's basically a rotation in a three-dimensional color space, and there are eight independent such rotations. What you want to think about is just moving from one color state into the next. So you can write them down as I show here, and we do this in linear combinations of color and anti-color. If you want to keep track of this, we do it in this form: we have a vector with eight components, and the gluon is one of those components. This is given here for this very first one. So let me introduce the notation here. Just as we introduced the Pauli matrices for SU(2), for the symmetry group SU(3) the corresponding matrices are called the Gell-Mann matrices, and they're just written here. Again, those are the rotations I was just talking about-- the rotations from one color state into the next. There are commutator relations for these. If you take two of those Gell-Mann matrices and write down the commutator, you find 2 times i times the structure constants times another Gell-Mann matrix. So if you're just thinking about how many combinations there are: there are eight matrices, so we have 8 times 8 times 8 combinations of those constants here. And so it means that there are 512 of those constants. Most of them are 0. And the ones which are not 0 are listed here, or combinations of those. All right, so now we're ready to just write down the QCD Feynman rules. Again, we start from the external lines. For the incoming quark or outgoing quark, we write our spinor, and then we keep track of color. We do this for quarks, and we do this for anti-quarks in the very same way. For the gluon now, we have to keep track of the polarization of the gluon and also of the color. And those are the vectors I just introduced on the previous slide. For the propagators, we have quarks and anti-quarks and gluons.
Gluons are massless, so the propagator here looks very much like the one for the photon. And the propagator for the quarks looks very much like the one we had for the electron in QED. The fundamental vertices-- I introduced one already, but there are two more. That's because the gluon carries color charge and can couple to itself. We already discussed this in a recitation. So we have those self-couplings of the gluon here. And those vertices are as relevant as, or even more relevant than, the one here. All right, so the vertices come with a vertex factor in our Feynman rules. Again, for this very first one, we find a very similar factor as we had before. You see here this Gell-Mann matrix-- let's just call this a rotation in color. But we have this three-gluon vertex and the four-gluon vertex as well. And those come with structure constants here to keep track of the commutations between the colors involved. And it becomes more complicated for this four-gluon vertex, which has pairs of structure constants. All right, that's all we need in order to calculate matrix elements or amplitudes for QCD. The rest is just executing the Feynman rules as we did before for our toy theory, and also for QED. |
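As a sketch of the color algebra just described, the following Python snippet builds the eight Gell-Mann matrices and extracts structure constants from the commutator relation [lambda_a, lambda_b] = 2i f_abc lambda_c, using the trace identity tr(lambda_a lambda_b) = 2 delta_ab.

```python
import numpy as np

# The eight Gell-Mann matrices of SU(3).
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1; l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

def f(a, b, c):
    """Structure constant f_abc from [l_a, l_b] = 2i f_abc l_c and tr(l_a l_b) = 2 d_ab."""
    comm = l[a] @ l[b] - l[b] @ l[a]
    return (np.trace(comm @ l[c]) / 4j).real

# A few of the famous non-zero values (0-based indices): f_123 = 1, f_147 = 1/2,
# f_458 = sqrt(3)/2. Most of the 512 combinations vanish, as noted above.
print(f(0, 1, 2), f(0, 3, 6), f(3, 4, 7))
```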
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L73_Higgs_Physics_Production_and_Decay.txt | MARKUS KLUTE: Welcome back to 8.701. So the theory of the Higgs boson is quite interesting, and we have seen that it's able to generate masses for bosons and for fermions. But as Feynman said, a theory is only as good as the experiment. So we have to actually find Higgs bosons, measure their properties, and see whether or not those properties are consistent with the standard model. And so the first question we have to ask is how Higgs bosons are actually produced and how they decay. This little video talks about this in very simple terms. We are able to produce Higgs bosons at proton-proton machines, at proton-proton colliders, because they produce collisions with energies at the mass scale of the Higgs boson. We already know now that the Higgs boson has a mass of 125 GeV, so the collisions have to have enough energy to produce this particle. We're using protons to do this because protons can be accelerated to those energies more easily than electrons, for example. But protons themselves are, as we have seen, objects which consist of quarks and gluons. So how do gluons produce Higgs bosons? This is shown in this Feynman diagram here, where you have two gluons from two colliding protons. And they are able, via this loop diagram, a triangle diagram including the top quark-- the top quark goes around here-- to cause an excitation in the vacuum, which is the Higgs boson. They couple to the vacuum, producing the Higgs boson. And then, essentially instantaneously, the Higgs boson decays. It can do this in various ways, and we look at this on the next slide. In this example, it decays via a similar triangle loop, this time with a W going around. The W is electrically charged and can radiate a pair of photons. So while the gluons and the photons are massless, we are able to produce Higgs bosons via the collision of two gluons and observe them via the decay into photons. This is quite spectacular. More generally, there is not just one mechanism to produce Higgs bosons; there are various, and the leading ones are shown here. We have just looked at the first one, which is called the gluon fusion channel-- two gluons fuse together into a Higgs boson. Then there's this one here, the second one, which is called Vector Boson Fusion, VBF, where the quarks radiate two vector bosons, either Z bosons or W bosons, and then those couple to the Higgs field and produce the Higgs boson. We can also have associated production, where two quarks, via, again, a Z boson or a W boson, radiate a Z boson or a W boson together with a Higgs. And then a very exciting one is the last one here, where the Higgs boson is produced in association with two top quarks. Remember, the top quark here has a mass of 175 GeV, the mass of the Higgs boson is 125 GeV, and we have two top quarks, so the scale of this event is on the order of 500 GeV. This plot here shows the production cross-section as a function of center of mass energy. Now, the Large Hadron Collider, the LHC, operates currently at 13 TeV, so those are the cross-sections we want to look at. You see that the leading cross-section, which is on the order of tens of picobarns, is the one where we have gluon fusion. An order of magnitude less is the one with vector boson fusion. And then we have associated production.
And then the last one is the one where we have the Higgs boson produced in association with top quarks. Because the mass scale is much higher, the cross-section for ttH is lower. If you were to increase the center of mass energy of the LHC, you would see this rapid increase in cross-sections, just because there's more phase space available for this production. The coupling here and the coupling here are the same. Then the Higgs boson, as I said, decays. And we have already discussed Higgs branching ratios, or branching ratios in general. Since the coupling of the Higgs boson to fermions is proportional to the mass of the fermions, you see the dominant decay is the one into b b-bar. And then you can find smaller decays to the taus, to charm quarks, and to muons here. You also have decays into the vector bosons, WW and ZZ. Via similar triangle diagrams, you have decays into gluon pairs, and then into photons as well. So we have already measured the Higgs boson at 125 GeV. And those are the branching ratios as predicted in the standard model. |
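To connect production cross-sections and branching ratios to what an experiment actually sees, here is a small Python estimate of expected event counts, N = L x sigma x BR. The numerical cross-sections and branching ratios below are round, illustrative values of the kind discussed above, not official predictions.

```python
# Round, illustrative numbers (not official predictions) for a 13 TeV pp collider.
PB_TO_FB = 1000.0

cross_sections_pb = {        # production cross-sections, picobarn
    "gluon fusion": 49.0,
    "VBF": 3.8,
    "VH": 2.3,
    "ttH": 0.5,
}
branching = {"bb": 0.58, "WW": 0.21, "tautau": 0.063,
             "gammagamma": 0.0023, "mumu": 2e-4}

luminosity_fb = 140.0        # integrated luminosity in inverse femtobarn

def expected_events(sigma_pb, br, lumi_fb):
    """N = L * sigma * BR, before any detector acceptance or efficiency."""
    return lumi_fb * sigma_pb * PB_TO_FB * br

for mode, br in branching.items():
    n = expected_events(cross_sections_pb["gluon fusion"], br, luminosity_fb)
    print(f"gluon fusion, H -> {mode:11s}: ~{n:,.0f} events produced")
```

The sketch makes the hierarchy tangible: millions of H to b b-bar events are produced, but only a few thousand H to mu mu events, which is why the dimuon decay took so long to observe.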
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L63_Weak_Interactions_Pion_Decay.txt | MARKUS KLUTE: Welcome back to 8.701. Now, after we introduced the weak interaction and the Feynman rules for the weak interaction, we can look at particle decays-- in this case, the decay of a pion. The decay of the pion is specifically interesting, and we discussed it before when it came to the discussion of helicity states. Now, let's look at this again with the information we have and what we learned. If you look at the pion decay, the leading decay modes are given here. One is where the pion, in this case a negatively charged pion, decays into an electron and an electron antineutrino; the other is a decay into a muon and a muon antineutrino. If you look at this in the rest frame of the pion, we can see that the neutrino and the charged lepton are produced back-to-back. Now, the spin of the pion is 0, which means that the two outgoing leptons, moving in opposite directions, have to have the same helicity. Since the antineutrino is massless, it is produced right-handed. It is always right-handed. The chiral state of the neutrino and the helicity state of the neutrino are essentially the same, because it's massless, meaning the chiral projection is basically the same as the projection of the spin on the momentum direction. All right. But the charged lepton is massive. If the charged lepton were massless, the decay would not be allowed-- there would not be a right-handed helicity state for the charged lepton. Now, this causes quite some confusion, and I've seen, even in this course, some students being confused by this. I can take, let's say, a right-handed charged lepton-- this is the right-handed helicity state-- and decompose it into the chiral states, the right-handed and the left-handed ones. And you have seen in the previous lectures that only the left-handed component participates in the weak interaction. You can also see from this equation here that if the momentum and energy were the same, as is the case for a massless particle, the left-handed coefficient would be 0 and the right-handed one would be 1. Therefore, the right-handed helicity state would be the same as the right-handed chiral state, and it wouldn't couple to the weak interaction. Now let's erase this really quickly, because we want to actually look at this decay. We now have almost all the tools together to calculate the decay rates, or the ratio of decay rates. We want to do this in the pion rest frame, so the momenta are given here. You see that the pion momentum is 0 and its energy is equal to its mass. The charged lepton and the neutrino are produced back-to-back-- the neutrino in this case goes into the negative z direction. Then we can write the leptonic current, as we have just seen in the previous lecture. You see this 1 minus gamma 5 term here. Good. And I could have just called this left-handed here and put it into the definition of the spinor. When we put in a real spinor, this comes out immediately. You have to keep this in mind. The matrix element, then, is a little bit more complicated. You see the current here again. You see the propagator, and I went into the low-energy approximation here. You see that instead of having a q
squared minus m squared, I'm just keeping the m squared component of this. And then I have this part here for the pion current. I simply parameterize my missing ability to calculate non-perturbative QCD with a form factor. So I introduce this form factor for the pion. This is not an important part of the discussion; we just keep track of it here. All right. Then we can calculate this matrix element. We have to be explicit about the spinors we are using, and we use the momenta as defined above. This step here I'm not doing explicitly. If you want, you can go to Thomson and read chapter 11, where he gives quite some detail on this. All right. Moving on, there is one extra thing. When we try to calculate the spin-averaged matrix element, we find that we don't have to do any work: because the pion is a spin-0 state, there's only one spin configuration contributing. We just have to square the matrix element. We find this as a solution here, and there's an additional factor we haven't introduced yet. This is the Fermi coupling. Again, this comes out in the low-energy approximation. G Fermi is simply defined via the coupling to the W squared over the W mass squared, as shown here. Again, this is just a factor which is not central to the discussion at this point. But we can then, using Fermi's golden rule, calculate the partial decay width of the pion decay. OK? We just put in the matrix element here, and we use that the energy is equal to the mass of the pion. And voila-- we get this as an answer for the partial decay width. OK? If you now want to know some experimental information, like the ratio of the partial decay width of the charged pion to electrons over the one to muons, we can immediately do this. We don't need to know any of the details-- G Fermi, the form factor of the pion. All of those factors cancel out. What is left here are the parameters: the electron mass, the muon mass, and the pion mass. And if you just use the values, the mass of the muon of about 106 MeV and the mass of the pion of about 140 MeV, you find about 10 to the minus 4 as a value for this ratio of partial decay widths. And you see where this comes from. It basically comes from the fact that the electron mass is much, much smaller than the mass of the muon. You can explain this by the fact that a right-handed helicity state for a muon has a much larger contribution of the left-handed chiral state, while that component is much smaller for the lighter electron. And again, only the left-handed chiral component of the charged lepton contributes to the weak interaction. |
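As a check of the helicity-suppression argument, here is a short Python computation of the ratio of partial widths using the standard leading-order formula, R = (m_e/m_mu)^2 x ((m_pi^2 - m_e^2)/(m_pi^2 - m_mu^2))^2, with the familiar rounded mass values.

```python
# Masses in MeV (familiar rounded values).
m_e, m_mu, m_pi = 0.511, 105.66, 139.57

# Leading-order ratio of partial widths for pi -> e nu over pi -> mu nu.
# The (m_e/m_mu)^2 factor is the helicity suppression discussed above;
# the second factor is the phase-space correction.
R = (m_e / m_mu) ** 2 * ((m_pi**2 - m_e**2) / (m_pi**2 - m_mu**2)) ** 2
print(f"Gamma(pi->e nu) / Gamma(pi->mu nu) = {R:.3e}")  # ~1.28e-4
```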
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L51_QCD_Hadron_Production.txt | PROFESSOR: Welcome back to 8.701. We'll start a new chapter now: QCD, or quantum chromodynamics. And in this first lecture of the chapter, we talk about the production of hadrons. This is really meant as an introductory lecture, but we will also already see some very interesting and useful concepts. We want to produce quark and anti-quark pairs, and we do this in electron-positron collisions. We have studied this first part of the diagram in detail. Specifically, we calculated the cross-section of muon and anti-muon production, and we have also seen that there are more diagrams when we have the same kind of particle in the final state, as in electron-positron scattering to electron-positron. So now, we've replaced the muons with quark and anti-quark pairs. As a first step, we want to remind ourselves of the available quarks in this discussion. We have the up quark, the down quark, charm, strange, top, and bottom. You see that the charges are given here, and the photon couples to the charge. So the charge is not 1, but it's either 2/3 or minus 1/3. We also have the masses given here in the table for those particles. They range from a few MeV to 173 GeV for the top quark. The bottom quark is about 5 GeV heavy. Remember, when we discussed renormalization, we saw that those masses are not fixed parameters; in our perturbation series, they run, like the coupling does. That might cause some difficulties later on. OK, so now we want to produce an up quark and an anti-up quark in this collision. So what happens? We have a collision, and then we produce those particles-- let's say an up quark and an anti-up quark here, in an e-plus e-minus collision. Those quarks only live for a very short time, or travel a very short distance in space-- about 10 to the minus 15 meters. And then they start to pull gluons, quarks, and anti-quark pairs out of the vacuum, and those then form into the actual hadrons after some time. Those two pictures here look very much the same. The difference is the way we treat the hadronization-- the actual process of forming hadrons. In this first picture, we are thinking about clustering nearby particles together, and that way forming hadrons. In this picture here, we connect them with so-called strings. Those are two different ways to model the production of hadrons. Remember, when we look at this process here, we are looking at lower-energy kinds of phenomena, and at lower energies the strength of the strong interaction, the coupling of QCD, is on the order of 1. It can be larger than 1. So perturbation theory is not possible. That's why we need specific models. This is all I want to say at this point. We'll come back to this discussion later on. What I actually want to discuss is what we can learn out of measuring cross-sections of hadron production-- for example, by comparing directly the cross-section of hadron production with the cross-section we just calculated for muon/anti-muon production. And experimental results are given here. You see, as a function of energy, here from 1 GeV to 7 GeV center of mass energy, and then the lower plot just continuing from about 10 to 60 GeV. What you see here is that there is a rich structure. You see those resonances here, and you also see that there seems to be some sort of increase in the value of this ratio.
So how can we now understand this? At leading order, we can just write this down: we calculated the cross-section for muon/anti-muon production, and we can write the very same cross-section at leading order for quark-antiquark production. What we find as a difference is the coupling itself. Here, we have to use the charge of the quarks and not the charge of the electron. So there is an additional factor here, minus 1/3 or 2/3, squared. And then there's the number of possible quark pairs which are available, and that depends on the number of colors. Remember, each quark appears with three different colors, so we have to account for this factor. All right, and then we build the ratio. In the ratio, everything else just cancels out-- great. So we just have the number of colors times the sum of the charges squared of the quarks available. What do I mean by "the quarks available"? As we go from lower energies to higher energies in this plot here, the energy becomes sufficient to produce heavier quark pairs, depending on their masses. So we find that this explains a step function. Let's look at a specific example. If you look at center of mass energies which are larger than 2 times the mass of the bottom quark, and lower than 2 times the mass of the top quark, we are in this specific regime here. You can see this is almost flat, and the number we get is almost 4, OK? What we get from this leading-order calculation is 3 times the sum of 4/9 for our up quark, 1/9 for down, 1/9 for strange, 4/9 for charm, and 1/9 for our bottom quark, OK? So we build the sum here over all quarks which are kinematically available, and as an answer, we get 11 over 3. This is in very good agreement-- at leading order, very good agreement-- with the experimental results. 11 over 3 is almost 4. Excellent. So this is a clear experimental indication that this color factor here is a real thing. There seem to be 3 colors of up quarks, 3 of down quarks, 3 of charm quarks, and so on. And we also see that this leading-order calculation is already very precise. The reason for this is that the production process here is a QED process, as we just discussed. So now, why do I actually have this as part of our QCD introduction? First, you learned about the color factor here, OK? And second, there are, indeed, corrections, and one of the corrections is the one where you actually produce a real gluon in the final state. Those corrections can be calculated. If you go to higher order, you get a correction from this radiated gluon. You also get corrections which look like this-- vertex corrections-- and you find that the correction to R is a factor of about 1 plus alpha-s, at a specific scale, over pi. Now, at a reasonable scale, the value of alpha-s is about 0.1, and pi is 3.14, so you get a few percent-- about a 3% correction to the R value. Why is this so important when it's only a percent or few-percent-level correction? What is really important is that this process can be used in order to demonstrate the existence of gluons, and gluons have indeed been discovered this way. The way this was done is by producing e-plus e-minus collisions and detecting three bunches of particles-- two from the quarks, and one from the gluon. And then, to identify that this gluon here is actually a gluon and not some other particle, one can look at angular distributions.
One can identify that this is a spin-1 particle, and so on. So there's a little bit more work needed beyond just showing that there are three bunches of particles, three jets, but identifying this kind of topology in e-plus e-minus collisions is what led to the discovery of the gluon. So that was my introduction. As a next step, I want to discuss how we can learn about this kind of structure, before we then dive into Feynman diagrams, Feynman calculations, and Feynman rules for QCD. |
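A small Python sketch of the leading-order R ratio discussed above, R = N_c times the sum of e_q squared over the quarks that are kinematically accessible at a given center of mass energy, with the optional (1 + alpha_s/pi) QCD correction. The quark mass thresholds are rounded and alpha_s is fixed at 0.1, as in the discussion; both are simplifications.

```python
# Quark charges and rough pair-production thresholds (~2 * m_q, rounded, in GeV).
QUARKS = [("u", 2/3, 0.01), ("d", -1/3, 0.01), ("s", -1/3, 0.2),
          ("c", 2/3, 2.6), ("b", -1/3, 9.0)]
N_COLORS = 3
ALPHA_S = 0.1  # fixed for simplicity; in reality it runs with the scale

def r_ratio(sqrt_s_gev, with_qcd_correction=False):
    """Leading-order R = N_c * sum of e_q^2 over kinematically available quarks."""
    r = N_COLORS * sum(charge**2 for _, charge, threshold in QUARKS
                       if sqrt_s_gev > threshold)
    if with_qcd_correction:
        r *= 1 + ALPHA_S / 3.14159  # ~3% gluon-radiation correction
    return r

for e in [1.5, 5.0, 20.0]:
    print(f"sqrt(s) = {e:5.1f} GeV: R = {r_ratio(e):.3f}  "
          f"(with QCD correction: {r_ratio(e, True):.3f})")
```

This reproduces the step function: R = 2 below the charm threshold, 10/3 above it, and 11/3 (almost 4) once the bottom quark opens up, matching the plateau discussed above.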
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L54_QCD_Deep_Inelastic_Scattering.txt | MARKUS KLUTE: Welcome back to 8.701. We continue our discussion now of electron and proton scattering, and we dive deep into the structure using deep inelastic scattering. Inelastic here means that we are destroying the structure of the proton in the scattering process. But we have a way to look at the remnant of the proton and also at the scattered electron, and then compare our theoretical expectation for the cross-sections with the findings in experiments. Let me just talk about this in more general terms. The energy of the probing electron, or of the photon in the scattering process, allows us to look at the proton with varying resolution. At very low energies, we basically see a point-like particle, and the scattering process looks very much like the scattering of an electron with a muon. If we increase the energy of the electron, we can see that there's an extended charge distribution in the proton. A further increase allows us to resolve the fact that the proton is made out of three quarks. And if you increase the energy even further, we see a lot of new particles appearing-- quarks and antiquarks and gluons-- which make up the structure of the proton. This picture here-- I like it very much; I drew it myself some years ago-- is what I would like you to remember. In deep inelastic scattering experiments, we basically use the photon radiated off the electron as a magnifying glass for the proton. We can look into the structure of the proton here, and we see the distribution of the charged particles in the proton-- only the electrically charged particles; there is no direct scattering between photons and gluons. So in measurements, what we can do is detect the scattered electron and look for the remnant of the proton, and then we do differential cross-section measurements, compare with our theory, and can infer information about the structure of the proton. To do this, we have special kinematic variables which turn out to be very useful. The most important one is probably this x here, which is called the Bjorken scaling x, or Bjorken x: x = Q squared over 2 p dot q, where Q squared is the squared momentum transfer of the photon, p is the momentum of the proton, and q is the momentum transfer, this q here. This x is basically the fraction of the proton momentum carried by the parton in the scattering process. There are a few other useful variables, but I don't want to go into any of the details yet. There are a number of very important scattering experiments. The first one I mentioned before is the SLAC-MIT experiment, which led to the discovery that the proton is made out of quarks, and to the Nobel Prize in Physics 1990 for Jerry Friedman, Henry Kendall, and Richard Taylor. What they did is they had a beam at SLAC of electrons of 5 to 20 GeV, and they scattered this beam off of hydrogen target protons. They used this spectrometer here in order to then make a differential measurement of the scattered electrons. So that's very cool. Even higher energies were available at HERA, the electron-proton collider, where the energies of the electrons were on the order of 30 GeV and those of the protons up to 920 GeV. What we find then in those collisions, the differential cross-section measurements, is shown here. And it's not an easy plot to read.
You see our structure functions here in the logarithmic plot. Remember, this is a log-10 plot here. And you see here q squared, the momentum transfer of the photon, so the energy used in the scattering process. When we try to read this, we can look at a fixed q squared, for example. At a fixed q squared, if you probe at a given fraction x of the proton's momentum carried by the parton, you see that the lower the fraction, the more partons you see. So at a fixed energy, you see many, many more partons the lower you go in the fraction. There seems to be an increase in the number of partons the lower the fraction is. If you then check for a fixed fraction, let's say 0.4-- it means that the parton carries 40% of the proton's momentum-- you see that it's almost flat as a function of q squared. So the number of partons you see at 40% momentum fraction is constant with q squared. However, if you look at smaller momentum fractions, you see that higher energies seem to show even more partons at this momentum fraction. The way to understand this is with two diagrams. The first one is this one here, where you see a quark radiating a gluon. What you see is that the deeper you look, the more you are able to resolve this part here. And so you see more quarks and gluons, which carry even smaller momentum fractions than the initial quark here. You also see diagrams like this, which is called gluon splitting, where a gluon splits into a quark/antiquark pair. And you start resolving those, and those also carry lower momentum. The evolution of this parton distribution function, or of the structure functions, can be calculated and is described by the so-called DGLAP equations. All you do here is calculate the contributions from the quark and gluon splitting. So you calculate the splitting functions, these higher-order corrections to a very simple quark model. And you find that you can actually very nicely describe those curves here. The yellow here is the QCD fit, which basically uses those splitting functions as input. So we have learned quite a bit about the proton already. If we now want to calculate a cross-section of a proton scattering with a proton, we are actually interested in the momentum distributions of the partons in the protons. We'll come back to how we use this later. But I want to introduce parton distribution functions, which do exactly that. They're defined as the probability to find a parton in the proton that carries a momentum fraction between x and x plus dx. You can write them using the structure functions from before. But they literally describe this probability. What you find inside the proton are the valence quarks, the down quark and the up quarks, the sea quarks and antiquarks, and the gluons. So you want to describe the momentum distributions of all of those particles. There are a number of sum rules. If you integrate x times the distributions from 0 to 1, 1 being the full momentum of the proton, and you integrate them all together, you have to find 1, because that is the momentum of the proton you start with. If you integrate the valence distributions of the down quarks and the up quarks, you find 1 and 2, respectively, meaning those are the numbers of valence quarks we have available.
If you integrate the differences of the distributions of strange and antistrange, and of charm and anticharm, you get 0, because there needs to be the same number of strange and antistrange, and of charm and anticharm. The proton carries no net strangeness or charm, so those sums need to be 0. Then we can look at those distributions. What's shown here is x times the Parton Distribution Functions, the PDFs, as a function of x. For our valence quarks, you see a distribution which is kind of what you expect-- it peaks at around 0.3, a third, and has a spread because there's kinematics involved, and also because of the interactions with the gluons. And then you see the sea quarks and antiquarks. And you see them increasing in number quite significantly as you go to small fractions of the momentum carried. It's exactly what we just discussed in the previous plot. An interesting way to look at these very same distribution functions is to plot them with the area proportional to the momentum fractions. And what you see there is that a very significant part of the momentum of the proton is carried by the gluons. So you see here again our valence quarks, our sea quarks, and the gluons themselves. So that's all I wanted to say on the structure of the proton. We'll see in a later lecture how we can use those PDFs, those Parton Distribution Functions, in order to calculate cross-sections in proton-proton scattering. |
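Here is a small Python sketch of the sum rules just described, using a toy parameterization of valence-quark PDFs of the form x^a (1-x)^b-- a common illustrative shape, not a fitted PDF set. It normalizes the up and down valence distributions to 2 and 1 quarks and then reports what fraction of the proton momentum they carry; in real fits the remainder is carried mostly by gluons and sea quarks.

```python
from scipy.integrate import quad

# Toy valence PDF shape f(x) = N * x^a * (1-x)^b -- illustrative only, not a real fit.
def make_valence_pdf(a, b, n_quarks):
    norm = quad(lambda x: x**a * (1 - x)**b, 0, 1)[0]
    return lambda x: n_quarks * x**a * (1 - x)**b / norm

u_v = make_valence_pdf(a=-0.5, b=3.0, n_quarks=2)  # two valence up quarks
d_v = make_valence_pdf(a=-0.5, b=4.0, n_quarks=1)  # one valence down quark

# Counting sum rules: integral of f(x) dx gives the number of valence quarks.
print("number of valence u:", quad(u_v, 0, 1)[0])
print("number of valence d:", quad(d_v, 0, 1)[0])

# Momentum sum: integral of x f(x) dx gives the momentum fraction carried.
p_u = quad(lambda x: x * u_v(x), 0, 1)[0]
p_d = quad(lambda x: x * d_v(x), 0, 1)[0]
print(f"momentum carried by valence quarks: {p_u + p_d:.2f}")
print(f"left for gluons and sea quarks:     {1 - (p_u + p_d):.2f}")
```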
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L95_Nuclear_Physics_Shell_Model.txt | MARKUS KLUTE: Welcome back to 8.701. In this video, we'll talk about the nuclear shell model. We've already seen an interesting empirical model to describe nuclear binding energies-- the liquid drop model. But it falls short in the description of other aspects of the nucleus. So let's see what we can find here. First of all, you probably remember shell models from atomic physics. Shell models are very successful in describing hydrogen, for example. The question is, can this also work for the nucleus? After all, the nucleus is a many-body system, compared to hydrogen, where you have an electron circling around a proton. There are no analytic solutions of the Schrodinger equation. There is no dominant center providing a long-range force, the way the proton is the dominant center in the atom. And we have short-range forces with many pairs of interacting nucleons. And I could continue the list of difficulties. On the other hand, the interactions kind of average out and result in a potential which depends only on the position, but not on time. And that leads us, then, to what we call a nuclear mean field. On average, a proton and a neutron inside the nucleus see a specific potential. We can parameterize that potential with a harmonic oscillator, and use that model, then, in order to describe our nucleus. This works, actually, surprisingly well. But before we go there, we'll look at experimental evidence for closed nuclear shells. Again, here is our plot of the binding energy. And you see that there are those areas here where there seem to be somewhat higher binding energies. It turns out those happen at so-called magic numbers. The magic numbers are 2, 8, 20, 28, 50, 82, and 126. So the question now is, how can we explain this? Where does this come from? The experimental evidence is numerous. We find that the number of stable isotopes or isotones is significantly higher for nuclei with proton-- or neutron, or both-- numbers equal to one of those magic numbers. The nuclear capture cross-sections, meaning the likelihood to capture a proton or a neutron, are high for nuclei where exactly one nucleon is missing from a magic number, but significantly lower for nuclei with the number of nucleons equal to a magic number. This is the concept of a closed shell: you either just add a nucleon to close the shell, or you have to pay a higher price. The energies of excited states for nuclei with a proton or neutron number equal to a magic number are significantly higher than for other nuclei. These are all experimental observations. And the excitation probabilities of the first excited states are low for nuclei with proton-- or neutron, or both-- numbers equal to the magic numbers. Quadrupole moments-- we haven't discussed those at length, but you can think about them as deformations of the nuclei-- almost vanish for nuclei with proton or neutron numbers equal to the magic numbers. So those are more spherical objects. Here's a plot which points out the doubly magic nuclei-- those are nuclei where both the proton number and the neutron number lie on a magic number. Calcium here has two of those, with 20 protons and 20 neutrons, or 20 protons and 28 neutrons. And there's the alpha particle, helium-4. Those are especially interesting objects of research.
There was some historic confusion in this, and it came from the fact that while the experimental data pointed to nuclear magic numbers of 2, 8, 20, 28, 50, 82, and 126, if you just think about a flat-bottom potential, just a flat potential, you find magic numbers which are 2, 8, 20, 40, 70, and 112. And those are mostly not in agreement. So it seemed like this shell model kind of worked, but not really: we found agreement here, but then disagreement in the higher part of the magic numbers. Something was missing. And what was missing was the spin-orbit part of the discussion. We alluded to this when we discussed the nuclear force. What you have to do, beyond the three-dimensional harmonic oscillator, is add the spin-orbit coupling to the Hamiltonian. When you do that, you change the orbitals such that the magic numbers agree with the experimental data. So you see here the potential for protons, which also has the Coulomb repulsion added, and the nuclear potential, and then you see that the spin-orbit coupling slightly changes the potential. All right. As a comparison here, the nuclear and atomic shell models, just as an example. We call them shells because we see that the energy gaps between individual shells are quite large, much larger than the gaps within a shell. This is for the atomic model, and for the nucleus you see something very similar-- the pattern is not as extended, but there are still large gaps in energy when you go from one shell to the next. |
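A minimal Python sketch of the counting argument above: filling the levels of the three-dimensional harmonic oscillator used in the lecture, where shell N holds (N+1)(N+2)/2 orbital states times 2 for spin, reproduces the "wrong" magic numbers 2, 8, 20, 40, 70, 112-- matching experiment only up to 20, which is why the spin-orbit term is needed.

```python
def oscillator_magic_numbers(n_shells):
    """Cumulative occupancy of 3D harmonic-oscillator shells.

    Shell N (N = 0, 1, 2, ...) holds (N+1)(N+2)/2 orbital states,
    times 2 for the two spin states of identical fermions.
    """
    magic, total = [], 0
    for N in range(n_shells):
        total += (N + 1) * (N + 2)  # includes the factor 2 for spin
        magic.append(total)
    return magic

print(oscillator_magic_numbers(6))   # [2, 8, 20, 40, 70, 112]
print([2, 8, 20, 28, 50, 82, 126])   # experimental magic numbers, for comparison
```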
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L46_QED_Examples.txt | MARKUS KLUTE: Welcome back to 8.701. In this lecture, we are going to start looking at an example of a QED process for which we can now, with all the tools we have in hand, calculate the matrix element, the transition amplitude. All right. In more general terms, we can look at a whole set of examples, and they are listed here-- second-order processes and one third-order process. We are going to discuss them in more detail as we go along. This is really just to give you a feeling for the different kinds of processes we're going to look at. The first one is elastic scattering. And muon-electron scattering, that's the one process we're going to look at in more detail. Why? Because this is the simplest case. For this process, there is only one leading-order diagram, which is exactly the one shown here. For other processes, where we have the same particles interacting, we find that we do have to consider multiple diagrams-- for example, this one here, where we have electron-electron scattering. So we have to calculate not just this leading diagram, which looks exactly like the one for the muon scattering, but we also have to include the diagram where we exchange the outgoing electron legs. And so on. Other processes include electron-positron scattering, which is called Bhabha scattering, Compton scattering, which we discussed the kinematics for already, but also inelastic processes like pair annihilation or pair production. There's a very interesting diagram here, the third-order diagram, which is responsible for the anomalous magnetic moment. We'll talk more about that when we talk about higher-order interactions. So let's have a look at this electron-muon scattering process. Only one diagram contributes at second order: you have an electron and a muon scattering via the exchange of a photon. This is, after all, a QED diagram. So now, how do we calculate the matrix element? We simply follow the Feynman rules as we discussed them before. If you want to do this now, you draw your Feynman diagram. It's always very good and useful to draw the Feynman diagram first and label it accordingly. That's super useful if you want to systematically evaluate this process. Then you start going backwards from an outgoing leg back to the initial leg. You see this part here: you have u3, the spinor of the third particle, the vertex factor, and the first particle. Then you have a propagator here for your photon, given by minus i g mu nu divided by q squared. And then you analyze the second part here, where you find the spinors of the other two particles and the vertex factor. For each vertex, you have to make sure that energy and momentum are conserved, via those delta functions. And then the last part: you integrate over the internal momentum. All right. That's almost the end. The next step in your list of rules is to carry out the integration-- integrate over q. That drops one delta function, but you are left with one delta function which you are also supposed to drop, which then gives you your matrix element. Now we're already done here. If you want to further evaluate this diagram, you actually have to be more explicit about the spinors involved. What needs to be done now is to have a discussion on how to handle the spin of the particles, meaning being explicit about the spinors.
And in order to do that, we'll discuss how we have to treat spin, either in an experiment where the spin of the initial particles is known, or in an experiment where we have to average over all possible spin states. So that's part of the next lecture's discussion. |
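As a hedged illustration of where these rules lead--this sketch is not from the lecture, and it quotes the standard textbook leading-order result in the limit where all masses are neglected--the spin-averaged squared matrix element for electron-muon scattering is 2 e^4 (s^2 + u^2) / t^2, giving the differential cross-section sketched below:

```python
# Hedged sketch (not from the lecture): spin-averaged |M|^2 for
# e- mu- -> e- mu- at leading order, with all masses neglected:
# <|M|^2> = 2 e^4 (s^2 + u^2) / t^2, where e^2 = 4*pi*alpha.
import math

ALPHA = 1 / 137.036  # fine-structure constant

def msq_emu(s, theta):
    """Spin-averaged |M|^2 at CM energy squared s and scattering angle theta."""
    t = -s / 2 * (1 - math.cos(theta))  # massless Mandelstam t
    u = -s / 2 * (1 + math.cos(theta))  # massless Mandelstam u
    e4 = (4 * math.pi * ALPHA) ** 2
    return 2 * e4 * (s**2 + u**2) / t**2

def dsigma_domega(s, theta):
    """dsigma/dOmega = <|M|^2> / (64 pi^2 s) in natural units (GeV^-2)."""
    return msq_emu(s, theta) / (64 * math.pi**2 * s)

# Example: 90-degree scattering at sqrt(s) = 10 GeV
print(dsigma_domega(100.0, math.pi / 2))
```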
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L06_Introduction_to_Nuclear_and_Particle_Physics_Particles.txt | MARKUS KLUTE: Welcome back to 8.701. With this lecture, I'd like to introduce the major players of this class: the fundamental particles, but also some of the compound particles which play a role in the discussions we'll have over the next weeks. For centuries, people believed that atoms are the most fundamental constituents of matter. The name atom comes from the Greek atomos, which means not divisible. But as you know today, electrons and nuclei build an atom. But even those nuclei are not fundamental particles. As you see nicely here in this picture, the nucleus can be built out of many neutrons and protons, and even those protons and neutrons are not fundamental particles. A proton, for example, as depicted here, has three constituents, two up quarks and one down quark. A neutron, then, is built out of one up quark and two down quarks. It's kind of important to understand and appreciate the size of those particles, specifically the difference in size. Compare here a typical atom, with a size of 10 to the minus 10 meters, to a nucleus, which is a few 10 to the minus 15 meters. When we talk about a proton, we typically like to use units of femtometers, which is 10 to the minus 15 meters. This tremendous size difference reflects the finding of the famous gold foil experiment, which found that an atom basically is made out of nothingness--empty space--and a very dense charged core, the nucleus. So you see this very much in this picture and by comparing those orders of magnitude. Not shown in this picture here is a particle which doesn't really like to interact with anybody else, or for which the forces it interacts through are so weak that it cannot easily be found, and that is the neutrino. A neutrino is not that different from an electron or from a quark. It's just that the only interaction it participates in is the weak force, as we understand today. Just to be clear, when I talk about a fundamental particle, I talk about a particle which has no size--it's infinitely small. It has no substructure, meaning it cannot be broken up into constituents and can also not be excited. Having said that, this is our current understanding of nature and of those particles. Experimentally, we can only probe those particles to a certain scale or size, and we'll talk later about how precisely we actually do know that a quark is fundamental or an electron is fundamental. In this discussion here and in most of the lecture, I talk about the standard model of particle physics. It is a fact that our measurements and our experimental findings are in fantastic agreement with this very predictive theory. The only experimental deviation from this is the fact that we measure the mass of neutrinos to be non-zero. As a consequence, you could say the standard model is broken, or we found physics beyond the standard model, but it is actually rather straightforward to extend the standard model to accommodate neutrino masses. So we can just forget about this small fact and assume that the standard model describes nature as we know it. 
In the last week of this class, we'll talk about the motivation for why we think that the standard model, in fact, is not complete, and one of the big drivers here is the fact that we cannot describe all observations in nature--specifically the observation of dark matter--with the standard model of particle physics. But that's for a later date. Looking into some more detail, the standard model has sets of particles: some particles which carry forces, and some which are matter particles. The ones that carry forces are all spin-one particles; they are bosons. In the standard model, we describe three interactions. There is the electromagnetic interaction, which is known from light, electromagnetic phenomena, chemistry--the behavior of atoms and molecules is determined by electromagnetic interactions. And then there's the strong interaction. The name already tells you that it's strong. It's very strong. The force carrier here is the gluon. Of the gluons, there are eight, differentiated by so-called color, which is an interesting effect. And then there is the weak interaction, carried by the W boson and the Z boson. They are different in their own right because they carry mass. They are massive particles. And they're actually quite heavy, about 80 to 100 times as heavy as a proton. The weak interaction is responsible for neutron decays, and also responsible for the burning of the sun. And in our nuclear physics part, we'll talk in detail about what that all means. Gravitational effects are not considered in the standard model. They are very, very weak compared to the strength of the other forces we'll discuss here. But it's technically very difficult to actually accommodate gravity as part of a quantum field theory. And therefore, we will simply ignore this fact. And this is yet another reason why you can consider the standard model to be incomplete as a model or theory describing nature. The matter particles themselves are all fermions. They have spin half. And they come in three different generations. The only difference between one generation and the next is the fact that those particles have different mass. In other words, their coupling to the Higgs field is different. And that's the only difference between those particles. There are consequences of this, for example heavier particles decaying into lighter particles. We differentiate between the quarks and the leptons. Quarks partake in the strong interaction, while leptons are neutral under the strong interaction. And then we have seen neutrinos already, and electron-type particles. Electron-type particles, the charged leptons, have electric charge. So they couple to photons, while neutrinos do not. I'll show you here again one of those striking differences. An electron is 9 times 10 to the minus 31 kilograms heavy. We'll talk about units in a different class. But that's 511 keV. The muon is about 200 times heavier, and even heavier is the tau lepton. And as I said before, that allows the tau lepton and the muon to decay into lighter particles. We'll talk about this and how we can use those decays in order to learn about the standard model. And then there is the Higgs boson. And again, the Higgs boson, discovered in 2012, plays a very special role in the standard model. We'll talk about the Lagrangian, which describes the standard model--which is, in a sense, the theory itself. And later on, in this theory we introduce a potential, which is shown here--it's this Mexican hat, if you want, potential. 
And what you see here in this picture is that the lowest energy state of this potential breaks a symmetry. It's away from the 0 point. And that symmetry breaking then gives mass to the W and the Z boson, which I was describing before. And the coupling of matter particles to the Higgs field gives them mass as well. All this at a later date. I thought here I'd also show you how CERN actually depicts the Higgs boson. July 4th is a special day in the US, but it's also a special day at CERN, because the Higgs discovery was announced on this day. And at CERN, typically, the menu is enriched on that day by the Higgs boson itself in the form of pizza, which takes the shape of event displays--displays of proton-proton collisions producing the Higgs boson and its further decays. We'll look at some of those event displays later as well. Well, that completes the elementary--the fundamental--particles of the standard model, particles which describe almost everything around us. So we have seen the charged and neutral leptons. We have seen the quarks. We have seen the force carriers--the W and Z bosons, the photon, and the gluon--and the Higgs boson, which is the very special particle holding this all together. An interesting point here is, again, looking a little bit at history, when those particles were discovered and also when those particles were explained--and I don't want to read this all to you. You see that the earliest discovery was the discovery by J.J. Thomson of the electron, and the latest, the completion of the particles in the standard model--the Higgs boson--in 2012. Interesting for the Higgs boson: the time between the theoretical prediction by Peter Higgs and friends and the experimental discovery was about 50 years. But then there are also composite particles. The things around you are all composite particles. Here, we can differentiate between mesons and baryons. Mesons are particles which are made out of quark-antiquark pairs forming bound states. And they are bosonic, because you add two spin-1/2 particles together. One example is the pion, which is made out of up quarks and down quarks; there are charged and neutral pions. And the zoo of particles increases quite quickly if you then consider that it's not just up and down quarks making those particles. You could add strangeness to that, meaning a strange quark. And so you see here in this picture the mesons: the pions and etas, neutral and charged particles with different charges, and then the kaons, which are particles which have one strange quark together with an up or down quark. And then there are baryons. They are made out of three quarks. We have already seen the protons and neutrons. But also here, you can see different configurations. We'll introduce the concept of isospin. You can see here the proton and the neutron, and then strangeness being added with one or two units of strangeness. And then there is this isospin component as well. This situation becomes complex very, very quickly. But we'll look at this in more detail. And then again, putting those bound states together--bound states of protons and neutrons through the strong force--gives us a rich table of nuclei, the isotope table. Here, you can describe nuclei by the number of protons, which is typically called Z, and the number of neutrons, which is typically called N. The sum, N plus Z, is the atomic mass number. Now, each proton and neutron is about 1 GeV heavy. And then the atomic mass is roughly the sum of them. 
So you already know approximately how heavy your isotope might be. We'll talk later about the fact that those masses are not quite the sum of the constituent masses, because there is some binding energy involved. And then isotopes can be stable or unstable. They can decay in various processes. You can combine them. It's very interesting to understand how they're actually being created in our solar system, or in the universe in general. So with this, I would like to conclude this part of the introduction. We have seen the major players of this course: the fundamental particles, but also the compound particles--mesons, baryons, and nuclei. And there are a few more points in the introduction before we then dive into a little bit more of the theoretical discussion. |
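As an added aside (not from the lecture): the point that an isotope's mass is close to, but not exactly, the sum of its nucleon masses can be made quantitative with the semi-empirical mass formula. The coefficient values below are typical fitted numbers and only approximate:

```python
# Hedged sketch (not from the lecture): the semi-empirical mass formula
# (Bethe-Weizsaecker) for the binding energy B(Z, N). Coefficients are
# approximate fitted values in MeV.

def binding_energy_mev(Z, N):
    """Approximate binding energy for a nucleus with Z protons and N neutrons."""
    A = Z + N
    a_v, a_s, a_c, a_a, a_p = 15.75, 17.8, 0.711, 23.7, 11.18
    B = (a_v * A                               # volume term
         - a_s * A ** (2 / 3)                  # surface term
         - a_c * Z * (Z - 1) / A ** (1 / 3)    # Coulomb repulsion
         - a_a * (A - 2 * Z) ** 2 / A)         # asymmetry term
    if Z % 2 == 0 and N % 2 == 0:              # even-even: extra binding
        B += a_p / A ** 0.5
    elif Z % 2 == 1 and N % 2 == 1:            # odd-odd: less binding
        B -= a_p / A ** 0.5
    return B

# Iron-56: roughly 8.8 MeV of binding energy per nucleon
print(binding_energy_mev(26, 30) / 56)
```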
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L84_Neutrino_Physics_Experimental_Study.txt | MARKUS KLUTE: Welcome back to 8.701. So in this video, we want to look at experimental studies of neutrino oscillations. The first question is, where do we get the neutrinos? How do we produce the neutrinos? The answer is, there are numerous sources for neutrinos. You might be lucky and find them in supernova explosions. Or if we're really trying hard, we can observe them as relics of the Big Bang. There are a lot of neutrinos around us as a relic of the Big Bang. The problem is that they have very low energies and are difficult to observe. Easier is the use of neutrinos generated in cosmic ray showers. There are a lot of neutrinos coming from the sun. Beams, beamlines--accelerators can be used to smash particles into a material and then, in the decay products, also produce neutrinos. And also, reactors. Nuclear reactors can be used as neutrino sources. By the way, neutrinos can also be used in order to monitor nuclear activity around the globe. OK. Studies of neutrino oscillations. So we can make this table here and ask ourselves about the experimental parameters: the length, the energy, and the sensitivity to a specific mass range. So for the solar neutrinos, the distance between the Earth and the sun is pretty much fixed, to first order. The energy of the neutrinos coming out is in the order of 1 MeV. We are going to look at the table. And so the mass range you can probe is 10 to the minus 10 in delta m squared, in eV squared. Atmospheric neutrinos are produced in the upper atmosphere, so the baselines are 10 to the 4 to 10 to the 7 meters. The energies have a large range, let's say 10 to the 2 to 10 to the 5 MeV. And then reactors, typically the MeV range--that's the typical nuclear scale for the neutrino energies. And the baseline is given by how much space you have around, or away from, a nuclear reactor. Similarly for accelerators. You build an accelerator or use an existing accelerator, and then you build your detectors, maybe one close to it and maybe another one far away. And that's limited by the size of our planet, or wherever you want to build your detectors. The energy range there depends on the energy range of the accelerator, and that is in the order of 10 to the 3 to 10 to the 4 MeV. So you see that this is actually a rather straightforward accounting. Also, it's interesting to see--and we'll see this next--what flavor of neutrinos, and whether we can study neutrinos or antineutrinos with a given experiment, is important. Let's go through this. There's been a little bit of history in how this all occurred. So the first question is, what happens to the solar neutrinos? Solar neutrinos are basically produced in the core of the sun, together with light. It turns out that the light of the sun takes about 10,000 years to come out of the sun, while the neutrinos come out immediately. So when the first experiments tried to observe solar neutrinos, they had to theoretically estimate how many neutrinos to expect, and they saw fewer. And so one explanation could have been that maybe something happened at the core of the sun and we just haven't seen it yet, because the light which comes out of the sun has a delay of up to 10,000 years. That didn't turn out to be the case. So here is the spectrum of the neutrino energies and the specific sources of neutrinos from the sun. 
In our nuclear physics discussion, we'll get to the point where we understand how the sun produces energy, and then some of this becomes more clear. The story to take away at this point is that there are several processes in the sun producing neutrinos. And they all come with their characteristic energy distribution. But the bottom line is you find MeV-scale neutrinos from the sun. There's a soup of electron neutrinos. They start interacting with the sun, and there's a little bit of a flavor evolution when they go through the material of the sun. But what you really want to do is look for disappearance in detectors which are sensitive to electron neutrinos. And that has been done in a number of experiments. Most famous may be the Davis experiment, which had a big tank of chlorine. From the interaction, you were looking for argon in your detector, and you just went in there every now and then and saw how much argon was actually produced. And it turned out that those experiments, all of them, found a reduced number of neutrinos, reduced with respect to the theoretical expectation. So far so good. There was no knowledge of neutrino oscillations or mixing at this time. So that needed to be explained. And one way to explain it is not just to use the charged-current interaction, which allows you to probe the flavor of the neutrino, but also the neutral-current scattering, which allows you to measure the total number of neutrinos. And if you do this--this was done by the SNO experiment--you find that the total number of neutrinos is in good agreement with the theoretical expectation. Hence, those neutrinos are not really lost; they just morph from one flavor into the next. So this was the first evidence for solar neutrinos to be oscillating. This first experiment was Homestake. By now, there is a large number of solar neutrino experiments, and you see the long history of neutrino studies. Different materials are being used, different energy thresholds are being tested, different scales of the experiments--and experiments become more sensitive the larger they are. That is something you can see from this table. The next source of neutrinos is the ones which are produced in the atmosphere. They are produced in decays of pions and kaons created when cosmic rays interact with the Earth's atmosphere. And so you find, for example, a pi plus decaying into an antimuon and a muon neutrino. And then the antimuon itself decays into a positron, an electron neutrino, and a muon antineutrino. So if you, for example, build the ratio of muon-type over electron-type neutrinos, you find it should be around 2: you have two muon-type neutrinos here and one electron-type neutrino. And also, this wasn't really what was observed. And you can see here, as a function of the cosine of the zenith angle--looking upwards towards the atmosphere or downwards--you find that there is an effect of this kind of oscillation. The actual measurement depends on the energy range. And you can see that the muon neutrinos, the muon-like neutrinos, disappear. You see here in this very clear plot the prediction without oscillation compared to the experimental results, so you see the muon neutrinos actually disappear. Moving on, accelerators can be used. And the big accelerators on Earth are at CERN or at Fermilab. 
The beamline at Fermilab--the Fermi National Accelerator Laboratory, FNAL--is called NuMI, Neutrinos at the Main Injector. Or CERN, or in Japan. Those are the big sources of accelerator-driven neutrinos. And with those, there are big detectors, typically a detector very close to the accelerator and one further away. The close one probes the total flux of the neutrinos at the experiment, and then the one which is far away probes the effect of the neutrino oscillation, in order to study appearance or disappearance. And again here, you see this is a long program. But it basically took off quite a bit in the 2000s and after. So a lot of neutrino physics happened in those years. A lot of information about the neutrino was gathered in those years. And again here--this is from the T2K experiment--you see the comparison between unoscillated and oscillated predictions, using some additional constraints on the expected total flux of the neutrinos, and that compared to the data. And you see very clearly that the neutrinos oscillate, that there is evidence of oscillation. All right. The last source is reactor neutrinos. We'll talk about nuclear physics starting from next week. Here neutrinos are produced in nuclear fission of heavy isotopes, mainly uranium and plutonium. The flux can be calculated in various ways, for example by knowing the nuclear processes and the thermal power produced in the reactor, or by just looking at how much nuclear fuel is being used by the reactor itself. What's being studied here is the anti-electron-neutrino disappearance. And what you do here is use the inverse beta decay, where you have a collision or scattering of an anti-electron neutrino with a proton, creating a positron and a neutron. And again, there are a number of experiments. Basically, whenever you have a large neutrino experiment, it can probe surrounding nuclear reactors. There are many of them in France and Japan, also in China. And they're being used in those experiments. Again, you see that this topic became really hot in the 2000s. And again, a lot has been learned. So this part here shows you, as a function of the length over the energy--so kilometers over MeV--the oscillation, the survival probability, meaning that you can actually see directly the oscillation of the neutrinos. |
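A minimal sketch of the quantity plotted in such figures (added here, not part of the lecture): the standard two-flavor survival probability, with the mixing parameter values as illustrative inputs only:

```python
# Hedged sketch (not from the lecture): two-flavor survival probability
# P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
# with dm2 in eV^2, L in km, and E in GeV.
import math

def survival_probability(L_km, E_GeV, dm2_eV2, sin2_2theta):
    """Probability that a neutrino of energy E keeps its original flavor."""
    phase = 1.27 * dm2_eV2 * L_km / E_GeV
    return 1.0 - sin2_2theta * math.sin(phase) ** 2

# Reactor-style example: ~4 MeV antineutrino, ~50 km baseline
print(survival_probability(L_km=50.0, E_GeV=0.004,
                           dm2_eV2=7.5e-5, sin2_2theta=0.85))
```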
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L05_Introduction_Early_History_and_People_in_Nuclear_and_Particle_Physics.txt | MARKUS KLUTE: Welcome again to 8.701. So this is the fifth section of our introduction. I'd like to talk about the early history and the people involved in nuclear and particle physics. I cover the period from 1820 to the beginning of the Second World War. Other elements of the later history of the development of the standard model--parity violation, CP violation--will be covered when we talk about the actual physics involved. But I'd like to give you some more background. Especially since we start the discussion with particle physics, it's good to understand what the starting ground was, whose shoulders people stood on at the time. It's important to realize here, at this point, that I'm not a historian. I like to read about history. I just finished an interesting book on Einstein. I like to have a good understanding of how the people of the time, the time itself, and the physics discoveries interacted. It helps me in understanding the process of doing science. When you look at history, you find a lot of places where progress was made by curiosity and by doing things which are not the common way to proceed. One learns this by looking at history. And I might give you a number of examples here as well. So diving in: one of the questions at the time, going back again almost 200 years, is how old is the Earth? And about 200 years ago, people started to argue whether or not the 10,000 years which was long thought to be the age of the Earth is actually correct. Specifically, geologists and biologists argued that this cannot be true. They observed how slowly geological and biological processes such as erosion and evolution occur. And if you just try to, by observation, put all of those ducks in a row, if you want, you find that the Earth must be much, much older than those 10,000 years. On the contrary, some physicists argued that the Earth cannot be as old as several hundreds of millions of years, because it would by now be a very cold and dark place. One of the opponents of evolution was Lord Kelvin, or William Thomson. And he argued with classical thermodynamics calculations that the Earth cannot be as old as the 300 million years that Darwin writes of in the initial printing of The Origin of Species. Hermann Helmholtz, a few years later, tried to use energy conservation principles to calculate how much heat the sun would radiate if the energy comes from slow contraction. And by converting gravitational potential energy to heat, he calculated that the age cannot be more than 18 million years. So putting those together, you find on the one side the physicists--theology might be a different dimension to this discussion--and then the geologists and biologists. A complicated question. I mean, really, there was something to be learned. Something was not quite understood. And so we come back to this question. Next slide. But then progress was made in the understanding of physics. And here to be named are Henri Becquerel, for example, for the discovery of radiation from uranium, and Ernest Rutherford for the discovery, by studying this radiation, that there must be at least two different types of radiation. And he called them, simply following the Greek alphabet, alpha and beta rays. In the same year, J. J. Thomson discovered a charged particle, the electron. 
Becquerel's story is quite interesting, as he was trying to understand the radiation Roentgen studied. And he was interested in figuring out what fluorescent materials can do. And again, this is one of the examples where he, by accident, discovered that it is actually not the case that you expose a material to sunlight, you wait a little while, and it still radiates--that is, delayed fluorescence is not the full story for some fluorescent materials. He discovered this by accident, by putting the mineral in a drawer together with a photo plate, and found that even though the mineral had hardly been exposed to the sun, the plate was basically fogged from being in the same drawer as the mineral. So this was a rather accidental discovery. Marie and Pierre Curie proposed the new term, radioactivity, for materials which spontaneously emit radiation. And they discovered additional materials beyond the uranium which was discovered by Becquerel. So they discovered thorium, for example, and later also the elements polonium and radium. And they found that those elements emit a lot of radiation. Marie Curie was able to measure the energy being radiated, and found that a gram of radium can emit up to 140 calories per hour. So you find that a gram of radium is able to provide, basically, the energy you need in order to survive. Moving a little forward: uranium specifically, but also other radioactive materials, were studied. And Paul Villard discovered that there must be a third component of radiation which behaves differently from the other two. And these were called gamma rays, again simply following the alphabet and moving along to the third letter. Rutherford then connects these findings first to the question of the age of the Earth. He suggests that it's those radioactive elements, which sit in ores in the interior of the Earth, that provide an additional source of heat, sufficient, given the conductivity of the Earth, to keep the Earth geologically active. And he comes to the conclusion that the Earth might as well be a few billion years old, as we now know it is. So putting this in context: at the very same time in Bern, Switzerland, a patent clerk named Albert Einstein has a fantastic year. In one year, he comes up with a sequence of theoretical discoveries. One is special relativity. And he uses the findings of special relativity to derive that there's an equivalence between energy and mass. And this equivalence, as we will see later when we discuss nuclear physics specifically, is very important to understand nuclear decay, nuclear fission, and nuclear fusion, and to figure out why a bound state can be lighter than the sum of the individual components making it up. Rutherford was a pioneer of collider-style experiments in the sense that he used alpha particles a lot to bombard all kinds of materials. What he found first is that alpha particles, when stopped, turn into helium. So the alpha particle itself grabs on to electrons from the material it collides with, and that turns it into helium. His students, Marsden and Geiger, then performed the very famous gold foil experiment. You have all probably heard about this: taking an alpha particle source and shining it on a foil of gold. 
And then you look at the angular distribution of the particles which go through, or which are backscattered from this foil. Rutherford then takes those measurements and turns them into a solar-system-like model of the atom, which is essentially made out of empty space and a very small, dense nucleus. Rutherford then continues with these experiments and is able to produce, by bombarding nitrogen with alpha particles, protons and oxygen. And this is, in fact, the first human-engineered nuclear reaction. So now we are in the year 1919, just after the First World War ended. On the theoretical side, this is the time that quantum mechanics is developed. And Dirac then combines relativity with quantum mechanics, which leads to the so-called Dirac equation, which we're going to look at very shortly in this class as well. This equation is quite interesting because it predicts the existence of negative energy states. That just comes out of the equations. And then you ask yourself, what's happening here? You can interpret these states as electrons which travel backwards in time, or you interpret them as electrons with negative energies. And this then leads to the prediction of antimatter. Pauli and Fermi were puzzled by a problem of energy conservation in the second case, beta decay. This was something rather weird and a big challenge to the physics of the time. And they solved this challenge by proposing a new particle which is rather light and doesn't interact with the detectors they had available at the time. So it just escapes undetected. They call this particle the neutrino. A year later, neutrons are directly detected in experiments by Chadwick, again using beryllium and alpha particles. And then the predicted anti-electrons, the positrons, were discovered by Anderson in tracks on photographic plates which looked like electrons but curve in the wrong direction. So either they have the opposite charge or they travel backwards in time. They didn't have quite the time resolution to distinguish the two. All right, also on the theoretical side, it needed to be understood how neutrons and protons actually bind together in nuclei. And so Hideki Yukawa proposes the existence of a strong force which is really, really strong and binds those nucleons together to a degree that you cannot easily break them apart. And then Bethe calculates how nuclear fusion, rather than the fission process, can be used in order to power the sun. For this, he proposes a three-step process, the so-called proton-proton chain, which I will not discuss here but we will certainly discuss later in this class. And then there are more developments in the area of nuclear physics. And this progress is made by, again, using all kinds of materials and bombarding them with each other. So for example, by colliding neutrons with uranium, one discovers the process of nuclear fission. This was done by Lise Meitner and Otto Hahn in the late 1930s. From there on, there are further developments going on, in the sense that many physicists at the time in Europe are rather concerned by the rise of the Nazi Party in Germany. In 1939, around the start of the Second World War, Albert Einstein wrote a letter to Franklin Roosevelt pointing out that there is a real threat that the Nazis are going to develop a bomb based on nuclear processes. And this then led to the Manhattan Project in the US and the development of the first nuclear bombs, or atom bombs. 
And in August 1945, the first two bombs were dropped on Japan, which then led to the surrender of the Japanese empire and the end of the Second World War. With that, I stop the discussion of those early developments. I hope you got a first glimpse and can use this as a starting point to read further. Those characters--Lise Meitner, for example; I'm looking at her picture right now--are very, very interesting. It is interesting to see how those people were connected, how they communicated, and in which environment they had to work. Lise Meitner, for example, was Jewish, and she had to flee from the Nazis in the '30s, while making these kinds of discoveries. Also interesting is maybe the historical introduction to elementary particles. I have this here in David Griffiths' book. It starts with this kind of classical era and then goes beyond the Second World War and introduces the findings in particle physics beyond what I explained to this point. So I hope you enjoyed this. This is basically the last of these introductory lectures which doesn't come with a set of problems, with a set of things you should be interacting with. The next one will already do that. And we will use this in the Thursday recitation of the first week to have a discussion. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L85_Neutrino_Physics_Results_of_Neutrino_Oscillation_Experiments.txt | MARKUS KLUTE: Welcome back to 8.701. So in this lecture and also the next one, we'll look at some of the experimental findings on neutrinos. Given the sheer number of experiments and the long history over which we have been trying to understand neutrinos and their behavior, this can be rather confusing. So I'm trying to condense this a little bit and just give you the highlights, the basic pieces of information. This slide here shows a summary of what we know from neutrino oscillation. If we look at atmospheric neutrinos, we find that muon neutrinos and muon antineutrinos disappear, and they're most likely converting into tau neutrinos and tau antineutrinos. If we look at accelerator neutrinos--here we are using muon neutrinos and muon antineutrinos--we can show that they disappear over distances of 200 to 800 kilometers. From accelerators, we also know that they appear as electron or anti-electron neutrinos over those same distances. From the solar neutrinos, we know that electron neutrinos convert into muon neutrinos and/or tau neutrinos. There is more detail to this story than I'm giving you here, where we would have to discuss the matter effects on neutrinos. That is for a different lecture; it goes beyond the scope of what I want to discuss here. From reactor neutrinos, we also know that anti-electron neutrinos disappear as well. So the name of the game now is to take all of those pieces of information and extract information about the neutrinos' properties. And in order to do that, one has to make assumptions about the number of available neutrino generations and, in some parts of the interpretation, also about the nature of the neutrinos. As you can simply figure out from the exercise we had before, it matters for the neutrino oscillation probabilities whether one assumes two or three or four neutrinos in the mix. But if you just focus here on three neutrinos, you still have the problem that we have degeneracies in the discussion. And they can be boiled down to two major scenarios. One is where the spectrum of the neutrino masses follows a normal ordering, meaning that the mass of the first is smaller than the mass of the second, which is smaller than the mass of the third; the other is where the spectrum is inverted, meaning that the mass of the third is smaller than the masses of the first and the second. Data suggests that the mass splittings between those states are such that delta m12 squared is much smaller than delta m31 squared, which is approximately the same size as delta m32 squared. So if you look now at the numbers for the normal-hierarchy spectrum, we find that the mass of the first is much, much smaller than the mass of the second, which is a little bit smaller than the mass of the third. Numerically, we find the mass of the second is in the order of 8 times 10 to the minus 3 electron-volts, and the mass of the third in the order of 0.05 electron-volts--really, really small masses. For the inverted spectrum, the story is slightly different. Here we find that m1 is about 0.05 electron-volts, which is similar to the square root of the mass splitting between 3 and 2, which is also 0.05 electron-volts. 
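To make those numbers concrete, here is a small added sketch (not from the lecture) that derives them from the quoted splittings, under the normal-ordering assumption with a massless lightest state; the splitting values are approximate global-fit numbers:

```python
# Hedged sketch (not from the lecture): absolute masses from the measured
# splittings, assuming normal ordering and m1 ~ 0. Splittings approximate.
import math

dm2_21 = 7.5e-5   # eV^2, "solar" splitting
dm2_31 = 2.5e-3   # eV^2, "atmospheric" splitting

m1 = 0.0                          # assume the lightest state is ~massless
m2 = math.sqrt(m1**2 + dm2_21)    # ~8.7e-3 eV
m3 = math.sqrt(m1**2 + dm2_31)    # ~0.05 eV
print(m2, m3)
```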
The information from the neutrino oscillation experiments, and how it maps onto the individual parameters of the neutrino mixing matrix--the analog of the CKM matrix--and onto the mass information, is summarized here. And I don't want to read the entire table. I'll just leave this here for you. In order to understand it, one has to go back to the first slide and understand what kind of information we extract from the various neutrino experiments--for example, the solar neutrino experiments--and then think about: is this sensitive to oscillations between the first and the second generation, or the first and the third generation? That's the kind of mapping you have to do in order to understand this table fully. There are some experiments whose information dominates the determination of a certain parameter. In others, combinations of results come in. The other reason why I put this table here in this lecture is to illustrate how diverse the landscape of experiments is, and why that's needed. In order to get a full picture of neutrinos and their properties, one has to identify the individual properties in the experiments and then put the picture back together in a global fit, or in a general analysis of the data. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L31_Feynman_Calculus_Introduction.txt | [SQUEAKING] [RUSTLING] [CLICKING] MARKUS KLUTE: Welcome back to 8.701. So we are going to start a new chapter in this class, Feynman calculus. The purpose of this very first introduction is to just set the stage. And we do this by recapping what we've discussed so far in the first three weeks of this course. So first we introduced the players in the field--the elementary particles, the matter particles, and the force carriers. We have seen that there are three generations of fermions, that there are leptons and quarks. We have seen how they, in principle, interact in different ways with those force carriers. We have seen that there are three kinds of charges, or three kinds of interactions--the electromagnetic interaction, the weak interaction, and the strong interaction. Then we moved on and had a quantitative discussion of relativistic kinematics. There's quite a set of useful problems we can look at. For example, we might wonder: how much energy does my proton beam need on a fixed target if we want to produce antiprotons? Those problems we were able to discuss, and quantitatively figure out what the answers to those questions are. Then we looked at Feynman diagrams. We are able to read them and understand what they principally mean. And then we had a discussion last week on symmetries. We introduced parity, we introduced charge conjugation, we looked at CP and CP violation. We had a qualitative discussion of decays and scattering. We defined what a geometrical cross-section is. And now we are really starting the quantitative discussion of particle dynamics. We do this in the next video by introducing Fermi's golden rule. And then we study a toy theory, which is simplified such that the algebra involved is not going to be too much of a hassle, so we can focus on understanding and following the Feynman rules in order to calculate decay rates and scattering cross-sections. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L09_Introduction_to_Nuclear_and_Particle_Physics_Spin.txt | MARKUS KLUTE: Hello, and welcome back to 8.701. In this class, we'll talk about spin. If you remember the discussion on relativistic kinematics we had last week, you noticed that I discussed the decay of a pion, say a neutral pion, into a positron and an electron. And we were able to calculate the velocity of those two particles, the electron and the positron, quite easily by knowing the mass of the pion and the masses of the electron and the positron: we go into the rest frame of the pion, and we can calculate the velocity. I also told you that this decay is highly suppressed because of the spins of the particles involved. The pion has spin zero, and the electron and the positron have spin 1/2, but it is not easily possible to align the electron and the positron such that the spins add to 0. Therefore, this decay is normally not possible. Now, let's dive a little bit into this. In quantum mechanics, the spin of a particle is a vector which is quantized, in terms of both its length and its components. If you calculate the length of the spin vector S, you find that it's the square root of s times s plus 1, in units of h-bar. The components along any axis--in this case here, the z-axis--have eigenvalues, and they are listed here. And we find that there are 2s plus 1 possible values. So I'll pick here, just arbitrarily, the z-axis. But the question--it's an obvious question--is which axis is a sensible choice for this problem. So I want you to actually stop here and think about this. What are sensible options? If you want to get an eigenvalue associated with the physical state of the particle, which axes are sensible choices? There's no right and wrong in this discussion. Let me motivate this. If you look at the orbital angular momentum of a particle, that's given by r cross p, where p is the momentum vector of the particle. Now, if we're looking at the total angular momentum, we have to add the orbital angular momentum and the spin of the particle together. As shown in this picture here, you see that the component of the orbital angular momentum parallel to the flight direction is 0 by definition, because the orbital angular momentum is defined as a cross-product with the momentum. So the momentum direction is a nice choice of axis. You find that the component of the total angular momentum along the flight direction is just the spin component in that direction, while the transverse component is the sum of the orbital angular momentum and the spin of the particle in the transverse direction. This immediately gets us to a new definition, the definition of helicity. You can define the helicity of a particle as the spin of the particle dotted with the momentum of the particle, normalized by the magnitude of the momentum. So basically, for a fermion, which has spin 1/2, you get plus 1/2 if the spin points in the momentum direction and minus 1/2 if it points in the opposite direction. So now, if you go back to our pion here, decaying into an electron and a positron: the spins are 1/2, but you find that the electron is a left-handed particle, so its helicity will point in this direction. And for the positron, it points in the same direction. 
So the pion here is spin 0, and if you discuss this in the rest frame of the pion, the electron and the positron fly off in opposite directions, which means that the spins don't add to 0. That's why this decay is highly suppressed. The rate is not exactly 0, because you can find a configuration--basically looking at both particles coming at you from one side--in which it is allowed. But that spin configuration is highly suppressed. |
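As an added numerical footnote (not from the lecture text, but matching the kinematics exercise it recalls), the lepton velocity in the pion rest frame follows directly from energy-momentum conservation; the masses below are approximate PDG values:

```python
# Hedged sketch (not from the lecture): velocity of the electron/positron
# in the rest frame of a decaying neutral pion. Masses in MeV (approximate).
import math

M_PI0 = 134.977  # neutral pion mass
M_E = 0.511      # electron mass

E = M_PI0 / 2                   # each lepton carries half the pion mass
p = math.sqrt(E**2 - M_E**2)    # momentum from E^2 = p^2 + m^2
beta = p / E                    # velocity in units of c
print(beta)                     # ~0.99997: the leptons are ultra-relativistic
```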
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L13_Fermions_Bosons_and_Fields_Ranges_of_Forces.txt | MARKUS KLUTE: Hello and welcome back to 8.701. So in this class we're going to talk about the range of forces, and specifically how the range of a force depends on the mass of the particle involved in transmitting the force. We have seen this table before: the different forces--the strong force, the electromagnetic force, and the weak force--and the bosons which carry them: gluons, photons, and the W and the Z bosons. But now we want to actually look into the aspect of the range and how the masses come into play here. So you all know the electromagnetic potential due to a point charge, given by the Maxwell equations. We have seen this equation before, the time-independent potential here. You also know that the massless photon gives rise to an infinite range of the electromagnetic interaction. But the question now is, how would this change, how would this be modified, if the photon were massive? To answer this, we have to generalize a little bit. First of all, we have to look at the time-dependent equation, the wave equation--you have also seen this before--and modify it by adding a mass term. So we are using here the relation which has to be fulfilled by our particle between energy, momentum, and mass. We discussed this in the context of special relativity before. What we are trying to do now is build a Schrodinger-like equation by using the quantum mechanical operators for energy and momentum. So we just add this here and find a new equation, which is called the Klein-Gordon equation. All particle waves, or particles, have to fulfill this equation. So the question is, what kind of solutions does this equation have? What do they look like? If we now start, again, from the time-independent equation, you find solutions which look like this. And you see, again very similar to before, the charge over some constant as a function of radius, but you also see this exponential term. This form of potential is called the Yukawa potential. And what's nicely shown in this plot here is, again, the potential as a function of radius in units of centimeters, and the dependence of the range of the force on the mass. You see here examples for masses of 1 GeV and 10 GeV. And you can easily see that the range of the force is reduced by the fact that the particle actually has mass. Now, the gauge bosons of the weak interaction, the W and the Z boson, are quite massive. They have masses in the order of 80 to 90, in the order of 100 GeV. So you can see that the mass actually leads to a reduction of the range of this force. And indeed we find--just don't look at the gravitational part here; it's not part of the standard model--that while for the electromagnetic interaction the range is infinite, the range of the weak interaction is 10 to the minus 18 meters. So this is greatly reduced, because of the masses of the particles. If you just look at the charge itself, you would find that the weak interaction and the electromagnetic interaction are actually quite comparable. We will also talk about the strong interaction. We see here for the strong interaction that the coupling, the alpha, is in the order of 1. 
And when we talked about Feynman diagrams, we talked about perturbation theory. If you do perturbation theory for an interaction with couplings of order 1--your vertices are of order 1--you will see that perturbation theory might break down. OK. So what we have seen in this lecture is how masses reduce the range of a force. We simply built the Klein-Gordon equation here, looked at a solution, and found that there is this diminishing of the range of the force. Later, we will look at one additional complication, another equation which has to do with spin-1/2 particles, the so-called Dirac equation, which has to be fulfilled by the fermions. But that's beyond the scope of this lecture. We'll look at this later in more detail. |
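An added back-of-the-envelope sketch (not from the lecture) of the range estimate R ~ hbar*c / (m c^2) that underlies the numbers quoted above:

```python
# Hedged sketch (not from the lecture): range of a force mediated by a boson
# of mass m, estimated as R ~ hbar*c / (m*c^2), with hbar*c = 197.327 MeV*fm.

HBARC_MEV_FM = 197.327  # MeV * fm

def range_fm(mass_mev):
    """Approximate range (fm) of a force carried by a boson of given mass (MeV)."""
    return HBARC_MEV_FM / mass_mev

print(range_fm(80_000))   # W boson (~80 GeV): ~2.5e-3 fm, i.e. ~2.5e-18 m
print(range_fm(139.6))    # charged pion (Yukawa's carrier): ~1.4 fm, the nuclear scale
```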
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L24_Symmetries_Charge_Conjugation.txt | MARKUS KLUTE: Hello. Welcome back to 8.701. In this short video, we will talk about charge conjugation. Charge conjugation is the transformation which switches all particles to their corresponding antiparticles and vice versa. So you have a particle p, you apply charge conjugation on this particle, and you receive its antiparticle. This changes the signs of all internal quantum numbers--the charge, the baryon number, the lepton number, strangeness, charm, and so on. But it leaves the mass, the energy, the momentum, and the spin untouched. The electromagnetic and strong interactions obey charge symmetry. But the weak interaction violates charge symmetry. Charge conjugation gives a multiplicative quantum number, like parity. You get the identity if you apply charge conjugation twice: you make an antiparticle, and then you apply this to the antiparticle and get the particle back. Only particles that are their own antiparticles can be eigenstates of this symmetry. You can see this here: when you apply this, you either get a positive or a negative sign, but this is only valid for particles which are their own antiparticles. Among the elementary particles, that leaves the photon. Among composite particles, you will see later that there are a number of mesons which are their own antiparticles. But we'll discuss this in subsequent lectures. By itself, there's limited use to this symmetry in order to learn things. There are some examples, and we'll discuss them in a recitation, where you can learn about possible decays--for example, of neutral pions--from applying this symmetry, without really knowing what the underlying physics is. But in the next lecture, we'll talk about CP, the product of parity and charge conjugation, and some of the interesting effects which arise from it. |
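One concrete example of the kind of decay argument mentioned above, added here as a sketch (the lecture defers the details to a recitation): a state of n photons has C equal to minus one to the n, and the neutral pion has C equal to plus one, so the two-photon decay is allowed while the three-photon decay is forbidden:

```python
# Hedged sketch (not from the lecture): C-parity selection rule for pi0 decays.
# Each photon contributes a factor -1 to the C eigenvalue of the final state.

C_PI0 = +1  # C eigenvalue of the neutral pion

def c_parity_photons(n):
    """C eigenvalue of an n-photon state."""
    return (-1) ** n

for n in (2, 3):
    allowed = c_parity_photons(n) == C_PI0
    print(f"pi0 -> {n} gamma: {'allowed' if allowed else 'forbidden'} by C conservation")
```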
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L12_Fermions_Bosons_and_Fields_Feynman_Diagram.txt | MARKUS KLUTE: Welcome to 8.701. So in this lecture, we'll give you a first introduction to Feynman diagrams. This is part 1 out of a few sections on Feynman diagrams. So this is really meant to introduce the topic such that we can use the same language to talk about Feynman diagrams, before we are then later on able to use them as a tool to calculate interesting processes. This brings me right to the essence already. What is a Feynman diagram, and what can it be used for? Feynman diagrams arise from perturbative calculations of amplitudes for reactions. And that's exactly how we're going to use them later on. It turns out that the mathematical terms in the perturbation series can be represented as diagrams. And then you can turn this around and use the diagrams in order to perform a calculation. So each piece of a diagram indicates a particular factor in the calculation, and you have rules which allow you, after drawing the diagram, to put the pieces together in order to perform the calculation. The derivation of those rules is beyond the content of this course. But I will teach you how to actually use the diagrams in order to calculate things. So here's one example of a diagram. Let me just put this down here so you can see this. This is an electron radiating a photon. You see components like those lines here. Those represent particles with energy and momentum, and we also have to consider the spin. And they meet at a point. This point here is called a vertex. And this is where the interaction takes place. In this example, the vertex is labeled with a q or e, representing the electric charge, which gives us the strength of the coupling. We already discussed, when we talked about units, that we can express the strength of the coupling in QED with the electric charge. And that's shown below again. The amplitude then turns out to be proportional to the charge, or to this coupling. And diagrams with n vertices--n of those components here--get a factor e, the charge, to the nth power in the amplitude. And because, if we're going to calculate a probability, we have to square the amplitude, you get a factor of e to the 2n. Again, don't get confused--e is the charge here. So for n vertices, there will be a factor alpha to the nth power in the probability. And since alpha is 1/137, much, much smaller than 1, you see that diagrams which have many vertices are suppressed and will not contribute much to our perturbation series. So this is already an interesting finding. You can restrict yourself to calculating diagrams which have a small number of vertices; you don't have to calculate the entire series, only enough terms to match your calculation against the precision of the experimental findings. Interesting here--antiparticles. If you have a specific vertex and you have calculated it, it can be reused, for example by replacing a particle with an antiparticle or by re-labeling. One thing I haven't explained to you yet: when you draw these diagrams, you have to define the direction of time. We'll come to this. And so in this case here, you have a particle and an antiparticle annihilating into a photon. So far, so good. This is again a good point to stop and just try to read the diagrams. 
Note what happens in this discussion when you actually change the direction of time--it can run forward or downward; you can draw it in either direction. So now, if you want to calculate a reaction, it's not sufficient to just use one single vertex. Why? Because a single vertex will not be able to give us a reaction. You can simply see this when you look at something like an electron going to an electron plus a photon. This is not really possible, because of energy and momentum conservation in this diagram. So you need a couple of vertices in order to make a reaction. So in this diagram here--again, we have the time going in this direction--there's a scattering between an electron and a muon through the exchange of a photon. Both particles have electric charge e. And then you can just calculate what the probability is for a process like this to occur. We'll see how to do this technically later on. But hopefully you have a first impression. Again, let's label this now very quickly. So you have an incoming particle, a second incoming particle, outgoing particles, and an exchange particle. This exchange particle is a photon. And there are two vertices in this diagram. |
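An added one-liner illustration (not from the lecture) of the alpha-to-the-n suppression of higher-order diagrams described above:

```python
# Hedged sketch (not from the lecture): a diagram with n vertices contributes
# a factor alpha^n to the probability, with alpha ~ 1/137, so higher orders
# are strongly suppressed.

ALPHA = 1 / 137.0

for n in range(1, 5):
    print(f"n = {n} vertices: relative weight alpha^{n} ~ {ALPHA**n:.2e}")
```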
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L04_Introduction_to_Nuclear_and_Particle_Physics_Literature.txt | PROFESSOR: Hello, welcome again to 8.701. In this short video, I'll talk about the books and the literature we are using in this class. So let's dive right in. There's a sequence of textbooks I go back to when I prepare the material for the class. The one which I use in order to derive the outline, or the schedule, for the class is Introduction to High Energy Physics by Perkins. But I use material from a sequence of textbooks, and reading material for you guys as well. Nuclear physics is not covered in Perkins, so we have here Samuel Wong's book, Introductory Nuclear Physics. We spend about two weeks talking about nuclear physics towards the second part of the class, and we cover a couple of the basics in the introductory part of the sequence. A book I like a lot is the Introduction to Elementary Particles by Griffiths. And you'll see me using examples out of that book a bit. Then on the nuclear physics side there is Kenneth Krane's book, as well as a book which has been put together by MIT faculty and research scientists. And then there's Techniques for Nuclear and Particle Physics Experiments by Leo, which I like a lot. It's a little bit of an older book, but it goes into some of the technical details and material details which are important to understand how we build detectors. And then a more recent book is Modern Particle Physics by Mark Thomson. It dives right into particle physics at the energy frontier. And it's really nice to read. It's a modern book. And it's easy to read and comprehend. I recommend having a look at the review articles by the Particle Data Group. They are really concise articles, which for beginners, or at the introductory level, may be a little bit difficult. But as we go through the material in this class, you should be able to take those articles to review certain sections of this class--for example, QCD, or electroweak interactions, or the Higgs mechanism. And while you do this, you also learn some of the latest results and measurements in this area. I'll be posting a set of papers as we go through the class. And you'll see in the course organization that I'll ask you to actually summarize some of those papers in our recitation section. Those are going to be important papers, for example describing the experiment which was used to measure parity violation, or the paper on the Higgs discovery. That's it for literature. Please, as always, go ahead and ask me questions. And you know that if you Google particle physics or nuclear physics, you will find tons of literature available, and many good books. You might find a different one from this listing which suits your appetite for reading and learning. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L65_Weak_Interactions_Neutral_Current.txt | MARKUS KLUTE: Welcome back to 8.701. In this short section, we're going to look at the weak interaction a little bit more, and specifically discuss neutral currents. We looked in some detail at charged currents--specifically, the interaction with quarks. So here, I'm going to look at the Z boson specifically, and the weak interaction via the neutral current. Consider those two processes here, where an electron and a positron annihilate, through a Z boson or a photon, resulting in a muon and an antimuon. Those processes have been studied in great detail at SLAC and at CERN, at the SLC and the Large Electron-Positron Collider. If we calculate the cross-section and study it as a function of the center-of-mass energy, we see a number of interesting effects. At low energies and at very large energies, the cross-section runs with 1 over the energy squared. But at the mass of the Z boson, we see this enormous resonance here. The cross-section at the resonance from the Z boson is about 200 times that of just a photon exchange. So this allows you to study the Z boson with great precision at those colliders. You have a sizable cross-section at electron-positron colliders. And then you can look with precision at: what is the rate into a muon and an antimuon? What is the rate into a quark and an antiquark? And so on. And you can study the mass and the width of the Z boson with an enormous level of precision. Again, I will not go into too much detail here. Please have a look at chapter 9.6 in Griffiths, for example; there are many other resources where you can learn more about neutral currents as well. Electroweak neutral currents are specifically important in the study of neutrinos, as we will discuss more in later lectures. |
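A hedged sketch of the resonance just described (added, not from the lecture; the precise line-shape convention, such as an s-dependent width, varies between references, and the parameter values are approximate PDG numbers):

```python
# Hedged sketch (not from the lecture): relativistic Breit-Wigner line shape
# for e+ e- -> Z -> mu+ mu-:
# sigma(s) = 12*pi*s*Gee*Gmm / (mZ^2 * ((s - mZ^2)^2 + mZ^2*GZ^2)).
import math

MZ, GZ = 91.19, 2.495   # GeV: Z mass and total width (approximate)
GEE = GMM = 0.0838      # GeV: partial widths to e+e- and mu+mu- (approximate)
GEV2_TO_NB = 0.3894e6   # 1 GeV^-2 = 0.3894 mb = 3.894e5 nb

def sigma_mumu_nb(sqrt_s):
    """Cross-section in nanobarns at center-of-mass energy sqrt_s (GeV)."""
    s = sqrt_s ** 2
    bw = s * GEE * GMM / ((s - MZ**2) ** 2 + MZ**2 * GZ**2)
    return 12 * math.pi / MZ**2 * bw * GEV2_TO_NB

print(sigma_mumu_nb(91.19))  # ~2 nb at the Z peak
```

At the peak this gives about 2 nanobarns, roughly 200 times the pure-photon QED value of about 0.01 nanobarns, consistent with the factor quoted above.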
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L25_Symmetries_CP.txt | Welcome back to 8.701. In this lecture, we'll talk about CP symmetry and CP violation. In previous lectures, we discussed that the weak interaction is not invariant under parity and charge conjugation transformations. But now we can ask the question: how about CP, the combined transformation of charge conjugation and parity? The classical example to show parity violation is the decay of a pion. So we have here this charged pion with spin 0, and it decays into an antimuon and a neutrino. Since the neutrino is left-handed, the outgoing antimuon needs to be left-handed as well. If you do a parity transformation of this decay, you see that the outgoing particles would be right-handed. But there is no right-handed neutrino, and therefore this decay is not possible. So this mirror symmetry is not realized in nature, as a consequence of the weak interaction. Similarly, we could do a charge transformation, a charge conjugation, of this decay, where you turn particles into antiparticles. You find here this antineutrino, which is left-handed-- and those also don't exist in nature. So parity or charge conjugation alone doesn't really work on those pion decays. But what does work: if you apply both the parity and the charge conjugation, we turn the positively-charged pion into a negatively-charged pion, the antimuon into a muon, and the neutrino into an antineutrino. And you see here that the antineutrino is right-handed, and so is the muon. That decay is actually observed in nature. Good. So we saved the day. It seems that the weak interaction is invariant under the CP transformation. However, that's not quite true. Gell-Mann and Pais noted that in systems of neutral kaons there's an interesting effect: a particle, a K0, can turn into its antiparticle by changing strangeness. That's possible through this kind of box diagram, which includes a box with a couple of W's. And it's easy to see that if you prepare a kaon, it will oscillate into its antiparticle, because those diagrams are possible. So what is happening to CP here? If I apply CP to a kaon, I find a minus sign and an antikaon. If you want to analyze this further, you might want to find the eigenstates. The eigenstates can be found as K1 and K2, which are admixtures of the K0 and the anti-K0: the symmetric and the antisymmetric states, written out below. If you apply CP to the eigenstates, you find eigenvalues of 1 and minus 1. It turns out that the lifetimes of the K1 and the K2, those eigenstates, are very different: one is about 10 to the minus 10 seconds, and one is about 5 times 10 to the minus 8 seconds. So the K1 decays much, much more quickly than the K2. This, then, sets the stage for a test of CP violation. What you're going to do is prepare a beam of K0's and let them decay. Only after some time do you study the beam again, which then should be made up solely of K2's. If in that beam you observe decays into two pions, you have shown that there is an admixture of K1's in a beam which should consist only of K2's-- and that admixture violates CP invariance. And exactly that was done. Cronin and Fitch picked up this idea. They set up an experiment in which they produced kaons and let them decay.
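Written out explicitly, in the convention used above where CP acting on a K0 gives minus an anti-K0 (phase conventions vary between textbooks), the CP eigenstates and the long-lived state observed by Cronin and Fitch are:

```latex
K_1 = \tfrac{1}{\sqrt{2}}\left(K^0 - \bar{K}^0\right), \qquad CP\,|K_1\rangle = +|K_1\rangle
K_2 = \tfrac{1}{\sqrt{2}}\left(K^0 + \bar{K}^0\right), \qquad CP\,|K_2\rangle = -|K_2\rangle
K_L \approx \frac{1}{\sqrt{1+|\varepsilon|^2}}\left(K_2 + \varepsilon K_1\right), \qquad |\varepsilon| \approx 2.2\times10^{-3}
```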
And then they studied later in the beam whether or not they could find two-pion decays. And they did, indeed, observe 45 two-pion decays in a total of 22,700 decays. So that means that this K-long beam, the long-lived kaon beam, is actually an admixture of K2's with a small additional component of K1's. So here they observed CP violation through the mixture of those states. And this epsilon gives you the size, the strength, of the CP violation. So here is a snapshot of the paper. We'll have another discussion of this in class in a student presentation on Cronin and Fitch. Here, this is Cronin, and this is Fitch. It turns out that Cronin was actually a student of Enrico Fermi and also worked in Chicago. So that's quite an interesting academic family tree here, to which Jerry Friedman also belongs. Jerry Friedman is a retired faculty member at MIT who co-discovered that protons are made out of quarks. So this is a very interesting family tree; if you have some time, you might want to look into it. But here's the experiment. You take protons and dump them onto a target to produce a beam. With this magnet, you filter out the neutral component, get rid of all photons, and then let this beam decay, and look in the spectrometer for decays into two pions. Here's a bigger picture of the same spectrometer-- this is actually a blow-up view of it. You have your neutral kaons coming in, the K2's, and then you look for the two-pion decays. The instrumentation, and how we would actually do this, is part of later discussions where we talk about detectors in more detail. All right. So we just saw that Cronin and Fitch observed CP violation in the mixture of states. But we can also observe CP violation in direct decays. The classical example here is the case of the K-long in semileptonic decays. Semileptonic here means a decay of the K-long, a neutral particle, into a charged pion plus leptons: a pi plus, an electron, and an antineutrino-- or it might very well also decay into a pi minus, a positron, and a neutrino. And it turns out, when you really count those events and perform a precise experiment, that the K-longs prefer decays to positrons over decays to electrons. The fractional amount of this imbalance is 3 times 10 to the minus 3. So this is a rather small effect, again, of CP violation, here in direct decays. Since then, CP violation has also been shown in the decays of B mesons. And the program of studying B mesons is a big part of the LHCb experiment at the LHC. There are also experiments in Japan going on right now which study B mesons in order to learn more about the B systems. Tests are also underway, for those who listened to the colloquium on Monday, in the neutrino sector. So here we have a completely different sector: it's not quarks that are involved in the weak interaction, but neutrinos. And the question is whether or not in that sector of the standard model there is CP violation. Those are aspects we'll discuss later on when we talk about neutrinos specifically. Before I close, a few more remarks on the matter-antimatter asymmetry. One of the biggest mysteries in physics, I would claim, is the fact that we're even here to ask this question. There is apparently more matter in the universe than antimatter: you start from a big bang, where there was symmetry, and now we live in a universe which is dominated by matter. So how is this possible? In 1967, Sakharov proposed that this is possible in a system where baryon number is violated.
So this is almost a trivial statement. If you start from an equal number of baryons and antibaryons, the sum is 0-- the baryon number of this system is 0. If you end up in a system which is dominated by baryons, then baryon number needs to be violated. But there's also a need for CP violation in this. We just saw that CP violation is realized in nature, but the amount of CP violation we observe in the systems I just discussed is not sufficient to explain the matter-antimatter asymmetry we observe in nature. So there is more to be found: there's new physics to be looked for in CP violation, in connection with this overall question. And there's also a need for the interactions to be out of thermal equilibrium, meaning that the processes are not reverted as you go forward. Yet another point of discussion, which I will not go into in much detail in this lecture, is that quantum field theories such as the standard model, which describe the interactions of particles, are invariant under CPT transformations. That means that if CP is violated, time reversal cannot be a symmetry-- meaning that going backwards and forwards in time is not symmetric. And you can test this: you can design experiments which test exactly this effect, and you can also design experiments which test CPT directly. Those are all interesting questions, but we will not go into any of them in this lecture. We will, however, come back to understanding the origin of CP violation in the standard model when we talk in more detail about the weak interaction. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L01_Introduction_to_Nuclear_and_Particle_Physics_Course_Overview.txt | [SQUEAKING] [RUSTLING] [CLICKING] MARKUS KLUTE: Welcome to 8701. My name is Markus Klute. I'm going to be your instructor for this class. This class is taught in an inverted classroom, a flipped classroom environment. So this is the very first of a large number of small, short videos, which walk through the contents of slides I've prepared. For these first slides, it's not really necessary that you even watch the video, because they should explain themselves, and I will not give you too much additional information. As we go deeper into the content, that's not quite the case anymore, and it might be useful for you to watch the videos, learn, stop, note down a number of questions, solve some of the problems which I pose on the slides, and so on. So this is the very first one. And the idea of this first set of slides is to just give you a quick overview of what to expect for this class. The first thing we look at is the calendar for the fall semester. The summer ended much more quickly than we were all hoping for-- the weather is still very nice out there, but fall is coming. So on Tuesday, September 1st, we'll meet and greet in the first session. That session is really just meant for an overview of the course organization, and so on. The kind of information I give in this video will be very briefly covered on that Tuesday as well. As you can see from the calendar, we meet Tuesday and Thursday. The meeting, or the recitation, starts at 1:30 PM. I aim for one hour in each of those meetings, but we can stay a little longer, even half an hour longer, if there are open questions or need for discussion. This semester will give us a few holidays along the way. There's Columbus Day, where Tuesday is a student holiday. I will not have a recitation on Election Day-- I want you to focus on the election, and I will focus on it myself. And then there is the week of Thanksgiving, where we will not have any meeting, any class, during that week. You can also see that there is a number of PSets. In the second week, we'll post the first PSet, and I'll give you about two weeks to respond to each PSet. More on how we evaluate in this class in a different recording-- just to say here already that there are going to be two oral exams, towards the middle of the semester and towards the end: very short little tests, 15 minutes per student. We will set up individual meetings to go over this. And then there is Friday as an office hour. I haven't specified the time for this yet; I sent you a Doodle poll, and we'll find the time which works for most of you on a Friday morning or afternoon. This class doesn't have a final, so it basically ends with the last day of class, which is the oral exams in the 15th week of the class. Looking a little bit into the content, the first week is really an introduction and some historic remarks. We'll talk about relativistic kinematics, and then we'll go along the outline as given in one of our textbooks. Again, the textbooks are discussed in a different short presentation. I really follow along this textbook, but I don't stick only to its content; I supplement the information using other textbooks as well, specifically when it comes to some of the problems and discussions which come along.
You can see that we'll start with particle physics, talk about quarks, leptons, and interactions, and talk about how this fits together in a theoretical way with invariance principles and conservation laws. We'll look at scattering, and then QCD, before we go into the weak interaction. And from there on, we build the standard model with the electroweak interaction and the Higgs mechanism. The end of week eight will then give us a break, where we start talking about nuclear physics. This order-- talking about particle physics and then nuclear physics-- makes a lot of sense to me. It doesn't follow the historic order, because nuclear physics was developed before the standard model, for example. But in order to fully appreciate nuclear physics, it's useful to have introduced the basic interactions first, and that is the reason for this order. Then we'll bring in experimental methods: accelerators, colliding beams, and experimental methods on how particles interact with matter, and detectors. Again, in this schedule, the Thanksgiving week is off. And we'll use the last remaining week to discuss physics beyond the standard model and the connection to cosmology. As the title here says, the schedule will be tuned. I very much expect that we have a little bit of slippage in this, but there is some room at the end to make sure that we don't overrun, and that I don't give you too much to learn, to read, to watch, or to do in PSets as we go week by week. This is it for this first recording. As always, you can reach out and ask any sort of question, either during the recitation, during office hours, or separately. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L97_Nuclear_Physics_Fission.txt | MARKUS KLUTE: Welcome back to 8.701. In this lecture, we talk about nuclear fission. We've already seen the process when we discussed the empirical mass formula. Nuclear fission occurs in very heavy nuclei-- as you can see in this plot here, fission processes occur in this part of the spectrum. What happens, and what can happen spontaneously, is that the parent nucleus simply breaks up into daughter nuclei and maybe some additional neutrons. One example is the decay of uranium-238. One can also induce fission. Here is a plot of the nuclear potential with the Coulomb barrier here. For example, you have this nucleus which sits here-- and this could be uranium-238-- and you are able to bring it above this activation energy. Again, this can occur spontaneously in some nuclei; in others, it's being induced. And then the nucleus just breaks up into two daughter particles. So if you, for example, start with a neutron and you just bring it close-- let's say you bring a neutron close to uranium-235. This forms uranium-236. And the absorption of the neutron-- even a zero-kinetic-energy neutron-- excites the daughter compound nucleus. There's an excitation energy, in this case, of 6.5 MeV, and because of that, it then quickly undergoes fission. So we basically add a thermal, or zero-kinetic-energy, neutron, and when it's being absorbed it immediately causes the fission process. The fission fragments then carry away some energy-- in this example here, it's about 180 MeV-- and additional prompt neutrons. The number of those additional prompt neutrons, depending on the specific decay process, can vary between zero and six; for uranium-235, the average number is 2.5. And then the fragments might undergo additional decay processes, maybe beta decays or alpha decays. And when they do that, they can also release additional neutrons. Interesting now is to see whether or not there can be a self-sustained reaction, a chain reaction. Whether or not this occurs depends on the number of neutrons being emitted. If the number of neutrons produced in the (n+1)-th stage of the fission process is greater than or equal to the number of neutrons produced in the n-th stage, the process is critical or supercritical: it produces at least as many neutrons as it needs to continue the fission process, and it will create a chain reaction. If that ratio is less than 1, the process will die out. (A small numerical sketch of this criterion follows below.) And this is exactly what's used in nuclear fission reactors. There are several types of reactors available. The example I want to discuss here very briefly is that of a thermal reactor, which uses uranium as fuel and low-energy neutrons to establish the chain reaction, as we just discussed. So this is a sketch here, and the sketch has three different elements. The first one is a fuel element; the fuel can be naturally-occurring uranium. Then we have a moderator material, and the purpose of the moderator material is to slow the neutrons down. If you really start from naturally-occurring uranium, you have to find some sort of material which allows you to efficiently slow down the neutrons when they're being emitted in the fission process. And this could be so-called heavy water, where the hydrogen in the water is replaced by deuterium.
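Here is the promised sketch of the multiplication criterion-- a toy calculation, not a reactor model: each generation of fissions produces on average k neutrons that go on to induce the next generation.

```python
# Toy model of the chain-reaction criterion: k is the average number of
# neutrons from one fission generation that induce a fission in the next.
# k < 1 dies out, k = 1 is critical (steady), k > 1 grows geometrically.
def neutron_population(k, generations, n0=1000):
    """Return the neutron count in each generation for multiplication factor k."""
    counts = [float(n0)]
    for _ in range(generations):
        counts.append(counts[-1] * k)
    return counts

for k in (0.9, 1.0, 1.1):  # subcritical, critical, supercritical
    print(k, [round(n) for n in neutron_population(k, 5)])
```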
There are other examples of moderator materials as well. And then you have those very important retractable control rods. They are made of materials which have a large cross-section for capturing neutrons. By mechanically inserting them into or removing them from this environment, you can remove excess neutrons. So what you're trying to do is control this number k here-- control whether or not there are enough neutrons available in order to sustain the chain reaction. And then the excess energy is converted into heat. You can, for example, heat up water and then just have a turbine run in order to produce energy. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L33_Feynman_Calculus_Toy_Theory.txt | MARKUS KLUTE: All right, so welcome back to 8.701. We'll continue our discussion of Feynman calculus, and here we dive into a toy theory. This theory is just that, a toy-- an example to illustrate the Feynman rules. The simplification we employ here is leaving out the spin of the particles involved. If we considered the spin, we would add another algebraic complication, which would be quite confusing at this point. So we leave this out for now and come back to it later. We suppose we have three kinds of particles involved here-- particles A, B, and C. And so we can have a primitive vertex where all three particles interact, as shown here. So particle A might decay into particles B and C. You can assume that particle A is heavier than the sum of B and C, so this decay is kinematically allowed. We might also have corrections involved, as shown here. What we'd be interested in now is, for example, calculating the lifetime of this particle A. We might do this just for this primitive vertex, or we might do this for this complicated set of corrections. We might also be interested in calculating scattering processes, where particle A is scattered with particle A and produces particle B and particle B; or we scatter particle A with particle B, and so on. In this theory, at the end of the lecture, we'll have all the tools in hand to calculate this. No worries-- I will not leave you alone with this. In this lecture, we go through the recipe, and then later on we'll see how we actually apply it. So let's look at this recipe. The recipe has a number of steps, and the key is to just follow those steps in order to get to the desired result. The first step is to label the incoming and outgoing four-momenta of the particles. We label them with p1, p2, and so on up to pn. We also want to label all internal momenta: if we have an internal line, we label its momentum with q1, q2, and so on. We want to add arrows to each line to keep track of the positive direction-- as we discussed before, particles might travel backwards in time; those are typically antiparticles, and for those we have to make sure that we correctly account for the momenta. For each vertex, we have a factor. We write this factor as minus ig, where g is the coupling constant, a measure of the strength of the interaction involved. Then we have the propagators: for each internal line-- internal lines are also called propagators-- we write down a factor i over (qj squared minus mj squared c squared). Note that qj squared doesn't have to be mj squared c squared, meaning that the particle can be off shell, off its mass shell. You can also see that there is going to be a complication in the integral when those two quantities become equal. We want to make sure that energy and momentum are conserved, so for each vertex, you write down a delta function enforcing this condition. For this three-line vertex, it's that the momentum of the first plus the second plus the third is equal to 0-- only then is the delta function nonzero. Remember, there's a minus sign somewhere, most likely here for this k1 value. Then you want to integrate over all internal momenta: for each internal line, we write a factor of 1 over (2 pi) to the fourth power times d-to-the-4 of the internal momentum, and integrate. This will all then result in an overall delta function, which you just eliminate: you erase this delta function and you replace it by a factor i.
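As a quick illustration-- a sketch following Griffiths-style conventions for this toy theory-- applying the rules to the primitive vertex alone gives the amplitude M = -ig for the decay A to B + C, and the standard two-body decay-rate formula then yields:

```latex
\Gamma = \frac{g^2\,|\mathbf{p}|}{8\pi\hbar\, m_A^2\, c}, \qquad
|\mathbf{p}| = \frac{c}{2 m_A}\sqrt{m_A^4 + m_B^4 + m_C^4
  - 2 m_A^2 m_B^2 - 2 m_A^2 m_C^2 - 2 m_B^2 m_C^2}
```

Here |p| is the momentum of either outgoing particle in the rest frame of A.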
The delta-function bookkeeping may seem very confusing: why do you add delta functions first and then erase them later? Note that in Fermi's golden rule, we use the square of the amplitude, and you also saw that the phase-space factors already have these kinds of delta functions included. So we get around the complication of not really knowing what the square of a delta function is by erasing it, adding the factor i, and keeping track of the momentum conservation-- this conservation here-- when we apply the phase-space factors. And then, voila, you have just calculated a matrix element. All right, so those are the rules. Now the key is to practice applying them. What we do next is practice using this toy theory to calculate the matrix element, the phase space, and then decay rates and cross sections. And as a following step, we will see how this all unfolds when we have a real theory, like QED, the weak interaction, and the strong interaction. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L48_QED_Cross_Sections.txt | MARKUS KLUTE: Welcome back to 8.701. So we now have all the tools in place to do the next round of cross-section calculations. We have seen how to set up a matrix element. We have seen how to treat the spin, and specifically how to calculate spin-averaged amplitudes using trace techniques. All right, I'm not saying that this is all easy now, but you have seen all the necessary elements to calculate the cross-section for a QED process. So let's summarize. We have seen that we can set up the matrix element using the Feynman rules for QED. We have seen how to set up the spin-averaged matrix element squared using the traces. Now we would have to evaluate the traces in order to derive this formula here. I'll spare you a detailed discussion of this step, but you can actually follow it quite straightforwardly. Let me just step back a little bit before we proceed. My goal for the class is not to have you calculate all kinds of cross-sections, but to understand how you would do it, for the purpose of really understanding where the dependencies come from and where this kind of calculation has its limitations. The first part is that you want to see the dependencies on the couplings involved-- you see this g squared, for example; that's a rather important effect. You also want to see, through Fermi's golden rule, how we actually get to the cross-section from the matrix element calculation. If you ever have to calculate a matrix element-- and I am going to ask you to do this once, maybe twice, as part of the homework sets-- I encourage you to open the book, follow the rules, and look up the tricks for working with traces. Then you should get to a reasonable solution in a reasonable amount of time. But here, for the purpose of this discussion, we want to just have a look at a few specific cases where we make assumptions and simplifications. The first one is called Mott scattering. Here, again, we are at this example of a spin-half particle scattering off a different spin-half particle, via the exchange of a photon. We used the example of electron-muon scattering, but this muon here could also be a proton or any other spin-half nucleus. The assumption we are using for Mott scattering is that the mass of this particle, the muon, is much larger than the mass of the electron. And that's true: the muon is about 200 times heavier than the electron, a proton is even heavier, and any heavier nucleus more so. In Mott scattering, we also make the assumption that the momenta involved are lower than the mass of the heavy particle, and that the recoil of the heavy nucleus, or the muon, can be neglected. If we do that, we can then write the differential cross-section, using Fermi's golden rule, as the spin-averaged matrix element squared divided by 2 pi M squared. OK. If you then use this kinematic information-- you basically start from this matrix element here, and then you use those four-vectors for the momenta of the first, second, third, and fourth particles-- you find that many of the dot products simplify to ME. So p2 dot p3 is ME, and so are many of the others. And there are a few important factors: for example, (p1 minus p3) squared is minus 4 p squared sine squared of theta over 2. And similarly for p1 dot p3.
So you put this all in-- again, starting from this very formula we just discussed-- and you apply all the simplifications, and you get this matrix element, which already looks much more manageable. There's an M squared, there's a p squared, there's a cosine squared theta over 2 term, and some factor which depends on the momentum times the mass. And if you then plug this into Fermi's golden rule, you find this equation for Mott scattering. Again, this is the scattering of two different spin-half particles where one is much heavier, the outgoing momenta are small, and the recoil of the heavier particle can be neglected. So this Mott formula describes, for example, Coulomb scattering: the scattering, via photon exchange, off the electric charge of a nucleus, where the scattered particle is not too heavy and not too energetic-- like an electron. We also assume that everything involved here is point-like. We haven't had any discussion of the charge distribution of the nucleus or anything; we assume point-like particles. OK, we can now further discuss the case where the initial-state particles are non-relativistic. Here our formulas simplify: the energy term is simply M squared, p squared is 2ME, and the coupling enters as alpha times q1 times q2, where q1 and q2 are the electric charges. And so our differential cross-section further simplifies to something you've already seen. The resulting cross-section is equal to q1 times q2 times alpha, divided by 4 times the energy times sine squared of theta over 2, all squared. We have already seen that-- it's the Rutherford scattering cross-section, which we found when we discussed cross-section measurements in a geometrical way. So this closes a loop here in our cross-section discussion of how we can think about those things. The Rutherford cross-section is nothing but a big billiard ball being hit by a small billiard ball, and looking at how the differential cross-section comes out of this setup. All right. In this sequence we have a little bit more discussion of what happens when we introduce higher-order terms and how we can think about those solutions. And then we have two extra lectures, where we go back and discuss spin, and also how we can understand all of this in a Lagrangian setup. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L102_Instrumentation_Tracking_Detectors.txt | MARKUS KLUTE: Welcome back to 8.701. In this section, we'll look at tracking detectors. And before we look at tracking detector technologies, we want to remind ourselves how we measure the momentum of a charged particle. This measurement is possible because charged particles are deflected in magnetic fields. We have already seen that in a homogeneous magnetic field, a particle follows a circle. So from the measurement of the radius and the knowledge of the magnetic field, we can infer the transverse momentum of the particle. Typically, particles also have a longitudinal momentum, and so the trajectory actually has the form of a helix. We can then get back to the total momentum of the particle by knowing the angle, or the components of the longitudinal and transverse momentum, and just calculating the total momentum from there. We have seen that particles, when they go through a piece of material, lose energy through ionization or bremsstrahlung. So what we have to do now is put some material in the way of the particles without really changing their momentum or their trajectory. This is typically done with tracking detectors. What you then measure is not the radius directly; you measure individual points along the trajectory of the particle. From the measurement of the points, you can then reconstruct the trajectory, and therefore measure the curvature, which is used to determine the transverse momentum. If you look at this picture here, the radius can be given as r = L^2/(8s) + s/2, where s is called the sagitta of the trajectory. If L is much, much larger than s, this simplifies to r of about L^2/(8s). And the uncertainty on the sagitta-- the uncertainty on s-- limits the uncertainty on the momentum measurement. This is described in a very nice paper from the '60s, and the formula is called the Gluckstern formula. What you see there is that the more measurements you actually perform along the trajectory, the better your measurement becomes, and this goes with the square root of 1 over the number of measurements. The relative momentum uncertainty, sigma pt over pt, is proportional to the uncertainty on the sagitta, and it is also proportional to the pt of the particle. So if you want to improve your momentum measurement-- you want to decrease the uncertainty-- then you can do this by having a larger L (and you see this goes with the square of the length), by increasing your magnetic field, and by reducing the uncertainty on the individual position measurements. Those are the elements you have in play in order to improve the transverse momentum measurement, or the momentum measurement, of your charged particles. So this is a screenshot of this paper. The measurement error is not the only error on the transverse momentum, as we already discussed previously. Multiple scattering-- the scattering of the particle in the material, which depends on the momentum-- also contributes to the uncertainty in the momentum measurement. And one can show that this component of the relative momentum uncertainty is flat in pt. So what you typically find is that the transverse momentum measurement is limited at high pt by the measurement error, and at low pt by multiple scattering.
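As a small numerical illustration of the scaling just described, here is the measurement-error term in a standard form of the Gluckstern formula. The specific numbers (field, lever arm, hit resolution, number of layers) are made up for illustration.

```python
import math

def gluckstern_resolution(pt, b_field, lever_arm, sigma_x, n_points):
    """Relative pT resolution from position-measurement error alone.

    Standard Gluckstern form for N equally spaced measurements in a
    uniform field: sigma(pT)/pT = sigma_x * pT / (0.3 * B * L^2)
                                  * sqrt(720 / (N + 4)),
    with pT in GeV, B in tesla, L in meters, sigma_x in meters.
    """
    return sigma_x * pt / (0.3 * b_field * lever_arm**2) * math.sqrt(720.0 / (n_points + 4))

# Illustrative numbers: 100-micron hits, 1 m lever arm, 2 T field, 10 layers.
# The relative uncertainty grows linearly with pT, as stated above.
for pt in (1.0, 10.0, 100.0):  # GeV
    print(pt, gluckstern_resolution(pt, 2.0, 1.0, 100e-6, 10))
```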
So we have those two components-- measurement error and multiple scattering-- entering the measurement uncertainty. Now to the actual detectors; I'll just give you a couple of examples here. The first one-- and the one which really allowed us to make measurements with huge statistics, in colliding-beam experiments where you look at many, many of those interactions, with reasonable resolution-- is the multiwire proportional chamber. Typically, we have hundreds or thousands of wires with a spacing of up to 1 millimeter. And this 1-millimeter spacing in a wire chamber limits the spatial resolution. If you don't have any knowledge of where the particle was within that area, then the spatial uncertainty on just one hit is given by d, the spacing between two wires-- 1 millimeter in this case-- divided by the square root of 12. So you typically find resolutions of about 300 microns per measurement. But you gain from the fact that you typically have many, many measurements, and as you just saw, this reduces the uncertainty on the momentum measurement. Knowing that there was a hit on a wire gives you two-dimensional information. If you then build the wire chamber such that there are angles between individual wire planes, you can use that to gain three-dimensional information on where the particle went through. Wire chambers can be operated in different modes, depending on the level of the voltage you apply to the wires. At a very low applied voltage, you have ionization, but the ions recombine without being collected. In ionization mode, you collect the ions, with a gain factor of typically 1-- you basically measure each ionization charge separately. You can increase the voltage and run the wire chamber in proportional mode, where you have a gas multiplication factor: the ionization electrons are accelerated and then produce many more ion pairs to be measured. There you typically have gain factors of 10 to the fourth. If you drive the voltage further up, you go into the limited proportional mode, and then into Geiger mode, where you basically have a full avalanche, so you have no information about the initial ionization anymore; typically, the chamber breaks down and needs to recover afterwards. The signal formation is quite an interesting concept. You might think that you have ionization, the electrons are there, and then they're collected. Yes, that's true, and this happens very rapidly, typically on the order of nanoseconds. But the actual signal comes mostly from the ions themselves. What happens is that you have your wire here, you have your electrons, and the ions themselves build up this cloud of charge. This cloud of charge then gives you a signal on the wire, typically via induction-- you basically have a mirror charge on the conductor. And that is really where the bulk of the signal comes from. We just discussed that the spatial resolution is limited by the spacing of the wires. You can actually get better measurements by using the timing information: if you are able to measure the time profile of your signal, that profile has information about the distance between the ionization and the wire itself, and that helps you to improve the resolution further. I really don't do justice to this here, but nowadays, specifically in particle-dense environments, you use solid-state tracking detectors.
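A one-line check of the binary-readout resolution quoted above-- the d over square root of 12 is the RMS of a uniform distribution over a cell of width d:

```python
import math

# If a hit only says "the particle passed somewhere in a cell of width d",
# the RMS of a uniform distribution over that cell is d / sqrt(12).
def single_hit_resolution(cell_width):
    return cell_width / math.sqrt(12)

print(single_hit_resolution(1e-3))  # 1 mm wire spacing -> about 289 microns
```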
These are used in many, many areas in particle physics. The idea is to use mostly silicon, but you can use other semiconductors as well. You dope them, meaning that you create more holes using p-type doping, or more electrons using n-type doping. Then you bring n-doped and p-doped materials together-- the same as in a diode-- but you apply a voltage in the reverse direction, to create a very large depletion zone. In this depletion zone, there are no free electrons or holes available. But when a charged particle travels through, as shown in this picture here, you create electron-hole pairs, and these can then be amplified to create a signal. That's the general concept on which solid-state, or silicon, detectors are based. The nice thing about silicon detectors is that you can read them out with very fine spatial resolution. From the 300 micrometers we just saw, you go down to several micrometers in spatial resolution when you measure particles going through these detectors. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L74_Higgs_Physics_Current_Status.txt | MARKUS KLUTE: Welcome back to 8.701. In this lecture, we have a very brief look at the current status of Higgs boson research. And I have to tell you that we could spend an entire week discussing this; I'll just give you the high-level overview, maybe the 30,000-foot overview, of what we know about the Higgs boson. On the Canvas page, you'll find a reference to a summary report, which gives you a little bit more information than I give you here. But I do think that there are a few things to highlight, and those are the ones I'm going to talk about. The first part is that we discovered the Higgs boson in decays to photons, and also in decays to Z bosons, where the Z bosons themselves decay into pairs of leptons-- electrons and muons. The detectors we have available-- and here is an example from ATLAS and from CMS-- are very good at measuring with precision the energies and momenta of photons, electrons, and muons. This allows us to reconstruct the mass of the Higgs boson from its decay. And you can see this here: this is the reconstructed two-photon final state-- Higgs to two photons-- from ATLAS, at 125 GeV. You see that there is a slight bump over an enormous background; those are other sources of diphoton events which are produced at a hadron collider. But when you subtract the two spectra, which is shown here as data minus background, you see this beautiful peak of Higgs to gamma gamma events. Similarly, Higgs to ZZ-- and I put a little star here, because one of the Z bosons has to be off-shell: the Z boson mass is 91 GeV, the mass of the Higgs boson is 125 GeV, and 91 plus 91 is 182, so one Z has to be a little off its mass peak. And then we look at decays into e plus e minus and mu plus mu minus, and combinations of those-- four-muon events and four-electron events as well. Also shown here-- and again, you have this beautiful peak consistent with a Higgs boson at a mass of 125 GeV. And again, you have other processes which contribute to this final state, namely those where the four leptons come from a Z boson itself and/or from a pair of Z bosons, as is shown here. There are also processes which mimic the leptons in the detector; those have to be evaluated as well, and they're typically shown in this plot here in green. OK. So as you can imagine, we can measure the cross-section very precisely, because we can identify those particles well. But we can also measure the mass, and this is shown here. This is a summary of measurements from ATLAS and CMS using those two final states, the one with two photons and the one with four leptons. You see that the measurements are generally in agreement, and they can be combined into this measurement of 125 GeV; the best, combined value is shown here. So this is a precision measurement. And since the Higgs boson mass was the only unknown parameter of the Higgs sector in the standard model, we now know and can check all the other properties. One is the coupling strength to the bosons, which we just looked at already; another is the coupling strength to the fermions. So here are two examples, one showing Higgs to tau tau events-- tau plus tau minus. You see again that there are a lot of background processes which have the same signature as the signal. But when you subtract the background from the data, you find the excess of events, again at 125 GeV.
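As an aside, here is a toy sketch of how such an excess over a large background is quantified. Real analyses use likelihood fits over the full mass spectrum; the simple S over square root of B counting estimate below, with made-up numbers, is only meant to show the idea.

```python
import math

# Counting-experiment approximation: S signal events on top of B expected
# background events. For B >> S, the significance is roughly S / sqrt(B).
def significance(s, b):
    return s / math.sqrt(b)

# Hypothetical numbers for illustration only: a 300-event excess on a
# 10,000-event background gives about a 3-standard-deviation significance.
print(significance(s=300, b=10000))
```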
Coming back to the tau final state: taus themselves decay, and their decays include neutrinos, which cannot be measured in these detectors. Therefore, the mass reconstruction is much harder than in the final states we discussed before. That is easier in this final state, where we have a Higgs decaying into two muons, mu plus mu minus. The issue here is that the rate is very small-- the number of Higgs bosons decaying to mu mu is very small-- but there is also a large amount of background. So this is very similar to the Higgs to gamma gamma final state we looked at before: we have a huge amount of background, and here, by eye, you don't even see the bump. You see the bump a little bit when you do data minus background. You see this small excess here, which has a significance of about 3 standard deviations-- the likelihood that the background fluctuates like this without the presence of a Higgs signal corresponds to 3 standard deviations. All right. This information can then be used to extract information about the couplings themselves. And this is a beautiful plot here, which shows the Higgs coupling on one axis, either to the fermions or to the bosons, versus the mass of the particle. So you find your favorite particles-- the top, the W, the Z, the bottom, the tau, and the muon. Those are the ones for which we can actually measure the coupling. And you see whether or not this is in agreement with the standard model, which is this blue dotted line here. And you see it is. That plot tells me that the Higgs mechanism is responsible for the mass generation of these particles. It might still be that there is a mechanism very similar to the Higgs mechanism which results in the very same observations in nature, but is not quite what we have in the standard model. In order to make such statements, one has to reduce the size of the error bars here-- improve the statistical precision, the significance, of the measurements. That is part of our program; that's what we're trying to do. All right. In summary-- again, this is a very high-level summary-- we measured the mass, we measured the spin and CP of the Higgs boson, and we measured the couplings to the Z, the W, the top, the bottom, the tau, and the muon. We have not been able to measure the couplings to the lighter quarks-- strange, charm, up, and down. We have not been able to measure the coupling to the electron; that would be spectacular. And we have not measured the coupling of the Higgs boson to itself. In the standard model, the Higgs boson can couple to itself, so we have diagrams which look like this: Higgs, Higgs, Higgs. We have not been able to measure this. This coupling is not free in the standard model, and measuring it would close the argument that the Higgs boson is the particle predicted in the standard model giving mass to the W and Z bosons. If you want, you can take the strength of this coupling as this lambda-- the lambda term in our potential. So that's what we are trying to measure in the future. But there are more open questions. Maybe there's more than one Higgs boson. Adding one doublet to the standard model with this potential is one possible solution to the problem of generating mass, but you could very well have added two doublets, or more, or triplets, or more complicated things. And so the question is, are those maybe realized in nature or not? Will more precision tell us something about the Higgs boson?
Are there decays of the Higgs boson to non-standard-model particles-- for example, the Higgs boson decaying into those guys here, which could be evidence for dark matter? We have looked; we have not seen this. But more precision might give us a different answer. That's basically the summary of the status of where we are with the Higgs boson. Again, much more can be said about this, and you will see it every now and then in seminars at MIT or other places. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L23_Symmetries_Parity.txt | PROFESSOR: Hello. In this lecture, we talk about parity and parity violation. So we talk about a discrete symmetry. A little bit of history in particle physics: Lee and Yang, in the 1950s, wondered if there was any experiment testing parity invariance. There were many tests which had to do with the strong interaction and the electromagnetic interaction, but as it turned out, parity invariance had never been tested in the weak interaction. Lee and Yang were motivated in this by a puzzle at the time, the so-called tau-theta puzzle, which turned out to be about kaon decays into various particles: two particles with the same mass and lifetime, but decaying into final states of apparently different parity. They couldn't quite understand this, and were trying to work out whether one could have an experiment to test it. The experiment they proposed is one where you look at nuclei and observe beta decay, so there's an electron coming out of the beta decay. If you are able to align the spin of the nuclei and put the system in front of a mirror, you see that the physics of the mirrored state is not invariant under parity: in the mirror, the spin points in the opposite direction, but the electrons still come out at the bottom. So there's clearly some change in the physics going on in the mirrored state. Madame Wu actually took up this idea and immediately tested it in the same year. She was a Chinese-American physicist, born in China, who then studied at Berkeley together with famous people there-- Lawrence is one of them. During the war, she joined the Manhattan Project and made very important contributions to it based on her thesis work. As a fun fact, she was married to another Chinese-American, the grandson of the first President of the Republic of China. As I said, in 1956 she conducted the Wu experiment, and in the following year Lee and Yang received the Nobel Prize in Physics for the finding. So, really briefly-- we're going to have another discussion of the Wu experiment later on-- what she did here is study cobalt-60 decays to nickel, beta decays. What she was able to do experimentally is align the cobalt-60 with a magnetic field, and then just count the number of electrons coming out. And it turns out that the numbers of electrons coming out at the bottom and at the top are not the same. She tested this by reversing the magnetic field. The experimental data is shown here: you just look at the counting rate as a function of time after preparing the sample, and you repeat the measurement with the magnetic field reversed. You see very clearly here that there is an asymmetry in the beta decays. And that asymmetry immediately tells you that there has to be parity violation in beta decays, in the weak interaction. So she found that this specific picture here violates parity. This is a dramatic signature of the weak interaction, and we will see later how we can actually understand it. Note that parity is conserved in the electromagnetic interaction and the strong interaction. One more word on parity inversion: you can define a parity operator, and if you apply this parity operator to a vector, for example, the x, y, and z components simply flip sign, as written out below. |
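For reference, the parity operation on a vector, written out-- elementary, but it also makes concrete the standard fact that parity eigenvalues are plus or minus 1:

```latex
P\,\vec{r} \;=\; P\begin{pmatrix} x \\ y \\ z \end{pmatrix}
\;=\; \begin{pmatrix} -x \\ -y \\ -z \end{pmatrix},
\qquad P^2 = \mathbb{1} \;\Rightarrow\; \text{eigenvalues } \pm 1
```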
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L45_QED_Feynman_Rules_for_QED.txt | MARKUS KLUTE: All right. So welcome back to 8.701. We now have all the ingredients to write down the Feynman rules for QED. That's the toolkit we need in order to calculate scattering processes and decays. We've already seen the Feynman rules for our toy theory. Now the situation is a little bit more complicated, because we consider the spin of particles in addition to their energy and momentum. The sequence of the rules is very much the same. There are, however, a few caveats to keep in mind, and I'll point those out. OK. The very first thing is to be very clear in our notation. This is an arbitrary, generic QED Feynman diagram. We have only drawn the incoming and the outgoing lines; there are internal lines which I didn't show here. It's important to note the momenta and the directions. The directions are arbitrary-- we just have to be clear about them and then treat them consistently. This is no different from our previous discussion. Then, here comes the difference: our external lines are either electrons, positrons, or photons-- charged fermions and photons. We discussed what the solutions look like, our spinors u and v. For outgoing electrons, for outgoing particles, we have this adjoint spinor here, which is given by u dagger gamma 0; and similarly for the incoming antiparticle, v dagger gamma 0. For the photon, we have the polarization vectors for incoming and outgoing photons. Then we have a vertex factor. Here, now, g e is a dimensionless coupling constant, but we do have a gamma mu here as part of our vertex factor. For the propagators, our internal lines, we have a difference between electrons and positrons on the one hand and photons on the other. That comes from the fact that electrons and positrons are massive particles. So we have propagator factors which have a 1 over q squared behavior for the photon, or a 1 over (q squared minus m squared) behavior for the fermions; the factors are written out below. Here you can already see that there's going to be a complication later when we evaluate or integrate over the momenta-- simply the same discussion we had before. And we already know how to solve this problem of infinities: by having a cut-off and renormalizing. Excellent. The next steps, then, are very much the same-- there's no change. We have to make sure that there's energy and momentum conservation, and we enforce this by introducing delta functions. We have to integrate over each and every internal momentum, and each internal line gets one of those integration factors. And then, after we integrate, we are left with a delta function, and we have to cancel that delta function. All right. In our toy theory, the order of things didn't matter-- everything we had in there was scalar numbers. Here we do have a slightly more complicated problem: the order in which we execute things matters. What we want to do is form fermion lines. We just follow a fermion as we go from the left to the right, and then we find expressions which are always of the form: an adjoint spinor, a 4-by-4 matrix, and a spinor. The result of that is going to be a number. Great. There is one additional complication: accounting for duplications and making sure that the signs are correct. I'm just mentioning this here; it will become clearer as you work through examples.
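For reference, the main factors just described, written in natural units and following Griffiths-style conventions (signs and normalizations vary between textbooks):

```latex
\text{vertex factor: } ig_e\gamma^\mu, \qquad
\text{fermion propagator: } \frac{i\left(\gamma^\mu q_\mu + m\right)}{q^2 - m^2}, \qquad
\text{photon propagator: } \frac{-i g_{\mu\nu}}{q^2}
```

with external-line factors u and u-bar for incoming and outgoing electrons, v-bar and v for incoming and outgoing positrons, and the polarization vectors epsilon and epsilon-star for incoming and outgoing photons.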
To be specific about the complication just mentioned: there is an antisymmetrization going on, where we have to introduce a minus sign between diagrams that differ only by the interchange of two incoming or two outgoing electrons or positrons, or of an incoming electron with an outgoing positron. So if you have a diagram which is exactly the same but with the two incoming electrons interchanged, you have to add those two diagrams-- you have to add all matrix elements together when calculating the amplitude-- but you have to introduce a relative minus sign when you interchange those two particles. So with that, we can now basically calculate whatever QED process we want. All the tools are here. What we want to do next, in the next video and also in the recitation and homework, is go through a few examples to get a little practice with this. There are a number of tricks which will come in handy, and I'll explain those in a separate video. They are just mathematical tricks which allow us to quickly evaluate the products of spinors and matrices in the matrix elements, and so on. All right, that's it for this video. Again, there are going to be another two or three videos which deal with actually evaluating, or calculating, matrix elements. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L98_Nuclear_Physics_Fusion.txt | MARKUS KLUTE: Welcome back to 8.701. In this lecture, we talk about nuclear fusion. What we mean by fusion is energy production by two light nuclei fusing together to produce a heavier one which is more tightly bound. And again, we can understand this from the empirical mass formula. The difficulty in nuclear fusion is that we now have to overcome the Coulomb barrier from the other side: we have to bring two light nuclei together and overcome the Coulomb barrier in order to form this heavier and more tightly bound state. You might think that you can just take two beams of protons, like we do at the LHC, collide them, and create heavier nuclei. But the problem is that most of the nuclei will scatter elastically and will not form a new bound state. So the practical way to overcome the Coulomb barrier is by creating a confined mixture and supplying heat, such that the thermal energy is enough to overcome the Coulomb barrier. You can estimate how much energy is needed: if you, for example, assume a Coulomb barrier of about 5 MeV, this implies temperatures of 5 times 10 to the 10 Kelvin. So that's really, really hot. If you compare this to typical temperatures within stars, you'll find that those are only 10 to the 8 Kelvin. So now you ask, why does it still work? Why do we see fusion within the stellar medium? The answer to this is, again, quantum tunneling; and to some degree, it's also the fact that in a medium at a given temperature, the kinetic energies of the particles involved follow a Maxwell-Boltzmann distribution. So you find some particles in the tail which have enough energy to overcome the Coulomb potential, even though the mean value is below it. The processes within the sun are dominated by the so-called proton-proton cycle, or PPI cycle. This happens in a number of steps. You start with hydrogen, or protons-- the core of the sun is basically a plasma, so we can forget about atomic electrons in this context. We have two protons fusing together into a deuteron-- a bound state of a proton and a neutron-- via the weak interaction. Here you find, for the first time, neutrinos being produced in the sun. Then the deuteron, together with another proton, produces helium-3. And the helium-3, again, is used to supply the third step: helium-3 is used to produce helium-4. So the end product is helium-4 here, and energy. If you combine all of those three steps, you find that you start with four protons, and you produce helium-4 plus positrons, neutrinos, photons, and energy. In fact, as I was just saying, this all happens within the hot plasma: the positrons are annihilated with electrons, which are part of the plasma, adding another MeV of energy to this. All right, this is the dominant energy-production mechanism within the sun, but it's not the only one. Also quite interesting is the so-called carbon cycle, contributing about 3% to the sun's energy output. Here, carbon basically works as a catalyst. You have carbon-12 and a proton producing nitrogen-13; the nitrogen-13 decays to carbon-13; carbon-13 together with a proton gives nitrogen-14; nitrogen-14 with a proton gives oxygen-15; oxygen-15 decays to nitrogen-15; and then, last but not least, nitrogen-15 together with a proton produces carbon-12 again, and helium. The reactions are written out below.
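Spelled out as reactions-- these are the standard CNO-cycle steps, filling in the intermediate isotopes named in the prose above:

```latex
{}^{12}\mathrm{C} + p \to {}^{13}\mathrm{N} + \gamma, \qquad
{}^{13}\mathrm{N} \to {}^{13}\mathrm{C} + e^{+} + \nu_e, \qquad
{}^{13}\mathrm{C} + p \to {}^{14}\mathrm{N} + \gamma,
{}^{14}\mathrm{N} + p \to {}^{15}\mathrm{O} + \gamma, \qquad
{}^{15}\mathrm{O} \to {}^{15}\mathrm{N} + e^{+} + \nu_e, \qquad
{}^{15}\mathrm{N} + p \to {}^{12}\mathrm{C} + {}^{4}\mathrm{He}
```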
So you see that carbon-12 is the catalyst here, which is used to produce helium and energy. If you combine this cycle, you find that, again, from four protons, or four hydrogen atoms, you produce helium, positrons, neutrinos, photons, and energy-- very similar, as you can see, to the combined proton-proton chain, with the exception that there is one additional photon. And again, here, the positrons supply additional energy when they annihilate with electrons. Great. So we have seen that we do two things: we create heavier elements starting from hydrogen, and we produce energy. This is nice-- you start with hydrogen or deuterium, two elements which are very abundant, and you produce energy. So the question comes up whether or not you can actually use this on Earth, in a controlled environment, in order to produce energy and solve many of the ongoing issues we have on this planet. There are several efforts underway, going back all the way to the '50s of the last century. The most prominent one currently is the so-called ITER project, which is an international collaboration and a project where one tries to build a fusion reactor in France. I can already tell you that the next stage for this, in about five years, is to complete the construction of the project and produce energy for the first time in this controlled environment. It will take another 10 to 15 years on this research road map in order to be able to produce nuclear fusion power reactors which can be used in some sort of commercial way. There are a few other interesting projects which use different magnet technologies and which might have a faster pathway to success. But coming back to the story here: you could start with protons again, in a proton-proton reaction, but it turns out that this is a rather slow process, and not very promising for a controlled reaction. Deuterium and tritium, however, are very promising. Note that for deuterium you have to overcome the same Coulomb barrier-- the charges involved are the very same-- but the cross-sections are higher, and therefore the likelihood for the process to occur is higher, so it can happen much faster. Deuterium is, again, as I was saying before, very abundant; you can just extract it from water. Tritium is a little bit more difficult to produce and to control, because it's radioactive and has some really not-so-good features. This is a model picture of ITER. Again, this is an international project. It has a rather bad reputation these days-- it's by far the most expensive scientific endeavor-- but I think this is an investment in the future of this planet, and hopefully it will succeed in the next years with this project, and in the long run with making nuclear fusion available for energy production. What you see here, the key feature of this reactor, is the toroidal magnets which confine the plasma. Electric fields are also used in order to heat the plasma up first: you have to provide heat up to the point where the heat being produced in the fusion process is sufficient to be self-sustaining. Confining the plasma, providing enough energy, handling the radiation, and so on-- these are all very difficult problems to solve. |
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L41_QED_Free_Wave_Equation.txt | [SQUEAKING] [RUSTLING] [CLICKING] MARKUS KLUTE: Welcome back to 8.701. So we switch gears now and talk about quantum electrodynamics, QED. And we start the discussion by going back to free wave equations. Now, one could argue that we are interested in collisions and in decays of particles, so why do we discuss free wave equations? But the theory we discussed last week, which we used in order to get a hold on Feynman diagrams and calculations, was very simplified. And one of the aspects not considered in this theory was the fact that particles carry spin. So we had a theory which was really only applicable to scalars. Now, by walking through wave equations, we can see how we can incorporate or make use of the fact that particles actually do carry spin. So let's do this one by one. We start off with our relativistic energy-momentum relation-- E squared is equal to p squared plus m squared. We express energy and momentum via quantum mechanical operators. And so immediately, by putting this in, we find this equation here, which is the so-called Klein-Gordon wave equation. So if you look at this equation, we see that there is a second derivative in time. What we want instead is a first-order equation in both derivatives, treating space and time on the same footing-- we want our wave equations to be Lorentz invariant, for example. So we'll just start writing this down in general terms, and then make sure that this equation is consistent with the relativistic relation we just saw on the previous slide. We'll just write this down here: we have a first derivative in time and a first derivative in space, and we'll just say there are constants relating those two. So the sigmas are just unknown constants. If you now square this, trying to recover the Klein-Gordon equation, and relate the coefficients, you find this relationship here. So the sigma squared are all the same and equal to 1. But you also see that the sigmas anti-commute, which is not possible for numbers. So the sigmas need to be matrices. You also see that this only holds true here for m equal to 0. So this equation here is true for a massless particle. All right, so if we then try to find solutions for those relations, we find that they can be fulfilled by the 2 by 2 Pauli matrices. You might have seen this already, hopefully, in the discussion of atomic physics. And there, those Pauli matrices associate spin to electrons. So this is exactly what we have in mind here also. Now, using this definition, we can rewrite the Weyl equations: energy times the field is equal to minus sigma times the momentum times the field. And to find a second equation, we just flip the sign. The chi here and the phi spinors are two-dimensional vectors, and the sigmas are our Pauli matrices. Good. So we have the relation of [INAUDIBLE]. So we can go a step further. Now we want to introduce a mass term as well. Those equations hold for massless particles, so we're going to introduce mass. So we can rewrite this equation and introduce the mass term here, again with a coefficient. So this phi here is a four-component spinor. And it stands for the particle, its antiparticle, and the two spin states. So that's combining the two equations we had here.
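For reference, here are the equations just described, written out in natural units (hbar = c = 1). These are the standard textbook forms; since the slides are not reproduced, take the sign conventions as assumptions:

$$
\left(\frac{\partial^2}{\partial t^2} - \nabla^2 + m^2\right)\phi = 0 \qquad \text{(Klein-Gordon)}
$$

$$
E\,\phi = -\,\vec{\sigma}\cdot\vec{p}\;\phi, \qquad E\,\chi = +\,\vec{\sigma}\cdot\vec{p}\;\chi \qquad \text{(Weyl equations, } m = 0\text{)}
$$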
So you see one is for particles and one is for antiparticles, each with the two spin states. So we combine this in one equation, and we add this mass term. If you try to find the solutions here, you find that alpha is a 4 by 4 matrix which has the sigmas-- the Pauli matrices-- on the off-diagonal elements. And beta is a diagonal 4 by 4 matrix with the identity on the upper two components, and minus the identity on the lower two components. So with this, this is already the Dirac equation. We can rewrite the Dirac equation in the covariant form, where we have just defined a new matrix here, the so-called gamma matrix, which you build out of the matrices beta and alpha defined on the previous slide. Good. So we have this new matrix and this new equation here, which is the Dirac equation. And it holds for particles with two spin states-- for example, spin-1/2 particles-- and it holds for particles which have mass. So that's great. This is now our starting point for the discussion. In the next lecture, we'll look at solutions of this equation. |
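A compact recap of where this lands, again in standard textbook notation (natural units assumed; the lecture's exact conventions may differ):

$$
(i\gamma^\mu \partial_\mu - m)\,\psi = 0, \qquad \gamma^0 = \beta, \quad \gamma^i = \beta\,\alpha^i,
$$

where psi is the four-component spinor and alpha and beta are the 4 by 4 matrices described above.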
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L101_Instrumentation_Particle_Interaction_with_Matter.txt | [SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: Welcome back to 8.701. We are starting a new chapter on instrumentation. And in this first section, we'll discuss the interaction of particles with matter. So what happens when particles traverse through a piece of material? The underlying principle of detection is that we have to have some sort of interaction of the material of the detector with the particles going through. And there needs to be some sort of transfer of energy which can be identified. Then, that piece of energy can be amplified, separated from noise, and so on. But the first part of any detection process is this interaction of the particle with matter. We can ask, what kind of particles can we actually identify? Electrons, muons, pions, kaons, protons, neutrons, heavy ions, and photons. But the key in this list of particles is that those particles have to be stable. So we cannot directly identify the tau, as the tau decays before it has a chance to interact with the detector. The same for the top quark, the Higgs boson, and so on. Interesting are neutrinos. Neutrinos interact with the detector very, very rarely. When they do, we actually detect a signal that is not of the neutrino directly, but of the products of the interaction. So we will split this discussion up into the interaction of neutral particles and charged particles, and we start with the photon. So the photon interacts with detector material. With material in general, we have three leading effects: the photo effect, Compton scattering, and pair production. In the photo effect, we have a photon interacting with an atom and kicking out an electron. And then your detector has a chance to identify the energy and the momentum of the electron. This concept is used in photomultiplier tubes, where the kicked-out electron is further amplified, and that leads to a shower of electrons which can be measured, like the photodiode [? species ?] effect. We have discussed the Compton effect quite a bit. Here, the energy of the scattered electron can be inferred from the energy of the scattered photon. And then there's pair production. Pair production dominates at high energies. And typically, it's part of the initiation process of electromagnetic showers in calorimeters. That's great. So what happens in the calorimeter-- and we'll talk about this more later-- is that an incoming photon or electron starts a cascade: photons convert into pairs of electrons and positrons, and additional photons are produced. In tracking detectors, this is unwanted, so we therefore build tracking detectors rather thin. We don't want the confusion of additional charged particles where we are trying to measure particle trajectories. So this plot here shows you the cross-section as a function of the photon energy. And you see here very nicely those three effects contributing to the total cross-section. For low energies-- in the range of some 100 keV-- the photo effect dominates. Then there is this intermediate range from about 100 keV to about 10 MeV where we see the effect of Compton scattering. And everything above this is dominated by pair production. And this here shows you that there are some differences depending on what kind of material the photon or electron interacts with, of course.
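A minimal sketch of the energy regimes just quoted. The boundary values follow the rough numbers in the lecture and shift with the material's atomic number, so treat them as illustrative assumptions only:

```python
# Minimal sketch: dominant photon-matter interaction by energy, using
# the rough boundaries quoted in the lecture (~100 keV, ~10 MeV).
# Real boundaries depend on the material's Z; these are illustrative.
def dominant_photon_process(energy_mev):
    """Return the process that dominates the total cross-section."""
    if energy_mev < 0.1:      # below ~100 keV
        return "photo effect"
    elif energy_mev < 10.0:   # ~100 keV to ~10 MeV
        return "Compton scattering"
    else:                     # above ~10 MeV
        return "pair production"

for e in (0.05, 1.0, 100.0):
    print(f"{e:7.2f} MeV -> {dominant_photon_process(e)}")
```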
Again, the main energy loss mechanism for high energy photons and electrons in matter is through pair production, and also bremsstrahlung. Bremsstrahlung is the effect where an electron or positron radiates a photon. You can characterize the materials by introducing the concept of a radiation length. And there's some confusion sometimes: the definitions are very similar, but they're not quite the same. The radiation length can be defined as the length after which an electron has lost all but 1 over e of its energy by bremsstrahlung. And you often find the definition through the mean free path length: the radiation length X0 is defined as 7/9 of the mean free path length for pair production by a photon. So those are the two definitions, and they're typically used in the regime where the corresponding process is dominant. It's a very convenient quantity because, when you're thinking about the interaction in the detector, you don't have to worry about the specific thickness and what it means in terms of energy loss-- you simply know that your piece of lead is some fraction of a radiation length, and that tells you how many of your photons, or how much of the photon energy, is being lost. Typically, when you build detector concepts for a collider experiment like ATLAS or CMS, you want the tracking volume to contain little material in terms of radiation lengths. For ATLAS and CMS, this depends on the rapidity, or the forward direction, but it varies between 30% and 200% of a radiation length. And for calorimeters, you want all the energy to be deposited in the calorimeter, with nothing leaking out the back, and therefore you design calorimeters typically with 20 or 30 radiation lengths [INAUDIBLE]. So again, when we think about how a photon or an electron leaves a footprint in a calorimeter, you start from this first electron or photon, and then this particle evolves into an electromagnetic shower. So there's this cascade effect as the particle moves through this material. The shower maximum is given here; it depends logarithmically on the energy. I introduce here the critical energy. This is where the energy loss through ionization is equal to that through bremsstrahlung. And you see this in this plot here-- it's rather small-- as a function of energy and the energy loss. Again, you see, this effect here is from ionization, and this effect here is from bremsstrahlung. The critical energy is defined as where those two energy loss mechanisms give you the same result. So this is just a normalization factor, but you see that there is this logarithmic dependency of the energy loss. You can also wonder how wide a shower actually becomes. The transverse width of the shower is given by the Moliere radius, and that's approximate: you find that 21 MeV over the critical energy, times the radiation length, gives you the transverse size of your shower. And in this example, this is 8 centimeters, compared to a shower length of 46 centimeters. This is a very quick summary of electromagnetic showers.
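A small numeric sketch of this radiation-length bookkeeping. The material constants below are assumed, illustrative values roughly those of lead, and the shower-maximum estimate is a rough Heitler-style one, not the exact parametrization on the lecture slides:

```python
# Minimal sketch of radiation-length bookkeeping for an EM shower.
# X0 and EC below are assumed values, roughly those of lead.
import math

X0 = 0.56  # radiation length [cm] (assumed, ~lead)
EC = 7.4   # critical energy [MeV] (assumed, ~lead)

def mean_energy_after(e0_mev, depth_cm):
    """Mean electron energy after depth d: E = E0 * exp(-d / X0)."""
    return e0_mev * math.exp(-depth_cm / X0)

def shower_max_depth_cm(e0_mev):
    """Rough Heitler-style estimate: shower max near ln(E0/EC) radiation lengths."""
    return math.log(e0_mev / EC) * X0

def moliere_radius_cm():
    """Transverse containment scale: R_M ~ (21 MeV / EC) * X0."""
    return 21.0 / EC * X0

print(mean_energy_after(1000.0, 1.0))  # 1 GeV electron after 1 cm
print(shower_max_depth_cm(1000.0))     # ~2.7 cm: depth of shower maximum
print(moliere_radius_cm())             # ~1.6 cm: transverse shower size
```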
You can also have nuclear showers, of course-- you have a neutron or a proton entering your calorimeter. Here, the physics is a little bit more complicated, but you can introduce similar concepts, like a nuclear interaction length for strong interactions of the hadron with the nuclei. As for the electromagnetic shower, there is a cascade developing. However, the cascade may, for example, contain a neutron, and that neutron can travel quite a distance without leaving an interaction. So you don't have this continuous kind of flow of energy; you have little clusters of energy. And in those clusters, you have not just nuclear interactions, but you can also produce new pions. The neutral pions decay into a pair of photons, and then the photons leave electromagnetic showers. So hadronic showers typically have two components-- a hadronic part, which is charged hadrons: pions, kaons, protons, neutrons; and an electromagnetic part, which comes from the decay of the neutral pions into photons. So here, just to give you a feel for the orders of magnitude, the nuclear and the electromagnetic radiation lengths are given as a function of Z. For a gas, we're talking about hundreds of meters. For light material-- aluminum and silicon-- we talk about 10 centimeters. And for heavy material-- specifically lead-- we're talking about sub-centimeter radiation lengths. Moving from the neutral particles-- the photons-- to the charged particle interactions. Here again, I'll give a summary first, and then go through the individual components. The interaction mechanisms are: multiple scattering-- elastic scattering with the atoms. This is a process which is not very much wanted because, when you try to monitor the trajectory of the particle, you don't want it to scatter and randomly change its direction or momentum. Ionization is the basic mechanism for tracking detectors. Photon radiation is an important part, through bremsstrahlung but also through Cerenkov radiation or transition radiation. And then, in scintillators, you can excite the material; and if you have a wavelength-shifting fiber material, you can cause the scintillation light to be shifted in wavelength, and then you can read this out in order to gain information about the particles going through. All right. Let's start with multiple scattering. After passing a layer of thickness L, a particle emerges with some displacement r and some angle of deflection. That is problematic because you lose information through the random process. You see here this random Gaussian-like distribution, which is rather annoying. So the key here is to minimize the radiation lengths the particle goes through. The next part is ionization. Again, this is the primary source of information we gain from tracking detectors. Typically, you have a number of primary interactions per unit length which is Poisson distributed. So it's a random process whether or not the particle sees an atom which it can ionize. And typically, in a gas, you find about 30 of those primary interactions per centimeter. You have more in denser materials. If you have kicked out an electron in your ionization process, that electron itself can again lead to secondary ionization. And once those electrons reach sufficient energy, they're sometimes visible as individual tracks themselves. They are called delta electrons-- new particles which are visible in your tracking detector. Energy fluctuations can be really, really large through ionization: sometimes you have a really hard interaction and you transfer a lot of energy to the electron, while the mean is well under control. Just to give you a feel, again, you have about 30 primary interactions per centimeter in gas. The total ionization energy you find is typically 3 times the primary ionization energy.
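As a minimal sketch of that statement, here is the Poisson picture of primary ionization in code. The mean of 30 clusters per centimeter and the factor of about 3 between total and primary ionization are the numbers quoted above; everything else (including the use of numpy) is an illustrative assumption:

```python
# Minimal sketch: primary ionization clusters along a track, modeled
# as a Poisson process with ~30 clusters/cm in a gas (lecture value).
import numpy as np

MEAN_PRIMARY_PER_CM = 30.0   # primary interactions per cm (gas, from lecture)
TOTAL_OVER_PRIMARY = 3.0     # total ionization ~ 3x primary (from lecture)

rng = np.random.default_rng(seed=1)

def sample_primary_clusters(path_cm):
    """Draw the number of primary ionization clusters on a path."""
    return rng.poisson(lam=MEAN_PRIMARY_PER_CM * path_cm)

n = sample_primary_clusters(2.0)  # a 2 cm track in gas
print(n, "primary clusters")      # fluctuates around 60
print(n * TOTAL_OVER_PRIMARY, "~ total ionization, in primary-cluster units")
```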
So you create those seeds of ionization, and then the energy moves away from this initial track. When you look at the energy loss distribution-- this is a nice plot here I made many, many years ago of the energy loss of a 100 GeV pion in a piece of silicon. So this pion loses its energy primarily through ionization, and this is a small piece of silicon which we used here. You see this typical distribution. It's called the Landau distribution, with a most probable value and then a very long tail. And this tail here is dominated by those delta electrons I was talking about. In bubble chamber or cloud chamber pictures, you see those delta electrons as little curls of ionization along the main ionization track left by your particle. The energy loss of charged particles can be calculated using the Bethe-Bloch formula. And it's a very good description in a specific energy range-- the energy range which is dominated by ionization. The formula is given here; we discuss this some more in our recitation section. But you see here, in this medium energy range, you are dominated by ionization as described by the Bethe-Bloch formula, while when you go into higher energies, you find additional energy loss-- energy loss through radiation. So we can study the details of this Bethe-Bloch formula. One interesting point is the particle dependency of the energy loss-- and you see this here shown for a muon, for a pion, and for a proton. If you measure the energy loss of a specific particle in a reasonable momentum range, you can use that information in order to learn which particle traveled through your detector. So you can use energy loss, in some cases in combination with the momentum measurement, in order to identify particles. Last but not least, more radiation effects. Cerenkov radiation is a very neat feature to also measure particles as they go through a specific material. It can also be used in order to identify particles. The idea here is that Cerenkov radiation is emitted when a particle passes through a dielectric medium with a speed larger than the speed of light in that medium. And that causes a radiation cone-- it's like a sonic boom when an airplane passes by. And the simple classical picture is one of a wavefront cone under a specific Cerenkov angle. And then, last but not least, transition radiation. This is a process which was predicted by Ginzburg and Frank in the 1940s. The idea is that a photon is emitted when a charged particle traverses the boundary between two media. So if you have a medium and a vacuum here, for example, and the particle travels through, it polarizes the medium; when it exits, that polarization leads to an electric dipole which then starts to radiate, and you get a photon from this type of radiation. So if you measure this type of radiation, you might be able to identify that the particle traveling through the transition between two materials was an electron. All right. So this is the first introduction to the topic. In the next part, we now have to understand how we use those phenomena in order to build detectors. |
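A minimal sketch of the Cerenkov condition just described. The relation cos(theta_c) = 1/(n*beta) is the standard one; the refractive index used below (water, n ~ 1.33) is an assumed example, not from the lecture:

```python
# Minimal sketch: Cerenkov threshold and emission angle,
# cos(theta_c) = 1 / (n * beta). Light is emitted only for beta > 1/n.
import math

def cerenkov_angle_deg(beta, n):
    """Cerenkov angle in degrees, or None if below threshold."""
    if beta * n <= 1.0:
        return None  # particle slower than light in the medium
    return math.degrees(math.acos(1.0 / (n * beta)))

N_WATER = 1.33  # assumed example medium
print(cerenkov_angle_deg(0.999, N_WATER))  # ~41 degrees, ultra-relativistic
print(cerenkov_angle_deg(0.70, N_WATER))   # None: below the beta > 1/n threshold
```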
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L44_QED_Photon.txt | MARKUS KLUTE: Welcome back to 8.701. So we continue our discussion, our development of QED, with a discussion of the photon. We have already seen how we can describe electrons and positrons, anti-electrons. And now it's time to actually look at the quantum of electromagnetic fields. A few general remarks first. Quantum electrodynamics is a quantum field theory-- the quantum field theory of electrodynamic processes, describing how light interacts with matter. Specifically, all processes involving electrically charged particles where a photon is used as an exchange particle can be described by QED. The photon is the exchange particle of the theory, and it's the quantum of the electromagnetic field. But the real power of QED lies in the fact that we can treat it as a perturbation theory. We'll see that we can write down Feynman diagrams, calculate them, and use those calculations to describe processes we can observe in experiment. And since we can do this with very high precision, we can use QED in order to make forecasts, and in order to understand the inner dynamics of processes we can measure. Feynman called QED our pride and joy. And it is in the really unmatched precision of this theory where the pride and joy lies. But stepping back one step: let's start with just classical electrodynamics, and let's start from Maxwell's equations. And I'm going to use this just to make a few points and remarks. This is really not in the direct path of our development, but it connects the dots to something you have already studied, meaning classical electrodynamics. So you all have seen Maxwell's equations. They can be expressed in integral or differential form, and you can write them even more compactly than given here. So you see Gauss's law, saying that electric charges generate electric fields. You see that magnetic fields are produced by currents or by time-varying electric fields. There is Gauss's law for magnetic fields, saying that there is no magnetic charge, no magnetic monopole-- at least as far as we know; we have not observed those. And then there's Faraday's law as well. You can express the magnetic field and the electric field through potentials. And if you go one step further, you see this very nice form using four-vectors for the potential-- the scalar potential and the vector potential-- and for the charge and the current. So what we're really doing here right now is just rewriting the very same equations. And when we use this [INAUDIBLE] operator, the form of this equation looks like this. You can already see this form is very similar to the Klein-Gordon equation we just looked at. So let's go down this path a little bit more. This here is yet another form in which to write the Maxwell equations, where F is our field strength tensor. And the field strength tensor has all the physics involved: you see that you're describing the electromagnetic field and its components. And then the simplification of the Maxwell equations is sitting here. So we have the Maxwell equations in this form, and we have the Maxwell equations in this form here. Now, there's one interesting thing when we use potentials in order to describe electromagnetic processes or properties: we can actually choose the specifics of the gauge. There's a degree of freedom which we can choose, which doesn't have any impact on the physics, on the reality of the physics. And you can see this here.
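For reference, the compact covariant form alluded to above is conventionally written as follows (standard notation in natural units; the slide content itself is not reproduced). The last relation is the gauge freedom just mentioned: shifting the potential by a gradient leaves the field strength, and hence the physics, unchanged:

$$
F^{\mu\nu} = \partial^\mu A^\nu - \partial^\nu A^\mu, \qquad
\partial_\mu F^{\mu\nu} = j^\nu, \qquad
A^\mu \to A^\mu + \partial^\mu \chi .
$$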
If you make the choice that this component is 0-- this is called the Coulomb gauge-- we basically simplify our Maxwell equations again, to this point here. Great. So there's a number of things to be said. What we are basically doing here, if we fix a gauge, so if we fix our potential, is that we tie the choice of the potential to the inertial frame we're using. And you could say that's not nice; that doesn't seem Lorentz invariant. It is actually OK to do this. The issue is that, if you go from one frame to the other, you have to actually change the gauge as you go along with changing the reference frame. But there's nothing bad about it; it's just a little bit awkward. All right, then moving back, we have this equation now. And this equation obviously simplifies if there's no current or no charge around-- for free photons. So for this one, you can find that the solution is again a free wave. This was the goal of this lecture, finding this free wave. So you could have probably written this down before any of the discussion, but I just wanted to make the connection. So now, in QED, this A mu becomes our wave function for the photon. Again, this is a result of the gauge choice we made: we made a specific choice in our reference frame, and then we can describe the photon with our wave function. The epsilon here is our polarization vector, and there is a normalization factor. We always have to normalize our wave function to a specific set of units. All right, that's good. We can now analyze this. We find that those conditions are fulfilled here, basically saying that the photon is massless: the energy of the photon is equal to the momentum times c. That's great. But it's also not a surprise, because the form of this equation here is exactly that of the Klein-Gordon equation for massless particles. The Klein-Gordon equation we got from this very same relationship, so I was not surprised that this works out. All right, one more word on the polarization vector. The choice we just made-- our Coulomb gauge-- requires that the zeroth component of our polarization vector is 0, and that the polarization vector is orthogonal to the momentum vector. So in principle, you have the three-vector, and the three-vector is perpendicular to the direction of motion. And that allows us to find two polarization states which are independent of each other. So, different from our electrons before, we now don't have four states; you only have two states with independent solutions for a given momentum. All right, so with this now, we have electrons-- we can describe those-- and we can describe photons. The next step now is to look at the Feynman rules, which allow us to describe the interaction between those two. |
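Written out, the free-photon wave function and the constraints just listed take the following standard form (natural units; N is the normalization factor mentioned above, and the sign conventions are assumptions since the slides are not reproduced):

$$
A^\mu(x) = N\,\epsilon^\mu(p)\,e^{-\,i\,p\cdot x}, \qquad
p_\mu p^\mu = 0, \qquad
\epsilon^0 = 0, \qquad \vec{\epsilon}\cdot\vec{p} = 0,
$$

leaving exactly two independent transverse polarization states for a given momentum.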
MIT_8701_Introduction_to_Nuclear_and_Particle_Physics_Fall_2020 | L86_Neutrino_Physics_Mass_Scale_and_Nature.txt | MARKUS KLUTE: Welcome back to 8.701. So this is our last video in the chapter on neutrino physics. And we'll talk about mass scales and the nature of the neutrino particle very briefly. When we think about how we can measure the neutrino masses, there are a number of methods which come to mind. The first one is to just look out into the universe and try to understand how much of the total matter could come from neutrinos as a source. One has to make assumptions about the model, the cosmological models at hand. But if one accepts those potential biases or model dependencies, one finds that there's a potential reach of this kind of measurement of 20 to 50 millielectron volts (meV). And the current best limits are on the order of 0.1 to 1 electron volt. A second source, and I'll talk more about this later, is the study of neutrinoless double beta decays. Here, the current best limits are on the order of 0.2 to 0.4 electron volts, and there's a chance to reach 20 to 50 millielectron volts. This kind of measurement will also answer the question whether the neutrino is a Dirac particle or a Majorana particle, as we discussed in earlier lectures. And then there is the more classical approach of measuring the mass of the neutrino from the end point spectrum of beta decays. Here the current best limit is from the KATRIN experiment-- I talk about it on the next slide-- and it's on the order of one electron volt, with a potential reach to go down to 40 millielectron volts. So currently the range of limits is on the order of 1 electron volt or a bit better, and we will be able to go down to limits on the order of 20 to 50 millielectron volts. So here is a cartoon of how those measurements are being conducted. One starts with tritium and uses beta decay. And this lecture overall is a good first entry into the nuclear physics program, where we discuss beta decays and other nuclear decays in more detail. What we find here is an electron and a neutrino-- an antineutrino in this case-- being emitted. And so the name of the game is now to measure the electron energy as precisely as possible, and then find the sensitivity to the neutrino mass in the end point spectrum. Those small differences here in the end point spectrum then lead to an understanding of the mass of the neutrino, because the total energy in the decay needs to be conserved. And so the entire story here is about how precisely we can measure the energy of the electron in order to infer the neutrino mass from that. The latest results came out last year from the KATRIN experiment and show that the result is consistent with a neutrino mass of 0, and that we can set an upper limit at 90% confidence level: the electron neutrino mass is below 1.1 electron volts. Just as a reminder, we measure the mass of the electron neutrino in this decay, which is a sum over the individual mass eigenstates which make up the electron neutrino. Just to give historical context to this discussion, this latest result is an improvement on the order of a factor of 2 compared to previous results by other experiments, which had a very similar job: to measure the electron energy in the end point spectrum of beta decays.
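The "sum over mass eigenstates" mentioned above is conventionally written as the effective electron-neutrino mass probed in beta-decay endpoint measurements (standard definition, stated here as a math block since the lecture gives it only in words):

$$
m_\beta = \sqrt{\sum_i |U_{ei}|^2 \, m_i^2}\,,
$$

where U is the neutrino mixing (PMNS) matrix and the m_i are the masses of the individual eigenstates.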
There's a new approach, which has been proposed by Joe Formaggio here at MIT, which changes the way the electron energy is being measured. The idea is to have the decay happen in a magnetic field, and to use the cyclotron radiation of single electrons. The advantage here is that one doesn't have to move the electrons somehow into a spectrometer, but can immediately measure the energy of the electron. And the measurement of the energy then turns into a measurement of a frequency-- it basically measures the cyclotron frequency of the electron circling around in a magnetic field. So it turns out that one moves the measurement of the energy of the electron into a measurement of a frequency. And this frequency can be measured with very, very high precision. So there's some hope that this kind of measurement leads to very, very precise results for the energy of the electron and, with that, the mass of the neutrino. The last slide here is now about how we can figure out whether the neutrino has Dirac or Majorana nature. And this can be done-- the high sensitivity comes from so-called neutrinoless double beta decays. One starts with nuclear decays where two electrons are emitted, but no neutrino. This requires that in this process there's a transition involving the neutrino in which the neutrino has to be its own antiparticle. And that just means that the neutrino is of Majorana nature. This is being done by measuring, again, the energy spectrum. You typically have all kinds of background contributions, but also backgrounds from double beta decays with two neutrinos. So you see this spectrum here, and then you look at the end point of this part here and find that there is a sharp peak in the combined energy of the two electrons. The issue is that forecasting where this peak is requires proper knowledge of the dynamics inside the nuclei here. Those measurements are being conducted-- there are many of them, in various nuclear transitions or decays-- and they haven't yielded a positive result yet. Research is still going on on this end. |
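A minimal sketch of the frequency-based idea described above: the cyclotron frequency of a single electron in a magnetic field, f = eB / (2*pi*gamma*m_e). The field strength and the electron kinetic energy used below are assumed example values, not numbers from the lecture:

```python
# Minimal sketch: cyclotron frequency of an electron in a magnetic
# field, f = e*B / (2*pi*gamma*m_e). Measuring this frequency very
# precisely is the idea behind the approach described above.
import math

E_CHARGE = 1.602176634e-19   # electron charge [C]
M_E = 9.1093837015e-31       # electron mass [kg]
M_E_KEV = 511.0              # electron rest energy [keV]

def cyclotron_frequency_hz(kinetic_kev, b_tesla):
    gamma = 1.0 + kinetic_kev / M_E_KEV  # relativistic factor
    return E_CHARGE * b_tesla / (2.0 * math.pi * gamma * M_E)

# Assumed example: electron near the tritium beta-decay endpoint
# (~18.6 keV) in a 1 tesla field:
f = cyclotron_frequency_hz(18.6, 1.0)
print(f / 1e9, "GHz")  # ~27 GHz, a microwave frequency
```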
Hidden_Figures_Black_History | An_unsung_hero_of_the_civil_rights_movement_Christina_Greer.txt | On August 28th, 1963, Martin Luther King Jr. delivered his “I Have a Dream” speech at the March on Washington for Jobs and Freedom. That day, nearly a quarter million people gathered on the national mall to demand an end to the discrimination, segregation, violence, and economic exclusion black people still faced across the United States. None of it would have been possible without the march’s chief organizer – a man named Bayard Rustin. Rustin grew up in a Quaker household, and began peacefully protesting racial segregation in high school. He remained committed to pacifism throughout his life, and was jailed in 1944 as a conscientious objector to World War II. During his two-year imprisonment, he protested the segregated facilities from within. Wherever Rustin went, he organized and advocated, and was constantly attuned to the methods, groups, and people who could help further messages of equality. He joined the Communist Party when black Americans’ civil rights were one of its priorities, but soon became disillusioned by the party’s authoritarian leanings and left. In 1948, he traveled to India to learn the peaceful resistance strategies of the recently assassinated Mahatma Gandhi. He returned to the United States armed with strategies for peaceful protest, including civil disobedience. He began to work with Martin Luther King Jr. in 1955, and shared these ideas with him. As King’s prominence increased, Rustin became his main advisor, as well as a key strategist in the broader civil rights movement. He brought his organizing expertise to the 1956 bus boycotts in Montgomery, Alabama — in fact, he had organized and participated in a transportation protest that helped inspire the boycotts almost a decade before. His largest-scale organizing project came in 1963, when he led the planning for the national march on Washington. The possibility of riots that could injure marchers and undermine their message of peaceful protest was a huge concern. Rustin not only worked with the DC police and hospitals to prepare, but organized and trained a volunteer force of 2,000 security marshals. In spite of his deft management, some of the other organizers did not want Rustin to march in front with other leaders from the south, because of his homosexuality. Despite these slights, Rustin maintained his focus, and on the day of the march he delivered the marchers' demands in a speech directed at President John F. Kennedy. The march itself proceeded smoothly, without any violence. It has been credited with helping pass the 1964 Civil Rights Act, which ended segregation in public places and banned employment discrimination, and the 1965 Voting Rights Act, which outlawed discriminatory voting practices. In spite of his decades of service, Rustin’s positions on certain political issues were unpopular among his peers. Some thought he wasn’t critical enough of the Vietnam War, or that he was too eager to collaborate with the political establishment including the president and congress. Others were uncomfortable with his former communist affiliation. But ultimately, both his belief in collaboration with the government and his membership in the communist party had been driven by his desire to maximize tangible gains in liberties for black Americans, and to do so as quickly as possible. Rustin was passed over for several influential roles in the 1960s and 70s, but he never stopped his activism.
In the 1980s, he publicly came out as gay, and was instrumental in drawing attention to the AIDS crisis until his death in 1987. In 2013, fifty years after the March On Washington, President Barack Obama posthumously awarded him the Presidential Medal of Freedom, praising Rustin’s “march towards true equality, no matter who we are or who we love.” |
Hidden_Figures_Black_History | Why_should_you_read_Toni_Morrisons_Beloved_Yen_Pham.txt | A mirror that shatters without warning. A trail of cracker crumbs strewn across the floor. Two tiny handprints that appear on a cake. Everyone at 124 Bluestone Road knows their house is haunted— but there’s no mystery about the spirit tormenting them. This ghost is the product of an unspeakable trauma; the legacy of a barbaric history that hangs over much more than this lone homestead. So begins "Beloved," Toni Morrison’s Pulitzer Prize-winning novel about the suffering wrought by slavery and the wounds that persist in its wake. Published in 1987, "Beloved" tells the story of Sethe, a woman who escaped enslavement. When the novel opens, Sethe has been living free for over a decade. Her family has largely dissolved— Sethe’s mother-in-law died years earlier, and her two sons ran away from fear of the specter. Sethe’s daughter Denver remains in the house, but the pair live a half-life. Shunned by the wider community, the two have only each other and the ghost for company. Sethe is consumed by thoughts of the spirit, whom she believes to be her eldest daughter. When a visitor from Sethe’s old life returns and threatens the ghost away, it seems like the start of a new beginning for her family. But what comes in the ghost’s place may be even harder to bear. As with much of Morrison’s work, "Beloved" investigates the roles of trauma and love in African-American history. Morrison writes about black identities in a variety of contexts, but her characters are united by their desire to find love and be loved— even when it’s painful. Some of her novels explore when love challenges social conventions, like the forbidden affection that grows between the townsfolk of "Paradise" and their fugitive neighbors. Other works examine how we can be blind to the love we already possess. In "Sula," one character realizes that it’s not her marriage, but rather, one of her friendships that embodies the great love of her life. Perhaps Morrison’s most famous exploration of the difficulty of love takes place in "Beloved." Here, the author considers how the human spirit is diminished when you know the things and people you love most will be taken away. Morrison shows that slavery is destructive to love in all forms, poisoning both enslaved people and their enslavers. "Beloved" examines the dehumanizing effects of the slave trade in numerous ways. Some are straightforward, such as referring to enslaved people as animals with monetary value. But others are more subtle. Sethe and Paul D.— the visitor from her old plantation— are described as trying to “live an unlivable life.” Their coping mechanisms are different; Sethe remains mired in her past, while Paul D. dissociates himself completely. But in both cases, it’s clear each character has been irreparably scarred. Morrison also blends perspectives and timelines, to convey how the trauma of slavery ripples across various characters and time periods. As she delves into the psyche of townspeople, enslavers, and previously enslaved people, she exposes conflicting viewpoints on reality. This tension shows the limitations of our own perspectives, and the ways in which some characters are actively avoiding the reality of their actions. But in other instances, the characters’ shifting memories align perfectly; capturing the collective trauma that haunts the story. 
Though "Beloved" touches on dark subjects, the book is also filled with beautiful prose, highlighting its characters’ capacity for love and vulnerability. In a stream-of-consciousness sequence written from Sethe’s perspective, Morrison unspools memories of subjugation alongside moments of tenderness; like a baby reaching for her mother’s earrings, spring colors, and freshly painted stairs. Sethe’s mother-in-law had them painted white, she recalls, “so you could see your way to the top… where lamplight didn’t reach." Throughout the book, Morrison asks us to consider hope in the dark, and to question what freedom really means. She urges readers to ponder the power we have over each other, and to use that power wisely. In this way, "Beloved" remains a testimony to the destructiveness of hate, the redeeming power of love, and the responsibility we bear to heed the voices of the past. |
Hidden_Figures_Black_History | How_one_journalist_risked_her_life_to_hold_murderers_accountable_Christina_Greer.txt | In March of 1892, three Black grocery store owners in Memphis, Tennessee, were murdered by a mob of white men. Lynchings like these were happening all over the American South, often without any subsequent legal investigation or consequences for the murderers. But this time, a young journalist and friend of the victims set out to expose the truth about these killings. Her reports would shock the nation and launch her career as an investigative journalist, civic leader, and civil rights advocate. Her name was Ida B. Wells. Ida Bell Wells was born into slavery in Holly Springs, Mississippi on July 16, 1862, several months before the Emancipation Proclamation released her and her family. After losing both parents and a brother to yellow fever at the age of 16, she supported her five remaining siblings by working as a schoolteacher in Memphis, Tennessee. During this time, she began working as a journalist. Writing under the pen name “Iola,” by the early 1890s she gained a reputation as a clear voice against racial injustice and became co-owner and editor of the Memphis Free Speech and Headlight newspaper. She had no shortage of material: in the decades following the Civil War, Southern whites attempted to reassert their power by committing crimes against Black people including suppressing their votes, vandalizing their businesses, and even murdering them. After the murder of her friends, Wells launched an investigation into lynching. She analyzed specific cases through newspaper reports and police records, and interviewed people who had lost friends and family to lynch mobs. She risked her life to get this information. As a Black person investigating racially motivated murders, she enraged many of the same southern white men involved in lynchings. Her bravery paid off. Most whites had claimed and subsequently reported that lynchings were responses to criminal acts by Black people. But that was not usually the case. Through her research, Wells showed that these murders were actually a deliberate, brutal tactic to control or punish black people who competed with whites. Her friends, for example, had been lynched when their grocery store became popular enough to divert business from a white competitor. Wells published her findings in 1892. In response, a white mob destroyed her newspaper presses. She was out of town when they struck, but they threatened to kill her if she ever returned to Memphis. So she traveled to New York, where that same year she re-published her research in a pamphlet titled Southern Horrors: Lynch Law in All Its Phases. In 1895, after settling in Chicago, she built on Southern Horrors in a longer piece called The Red Record. Her careful documentation of the horrors of lynching and impassioned public speeches drew international attention. Wells used her newfound fame to amplify her message. She traveled to Europe, where she rallied European outrage against racial violence in the American South in hopes that the US government and public would follow their example. Back in the US, she didn’t hesitate to confront powerful organizations, fighting the segregationist policies of the YMCA and leading a delegation to the White House to protest discriminatory workplace practices. She did all this while disenfranchised herself. Women didn’t win the right to vote until Wells was in her late 50s. And even then, the vote was primarily extended to white women only.
Wells was a key player in the battle for voting inclusion, starting a Black women’s suffrage organization in Chicago. But in spite of her deep commitment to women’s rights, she clashed with white leaders of the movement. During a march for women’s suffrage in Washington D.C., she ignored the organizers’ attempt to placate Southern bigotry by placing Black women in the back, and marched up front alongside the white women. She also chafed against other civil rights leaders, who saw her as a dangerous radical. She insisted on airing, in full detail, the atrocities taking place in the South, while others thought doing so would be counterproductive to negotiations with white politicians. Although she participated in the founding of the NAACP, she was soon sidelined from the organization. Wells’ unwillingness to compromise any aspect of her vision of justice shined a light on the weak points of the various rights movements, and ultimately made them stronger— but also made it difficult for her to find a place within them. She was ahead of her time, waging a tireless struggle for equality and justice decades before many had even begun to imagine it possible. |
Hidden_Figures_Black_History | What_is_Juneteenth_and_why_is_it_important_Karlos_K_Hill_and_Soraya_Field_Fiorio.txt | One day, while hiding in the kitchen, Charlotte Brooks overheard a life-changing secret. At the age of 17, she’d been separated from her family and taken to William Neyland’s Texas Plantation. There, she was made to do housework at the violent whims of her enslavers. On that fateful day, she learned that slavery had recently been abolished, but Neyland conspired to keep this a secret from those he enslaved. Hearing this, Brooks stepped out of her hiding spot, proclaimed her freedom, spread the news throughout the plantation, and ran. That night, she returned for her daughter, Tempie. And before Neyland’s spiteful bullets could find them, they were gone for good. For more than two centuries, slavery defined what would become the United States— from its past as the 13 British colonies to its growth as an independent country. Slavery fueled its cotton industry and made it a leading economic power. 10 of the first 12 presidents enslaved people. And when US chattel slavery finally ended, it was a long and uneven process. Enslaved people resisted from the beginning— by escaping, breaking tools, staging rebellions, and more. During the American Revolution, Vermont and Massachusetts abolished slavery while several states took steps towards gradual abolition. In 1808, federal law banned the import of enslaved African people, but it allowed the slave trade to continue domestically. Approximately 4 million people were enslaved in the US when Abraham Lincoln was elected president in 1860. Lincoln opposed slavery, and though he had no plans to outlaw it, his election caused panic in Southern states, which began withdrawing from the Union. They vowed to uphold slavery and formed the Confederacy, triggering the start of the American Civil War. A year into the conflict, Lincoln abolished slavery in Washington, D.C., legally freeing more than 3,000 people. And five months later, he announced the Emancipation Proclamation. It promised freedom to the 3.5 million people enslaved in Confederate states. But it would only be fulfilled if the rebelling states didn’t rejoin the Union by January 1st, 1863. And it bore no mention of the roughly 500,000 people in bondage in the border states of Delaware, Maryland, Kentucky, and Missouri that hadn’t seceded. When the Confederacy refused to surrender, Union soldiers began announcing emancipation. But many Southern areas remained under Confederate control, making it impossible to actually implement abolition throughout the South. The war raged on for two more years, and on January 31st, 1865, Congress passed the 13th Amendment. It promised to end slavery throughout the US— except as punishment for a crime. But to go into effect, 27 states would have to ratify it first. Meanwhile, the Civil War virtually ended with the surrender of Confederate General Robert E. Lee on April 9th, 1865. But although slavery was technically illegal in all Southern states, it still persisted in the last bastions of the Confederacy. There, enslavers like Neyland continued to evade abolition until forced. This was also the case when Union General Gordon Granger marched his troops into Galveston, Texas, on June 19th and announced that all enslaved people there were officially free— and had been for more than two years. Still, at this point, people remained legally enslaved in the border states.
It wasn’t until more than five months later, on December 6th, 1865, that the 13th Amendment was finally ratified. This formally ended chattel slavery in the US. Because official emancipation was a staggered process, people in different places commemorated it on different dates. Those in Galveston, Texas, began celebrating “Juneteenth”— a combination of “June” and “nineteenth”— on the very first anniversary of General Granger’s announcement. Over time, smaller Juneteenth gatherings gave way to large parades. And the tradition eventually became the most widespread of emancipation celebrations. But, while chattel slavery had officially ended, racial inequality, oppression, and terror had not. Celebrating emancipation was itself an act of continued resistance. And it wasn't until 2021 that Juneteenth became a federal holiday. Today, Juneteenth holds profound significance as a celebration of the demise of slavery, the righteous pursuit of true freedom for all, and a continued pledge to remember the past and dream the future. |