MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_21_Thomson_Scattering_NonCollective.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So Thomson scattering-- as you remember, we set up the problem so that we had some scattering particle here. We had some incoming wave with a load of parallel wavefronts, as some plane wave. It's got a wave vector Ki in the i hat direction here. We said that there was some origin to our system, and the particle was at a distance R from our origin. And we were going to observe the scattered light in another direction with our spectrometer here. And that scattered wave is also going to be a plane wave that's going to have a wave vector Ks in the s hat direction. There's going to be two vectors, one from the particle and one from the origin, pointing towards our observer here. And we call those R and R prime. But one of the first approximations we said is that imagine that plasma where all of our particles have some vector R is relatively small compared to the distance to our observer. And then we can say that this-- our vector is roughly the same as the R prime vector here. The position of our particle, R of t, is equal to whatever it started off plus its velocity times time. And so we're assuming that these particles are going in roughly straight lines. And we also wanted to distinguish between the time t prime at which the particle is scattering the radiation and the time t at which we're observing the radiation. And we found out through a little bit of algebra that the scattered frequency omega s is equal to omega i times 1 minus i hat dot beta over 1 minus s hat dot beta, where beta in this case is defined as the velocity over the speed of light. And this was effectively the origin of our double Doppler shift. The particle sees a Doppler shift of the wave that's incident on it, and then the observer sees a Doppler shift of the emitted wave because the particle is moving. And we found that the electric field-- if we said that we had some incident electric field E sub i at r and t prime, which was equal to some electric field strength and some polarization. That's why this is the little vector. And we're going to assume it's a plane wave, so it's just cosine of Ki dot r minus omega i t prime, like that. So it's this wave with the incident K vector and the incident wave frequency. Then our scattered radiation that we observe at R at time t, which is not the same as t prime, was going to be equal to the classical electron radius over the distance, because this thing is scattering into 4 pi steradians. And then there was a factor that looked like s hat cross s hat cross the incident radiation-- whoop-- at r prime and t prime like this. So we scatter some of that incident radiation. We scatter it into a shape that's given by this s hat cross s hat cross. And later on I'll start referring to this, including that second cross product, as just a tensor capital pi, which transforms the incoming electric field here. And we said that if the velocity of our particle-- this v here-- is equal to 0-- if we're just dealing with a stationary particle-- then our scattered electric field that we observe at our detector is simply going to have a very simple form, which has-- well, r e over R, with the tensor pi acting on the strength of the electric field. That gives us that nice doughnut-shaped scattering pattern. And then we'll have a cosine factor here that simply looks like Ki dot r of 0-- r of 0 because the particle isn't moving, so that's actually r of all time-- minus omega i t plus a factor Ks dot R. 
And this final Ks dot R factor here is for all our particles because we basically said they all pick up that phase as we go here. But actually all the particles are clustered in a little plasma around about here. So this was quite boring. We just get a scattered wave, which has the same frequency and K vector as the incident wave. But if we then allow our particles to move around, we found out that we get Doppler shifts, as we would expect. And we're also dealing with a regime where we're looking at elastic collisions. So this elastic collisions means that our incident K vector has roughly the same size as our scattered K vector. So we don't change the momentum of our photon very much. And this is-- bless you-- valid when our photon energy is much, much less than-- I've written it in terms of particle velocity-- interesting-- of me ve squared. But I imagine this is also true when it's non-relativistic as well, so I'll just write that down as well. If we violate this limit-- if we start scattering high-energy photons-- we get Compton scattering, not Thomson scattering. We have to do a full relativistic treatment. So this elastic collisions constraint comes down to us scattering off fluctuations with a K vector, which is defined as the difference between the scattered K vector and the incident K vector and a frequency, which is equal to the difference in the scattered frequency and the incident frequency. And this difference is also equal to the scattering K vector-- the difference between the scattered and the incident-- dotted into the velocity of the particle. So this looks like the classic Doppler shift that you're used to seeing. So the punchline of all of this is if you input a wave with Ki omega i, you put your laser beam through the plasma, and you measure some scattering in some direction Ks-- which, remember, you have control over because you get to choose where you put your spectrometer around your plasma here-- and you measure it on a spectrometer with some frequency omega s, then you can infer that, in order for you to be able to see this scattered light from your original probe signal, there has to be something within the plasma capable of carrying away a momentum K and an energy omega, or giving a momentum K and an energy omega to your photon here. And the strength of the signal, so the intensity at omega and K, is going to be proportional to the number of scattering things-- scatterers. And these scatterers could be, for example, particles, so electrons, or they could be waves, and I'll often refer to these waves as modes. And we will get on to-- and these modes in this case have an apparent velocity, which is equal to omega/K, so that's our phase velocity here. So this was a very hand-wavy way of trying to derive all of this. We're going to make it more rigorous by going through some Fourier transformations of all of these equations. But that's just a recap of what we talked about last week, so-- questions? Yes? AUDIENCE: [INAUDIBLE] JACK HARE: So there is no vector omega. Omega is a scalar [AUDIO OUT]. That's OK. What I like to think of them as is the momentum-- if you're thinking of it as in a particle picture, then h bar K and h bar omega are the momentum and energy of the electron that scatters them. If you're thinking about it as a wave, you don't have to times it by h bar to get that. You can just say that it's some wave with K and omega-- so like a sound wave-- omega equals sound speed times K, for example. And we'll talk a lot more about that later on, so. 
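As a numerical companion to that punchline, here is a minimal sketch of the scattering kinematics: it builds K = Ks - Ki for an elastic scattering geometry and evaluates the Doppler shift omega = K dot v. All parameter values (probe wavelength, scattering angle, electron velocity) are illustrative assumptions, not values from the lecture.

```python
import numpy as np

# Illustrative, assumed parameters (not from the lecture)
lambda_i = 532e-9                 # probe wavelength [m]
theta = np.deg2rad(90.0)          # angle between s_hat and i_hat
v = np.array([1.0e6, 0.0, 0.0])   # assumed electron velocity [m/s]

k_i_mag = 2 * np.pi / lambda_i    # |k_i|
i_hat = np.array([1.0, 0.0, 0.0])
s_hat = np.array([np.cos(theta), np.sin(theta), 0.0])

# Elastic scattering: |k_s| is approximately |k_i|
k_i = k_i_mag * i_hat
k_s = k_i_mag * s_hat
K = k_s - k_i                     # scattering wavevector

omega = np.dot(K, v)              # Doppler shift, omega = K . v  [rad/s]

print(f"|K|   = {np.linalg.norm(K):.3e} 1/m "
      f"(cf. 2 k_i sin(theta/2) = {2 * k_i_mag * np.sin(theta / 2):.3e})")
print(f"omega = {omega:.3e} rad/s  (shift of the scattered light)")
```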
But, yeah, what these represent is something that can scatter this light. If you have no plasma and you fire a laser beam through a chamber, you really shouldn't expect to see any scattered light at some random angle there because there's nothing there to scatter it. So if you see some scattering, you have to say, aha, there must be somewhere something in that chamber to scatter it, and it must be able to change the momentum of my photons by K and their energy by omega. And that limits the sorts of things that can exist inside your plasma, so. Especially for the waves and modes, they'll only be scattering at very discrete values of omega and K given by the dispersion relation. Yeah? AUDIENCE: [INAUDIBLE] JACK HARE: Oh, you mean like these? AUDIENCE: Yeah. JACK HARE: We'll get on to that in a great deal of detail later on. So these four lectures are like a wave washing on the shore. So we're going to derive Thomson scattering three times. We did the first time last week, we're going to do the second time today, and we'll start on the third time as well. And each one-- the previous derivation is just a simplification of the next derivation. And previously, we got this very, very rough picture that, if we have a single particle, we might be able to say something about the particle's properties, its velocity, if we see scattering at a certain frequency and wave vector. Next, we're going to do incoherent scattering, which is where we scatter off the distribution function of electrons. But these electrons are not correlated. They're all moving incoherently. And so we basically get scattering of each of the electrons individually. And then, finally, we're going to do collective scattering, where we scatter off the collective motion of the electrons where they're all moving together, which is a wave, so yeah. Other questions? Anything online? So before we get going with Fourier transforms, I'm going to write down some assumptions. And these assumptions don't necessarily have to be true for every plasma that we're dealing with. But they are assumptions that we've made in order to do this Thomson scattering derivation. If you want to violate these assumptions, you need to go and check where this assumption comes in and re-derive your Thomson scattering. So first of all, we are scattering from some volume V, which has N electrons inside it. The electrons have a charge of e minus, and there are N/Z ions with a charge of Ze. This is pretty standard. But this is effectively a statement of quasi-neutrality. Secondly, we are mostly going to drop relativistic corrections. So if we want to look at the fully non-relativistic case, we will drop terms on the order of v/c. So if we do some sort of expansion and we see a term with v/c, we'll just drop it. So we're only keeping the zeroth-order terms. If we want to do this sort of quote marks "relativistic," then we will keep the order of v/c terms, but we'll drop the terms on the order of v squared upon c squared. So you can see this is like a Taylor expansion. We're going to claim that they're much less than 1. If you want to do the really relativistic, because obviously this still neglects some relativistic terms, then you should just go and derive Compton scattering instead, which we're not going to do. I cannot stress this enough. There will be no quantum. I don't like it. There's no scattering from ions. We discussed this already. Because of the mass difference, the ions scatter much, much less. 
We will no longer think about the scattering directly from the ions, but we will think a lot about the scattering from electrons which are dragged around by the ions. We're going to assume that our observer is at a distance R which is much, much larger than the volume within which the plasma is contained to the third power. So this is a length scale now-- V to 1/3. So this is effectively the max value of this little r vector here. So the plasma is small, and it's far away. And this in turn-- this length scale here-- is much, much larger than the wavelength of the incident light. So we're seeing a large number of charges within a wavelength. They're not scattering off single particles. This is equivalent to saying, of course, that there are many electrons in V-- in the volume-- and that V is small. And this is what we needed to justify that our R prime is approximately R-- our argument here. These two vectors are roughly the same. What are we up to? Five assumptions. Assumption 6-- we're going to assume that the frequency of our incident light-- the light which is being scattered-- is greater than-- significantly greater than-- the electron plasma frequency. Don't get fooled. There are going to be lots of i's showing up here. That's always for "incident." I am not thinking about the ions anymore. They're too slow and too heavy. So if we make this approximation, I can approximate the refractive index of the plasma as basically being 1 so I don't have to think about those effects. In reality, you might want to put in a small correction for this, and it's actually quite easy to do. We also have the assumption-- I think this should be number 7, frankly-- no probe attenuation. So the optical depth at tau evaluated at our probe frequency is much, much less than 1. So the beam doesn't get absorbed. This just makes it much easier to do the calculations. If your beam was getting absorbed, you'd get different amounts of scattering depending on where you are on the beam. It's a pain, so we're not going to do that. And then, finally-- we touched on this earlier-- the velocity induced from the electric field of the incident radiation we are going to assume is much, much smaller than the thermal velocity here. And so this was saying that our particles are going in straight lines at some velocity, like the thermal velocity. And the electric field is just making them wobble ever so slightly along those lines here. And this is equivalent because this equals this velocity here from the electric field-- you can work it out as being e times the strength of the electric field over m e omega i-- if you set that to be less than V thermal, then you find out that the power-- the intensity of your laser beam-- which is equal to c epsilon 0 strength of the scattering electric field squared-- it limits this power. I won't derive it directly, but-- so you can't use-- in order for this assumption to be valid, which we'll use a lot in our derivation, our laser cannot be too strong. So if you end up using a very strong laser, you have to re-derive this using the full orbit of your particle, which includes this strongly perturbing electric field, and that's a pain. So obviously, that's not a good reason not to use a very strong laser. You want to use a nice, strong laser in order to get this Thomson scattering effect. But you should be careful about whether your Thomson scattering formalism is still valid if you use a very strong laser. Any questions on any of these assumptions? Yes? 
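As a rough numerical sanity check of two of these assumptions (number 6, omega i much greater than the electron plasma frequency, and number 8, the laser-driven quiver velocity e E / (m e omega i) much less than the thermal velocity), here is a minimal sketch with assumed, illustrative plasma and laser parameters; the numbers and the factor-of-two intensity convention are assumptions for illustration, not values from the lecture.

```python
import numpy as np

# Physical constants (SI)
e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 3.0e8

# Illustrative, assumed parameters (not from the lecture)
n_e = 1e24          # electron density [1/m^3]
T_e_eV = 100.0      # electron temperature [eV]
lambda_i = 532e-9   # probe wavelength [m]
I_laser = 1e16      # probe intensity [W/m^2]

omega_i = 2 * np.pi * c / lambda_i                 # probe (angular) frequency
omega_pe = np.sqrt(n_e * e**2 / (eps0 * m_e))      # electron plasma frequency
v_th = np.sqrt(T_e_eV * e / m_e)                   # electron thermal velocity
E0 = np.sqrt(2 * I_laser / (c * eps0))             # peak field, assuming I = c eps0 E0^2 / 2
v_quiver = e * E0 / (m_e * omega_i)                # quiver velocity in the probe field

print(f"omega_i / omega_pe = {omega_i / omega_pe:.0f}   (assumption 6 wants >> 1)")
print(f"v_quiver / v_th    = {v_quiver / v_th:.2e} (assumption 8 wants << 1)")
```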
AUDIENCE: [INAUDIBLE] We are just dealing with these particles as like points. So we're going to use a Klimontovich distribution function at some point, which has delta functions representing the position of the particles in space and in velocity space. And so we don't have to worry about uncertain things like that. And so we're not thinking about particles as waves. We're not thinking about a semi-classical treatment of electromagnetic radiation scattering particles. So this is a classical treatment. AUDIENCE: [INAUDIBLE] JACK HARE: I would have thought it's when you get to short wavelengths, so where you're getting to short wavelengths, which are getting close to the classical electron radius, so not when you're getting close to the Debye length. We'll actually talk about the fact that the Debye length is really important to this. We can definitely have wavelengths which are above or below the Debye length. But I would have thought when you get down to wavelengths on the order of the classical electron radius, then you'd need a quantum treatment. Yeah. [INAUDIBLE] Depends. If your plasma is very hot, the relativistic corrections are hard to avoid. And I'll talk a little bit about the "relativistic" corrections, which are normally used in a tokamak when you get to like 10 kiloelectron volts, because then your rest mass of your electrons is 500 kiloelectron volts, and 10 kiloelectron volts is not nothing, and you need to have a small correction. I would have thought this one is often quite hard to obey, but it's also quite easy to fix up. You just need to make sure that the dispersion relationship for your incoming light and your scattered light obeys this equation as well, so that it obeys a dispersion relationship where N is not equal to 1. That's really hard to achieve in a lot of plasmas if you have inverse Bremsstrahlung. Because your plasma is relatively cold and dense, then you have very strong probe attenuation that's been seen in lots of plasmas. Yeah. Quasi-neutrality is pretty good. Quantum's pretty good. No scattering from ions is pretty good. This one is pretty good. It's actually hard to get a laser that intense, so yeah. Other questions? Yeah? AUDIENCE: [INAUDIBLE] JACK HARE: Optical wavelengths, because they are nice to work with. [CHUCKLES] But people do X-ray Thomson scattering as well, which is really complicated, and we won't get into it. But that's where things get a bit more quantum. But, yeah, optical wavelengths are nice to work with. People have done Thomson scattering with CO2 lasers, which are about 10 microns. So that's about 10 times longer than optical wavelength. And then the first Thomson scattering measurements were actually done off the ionosphere by Salpeter, and he used radar. So he did Thomson scattering off the plasma that surrounds the Earth and measured its properties, but-- that's when the theory was first derived. But in a lab, we don't use radar because our plasmas are smaller, so-- yeah? AUDIENCE: [INAUDIBLE] JACK HARE: No, this was just-- the previous derivation was just to give you a heuristic feel for a problem-- yeah. The previous derivation was just a single particle. So none of these really talk about single particle-- well, no, they do. Look, we've put lots of electrons inside our volume, yeah. AUDIENCE: The last [INAUDIBLE] JACK HARE: Yes. Yeah. Yeah. We're going to use this as the orbit-- the "orbit"-- of the particle as its equation of motion. And if that's not true, then you need to solve. 
It will be very-- it's already quite non-linear. It'll be very non-linear at that point because it will depend on how the electric field scatters. The scattered electric field will also distort the orbit of the particle. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, exactly. Any questions on that? So let's talk about scattering for multiple electrons. So imagine I have now my incident wave coming in, Ki, and I'm still observing along Ks. And now I have a collection of particles here, and they are all oscillating, and they are all scattering light. And so I'm getting a load of e scattered from each of these. And so now the total scattered power, which is, as you remember, dP d omega s d-- oh, no, just dP d omega s OK, fine, so we're integrating over [INAUDIBLE]. Doesn't seem quite right. [VOCALIZING]-- I'll throw this back in. What did we do in the previous lecture? No. I'll leave it as that. That is now going to be equal to R squared c epsilon 0. Previously, we had a term that looked like e scattered dot e scattered star. That was for a single particle. So now we replace this with a term that looks like the sum over j and l to N of ej dot e l star. This is basically summing up all of these electric fields, and these electric fields now interfere with each other. I shouldn't have put these vertical arrows here. This is a time average, remember. There we go-- like that. And it turns out we can split this into two terms. So we split this into the term-- they're both R squared c epsilon 0. And then we have a term that looks like N times es squared. And this comes from the sum where we have j equal to l. So this is effectively the scattered light of each of the electrons summed up-- and there is N electrons scattering-- plus a term that looks like N times N minus 1 ej dot e l. And this is for j not equal to l. So we've split this sum up into the terms where we're just talking about the electric field from the same particle interfering with itself, which is obviously constructive, and the terms where these are not equal, so when the scattered electric field from one particle interferes with the scattered electric field from another particle. You might notice that there's an N minus 1 here because once we've picked which particle is j, there's only N minus 1 particles to be L. But you may also realize that in a real plasma, the difference between N and N minus 1 is very small because there's a lot of particles. So we can just write this as N squared most of the time. And so in this expression, this first term here is called the incoherent term because this just depends on each of the particles independently. And this second term is called the coherent term. And it's called the coherent term because, unless there is something correlating these particles so their electric fields have some correlation, you would expect them all to be random with respect to each other. And then when you dot them into each other and average them out, you would expect that randomness to average out to zero. So to be coherent, we need correlation, and we'll talk a lot about that later on. Now, what's the smallest length scale in a plasma? Yeah? The Debye length-- yeah, exactly. That's what makes it a plasma. So if you are probing this plasma on length scales smaller than the Debye length, you're effectively going to be looking at something that isn't a plasma. 
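Restating that splitting of the double sum in symbols (a sketch in the notation used above, with the angle brackets denoting the time average):

$$\left\langle \sum_{j,l=1}^{N} \mathbf{E}_j \cdot \mathbf{E}_l^{*} \right\rangle
= \underbrace{N\,\big\langle |\mathbf{E}_s|^{2} \big\rangle}_{j = l \ \text{(incoherent term)}}
+ \underbrace{N(N-1)\,\big\langle \mathbf{E}_j \cdot \mathbf{E}_l^{*} \big\rangle_{j \neq l}}_{\text{coherent term}}
\;\approx\; N\,\big\langle |\mathbf{E}_s|^{2} \big\rangle + N^{2}\,\big\langle \mathbf{E}_j \cdot \mathbf{E}_l^{*} \big\rangle_{j \neq l}.$$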
And by probing, I mean if our wavelength of our incoming light here, which is lambda i, which is 2 pi upon Ki-- if lambda i is smaller than the Debye length, then we end up with incoherent scattering. So this wavelength here is smaller than the length scale on which the particles can organize themselves and actually look like a plasma. So instead, we just end up looking like a random gas. But if we end up in the opposite limit, where the Debye length is less than lambda i, we get coherent scattering. Because on this length scale here, we end up being able to see the plasma as a coherent object. So this coherence is expressed in the form of waves here. And so we end up with a very important parameter that we call alpha, and we define alpha as 1 upon K lambda Debye-- yeah, kind of fibbed here. We'll get back to that in a moment. And if you have alpha less than 1, you're in the incoherent regime. If you have alpha greater than 1, you're in the coherent regime. This shouldn't have been lambda i. This should have just been lambda, and this lambda here is 2 pi upon K, where K is that Ks minus Ki. So this is saying, if the mode that you're scattering off has a wavelength, which is less than the Debye length, then you're just scattering off individual particles. If the mode you're scattering off has a wavelength that's greater than the Debye length-- you're scattering off the full plasma-- you're scattering off the waves. And you get to choose which regime you're in because-- well, you don't get to choose how dense and how hot your plasma is, which is what goes into Debye length. But you do get to choose this K by where we place our spectrometer. Let's see. I had that nice formula for K previously. Remember that K is equal to 2 Ki sine of theta upon 2, like that. So, by choosing my angle theta, I can make this Ki-- multiply by something arbitrarily small, I can make K arbitrarily small, and so I can make alpha arbitrarily large. So I get to choose by where I place my detector not just the scattering angle, but, also, I get to choose which regime I'm in. Now, it may be that, for some plasmas, this theta is so very, very, very close to the direction that the laser beam's pointing in that we can't realistically do coherent scattering. But there is still a regime where the scattering will be coherent there. And if this doesn't make sense, again, we're going to do it mathematically later on. But this is just to give you an idea of where we're headed. And the next thing that we're going to be focusing on is this incoherent regime. Yeah? AUDIENCE: [INAUDIBLE] JACK HARE: It is-- it comes from the sum of all of these together. Yeah, exactly. Effectively, the wave correlates the electron density fluctuations such that, when you scatter off those electron density fluctuations, all of the electric fields from them end up at your spectrometer in phase, and they constructively add up. So if you've done X-ray spectroscopy and you've looked at Bragg scattering, some people like to look at this in terms of Bragg scattering. But I find most people these days haven't done X-ray spectroscopy and Bragg scattering, and so it's not a useful way to describe it. But if you've seen it like that, the wave makes a grating, and that grating, therefore, gives us constructive interference. Other questions? AUDIENCE: [INAUDIBLE] JACK HARE: Just how it's defined. You'll see later on-- I think if we didn't put it in there, there'd be lots of 2 pi's showing up everywhere. 
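Here is a minimal numerical sketch of that regime choice (with assumed, illustrative parameters, not values from the lecture): it computes alpha = 1 / (k lambda De), using k = 2 k i sin(theta/2), and shows how moving the collection angle moves you between the incoherent and coherent regimes.

```python
import numpy as np

e, m_e, eps0, c = 1.602e-19, 9.109e-31, 8.854e-12, 3.0e8

def alpha(n_e, T_e_eV, lambda_i, theta_deg):
    """Scattering parameter alpha = 1 / (k * lambda_De)."""
    lambda_De = np.sqrt(eps0 * T_e_eV * e / (n_e * e**2))    # Debye length [m]
    k_i = 2 * np.pi / lambda_i
    k = 2 * k_i * np.sin(np.deg2rad(theta_deg) / 2)          # |k_s - k_i| for elastic scattering
    return 1.0 / (k * lambda_De)

# Assumed, illustrative parameters (not from the lecture)
n_e, T_e_eV, lambda_i = 1e24, 100.0, 532e-9
for theta in (10, 45, 90, 180):
    print(f"theta = {theta:3d} deg  ->  alpha = {alpha(n_e, T_e_eV, lambda_i, theta):.2f}")
```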
So when we derive this more formally, you'll see this factor alpha show up in the equations. And it will literally be a parameter alpha that multiplies the incoherent term and like a 1 over alpha or an alpha that multiplies the coherent term-- or the other way around-- other way around. And so, if alpha is large, the incoherent term gets suppressed, and the coherent term gets enhanced. And if I put it in with your lambda over lambda Debye, which is perfectly reasonable, I'd have like a 2 pi squared or something somewhere, which I don't want to do. So this is a better definition. Yeah. Yeah? Yeah. So, again, we will get to that. I don't think anything I've said here reaches the level of rigor where we can really say that or not. So this is just like a guide to where we're going next. But, like I said, we're going to derive the equation where alpha shows up explicitly as a parameter. And then we'll say, OK, depending on the value of alpha, you see more incoherent or coherent. And it's important to note that, when alpha is about 1, you have both of these spectra at the same time showing up-- both of these effects. So you're probably going to end up seeing some of both types of scattering. Yeah. Any questions on that? Yeah? AUDIENCE: [INAUDIBLE] JACK HARE: Oh, yeah, that's fine. Don't worry about it. What's 2 pi between friends? So this is-- again, this is like, what does it mean to be much less than 1? What does it mean to be much greater than 1? Does a factor of 6 matter if I tell you something is much less than 1-- if it's much greater than 1? Yeah. If that was the point, yes, fine, but-- yeah. [CHUCKLES] I can rewrite these, I guess. I can rewrite like a k for Debye length, but then that's weird because you don't know the formula for that, so yeah. Let's keep going. I'm going to need a lot of board space. Now we're going to do the Fourier transforms. So we're looking at incoherent scattering. You won't even notice where I've made the assumption of incoherent scattering. You'll only spot it, probably, when we go back and do coherent scattering. But just to be clear, we're only going to get the incoherent component out of this treatment. So our scattered electric field measured at our detector at time t is equal to r e over R-- classical electron radius-- over the distance to the spectrometer-- this tensor pi, which, remember, is equivalent to s hat cross s hat cross this, is just something that tells you the shape of the emitted field here. And this tensor is acting on the scattered-- or the incident electric field at position r and at time t prime. So if I do the Fourier transformation of this and I want to know what the frequency is-- the scattered frequency-- and because-- [CHUCKLES]-- so we're going to be swapping again very freely between omega s and nu s-- so, once again, factors of 2 pi flying everywhere. But it-- so I'm going to write this as a function of nu s because this is what we measure, and this is what Hutchinson uses in his book. But what I'm doing with Fourier transforms-- I'm going to be using omega s. So just feel free to keep adding 2 pi's and getting rid of them as you see fit. So this is just a standard definition of a Fourier transform. We have the electric field in time times by e to the i omega s t dt. Now, we already mentioned we have a discrepancy between t prime, which is going to be inside this, and t out here. So t prime is equal to t minus (R minus s hat dot lowercase r) upon c. 
And we can also say that the dispersion relationship for our scattered wave is just the vacuum dispersion relationship. So Ks is equal to omega s upon c times s hat. By the way, this is one place where, if you want to modify this to take into account the O mode dispersion in a plasma, that you can modify this here. But it doesn't actually make a big difference to the derivation. This is going to allow us to replace this s hat here like that. We're also going to say that dt is equal to dt prime times a factor of 1 minus s hat dot beta, like this. And we said that anything on the first order in v/c is much less than 1-- we drop it. So, in fact, we're going to just switch between dt and dt prime. We can't do the same for t prime and t. They are not the same time coordinate, but the amount they change by is the same because we're not dealing with relativistic time dilation in this treatment here. So now I can go back to this, and I can start replacing all of my t's with t primes here. And I'm going to end up with r e upon R-- am I really going to make that much of a jump? [VOCALIZING]-- I'm not going to make that much of a jump. I just need to go back up here and remind you that this Ei has a factor that looks like cosine of-- oh, this isn't going to work. I don't want to do it like that. And a factor of cosine Ki dot r minus omega i t prime. If I make the replacement of t prime with t, then this becomes cosine of Ks dot R, which is the shared phase for all of the particles. This is just boring-- just the phase you pick up as the wave travels to the spectrometer-- minus omega s t, minus Ks minus Ki dotted into r of 0 that we saw last time. So when we go back down here-- I'm going to bring this boring phase term out the front. I'm going to turn everything into exponentials. So I'm just going to replace this cosine with an exponential i times this phase factor here. I'm going to bring this phase factor exponential of i Ks dot R out of the front here. But what I'm going to leave inside the integral is the integral of the vector-- the tensor pi dotted into the incident electric field at r and t prime. And all of this is multiplied by this factor-- the Fourier transform factor. But now I've taken t, and I've replaced it with t prime using this equation. And so this exponential now has a term, which looks like i omega s t prime minus Ks dot r. And this Fourier transform is now in terms of dt prime instead of dt. I then remember that this electric field here has this dependence-- Ki dot r minus omega i t prime. If I take that as an exponential, it will add with all these exponential factors here. And we will end up with an equation that looks like r e upon R, exponential of this boring phase factor term i Ks dot R, integral of i dot Ei0. So this is just the strength of the electric field and its polarization-- exponential of i omega t prime minus K dot r dt prime, where, again, I'm using those definitions of omega and K that we had previously. So this omega is omega s minus omega i, and this K is Ks minus Ki. So I've now got something where all of my t parameters are t prime parameters. So I can now do this integral. I couldn't do it up here because I had a mixture of t primes inside the scattered electric field and t's for where I really am. So you've really got to take care of doing all of these Fourier transforms carefully. This pi dot Ei0-- that can go outside the integral as well. It's just constant in time. Remember, that's just s cross s cross something like that. That's set by where our spectrometer is. 
And we're assuming we've got an infinite plane wave, so the incident electric field strength doesn't change. So what is this Fourier transform? Fourier transform of an oscillating sinusoid. Just a delta function. Thank you. There's a factor of 2 pi from the Fourier transform when we get the definition of the delta function. And then we've still got this tensor dotted into the electric field vector to give us a scattering pattern. Now we have a delta function K dot v minus omega. And if you want to follow along this in Hutchinson's book, this derivation here is Hutchinson's 7.2.15-- like that. So this is actually nothing new. This is-- once again, we have just shown that our scattering is going to be off-- the scattering of the particle is going to give us a scattered frequency omega, which is K dot v. But now we've done it in terms of Fourier transform. Previously, we did it in real space and real time, and we just hand-waved our way through it. Now, we've got the spectra actually in a much more-- slightly more rigorous way. So this is still just a scattering off a single particle. If we want the scattered power off this single particle, then we again go to c epsilon 0 r squared. And now we need to do the time averaging. So to time average, we integrate between minus t/2 to t/2, and we divide by t here. We're going to take the limit of t going to infinity in a moment so it will start to look more like a Fourier transform. And this average power, again, is defined in terms of the time. So Es is a function of time. Es star is a function of time, integrated d time. But if we take the limit as t goes to infinity, we can use something called Parseval's theorem, which says that this quantity here is equal to the integral of Es in frequency space, dotted with Es star in frequency space, integrated over frequency, still with a 1 over time out the front here. So this is a kind of-- [VOCALIZING], I'm not quite sure about the 1 over time out the front there-- Parseval's theorem. Dimensions don't work. I think I should have paused earlier and asked for questions, because this is actually an excellent derivation. So let's just go back-- draw a line under this here-- see if anyone has any questions on how we got to here. OK, then, we'll go on. So now we're trying to calculate-- yeah, all right. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. Yeah, sure, sure, sure. I will pause in a second [INAUDIBLE]. I just want to get through this section first. AUDIENCE: [INAUDIBLE] JACK HARE: No, this is just the incident-- i Ei0, which is just the strength and the direction of the electric field, without any of its-- so it's-- the incident electric field is Ei0 times by this cosine. So I put all of the fluctuations and interesting time and space variation in the cosine. But this is just our strength for our electric field here. So all this is saying here is that the scattered electric field is linear in the incident electric field. All of the funkiness with time and space has gone into the delta function. So here, we want to be able to evaluate this function, but we don't actually have a great expression for Es as a function of time. But we do now have a great expression for Es as a function of frequency. So we use Parseval's theorem to convert between the time-power spectrum and the frequency-power spectrum. And that means we can now write down our scattered power dP into some scattering volume or scattering solid angle, omega s, and some scattering frequency. And this is equal to c epsilon 0 r squared times the limit of t going to infinity. 
So we're taking a long time slice. This allows us to use our Fourier transforms where the limits here are plus and minus infinity here. And we do the Fourier transform of this-- sorry. Don't do the Fourier transform. We've taken d nu from this side here, and we've put it down here to form our scattered power per frequency bin here. So we're getting rid of this integral now. And now we're trying to evaluate this term, which looks like Es dot Es star instead. And we have that equation up here. So we want this term-- I'm going to take it from here. So, first of all, we have a term that looks like r E squared upon R squared. Then this term will cancel out because we'll have it times by its complex conjugate, so we can get rid of it. Good riddance. There will be a term that looks like 4 pi squared here. We'll have a term that looks like pi dotted into the incident electric field squared. And then we'll have a term that looks like delta squared K dot v minus omega. The trouble with this is that we have a delta function squared. It's a pretty odd object because a delta function is already a pretty odd object, and squaring it makes it just odder. So we're going to do the sort of thing that makes the pure mathematicians wince. And we're going to say, [VOCALIZING], this is a little bit like a sinc function. So a delta function-- it's sort of like a sinc. And if our sinc function of t-- or, in this case, I guess, nu-- as this nu gets very, very small. So we have a function that looks like a sinc, and then we approximate the delta function as just a very, very, very narrow sinc. And if you do this, you can rewrite this delta squared as looking like t upon 2 pi, just times the delta function itself. So delta function squared of frequency is equal to t upon 2 pi delta function of frequency. This is by far the least satisfactory part of this derivation, by the way. But Hutchinson doesn't even do it. He just skips over it. So I'm trying to at least demystify where this might come from. But I don't know how to do this very, very rigorously. Let's just go with it for now. We're going to replace this delta function with just a delta function not squared because those are the sorts of things we can work with. Then we can run through all of this, and we can find out that we have r e squared c epsilon 0 limit of t going to infinity of 1/t times t upon 2 pi-- that's this factor here-- times by 4 pi squared. We still have this i dotted into the electric field strength squared. And now we just have-- I'm running out of space-- this delta function of K dot v minus omega not squared. The convenient thing about doing this is we've generated a factor of t which can cancel out with this t. So we've canceled out an infinity or a 0 that we've accidentally introduced. And now we've also got a 2 pi, and that will cancel out some of these 2 pi's. The whole thing starts to look a little bit more tractable. And so the result is-- I don't quite have space for it on this board, which is a real shame after we've come all this way. I shall write it here-- dP d omega d nu is equal to r e squared c epsilon 0 Ei0 squared. What the hell is this? I'm going to keep it as it was-- i dotted it into Ei squared times 2 pi delta K dot v minus omega. The point is that this is what we should call our spectrum. We're looking at the power scattered in a certain direction into our spectrometer per unit frequency. And so we can now evaluate this. 
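For reference, the two identities being leaned on in this stretch of the derivation, stated with the Fourier convention used above, $\hat{\mathbf{E}}_s(\omega) = \int \mathbf{E}_s(t)\, e^{i\omega t}\, dt$ (these are standard textbook results, not expressions copied from the board). Parseval's theorem reads

$$\int_{-\infty}^{\infty} \left|\mathbf{E}_s(t)\right|^{2} dt \;=\; \frac{1}{2\pi}\int_{-\infty}^{\infty} \left|\hat{\mathbf{E}}_s(\omega)\right|^{2} d\omega,$$

so the time average keeps the same 1/T on both sides, and the dimensions do balance:

$$\frac{1}{T}\int_{-T/2}^{T/2} \left|\mathbf{E}_s(t)\right|^{2} dt \;\xrightarrow{\;T\to\infty\;}\; \frac{1}{2\pi T}\int_{-\infty}^{\infty} \left|\hat{\mathbf{E}}_s(\omega)\right|^{2} d\omega.$$

The other identity is the squared delta function: writing the finite-time version $\delta_T(\omega) = \frac{1}{2\pi}\int_{-T/2}^{T/2} e^{i\omega t}\, dt = \frac{\sin(\omega T/2)}{\pi\omega}$, one finds

$$\left[\delta_T(\omega)\right]^{2} \;\xrightarrow{\;T\to\infty\;}\; \frac{T}{2\pi}\,\delta(\omega),$$

and it is exactly this factor of T over 2 pi that cancels the 1/T from the time average.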
And once again, we can say, if we're looking at power per unit frequency-- or rather, dP d omega-- the power scattered for a certain solid angle-- this tells us that, from a single particle, our spectra will just have a little delta function then. And that delta function will be at a frequency omega, which is equal to K dot v. So if our initial laser line was at some wavelength like this-- this is omega i-- then this is at omega s equals omega i plus omega-- and we say, aha. This is blueshifted. It's of higher frequency. And so means the particle is-- got some component of velocity towards us. Again, this is all just single-particle scattering, but we're now working in the Fourier domain as opposed to in real space, which means we can actually write down equations for the spectrum, which we couldn't do when we were working in real space. We just had a hand-wavy argument for it. Now I will pause. Oh, one more thing. Just so you can have contact with the book again, this is Hutchinson's equation 7.2.19. So in five equations, he does all of this. I have tried to spend a little bit more time doing it. But I will agree with anyone who says this isn't [INAUDIBLE]. Questions? Yeah? AUDIENCE: Where did that 1/t [INAUDIBLE]? JACK HARE: [VOCALIZING] Yeah. So what I have written in my notes is 1/t here. But if you set these two equal to each other, the dimensions don't add up because of the limits here, or because of what we're integrating over. So in a panic, I changed it to be t because that made the dimensions work. But it makes the rest of the derivation not work. So probably, there's some reason why we have this 1/t in front of both of them that I've missed. And I'm very happy to go and have a look at the derivation again in Hutchinson's book-- try and work out what's going on. But what we are trying to do is replace this integral here over the electric field in space with this integral over the electric field in frequency, but the dimensions don't quite work out. So that's where the 1/t was meant to come from, and that 1/t is the one that gets canceled by the t over 2 pi that we get from our shoddy treatment of the delta function. Yeah. Other question? AUDIENCE: Yeah. So this whole [INAUDIBLE] JACK HARE: If you're doing a Fourier transformation, you have to do it over infinite time. So this is just simply-- but if you put that infinity in straight away, then you start getting 0's. So we as a t, where it's a really big t. But then eventually, we cancel out, so it's like you don't actually care how big it is. So to do this rigorously, you actually have to do Laplace transformation because the laser is not-- cannot be a plane wave. It's not traveling through the plasma for all time. It turns on at some time. And so you have to do Laplace transformations where you only take into account for t greater than 0. But the mathematics of that becomes harder, so I'm doing these Fourier transforms, and there, you need infinite time. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. Sorry? AUDIENCE: [INAUDIBLE] JACK HARE: This one? AUDIENCE: Yeah. That represents the time average? JACK HARE: Yeah. This is where-- the time where we're working out the average power because we're working out the instantaneous power, we're integrating it over some interval T, and we're dividing by T. Yeah. AUDIENCE: And then we're [INAUDIBLE]. JACK HARE: And then afterwards being like, yeah, but-- how long do you need to do this for? Well, to do this for a plane wave, you should do it for a few cycles. 
But to be rigorous, you want to extend it out-- well, maybe not rigorous in this case. To be slightly more rigorous, you want to extend that integral to all time if you're really doing a Fourier transform type approach. Yeah. And this is the requirement of using Parseval's theorem, which I clearly don't know as well as I thought I did because of this problem with the units here. Yeah. Yeah. Yeah, I'm confused, because this is definitely something that has dimensions of electric field squared. This does not have dimension of electric field squared. Any other questions? The nice thing about our treatment is we said that, if we want to get the scattered spectrum from many particles, we are not interested in how the scattered electric fields in those particles interact with each other. That's the coherent part. This is the incoherent part. So if we now have the scattering spectrum from a single particle and it's showing up at a frequency that is to do with that single particle's velocity, if we want the scattering spectrum from many particles, we simply sum the scattering from each of the particles V together. So this now, you're probably thinking, starts to look like integration over a distribution function. So let's go do that. Do I need to write that down? No, I'm happy with that. Actually, I'll write it down again here so we've got it on the page as we look at it. So our scattered power into some solid angle into some frequency looks like r e squared times i dot-- why am I writing it -- put an i everywhere else. I'm just going to call it i again. So I've taken the electric field strength and polarization-- I've now split up even further into a term that's the electric field strength and its polarization. You get the polarization inside here. I've taken out this factor of Ei squared because, when times that by c epsilon 0, I can put all of that together and write it instead as the incident power of the laser over the area of the laser. So this is now an intensity in watts per meter squared here. And we often think about our lasers in terms of their intensity. This is nice units. And then, finally, the interesting bit, which is this 2 pi delta K dot v minus omega. So this is a scattering from one particle. And to be clear, it's one electron. So, again, nothing particularly new here but just now in frequency space. So now we want to integrate over some distribution function f of v d3v d3r. And remember, when we have a distribution function, we say that the density is the integral of f of v over velocity space here. So this is the density as a function of r-- the zeroth moment of the distribution function. What we're going to end up doing is we're going to be integrating this f of v times this delta function K dot v minus omega d3v. When we do that integral, this delta function is going to pick out certain things about this velocity distribution. It's only going to pick out particles which have a velocity which is equal to omega/K and it's only going to pick out particles which have a component of velocity parallel to K. That's the v dot K part as well. So when we do this integral, we end up with a scattered power spectrum that looks like-- scattered power, again, into some solid angle-- some frequency-- is equal to 2 pi r e squared, intensity of the laser beam, a factor which accounts for where the scattering light goes to, and then the interesting bit. 
And the interesting bit is-- as opposed to having the full distribution function, we now have a distribution function f sub K evaluated at omega/K. And then there's also a factor of 1/K out the front here. So let me unpack some of these bits here. This f of K-- imagine we have some full distribution function in three dimensions where we have some coordinate that is perpendicular to K and we have some coordinate, which is along K. And what we're doing is the integral of this function, integrating over all the perpendicular velocities. So our f of K is simply a one-dimensional slice through that distribution function. You can think of-- maybe, for example, in-- just in two dimensions here, we could have the K direction and the perpendicular to K direction. And our distribution function could look like-- and I'm drawing contours of f here. So, for example, this is f equals 1, 2, 3 type thing. These can be much bigger numbers, but this is just to give you an idea of some function which is peaked around about the center here, but it's not isotropic. And what our Thomson scattering is picking up is just a slice through in one direction of that distribution function here. So if I take this cut, I get out the distribution function that looks like f of K versus vK, and maybe it looks like this. So I'm only sensitive to one-- if a particle's moving in one direction-- in the K direction here. And this 1/K-- if you're wondering where that comes from, this is because dv is equal to d omega upon K. So when I change this integral from being in terms of velocity space to being in terms of frequency space, I'm going to get out this factor of 1/K here. So why did I write all of that? No, this is right. I don't need anything else. So, just to be clear, this is a 1D distribution function. Now, if your system is isotropic, this doesn't bother you at all because you can-- if you think there's lots of collisions, there's no magnetic fields, my distribution is isotropic, then I can just make one measurement of the distribution function in one direction, and I've got all I need. But if your distribution function is anisotropic because there's magnetic fields or electric fields, then you won't be able to pick up the full distribution with just one measurement. You'll need lots and lots of different measurements. For example, you'll need another K like this and another K like this. And you can effectively do a tomographic reconstruction in velocity space by taking lots of different slices at different angles here. I will just deal with a few consequences of this, and then I'll take some questions. So our scattered spectrum-- which, again, we've been writing this scattered spectrum in our differential notation as d2P d omega d nu. But every time you see this, you think, that's what I measure. I am measuring some power into some solid angle represented by my spectrometer, and I'm measuring it resolved in terms of frequency. This is what I measure here. This measures-- if I can spell "measures"-- the distribution function f of v parallel to that K scattering vector here. And so, if I have multiple Ks-- I have multiple K-- I get f of v for multiple directions. So, again, I have a plasma like this. I put my scattered light through here. This is my laser beam. It's got some Ki like that. And then I measure the scattering, for example, in this direction. We'll call that Ks1. Then my K1-- remember, K is equal to Ks minus Ki-- is equal to Ks minus Ki like that. So I measure scattering along this K1. 
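As a numerical sketch of this one-dimensional projection, assuming a Maxwellian plasma and illustrative parameters (none of the numbers below come from the lecture): the incoherent spectrum along a given K is proportional to f K of omega over k, divided by k, which for a Maxwellian is a Gaussian whose width in omega is set by k times the electron thermal speed, so the measured width gives the temperature along that K.

```python
import numpy as np

e, m_e = 1.602e-19, 9.109e-31

# Illustrative, assumed parameters (not from the lecture)
T_e_eV = 100.0        # electron temperature [eV]
lambda_i = 532e-9     # probe wavelength [m]
theta_deg = 90.0      # scattering angle

v_th = np.sqrt(2 * T_e_eV * e / m_e)                           # most-probable-speed convention
k = 2 * (2 * np.pi / lambda_i) * np.sin(np.deg2rad(theta_deg) / 2)

# Incoherent spectrum ~ f_K(omega / k) / k, with f_K a 1D Maxwellian along K
omega = np.linspace(-4 * k * v_th, 4 * k * v_th, 401)
spectrum = np.exp(-(omega / (k * v_th)) ** 2) / (k * v_th * np.sqrt(np.pi))

# Recover the temperature from the second moment (width) of the spectrum
sigma_omega = np.sqrt(np.sum(omega**2 * spectrum) / np.sum(spectrum))
T_recovered_eV = m_e * (sigma_omega / k) ** 2 / e

print(f"k = {k:.3e} 1/m, spectral width (std dev) = {sigma_omega:.3e} rad/s")
print(f"temperature recovered from the width: {T_recovered_eV:.1f} eV")
```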
If I measure instead at 90 degrees-- sorry, 180 degrees to that-- this is Ks2. Then I go Ks minus Ki. I end up there. This will be measuring direction K2. And these two will be measuring different cuts through our velocity distribution. So if it was anisotropic, they could be different. By the way, if these are at 180 degrees, these two should be at 90 degrees. They're not, because I'm bad at drawing that diagram, but that's where it should be. So you could measure orthogonal components of a velocity. And, of course, if it's fully three dimensional, you might want to scatter out of the page here. And then you would measure some scattering K that's also out of the page. So if you have these multiple lines of sight, you can reconstruct your three-dimensional f of v. The other cool thing about this-- is you might be able to measure non-Maxwellian distributions, even in one dimension. So if I just write it as f of v without the vector-- so we're thinking about this one-dimensional cut here-- if we look at our spectrum nu, and this is dP d omega-- our scattered light-- if we look at this on a log scale, then this dP d omega is just about log of f. On a log scale here, our Maxwellian would just look like a parabola. It would go as minus v squared, which is minus nu squared, like that. So if we had a Maxwellian distribution, we'd expect just to see a spectrum like this. But if we have a non-Maxwellian distribution, perhaps we've got scale, some tails coming out of the side here. And so, in principle, using this incoherent scattering, because we measure the full distribution function, if that distribution is not Maxwellian, we might be able to measure non-Maxwellian tails out here. Now, in general, these tails are weak scattering. There's not very many particles out in the tails. It's hard to make plasmas which are very non-Maxwellian. So it can be very, very hard to measure this. But if you are able to repeat your experiment lots of times and build up your signal like they did in these low-temperature plasma experiments, you may be able to measure the presence of fast electrons-- non-Maxwellian electrons out here. So this is two very cool uses for incoherent Thomson scattering. I will pause there and take questions. Maybe I'll put this back down so you can see what we were talking about before. Any questions online? Yeah? Yeah, so we saw that the scattering cross-section was like-- you expect 10 to the minus 8 of your particles-- photons-- to scatter. So the chance of it getting scattered twice would be 10 to the minus 16, which is very unlikely. So we don't worry about double scattering. I guess I could have put that on my list of assumptions, but yeah. So we don't worry about double scattering. Just one quick note-- a relativistic correction, because if you-- generally, in tokamaks, they do incoherent scattering because it's very hard to get into the alpha greater than 1 coherent scattering regime. And in tokamaks-- if you've got a nice hot tokamak, you may well have a relativistic plasma. So just a quick note here on what the relativistic correction looks like. First of all, remember, you may need to think about relativistic beaming. So this is where the radiation pattern of your scattered light beams forward. This is v/c much, much less than 1, and this is maybe v/c less than or around about 1. So, first of all, you might want to think about this relativistic beaming. But the other thing you might want to think about is, how does this affect your spectrum? 
Well, there's a long derivation of this, but there's a short version of it, which is that your spectrum-- which, again, we're writing as d2P d capital omega d nu-- the relativistic version looks a lot like your non-relativistic version, which we have written several times over on the side here. But it has a correction factor to it, and that correction is 1 plus 3 omega upon 2 omega i. This means that the whole spectrum gets shifted up. Remember that omega here is equal to K dot v. If you're wondering why this is relativistic, that's where the v is coming in. So this is a correction on the order of-- and this omega i, by the way, is just cK. And so this correction here is on the order of v/c. These are the ones we said we'd keep for our relativistic correction. What this correction means is that we get a blueshift. So the spectrum is always blueshifted-- always ends up as being at a higher frequency. You can see that by the shape of this and the fact that it depends on the sign of omega here. And, in particular, this correction is for temperatures-- electron temperatures-- on the order of the keV range. So if you're working below that, you really don't need this correction. But if you're working in a 10 keV plasma, you do need this correction. Because of this blueshift, it's important to measure both the positive shifted frequencies and the negative shifted frequencies. Because if you don't measure those, then you're expecting your spectrum for a Maxwellian to be nice and symmetric. But if you are doing the relativistic one, then your spectrum might look like this. And if you only measure one half of your spectrum-- this is frequency-- we only measure the positive frequencies or negative frequencies-- we might misinterpret what's going on here. If you're starting to work with relativistic plasmas, you may be tempted with your spectrometer, because you only have a finite number of frequencies you can measure at, to only measure positive or negative frequencies because the spectrum is symmetric. But in the relativistic case, it's not. This is like d2P. OK. And this is Hutchinson's equation 7.2.28. And he does the derivation of this properly. I am just giving you the result. Any questions on that? Just like a side note. We have just enough time to introduce coherent scattering, but not enough time to actually do all the fun mathematics of it, so we will leave that for next week. So that was incoherent scattering. We derived the scattering off a single particle. Then we integrated over the distribution function and said that the scattering off each of these particles does not interfere with the scattering off any of the other particles. Or, if it does interfere, another particle's scattering will add up, and that sum over the scattering between different particles will eventually cancel out. That is not going to be the case in coherent scattering. So, again, we're just going to sketch what's going on. Before we go into the mathematics, I want you to have a heuristic understanding of what's happening first so that you have some faith that, when we wade through the math, it'll all be worth it in the end. So this is just a simple sketch for you. Again, we're talking about scattering off fluctuations which have a wavelength lambda, and that lambda is greater than the Debye length here. So these must be coherent fluctuations because we can no longer see individual particles. We can only see particles that are Debye shielding other particles. 
And so these particles must be moving in concert with the other particles. And just a note-- as we said before, this is the condition that alpha is greater than 1. The coherent fluctuations are modes or waves. I'll use both words. So let's consider what sort of modes and waves there are in a plasma that we can scatter off. So we consider modes with frequency omega, which is equal to, as we keep saying, omega s minus omega i. We're dealing with the difference between these two frequencies. And, again, we're looking at Doppler shifts which are on the order of K dot v. And so this is going to be a relatively small quantity. So this omega s minus omega i-- our omega is going to be much, much less than omega i, which is basically the same as omega s. So we're looking for modes with a relatively low frequency-- much lower than the sorts of waves that we see propagating in free space, like our laser. But, despite the fact that these modes are low frequency, they may not actually have a low wave number. Because, remember, our scattering diagram-- we have some incoming wave, we measure it in some scattering direction Ks like this, and we said, because this is elastic scattering, that the size of Ks is roughly the same as the size of Ki. They didn't have to be pointing in the same direction, though. And so that means that K, which is equal to Ks minus Ki-- again, we just look at this and do this as a vector Ks minus Ki like this. And so the size of K is actually on the order of the size of Ks or the size of Ki. So, although the frequency of the mode is much lower than the frequency of a free space propagating plane wave, the momentum, or the wave number of the mode, is roughly the same here. AUDIENCE: [INAUDIBLE] JACK HARE: Nothing I've done here actually is talking about the Debye length yet. AUDIENCE: [INAUDIBLE] JACK HARE: Yes, though, again, none of this mathematics so far has said anything about that. We're just-- what all of this means together is-- we're looking for a mode, which has v phase which is equal to omega/K. We know that this omega is much, much less than omega i, which is equal to c Ki. But we know that this K is on the order of Ki. So if we take the ratio of omega and K, we can see that this will be much, much less than the speed of light, which is-- actually, this would be the same if I replaced this with the electron velocity here, if I was doing incoherent scattering of individual electrons. But this same condition now holds here. So what we need to do is we need to identify-- find modes with-- so what modes are there in a plasma that satisfy this? They've got a low phase velocity compared to the speed of light. They have relatively low frequencies, but they have high momentum. We're not dealing with any magnetic fields here. So we're not going to scatter off alpha [INAUDIBLE]. Sound waves? What is-- and can you give me a posh name for a sound wave? An acoustic wave. There we go. So what is the dispersion relationship for an ion acoustic wave? I'm just going to take the square root there. And what is cs? AUDIENCE: [INAUDIBLE] JACK HARE: Right. Well, we're working in a plasma that can be multiply ionized. So I'm going to put a z in there-- z in there-- and it turns out, in reality, it's z T e plus 3 T i, over M i. Is this a wave which has low frequency and high momentum, which is the same as saying, does it have a phase velocity much less than the speed of light? What's its phase velocity? This isn't a trick question. AUDIENCE: [INAUDIBLE] JACK HARE: Why? OK. 
[CHUCKLES] So the phase velocity of it is just the sound speed. And that sound speed is going to be much less than the speed of light. And, yes, maybe it is because ions are slow. So you'll need, effectively-- this is a relatively heavy mass. You calculate this, and this number is going to be like 10 kilometers a second or something like that. It's going to be much, much less than 3 times 10 to the 8. So these are great modes. We can definitely scatter off these modes. These are actually very low frequency modes. These are the lowest frequency modes in a plasma. What's a slightly faster wave? Yes? You're cheating. You've done this before, OK. [CHUCKLES] Go on-- Langmuir waves. I'm going to call these electron plasma waves, but they are sometimes named after Langmuir. So what is the dispersion relationship for a Langmuir wave? And we could go and work out the phase velocity for these. It's a little bit more difficult. But we'll find out that these do have a low phase velocity, but it's not quite as low as the ion acoustic waves. So these are just the low frequency waves. So if we're scattering off the ion acoustic waves, we expect to see small frequency shifts. If we're scattering off the electron plasma waves, we expect to see slightly larger frequency shifts.
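As a quick check on the two candidate modes just named, here is a short sketch comparing their phase velocities with the speed of light. The plasma parameters and the 532 nm, 90-degree scattering geometry are hypothetical, and the electron plasma wave dispersion is written in its standard Bohm-Gross form, which the lecture refers to but which is not spelled out in this transcript.

```python
import numpy as np

eps0, e, me, mp, c = 8.854e-12, 1.602e-19, 9.11e-31, 1.67e-27, 3.0e8

# Hypothetical hydrogen plasma and scattering geometry
ne, Te_eV, Ti_eV, Z, A = 1e25, 500.0, 500.0, 1, 1
lam_i, theta = 532e-9, np.pi / 2
k = 2 * (2 * np.pi / lam_i) * np.sin(theta / 2)      # |K| = 2 k_i sin(theta/2)
omega_i = 2 * np.pi * c / lam_i

# Ion acoustic wave: omega = c_s K, with c_s = sqrt((Z Te + Ti) / M_i)
cs = np.sqrt((Z * Te_eV + Ti_eV) * e / (A * mp))

# Electron plasma (Langmuir) wave, Bohm-Gross form: omega^2 = omega_pe^2 + 3 k^2 v_te^2
wpe = np.sqrt(ne * e**2 / (eps0 * me))
vte = np.sqrt(Te_eV * e / me)
w_epw = np.sqrt(wpe**2 + 3 * k**2 * vte**2)

print(f"ion acoustic:    omega/omega_i = {cs * k / omega_i:.1e},  v_phase/c = {cs / c:.1e}")
print(f"electron plasma: omega/omega_i = {w_epw / omega_i:.1e},  v_phase/c = {w_epw / k / c:.1e}")
# Both frequencies are small compared with the probe frequency and both phase velocities
# sit well below c, but the electron plasma wave gives a much larger frequency shift.
```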
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_20_Thomson_Scattering_Basics.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So today, and for the remainder of this semester, we will be discussing Thomson scattering. Please hold your applause. You won't like it. [LAUGHS] So Thomson scattering is an extremely powerful and subtle diagnostic, and it can be used over a huge range of different plasma parameters to provide local information about our plasma. And so that makes it extremely powerful in comparison to a large number of line integration techniques that we've seen so far. The basic idea of Thomson scattering is very simple. We have our vacuum chamber with some plasma inside that we would like to diagnose. And we take a laser beam, and we focus it inside the plasma. And the light from the laser scatters off the particles in the plasma, and it will scatter off in lots of different directions. Our job, then, is simply to place some sort of detector to look at this scattered light here. That detector is most likely to be something like a spectrometer. If we use a spectrometer, we can see that the initial laser that we put in, which we will choose to have a very narrow line width-- so it will be at just one wavelength or just one frequency-- that narrow line width will undergo Doppler drift due to the velocity of the plasma. And it will undergo Doppler broadening due to the thermal motion of the plasma. So this will give us out two key quantities. Cool. Could you mute, please, Yang? AUDIENCE: Yes, I can hear you. JACK HARE: Yes, but could you mute yourself? Because we can hear something in the background. AUDIENCE: Oh, sorry. JACK HARE: Thank you very much. Ah, tediousness of the Zoom. OK, good. And so this is going to give us things like the flow velocity within the plasma and the temperature of the plasma-- so the ion temperature and the electron temperature. And so this seems simple enough. And yet we're going to spend four lectures on it. And so it is not particularly simple at all. And, in fact, if you keep analyzing the Thomson scattering, you can also get out quantities like the average ionization state of the ions. We can get out the electron density. We can even start measuring the local current within the plasma. We can measure the magnetic field locally within the plasma. And we may be able to get out the full electron and ion distribution function, which is particularly important when these distribution functions are not Maxwellian. If they were Maxwellian, of course, they would fully characterized, once we knew the temperature and the flow velocity and the density. Right. So the nice thing about Thomson scattering, from my point of view, is it unites a lot of plasma physics. We have electromagnetism that we need to understand to do this. We need this to get the particle orbits and the radiation from these particles. We need to understand our statistical mechanics. In particular, we need kinetic theory because, although we're going to end up a lot of the time with gross fluid quantities like temperature and flow velocity, in order to get them, we still need to first do the theory using distribution functions, even if those distribution functions turn out to be Maxwellian. And, of course, this involves lasers. Lasers are just simply very cool. It also unites a lot of plasmas. So I'll give you some ideas of the sorts of plasmas that we can measure stuff in. 
If I have a little table here with the electron temperature and the electron density, we can start out with a pretty low-temperature plasma, 1 electron-volt, and also a pretty low density, 10 to 17 per meter cubed. This sort of density could be a low-temperature plasma used, for example, for converting carbon dioxide into carbon monoxide and oxygen. This is an interesting plasma-catalyzed process. It could be used for doing carbon capture. The plasma catalysis here is because the electron distribution function is non-Maxwellian. And it's actually the fast electrons in the tail of this distribution which do the plasma catalysis process and make it thermodynamically and economically feasible. But in these sorts of plasmas, people have been able to characterize the non-Maxwellian nature of this distribution function extremely well by repetitively pulsing this plasma and doing Thomson scattering and acquiring signals over hours or days. And so even in this very low-density, low-temperature plasma, Thomson scattering has given us a lot of information. We could go up to a plasma which is much hotter, 6 KeV, and a density of around about 10 to the 20. This will be a plasma-- the specific example here is TFTR, which was a fusion reactor prototype at Princeton. And this was a D-T plasma. And it produced a significant number of alpha particles. There was quite a lot of fusion going on. It got to very high temperatures and moderately high densities. Here they did time-resolved Thomson scattering. And they were able to image an event called a sawtooth crash, which is a catastrophic loss of confinement due to reconnection. And they made a movie of this, effectively using the Thomson scattering. So they were able to better understand this process. And then we can go to similar temperatures here but much higher densities-- 10 to the 27 per meter cubed. And this is a measurement using Thomson scattering inside an inertial confinement fusion hohlraum, which is that little gold cylinder that has 192 laser beams hitting the inside of it to produce an X-ray bath. And it was in this hohlraum with all the lasers coming in. There's a large amount of plasma on the walls. Remember, the eventual goal is that the X-rays compress this capsule. But the plasma on the walls is pretty hard to diagnose. But folks were able to use Thomson scattering to diagnose the temperature and density here, which is very important if you want to know what your X-ray flux is going to be onto the capsule. So this is 10 orders of magnitude in terms of density, but it is fundamentally the same technique. So how are we going to tackle this? Oh, no, I've got another bit of reasons why you should care about Thomson scattering. Let's keep going. OK, so Thomson scattering also changed the course of fusion research forever. Now, I may have hopped on about this a little bit already. Those of you who took 2262 with me will remember the famous or infamous ZETA Reactor which was in the UK. So this was back in the '50s or so. 1958, they say they've got fusion. It turns out that the reason that they think they saw fusion is they didn't understand their diagnostics. So this is poor diagnostic interpretation. They interpreted their neutron spectra as being due to isotropic neutrons when, in fact, it was beam-target fusion. This, of course, was rather humiliating. 
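To get a feel for how different these three regimes are, here is a small sketch computing the Debye length and the scattering parameter for each. The probe wavelength, scattering angle, and the hohlraum temperature below are assumptions for illustration, and alpha = 1/(K lambda_D) with K = 2 k_i sin(theta/2) is the usual definition used later in these lectures to separate incoherent (alpha < 1) from coherent (alpha > 1) scattering.

```python
import numpy as np

eps0, e, me, c = 8.854e-12, 1.602e-19, 9.11e-31, 3.0e8

lam_i, theta = 532e-9, np.pi / 2                    # assumed probe and collection angle
k = 2 * (2 * np.pi / lam_i) * np.sin(theta / 2)     # |K| = 2 k_i sin(theta/2)

# (Te [eV], ne [m^-3]) -- illustrative values echoing the table in the lecture
plasmas = {"low-T CO2 discharge": (1.0, 1e17),
           "TFTR core":           (6e3, 1e20),
           "ICF hohlraum":        (4e3, 1e27)}       # hohlraum Te is an assumed value

for name, (Te, ne) in plasmas.items():
    lam_D = np.sqrt(eps0 * Te * e / (ne * e**2))     # Debye length
    alpha = 1 / (k * lam_D)                          # scattering parameter
    regime = "coherent" if alpha > 1 else "incoherent"
    print(f"{name:20s}: lambda_D = {lam_D:.2e} m, alpha = {alpha:.2e} ({regime})")
```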
But it did lead to a rather good result, which was the development of Thomson scattering, because it was clear that we needed much better diagnostics in order to measure our plasma conditions and actually show that fusion is happening. We need to know if the plasma is even hot enough to do fusion. The problem with the ZETA machine is it was far too cold anyway, wherever we've got fusion happening. And the British should have realized this before they claimed that fusion was occurring here. But they did develop this Thomson scattering diagnostic with some of the first lasers. I'll point out that the laser was just invented in 1961. So they didn't really have a chance to do it before then. But after then, there was really no excuse. And this was very handy because only a few years later, there was a new device, T3. This was in the USSR. And it was something that they were calling a tokamak. Some of you may have heard of this. And in 1968, these guys claimed that they got temperatures in excess of 1 kiloelectron volt, which was significantly better than any other machine at the time. So this was really remarkable. And, of course, there was a huge amount of skepticism, both from the UK, who felt a little bit burnt after the ZETA incident, and also from the US. And so there were needed to be a way to verify the claim of the Soviets. And what happened was kind of a remarkable piece of Cold War history, which was a team from the UK went to the USSR with their Thomson scattering diagnostic in a load of crates, consisting of a laser and a load of spectrometers here. And they used it to confirm these temperatures. And there's a beautiful Nature paper by Peacock, the name of the lead scientist on this group, showing the Thomson scattering spectra from the T3 tokamak. And this caused an absolute firestorm across the world of fusion. Instantly, everyone wanted to build tokamaks instead. So at the Princeton Plasma Physics Lab, they had the model C stellarator, which sort of looks like this kind of racetrack-shaped thing, like this. And within one year, they'd gone, all right, get rid of that. Now look. It's a tokamak. They converted the machine into a tokamak and immediately started to get very good performance out of it. So if you work on tokamaks, which many of you do, you should thank Thomson scattering. And it's the reason why everyone was so convinced straight away that they were the way to go. All right, question? AUDIENCE: What diagnostics did they [AUDIO OUT]? JACK HARE: Mostly neutron diagnostics. They had neutron detectors, and they were looking for neutrons because they were like, if we have neutrons, there must be fusion. And there were fusion reactions happening, but they were beam-target fusion reactions caused by [INAUDIBLE]. They were not isotropic. If they had more neutron detectors, they could have detected the anisotropy, and that would have been a key signature that they didn't have thermonuclear fusion. Yeah? AUDIENCE: Did the [INAUDIBLE]? JACK HARE: Well, I mean, I don't know. But I do know that Peacock wrote a book about it called Lasers over the Cherry Orchard or something like that, which I've not read but I'm told is quite good. And I should read it at some point. So maybe he would have more insight than I do into geopolitics in 1968. OK, other questions? Good. I haven't really said anything yet. So let's do some math. Is it math next? Yes. OK, good. OK, so I gave you a very handwavy explanation of it through plasmas and lasers. Let's break this down to just a single charge. 
Here it is. And this charge is initially at rest. And there's some wave packet. So think of this as roughly a plane wave, but I'm just going to draw it as a wave packet. It increases in amplitude and decreases in amplitude again. And this wave packet is traveling along in this direction. So this is our laser beam going through. It's got some finite extent because the pulse of the laser beam has some finite width. There would obviously be many, many more oscillations within the wave packet. But then you won't be able to see them when I drew them on the board. So just to be clear, particle, and this is an EM wave. So, for example, this could be electric field here. What does this particle do in this electric field? Yeah? It will oscillate. This particle will see the electric field going up and down, and it will join it. It will continue going up and down. OK, now let's imagine some time a little bit later, we now have our particle oscillating still. And the wave packet has passed on. What does this particle do there? Hmm? It's an accelerating charge. So it will continue to radiate away. Presumably, it'll be radiating away little wave packets of its own. Yeah? AUDIENCE: [INAUDIBLE]. JACK HARE: Yeah, I guess we're still dealing with a state where there is still some electric field trailing off. So this is radiating while the electric field is going past. But because it's being accelerated by the electric field, it will be emitted. So this is the geometry of the problem that we're going to consider here. We're going to consider just plane waves here because it's much easier than doing the Fourier-- well, it's much easier when we go into doing the Fourier analysis. So we're just going to consider some plane waves. We're going to consider a plane wave coming in. And this plane wave has some wave vector Ki, which is in the i hat direction. That is just simply Ki divided by the size of Ki. And there's a load of parallel wavefronts like this. So these are just constant phase. You can't get away with a geometric optics picture of Thomson scattering. We've got some particle. And this particle is moving in some direction with a velocity v, which is a vector. And it's doing it at a time t prime. We'll get back to this t prime in a moment. There's some subtleties here that we have to deal with. We also need to have an origin in our system. This is the point where we measure all of our displacement vectors from. And so we can say that this particle is at some distance. I'm going to draw it somewhere slightly different so I don't have an accidental coincidence between my vectors. This particle is at some position r t prime, like this. So this specifies this single particle very well, and this specifies our incoming electromagnetic wave. Now, we assume that there's going to be scattering in lots of different directions. We haven't exactly solved where all the scattered light is going to go to, but maybe it goes lots of different directions here. So we're just going to consider what happens if we have an observer down here looking at the scattered light from this particle. Well, if we have an observer, that means that the light that we are seeing must be heading towards it with a wave vector Ks that is in the s hat direction. And these are more plane waves coming towards our observer. And we also need a position vector for our observer. And that is a position capital R. We can assume that the observer is stationary in time. And so I don't need to put a time coordinate here. 
We're not going to move our spectrometer during this experiment. And there also has to be a vector which joins the particle through the observer here. And that is just capital R prime, like that. And so just from standard vector algebra, we have R equals R prime plus r of t prime is just forming this little triangle here. And now we need to think a little bit about the time because when the observer sees things, it's happening at a time t. But because it's taken the light some time to get from the particle to the observer, the observer is observing the state of the particle at a time t prime. And that time t prime is t, but it's earlier by a factor of R prime upon c, like this. So this is just simply the distance. Whenever I drop the vector notation, I'm just taking the size of the vector here. So this is no longer a vector. OK, and this is often called the retarded time. We have sort of seen this setup before when we're talking about radiation from a moving charge. I just want to run through it again because it's very important for understanding Thomson scattering. So what we do at this point is we write down two of Maxwell's equations again-- curl of curl of E plus 1 upon c squared, second partial derivative of the electric field with respect to time. And this is equal to minus mu 0, partial J, the electric current, partial t. Normally, when we're solving Maxwell's equations for light in a vacuum, we just get rid of this left-hand-- right-hand side and set it to 0 because there's no current in a vacuum. But now we're dealing with a plasma here. And the current in this system here-- well, there's only a single particle. So this J is simply equal to qv t. And I guess we should really have a delta function saying that the current is localized at that point there. I haven't specified whether this is an electron or an ion, but we'll get on to that later on. OK, questions? AUDIENCE: [INAUDIBLE] JACK HARE: [CHUCKLES] We will be more precise later on. Does it matter? Yes. [LAUGHS] But this is on the subtlety. So this is one of the things where-- if you go and look at Jackson, he's very precise about all of these. Here I'm going to be a little bit handwavy until it really matters. But yeah. Oh, yeah? Well, that's true for this single particle. What if I have a second particle and I want to look at the scattering from two particles? I can't put the origin in both the same place, both of them. They're in different places. So I'm better off just starting with a more generalized way of looking at it. OK, yeah. We could do this. This is slightly more general. What if have multiple observers? So, for example, if I'm doing Thomson scattering, I might observe from multiple different angles. But then I couldn't get away with that trick. So this more general formula will probably be useful. Yeah? AUDIENCE: [INAUDIBLE] JACK HARE: No. So, really, at the moment, we're just being like, say we've got some v of t, which, of course, has to be changing. It has to be changed in time for us to get radiation at all. And we're going to go back, and we'll solve it for the oscillations due to the electric field. So this is really the very general "imagine you have a moving accelerating charge, what's the radiation from it," which you have probably seen before. But we're just going through it to set up the mathematics and the diagram and all the vectors so that I can then use them for doing the Thomson scattering derivation. But yeah. Yeah? 
AUDIENCE: [INAUDIBLE] JACK HARE: It doesn't matter for this because as we found, the refractive index only varies by a very, very small amount from 1. So, in fact, you're still going to-- for most reasonable cases, if we're operating below the critical density-- if you're operating near the critical density, then your laser beam will probably refract before it gets to the center of the plasma and can scatter. So this is-- I guess we're making an assumption here. Yeah. No. Yeah. And I will say, actually, in this example I'm giving you right now, there is no plasma. There's just vacuum. So it is just it. Yeah. But that's a good point. Maybe when we get into plasma, we should be a little bit more careful about what we're doing. And, in fact, we will substitute out a dispersion relationship in the plasma that takes into account how the wave propagates through it. Yeah, OK. Other questions? OK. So, again, we're not going to solve these equations. We're just going to remind ourselves that we have a near-field term where the electric field drops off as 1 upon R cubed. And we're going to have a far-field term where the electric field drops off as 1 upon R. And we're going to be far enough away that we don't see the near-field term. We only see the far-field term. We're also going to make some approximations here. We're going to say that the length of R prime is approximately equal to R. I have a terrifying feeling that this means that I've effectively put my origin on the particle. So maybe Brian was right all along. We could have just saved some time here. But this has the effect of us being able to simplify our expression for the retarded time to be t prime is equal to-- or approximately equal to-- t minus R minus s hat dot r, all over c. This, again, is just looking at this and using some of the vectors we've got. You see there's an R here, s here, this sort of thing. I think this is actually saying that this is an isosceles triangle, right? So we're just saying these two distances are similar. We're trying to calculate this small term here. This is similar to putting our observer far away from the particle and the origin of our coordinate system here. OK, and if we do all of that, we can go back and take a formula from something like Jackson for the far-field radiation. So the scattered radiation-- so this is the one coming towards our observer. And I'm going to use subscript s to show that it's the scattered radiation observed at the observer R prime at some time t-- so at the observer's time, not at the particle's time-- is going to be equal to q over 4 pi epsilon 0 capital R, which, again, is approximately equal to the size of R prime. And then we get this interesting term, s hat crossed with, in brackets, s hat minus beta. I'll define beta again in a moment, but some of you will remember it from before. And all of that inside these brackets is crossed with beta dot. And I'm going to put a little subscript ret on here. We'll get back to it in a moment. And this is divided by 1 minus s hat dot beta, cubed, like this. OK, so two things in here that I didn't define straight away-- beta is simply defined as the velocity of the particle normalized to the speed of light. This is particularly useful because a lot of the time, we can just take a nonrelativistic approximation, and this makes the betas quite small. And we can drop pesky terms in the denominator here.
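For reference, here is the far-field expression just described, restated in standard notation. This is the textbook Lienard-Wiechert radiation field (see, for example, Jackson or Hutchinson's chapter 7); note the factor of c in the denominator, which is needed for the dimensions to work and which also appears in the non-relativistic form quoted next.

```latex
% Far-field ("radiation") field of an accelerating charge, evaluated at the retarded time t':
\[
  \mathbf{E}_s(\mathbf{R},t) \;=\; \frac{q}{4\pi\epsilon_0\, c\, R}\,
  \left.\frac{\hat{\mathbf{s}}\times\!\left[\left(\hat{\mathbf{s}}-\boldsymbol{\beta}\right)\times\dot{\boldsymbol{\beta}}\right]}
       {\left(1-\hat{\mathbf{s}}\cdot\boldsymbol{\beta}\right)^{3}}\right|_{\mathrm{ret}},
  \qquad
  t' \;\simeq\; t-\frac{R-\hat{\mathbf{s}}\cdot\mathbf{r}}{c}.
\]
% In the non-relativistic limit, |beta| << 1, this reduces to the form used below:
\[
  \mathbf{E}_s(\mathbf{R},t) \;\simeq\; \frac{q}{4\pi\epsilon_0\, c\, R}\,
  \left[\hat{\mathbf{s}}\times\left(\hat{\mathbf{s}}\times\dot{\boldsymbol{\beta}}\right)\right]_{\mathrm{ret}}.
\]
```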
And this is evaluated at the retarded time because we're seeing the light some time later at our observer, which is emitted some time earlier at our particle. And if we do take this nonrelativistic approximation, where v upon c is much less than 1, then we end up with a formula for the scattered radiation, which is Es at R prime t is equal to q upon 4 pi epsilon 0 c s hat cross s hat cross beta dot over R, where the numerator is still evaluated at the retarded time. The key thing here is that in order to get the scattered electric field, we now need to know this beta dot, which is the particle's trajectory or orbit. And this is where, as Sean was asking, what exactly v are we using, we'll then go back and we'll work out how the particle oscillates in the electric field of our probe. But you might already be thinking, what if my particle is also gyrating due to some magnetic field? We'll not get on to that straight away, but it is a good question to ask yourself. OK, how are we doing so far? OK, let's keep going. I'm just going to move this word "trajectory" because it's bit of board. So now we want to find the trajectory of the particle beta for our incident electric field. So this is the electric field that is coming in and setting the whole thing in motion. It's got a subscript i for incident. So Ei at some position R at some time t prime-- so this is looking at the retarded time here, this is where things will get complicated-- is equal to whatever the strength and polarization of that electric field is. And we'll just have it be a plane wave. So it's oscillating sinusoidally, and it's got a phase factor Ki dot r minus omega i t prime. We know that the equation of motion for this particle is m dv dt is equal to minus q Ei. I don't see the need to write all the rest of it. Good? And we will assume that the particle is initially stationary. So v at time 0 is equal to 0 here. So this is equivalent to saying the thermal motion is pretty negligible compared to the motion that we get from this. And we will relax that assumption later on. If we want to measure the temperature, we're going to need to care about the thermal motion. But for now, we're just going to deal with this [INAUDIBLE]. And so we solve this to get out the beta dot, which is effectively the acceleration of the particle, and it is equal to minus q upon mc times Ei here. The particle is being accelerated with the electric field, parallel or antiparallel, depending on the charge here. And we can put all of that back into our previous equation, and we can get out that the scattered light that's now observed at a position R prime at time t is equal to q squared upon 4 pi epsilon 0 mc squared times s hat cross s hat cross Ei 0, which was the strength and polarization of the initial electric field. This is all evaluated at the retarded time. And then there's also a factor out here, cosine Ki dot r minus omega i t prime. This term here we're going to say is roughly constant. And the reason is that r is not going to change very much. Remember, our particle is just oscillating up and down. We assume it doesn't oscillate very far. Then its mean position doesn't change very much. And so, therefore, we can just take this as a constant term here. So the only term we have to worry about actually changing here is omega i t prime. OK, someone tell me something interesting about this formula. Yeah? AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, I'm trying to work out if I've dropped it or whether it disappeared for some other reason.
I can't see a good reason to drop it. So I'm going to put it back in for now. Thank you. We would have trouble conserving energy if we didn't put it there-- so probably for the best. OK. Tell me something interesting about this equation. Do I care more about scattering in a plasma from the electrons or the ions? Why? OK. So electron scattering dominates. And we will no longer-- from this point onwards, we will not consider the ion scattering at all, because it'll be smaller by a factor of at least 2,000 than the scattering from the electrons. It's irrelevant. It may be there, but it won't be easy to measure over the electrons. Later on, we will be talking about scattering from the ion feature and the electron feature. I want to be very clear. Whenever we're talking about Thomson scattering, we are scattering off electrons. Some of those electrons know about what the ions are up to. Some of those electrons know about what the electrons are up to. It's to do with the Debye shielding. We'll talk about it more. But if anyone ever asks you, what is Thomson scattering from, it is from the electron. You cannot measure easily Thomson scattering from the ions. The second interesting thing about this formula is, of course, it's a mess. We're trying to evaluate this at some time t. But there's t primes here. There's a retarded time. So if we want to make any good use of this, we're going to have to sort all of this out and write it in terms of some consistent time here. But before we get on to that, I want to just look at how much power is being scattered from this. So this is the electric field which is being scattered. But we also want to know what the power being scattered is. Is that all I have to say on that? Yeah. So this is the scattered power. So this has units of watts per steradian, per solid angle. And we often write it as dP d omega s. This omega here is some differential solid angle. So this is ds like that. We have some scattering source here scattered through some area at some distance. That's the solid angle. And so this ds can change. We can have-- for example, if our electron is oscillating up and down like this, there's no good reason to expect that it's the same amount of energy scattered into this volume as there is scattered into this volume. We'll talk about that in just a moment. So this is what this means. You have to actually know which direction you're scattering to. And there's an equation from this that you can find, Jackson or similar, which looks like r squared c epsilon 0, and then the time average of the scattered electric field dotted with itself. Yeah, it's a good thing I did put that r back in there because that's going to cancel out with this r squared. So now I don't have to think about it anymore. And the key thing here is we're going to end up with a term that looks like s hat cross s hat cross Ei 0 squared. And if we say that our electric field is in this direction, which is also the direction the particle is oscillating in, and we define some angle with respect to that electric field, phi here-- so this is like a polar angle, I don't care about the azimuthal angle, just the polar angle here-- then you can do some vector algebra and find out that this can be written as sine squared of phi Ei 0 squared, like that. So our total scattered power looks like-- did I skip the thing about the classical electron radius? Oh, I did. Excuse me.
So if we say that these are electrons-- so we say that q equals minus e and m equals m e, then this whole term here looks like the classical electron radius. So this is, using classical physics, how big you would estimate an electron to be. There is nothing that is actually the size of the classical electron radius. It's just a length scale. And this is a useful length scale that we'll be using here. Notice this kind of has to be the case because dimensionally, we've got electric field here. We've got electric fields here. These are unit vectors. We've got some of these dimensions of length. So to make this balance, you knew this was going to have some length scale. And we just call it the classical electron radius-- just some length scale, not very big. And so then we can write the scattered power as classical electron radius squared, sine squared phi, speed of light, epsilon 0, and the strength of the incident electric field squared here. This amount of light we're scattering into some solid angle in some direction. And if we sketch this-- so this is the electric field-- and we do a polar plot of this, we find out that most of the light is scattered, actually, at 90 degrees to the electric field. You imagine this being rotated around. This is sort of like a red blood cell type shape, where you have most of your emission out here, you have a very small emission in this direction, and you have no emission along E-- so dP d omega s parallel to E is equal to 0. This is for the nonrelativistic case. Does anyone know what happens if we look at the scattering from a relativistic particle? Yeah? AUDIENCE: You get more scattering [INAUDIBLE]. JACK HARE: Yeah, it sort of starts looking like this. So this is often called beaming. And it goes in the direction of the electric field and, therefore, in the direction of the motion of the particle. I guess when the particle reverses, this also reverses as well. So this is important if you're doing Thomson scattering in plasmas which are relativistic. You might want to think about this. If you're not in a relativistic regime, if you're in a nice classical regime, then it's important to note that your choice of polarization of your laser beam dictates where the light will be scattered. If you fire a laser beam like this, and it's polarized in this direction pointing upwards, and you put a spectrometer above looking down, you will see no light. And I have done this experiment, accidentally. And it's very frustrating. So you want to make sure-- honest-- that if you want to collect upwards, you want to have your polarization like that. Otherwise, you just won't see anything at all. But it doesn't matter, with respect to the electric field, where you collect, because this has no azimuthal dependence here. So I have my electric-- my light going through this, my electric field pointing up, I'm going to get scattering in every direction like this. And so I can put lots of spectrometers at lots of different positions around the plasma, and they will all see light. OK, question? AUDIENCE: So where is [INAUDIBLE]? JACK HARE: Well, there's this t here, and there's this t prime here. AUDIENCE: [INAUDIBLE] because if the cosine term is [INAUDIBLE]. JACK HARE: Yeah. AUDIENCE: --the cross product-- JACK HARE: This is being evaluated at t prime as well. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, but this is the equation at t. So we still need to do a little bit of substitution to get between the two of these.
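Before that substitution is picked up in the next exchange, here is a small numerical aside collecting the quantities just introduced: the classical electron radius, the sin-squared dipole pattern of the scattered power, and the reason ion scattering was dismissed a moment ago (the scattered field goes as 1/m, so the power goes as 1/m squared). The field amplitude below is arbitrary, since only the shape matters.

```python
import numpy as np

e, me, mp, c, eps0 = 1.602e-19, 9.11e-31, 1.67e-27, 3.0e8, 8.854e-12

# Classical electron radius, r_e = e^2 / (4 pi eps0 m_e c^2)
re = e**2 / (4 * np.pi * eps0 * me * c**2)
print(f"r_e = {re:.3e} m")                              # ~2.8e-15 m

# Dipole pattern: dP/dOmega_s = r_e^2 sin^2(phi) c eps0 |E_i0|^2,
# with phi the polar angle from the incident polarization (no azimuthal dependence)
E0 = 1.0                                                 # arbitrary amplitude, shape only
for phi_deg in (0, 30, 60, 90):
    phi = np.radians(phi_deg)
    dP = re**2 * np.sin(phi)**2 * c * eps0 * E0**2
    print(f"phi = {phi_deg:3d} deg: dP/dOmega ~ {dP:.2e} (arb. units)")
# zero along the polarization direction, maximum at 90 degrees to it

# Electron versus ion scattering: same |q|, so E_s ~ 1/m and P ~ 1/m^2
print(f"P_ion / P_electron ~ {(me / mp)**2:.1e}")        # ~3e-7 even for hydrogen
```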
And this just gets a little bit subtle because we're going to start Fourier transforming them soon. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Yes, and we see stuff that happened R upon c earlier, right? But you could imagine, especially-- this becomes especially important if you don't have a plane wave, which is just a continuous wave in time, but if you have a pulse. Then it starts to get very complicated to keep track of all these t primes. So you're going to see a lot of them come up, and I'm going to do my best to get it right but will probably make a mistake somewhere. Yeah? AUDIENCE: What [INAUDIBLE]? JACK HARE: Yes. But, again, it becomes complicated when you do the Fourier transform because what we need to do is find out a spectra. So at the moment, this is a time-varying electric field. But we don't tend to measure that. So imagine that your laser beam is going through the plasma. It's a visible green laser beam. And so its frequency is extremely large. We don't have a detector that can digitize that. What we have is a spectrometer that disperses light depending on its frequency. So we have to do all of this instead in the Fourier domain. I'm setting it up in the time domain, but we're going to Fourier transform later on. And that's when you have to really start thinking, like, am I doing E to the i omega t or E to the i omega t prime? And if it's t prime, all of a sudden, there's an R inside there. So now I'm doing a Fourier transform with respect to space as well as time in the same go. This gets complicated. So I'm just sort of setting you up to think this will be hard, though I agree right now it looks quite trivial. Yeah. Was there a question online? Oh, maybe not. Other questions? AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, so you're saying that as the particle starts to oscillate, it will have some velocity, and then it should feel the magnetic field. We are going to assume that the velocity is still quite small so that we don't have to worry about that. So that's kind of saying that the particle doesn't actually get accelerated very far Before the electric field reverses and sends it the other way. So its velocity never gets large enough that we have to worry about that, because that sounds like a nightmare. But, yeah, you could, in theory, extend this treatment to include that. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Right, yeah. So the electromagnetic wave has a magnetic field. And the force on it is v cross B. So if I don't want that force to be large, I just keep v small. I keep v small by making my frequency large enough that the electron never really has a chance to accelerate very fast or the electric field turns around and turns it the other way. And this very simple treatment to get us through just scattering of a single particle is 0. And, of course, very soon, we need it to be nonzero. So, yeah, great. Any other questions? Ooh, boy. OK, another quantity we want to calculate is the total cross-section. So this is the differential cross-section here in terms of power. Ah, sorry. This is the-- what's the right word for this? This is the scattered power. We also want to calculate a quantity called the differential cross-section. This is where I wished had a bit more space, but I think I'm just going to use this. So the differential cross-section is the probability of this scattering event happening and scattering a photon in a certain direction through a certain solid angle, d mu s. And it's basically equal to the scattered power, but we're no longer interested in the power. 
So we just divide through by c epsilon 0 Ei squared. So that's the incident power. And we're just dividing it out. So now we'll have something that just simply looks like r e squared sine squared phi. The reason to do that is to say, well, this is the probability of a photon scattering into some solid angle in some direction omega s. What's just the probability of it scattering? Well, that probability, the total probability of any scattering event happening, is then equal to r e squared sine squared phi d omega, integrating over all the solid angle, where d omega is 2 pi sine phi d phi. So this is just the infinitesimal solid angle element for an azimuthally symmetric system. And when you perform this integral, you get out 8 pi upon 3, r e squared. OK, this is extremely important because these are all constants. It doesn't depend on your laser or your plasma or anything. This is just the chance of scattering off a particle, off an electron. And this number is 6.7 times 10 to the minus 29 meters squared. So if we have a plasma with a density n e of particles that scatter off, and it's got some length L-- so where the laser beam is going through some distance L with some number of particles per unit volume ne, and we times this by the scattering cross-section here, and if I put this in some numbers here-- so ne is maybe 10 to the 20 per meter cubed. I don't know why I picked this. It seems kind of dense for a tokamak. But-- I don't know, not very-- yeah, kind of dense we're talking about, not very dense for other things. That's not that dense. Yeah, it is meant to be per meter cubed, not per centimeter cubed, because otherwise the number I write down doesn't work. OK, that's pretty dense for a tokamak, right? Yeah? OK. And then let's say this length looks like 1 meter. So this is looking a lot more like a tokamak here. Then this is equal to 10 to the minus 8. So this is scattered photons-- scattered-- over incident photons. So if I put 10 to the 8 photons in, I get 1 photon out. Can someone tell me something interesting about Thomson scattering? So that's very true. So, yeah, we definitely don't want to put our detector somewhere where the beam will hit it. What if we put the detector somewhere else? Is this an easy technique? Is it a hard technique? I'm putting a lot of laser-- so if I put in a joule of laser energy, I'm going to get out 10 to the minus 8, 10 nanojoules of laser energy distributed in 4 pi. So when I put my detector there and it's got a solid angle that is not 4 pi, that number will go down even further. So this is a very hard technique. Thomson scattering is difficult. There's not very much scattered light. You need a very powerful laser source in order to do it. Can someone tell me something else interesting about this result? Yeah? So denser plasmas, it's easier, definitely. Yeah. That's true. But, of course, yeah, it depends on this length as well a little bit, and how you're collecting the light because you're not collecting it-- if you have a spectrometer looking here, you're only collecting it over a relatively short length. You're not probably collecting it over the total length of the plasma. So this is an upper bound on the total number of photons scattered. And as we said with solid angle and your small collection volume, you'll scatter many fewer photons. What about double scattering? Should I ever worry about a photon scattering off an electron and then, having done some weird shift in frequency, scattering off a second electron? Do I need to account for that in my calculation?
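(The double-scattering question is answered just below. In the meantime, here is a quick numerical check of the numbers on the board: the total Thomson cross-section and the scattered fraction for the 1 m path and tokamak-like density used in the example.)

```python
import numpy as np

e, me, c, eps0 = 1.602e-19, 9.11e-31, 3.0e8, 8.854e-12
re = e**2 / (4 * np.pi * eps0 * me * c**2)   # classical electron radius

sigma_T = 8 * np.pi / 3 * re**2
print(f"sigma_T = {sigma_T:.2e} m^2")        # ~6.7e-29 m^2, as quoted

ne, L = 1e20, 1.0                            # example: dense tokamak-like plasma, 1 m path
fraction = ne * sigma_T * L
print(f"scattered/incident photons ~ {fraction:.1e}")   # ~1e-8: one photon out per 1e8 in
```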
Never, ever, ever, ever, right? I mean, maybe if you got to extremely high densities, but this is not something that we need to worry about. So we're just looking for scattering of single particles. This is very important. And this is why this is a local measurement, because if you scatter off a particle here, the photon will leave the plasma without scattering again. It's not like radiation transport, where there was a chance of the scattered photon being absorbed. Well, it could still be absorbed, I guess. But it's not going to be scattered twice. So we know if we see a scattered photon that it came from the volume we think it came from and then straight out the plasma without scattering again. OK, any questions? We're going to relax v at time 0 equals 0. Yes? AUDIENCE: [INAUDIBLE] JACK HARE: Yes. AUDIENCE: Why is that? JACK HARE: I think if you take the non-classical version, there is a frequency dependence there. But in this treatment here-- yeah, again, this is a very classical treatment. This is the classical electron radius. And we dropped relativistic effects to get this formula here. So we don't know about quantum. We don't know about those. I suspect there is. Thomson scattering turns into Compton scattering in the limit of high-energy photons. So we're not going to cover Compton scattering. But this is technically a limit of it. So I guess I could derive Compton scattering and then get Thomson scattering from it. But this is hard enough already that we don't need to inflict that extra pain. Yeah. OK, any other questions? OK. Yeah? Go on. Yes. Yes. Almost all of your laser light will pass directly through the plasma. And then, on the other side of the plasma, you have to work out what happens to that beam. If it reflects off the wall of your chamber, the reflected light will be quite large, even from a Thomson scattering perspective, because the electron density of metal is extremely high, 10 to the 30 or so. And so that reflected light will bounce a few times and eventually end up in your spectrometer, where it will completely drown out scattered light. So instead, you have to do something with that unscattered light, which is most of your light. You have to have a beam dump, which is normally a very, very long pipe with a very, very dark box at the far end, which you hope absorbs all the light and stops the backscatter. So, yeah, that's quite an important practical consideration. Yeah? AUDIENCE: So if the scattering probability is so low, how could you do time-resolved Thomson scattering? How could you get sufficient [INAUDIBLE] time? Because-- JACK HARE: A big laser. AUDIENCE: OK. JACK HARE: Yes. OK. Let's make life more complicated. Yeah? AUDIENCE: [INAUDIBLE] JACK HARE: So you could, in principle, do something clever with it. I've seen people do Thomson where they also look at the rotation of the polarization by magnetic fields in the plasma. So they've done Faraday at the same time with the same probe beam. So there are some things you can do. In general, it's better to have dedicated laser beams with different angles. Yeah. OK. So let's now say we have v at time 0 is not 0. Our position at time t prime is now equal to wherever we started at time 0 plus velocity t prime later. And this means that our retarded time is now equal to t minus R minus s hat dot r, over c, like this. By the way, we've assumed at this point this particle is just going in a straight line. So this is before the electric field gets here and mucks things up-- just going to let this particle move.
Now let me see what I'm saying here. No, that's not true. This particle is going roughly in a straight line, but the electric field is causing it to deviate only a very small amount here. So there's some small deviation delta v from the incident electric field. But that delta v is much, much smaller than this velocity here. So the particle is roughly going in a straight line, and it's oscillating ever so slightly. OK. Now, I have a load of algebra here, which I worked out because I couldn't find it in the book. But I'm not going to go through it. You can have the joy of doing it yourself. You can rewrite this equation-- and this is actually much less trivial than it looks-- as t minus R minus s hat dot r of 0. We're taking this expression for r, substituting it into here. All of this is divided by c. And this whole thing is divided by 1 minus s hat dot beta. This term down here is a Doppler shift. We'll see another one of those occur in a moment. So now we want to calculate the phase factor inside the cosine that we had for our incident electric field. Remember, previously, this was Ki dot r of t prime minus omega i t prime. And we said that this was roughly constant because the particle didn't move. Well, now the particle's moving. So it's not constant anymore. This factor becomes equal to Ki dot r of 0 plus t prime times K dot v minus omega i. This is from substituting this into here. I think that should be a Ki. I'm going to put it as a Ki. It doesn't have a subscript for my notes-- definitely a mistake. OK. And then we can rewrite this altogether as Ks dot R. This is just some phase that doesn't change in time, which is just due to the fact that the scattered wave has some distance to go to get to our detector. So this is just another constant phase-- minus a term that looks like scattered frequency, omega s times t minus a term that looks like Ks minus Ki dot r of 0. And I've introduced several new terms here. One of them is this omega s. And this is the frequency of the scattered wave which is seen at the observer. So by switching from t prime to t, I've had to take into account this change in time. And the Doppler shift starts coming in here. So this scattered frequency looks like 1 minus i hat dot beta over 1 minus s hat dot beta times by omega i. This is a scattered frequency, and it's interesting because it's got two Doppler shifts inside it. It's got a Doppler shift due to the light coming in being Doppler shifted by this particle's motion. So the particle sees a different frequency of the incoming light, and then the particle scatters that incoming light in some other direction. And that light is Doppler shifted again to the observer. So we get two Doppler shift factors here. And the other new term I introduced is this Ks here. And this is simply from the dispersion relationship for electromagnetic radiation in a vacuum. So it's omega equals cK, and this is going in the s hat direction here. So if you put this equation into this equation, churn through all the mathematics, you find out you get a new phase term. In a moment, I'll explain what's going on here. OK, any questions on that? This is the phase factor inside the cosine for our scattered radiation. Previously, we just had this term here. So our scattered radiation just looked like a plane wave with the same wavelength and frequency as the incoming wave. But now it's oscillating at the frequency of the scattered wave, which it should do because that's how we've defined it.
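Here is a minimal sketch of the double Doppler factor just written down, for one hypothetical electron in a 90-degree scattering geometry; it also checks that the exact shift agrees with omega = K dot v to first order in beta, which is the approximation used in the next few boards.

```python
import numpy as np

c, e, me = 3.0e8, 1.602e-19, 9.11e-31

# Assumed geometry: probe along x, collection along y (90-degree scattering), 532 nm probe
i_hat = np.array([1.0, 0.0, 0.0])
s_hat = np.array([0.0, 1.0, 0.0])
omega_i = 2 * np.pi * c / 532e-9
ki = (omega_i / c) * i_hat
ks = (omega_i / c) * s_hat            # |k_s| ~ |k_i| for a non-relativistic particle
K = ks - ki

# One electron with a thermal-scale velocity (Te ~ 100 eV assumed)
v = np.array([0.6, -0.8, 0.0]) * np.sqrt(2 * 100.0 * e / me)
beta = v / c

omega_s = omega_i * (1 - i_hat @ beta) / (1 - s_hat @ beta)   # double Doppler shift
print(f"exact:   (omega_s - omega_i)/omega_i = {(omega_s - omega_i) / omega_i:.4e}")
print(f"K dot v: (omega_s - omega_i)/omega_i = {(K @ v) / omega_i:.4e}")
# The two agree to first order in |beta| ~ 2e-2.
```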
But interestingly, we've got this extra phase term that comes from the difference between these two. So we can now define two more variables which don't have subscripts. So this is K, and this is simply equal to Ks minus Ki, which is this term here. And so it looks like a plane wave which is actually moving with a wave number K because we've got the difference between these two. And we can also define a frequency, omega with no subscript, which is equal to the difference between omega s and omega i. And this is simply K dotted with v. So this is the Doppler shift. This is the shift in frequency between the incident wave and the scattered wave-- that's omega. And that Doppler shift, as we expect, is the wave number, this difference wave number K, dotted into the velocity. This is what we normally expect for the Doppler shift here. What's interesting about these equations is this is a vector equation. And it implies that we have some particle with velocity v, like this. We have some input wave of light, our probing laser beam, that crosses that particle in this direction. And we observe the scattered light at some other point, Ks, like this. Then it appears that what we're actually measuring are the properties of some wave that goes from-- well, has this wave number K, which is the difference between these two wave vectors here, and it has some frequency, omega, which is that K dot v here. So in reality, what we're doing when we measure with our spectrometer at omega s and at Ks is we're measuring the properties of this wave, which is transferring us from the incident radiation to the scattered radiation. So this looks an awful lot like an equation for the conservation of momentum and energy. If I just times these by h bar and think about them as photons, you can see that straight away. This is the mathematics. We'll get to an interpretation of it quite soon. Hold on if you're struggling. We will get there. Any questions on the mathematics so far? OK. So a couple of things to say about these new vectors we introduced-- the size of this new vector K, which is simply equal, again, to the size of the difference between these two vectors, is equal to Ks squared plus Ki squared minus 2 Ki Ks cosine of theta to the half, where theta is the angle between these two. So this is just vector algebra here. Now, for nonrelativistic particles, we find that the incident light is going to have a frequency very similar to the scattered light. And so these two wave numbers will also be similar in magnitude here. So let me write that more explicitly. For nonrelativistic systems, the Doppler shift is small. So omega equals K dot v is much, much less than omega s or omega i. That means omega s is roughly omega i. We haven't seen a significant Doppler shift here. And that means that the size of these two K vectors, Ks, is roughly equal to the size of Ki, like this. And if we put that into this equation, then we find that the size of K is roughly equal to 2 times Ki sine theta upon 2. This is a very useful formula to remember because it relates the scattering angle that we're observing with respect to the incident laser beam and the wavelength of the incident laser beam to the wavelength of the mode that we're scattering on. And we'll get back to what this mode with scattering off means physically in a moment. Let's make it clear that this is sine of theta upon 2 within the brackets here. And the other thing to note is that we actually choose several of these things, right? We choose our initial laser frequency. 
And we choose what direction our laser beam goes in. And we choose what direction the light scatterers in. You might say, no, we don't choose that. It scatters everywhere. We choose which light to see. We choose where to place our spectrometer. So we set Ks, right? So all of these three things, we choose these. Again, I've got some plasma with some laser going through it, going along Ki, and I set up my spectrometer to look at Ks. And I'll redraw these so they're the same length. I have controlled most of the parameters here. And so that means that because I've chosen these two things, when I'm measuring my spectrum, I know that I'm measuring modes, which, again, correspond to this K here. So just to write that again, if we launch a wave with omega i, Ki, and we detect a wave being scattered at a certain frequency in a certain direction here, then this implies that some wave omega K, where omega is omega s minus omega i, and K is Ks minus Ki, implies that this wave exists. But there is something within the plasma that is capable of taking away or adding the momentum and the energy we need in order to be able to see this mode here. So, for example, if we have a spectrum on our spectrometer here-- and this is in units of omega, omega s minus omega i, so I'm just showing you the shift-- then our incident laser is going to be at omega i, which is just 0. So if we accidentally reflect this laser beam off the metal inside the chamber and catch the scattered light on our spectrometer, we will see it at omega i unshifted, like that. If we now put a detector at 10 degrees, we might, for example, see scattering here. And this scattering here, omega equals K dot v, equals 2 Ki sine theta over 2 v, will be proportional not only to the velocity of the particle that we scattered off, but it will also be related to the angle. So now if we put another detector at a different angle, we will see an omega which is larger because the angle is larger as well. So what exactly are we scattering off? Well, in this picture here, what we've drawn above, we're talking about something called incoherent scattering. And we'll get back to what this means later on. But incoherent scattering, we scatter off a particle. That's what we've drawn here. There's some particle. And that particle has a velocity v. And that v causes a shift, omega equals K dot v. So the particle is responsible for balancing this momentum and energy equation. It is the K, and it is effectively the omega here as well. There's also another regime of Thomson scattering, though, and this is called coherent Thomson scattering. And in coherent scattering, we scatter off waves in the plasma. And waves also have omega and K. And they have some distribution which links these-- for example, omega equals Cs K. This Cs, the sound speed, square root of ZTe plus Ti upon Mi means that if we now measure a shift on our spectrometer, we can interpret that omega as being due to, for example, this sound wave. And from the shift that we see, we can now infer the plasma temperature. So this very roughly is giving you an idea of how we will use coherent waves. It will make this very, very mathematical. OK, there's a lot going on there. Question? Yes? AUDIENCE: How-- you said that [INAUDIBLE]. JACK HARE: Yes. AUDIENCE: How [INAUDIBLE]? JACK HARE: Yeah, absolutely. So there's your background. You've got bremsstrahlung, and you've got line emission, and you've got synchrotron. You've got all sorts of things. 
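(The answer about background light continues just below. Before that, here is a short numerical sketch, under assumed probe parameters, tying together the last two boards: how the measured shift grows with scattering angle through |K| = 2 k_i sin(theta/2), and how a measured coherent ion-acoustic shift omega = c_s K would be inverted for a temperature.)

```python
import numpy as np

e, mp, c = 1.602e-19, 1.67e-27, 3.0e8

lam_i = 532e-9                             # assumed probe wavelength
ki = 2 * np.pi / lam_i
v = 5e4                                    # example particle/flow speed along K [m/s]

# Incoherent picture: shift omega = K.v grows with angle through |K| = 2 k_i sin(theta/2)
for theta_deg in (10, 90, 180):
    K = 2 * ki * np.sin(np.radians(theta_deg) / 2)
    print(f"theta = {theta_deg:3d} deg: |K| = {K:.2e} 1/m, omega = K*v = {K * v:.2e} rad/s")

# Coherent picture: scatter off an ion acoustic wave, omega = c_s K, then invert for temperature
Z, A, Te_eV, Ti_eV, theta = 1, 1, 200.0, 200.0, np.pi / 2     # hypothetical hydrogen plasma
K = 2 * ki * np.sin(theta / 2)
omega_measured = np.sqrt((Z * Te_eV + Ti_eV) * e / (A * mp)) * K  # pretend this came off the spectrometer
print(f"inferred Z*Te + Ti = {A * mp * (omega_measured / K)**2 / e:.0f} eV")
```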
You pick a probe, as in you pick a laser with an incident frequency which is not near any other strong features in your plasma. Bremsstrahlung is a nice background. So if you have bremsstrahlung-- this is wavelength here. If you have bremsstrahlung, and you're doing Thomson scattering, you should see a very clear peak from it. If you've got bremsstrahlung, and you've got lots of lines, then it is a bit difficult to tell the difference between the lines and your Thomson scattering. So you want to pick your Thomson scattering probe so it's not close to any other features there. Then it's usually pretty clear because it's like, what else could it be? If we see scattered light there, we know there aren't any atomic lines. So it is likely [INAUDIBLE]. AUDIENCE: [INAUDIBLE] very common. JACK HARE: Yes. Yes. There are also advantages. For example, a lot of the radiation that's emitted from the plasma is unpolarized, whereas the Thomson scattering radiation is polarized. And so we can use a polarizer to help reject some of that unpolarized light-- half of it. So there are various techniques you can do. The Thomson scattering probe is usually very short in time. So if you have a gated camera, you only collect light while the probe is active. So you don't collect so much of the self-emission from the rest of the time that the plasma is there. But it is challenging to get above the background. And, again, this is the answer to Lansing's question. You need a big laser. You need a lot of scattered light. Yeah. Joules in nanoseconds? Yeah. That's really big. [LAUGHS] Yeah. Yeah, for example, on the [INAUDIBLE] tokamak, where they're trying to do this in a sort of time-resolved way, they have a huge bank of Nd:YAG lasers that they trigger sequentially and send down the same beams. They're like, pulse, pulse, pulse, pulse, pulse, and then they have a load of spectrometers which are timed with those pulses as well. And so then they're doing Thomson scattering at, like, I think, 100 kilohertz or something like that using the staggered bank of lasers. In my experiments, I can't get the laser to go twice during a microsecond because you can't easily get, like, a joule at a megahertz from a laser. It's terrifying. So then we just have one time. And then we have to gate around that time and make sure that we only collect Thomson scattering light during the time the laser is on. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, so the technique I talked about was the low-temperature plasmas with the CO2. For example, they had a pulsed system. So they were constantly making the plasma. What they did is they had the laser synchronized with that-- so at whatever rep rate the plasma was being produced-- and they integrated. So they just added up the photons from every single experiment until they had enough photons to get a good spectrum because the trouble is the density is so low there that just from a single shot, you don't really get very much scattered light. But if you do it for hours at a kilohertz, you start to get enough photons that you build up your spectrum. For that to work, you have to be really sure your plasma is the same every time. And they were, or they felt that they were sure that it was the same every time. Obviously, if your plasma is changing, if we're dealing with a very unstable system where the plasma is moving around, and then the Thomson scattering probe will be sampling a different bit of plasma every time, that's not really reasonable. But if you've got a nice homogeneous plasma, you've got it. 
So that's a nice situation to be in, actually, because if you want better signal-to-noise, you can just run the system for, like, another day. Of course, your signal-to-noise usually only goes as the square root of n. So you get diminishing returns. But yeah.
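To put numbers on the shift relation above, omega equals K dot v with the magnitude of K equal to 2 k_i sine of theta over 2, here is a minimal sketch in Python. The 532 nanometer probe wavelength, 45 degree scattering angle, and velocity below are hypothetical values chosen purely for illustration, not numbers from any particular experiment.

```python
import numpy as np

# Hypothetical probe and geometry (illustration only): 532 nm laser, 45 degree collection angle.
lambda_i = 532e-9            # incident wavelength [m]
theta = np.deg2rad(45)       # scattering angle between k_i and k_s
v = 5e5                      # particle (or flow) velocity along K [m/s]
c = 3e8                      # speed of light [m/s]

k_i = 2 * np.pi / lambda_i           # incident wavenumber
K = 2 * k_i * np.sin(theta / 2)      # |K| = |k_s - k_i| for nearly elastic scattering
omega_shift = K * v                  # Doppler shift omega = K . v (velocity taken along K)

# Convert the frequency shift into the wavelength shift the spectrometer would record.
omega_i = 2 * np.pi * c / lambda_i
dlambda = lambda_i * omega_shift / omega_i

print(f"|K| = {K:.3e} 1/m, shift = {omega_shift:.3e} rad/s, d_lambda = {dlambda*1e9:.2f} nm")
```

For the coherent (collective) regime one would instead compare the measured shift against a wave dispersion, for example omega equals c_s times K with c_s the ion-acoustic speed, rather than against a single-particle velocity.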
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_15_Equilibria_and_Imaging.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: All right, so welcome back, been a bit of a break. You may be-- yes, exactly, why are we here? What are we doing with our lives? We're learning about plasma diagnostics. And you may remember, before we went away, we spent a very enjoyable lecture drawing all sorts of diagrams, like this, where we had some energy levels here. Maybe this is a ionization energy level. This is maybe some lower energy level. And something happened. Photons were flying around. Electrons were flying around. And we said that if we want to understand what the ionization state is in our plasma and also the excitation state, which would be if you have some intermediate level here, which I guess I've already used I. So I'll have to come up with some other notation for it, N, like that. Maybe we want to know how many electrons are in this intermediate level, in the lower level, in the upper level. Because at the end of the day, in spectroscopy, what we're detecting is photons, which are being emitted. And the photons are being emitted when the electron drops down either from being a free electron into a bound state or between two bound states. And we went through all of these processes. And we said, if we really want to understand the plasma, we need to model all of these processes. And obviously, that's an awful lot. So today, I'm going to go through. And I'm going to present some simplified equilibria in which we do not need to include all of these processes. And these are the equilibria which we tend to have a better chance of modeling. Anyone remember what this process was called here? This is my personal favorite, when you have two electrons in an excited state, and one of them drops down, and that provides enough energy for the other electrons to ionize itself. STUDENT: Autoionization. JACK HARE: Autoionization, yes, absolute favorite. That's crazy. OK, good stuff. So now, let's go through some equilibria most of which do not involve autoionization. And the reference for all of this-- there are many textbooks on plasma spectroscopy. A very famous textbook is Griem. That's the name of the author. And the title of the book is Plasma Spectroscopy. Griem is not actually a particularly good book for learning spectroscopy. It's a pretty good reference. But you will find all of this stuff in here, if you want to know more. Hutchinson has some of it in a slightly more fragmented fashion. So I've been taking most of this material from Griem. So the most simple equilibrium is an equilibrium where we're in complete thermodynamic equilibrium. If we're in a thermodynamic equilibrium, this implies straight away that we have a Boltzmann occupation of states. So our Boltzmann occupation of states is something like the occupation of the i-th state, whether that's an ionization state or it's an excited state within an ion, is proportional to exponential of minus the energy of that state over the temperature. And the electrons, and the ions, and the radiation all share the same temperature. So there is only one temperature for electrons, the ions, and the photons. The other thing about a thermodynamic equilibrium is that all the processes balance. So all of the up processes that excite the system are exactly balanced by their counterparts that de-excite the system. This is how we get into a steady state here. That's actually not true in other equilibria. There can be processes which excite and different processes that de-excite. As long as the rate is the same, you're in an equilibrium. 
And we'll talk about some of those later on. But for thermodynamic equilibrium, all processes balance with their opposite process. OK. Now, how often do we think we get a thermodynamic equilibrium in a plasma? STUDENT: [INAUDIBLE]? JACK HARE: Maybe in space, probably getting closer. I mean, a thermodynamic equilibrium is an infinite system which is not changing in time. So of course, in reality, we can never realize it. In a space plasma, you might get close to thermodynamic equilibrium. But this also implies that you have to have your photons in thermal equilibrium with your gas, which means that your system is optically thick. And space plasma is-- at least plasma in space, in outer space, is rarely optically thick. Maybe inside some astrophysical body, it could be optically thick. So this we don't really get very often. So this is rare. OK. Next one I want to talk about is local thermodynamic equilibrium, which I'll write as local TE. So people will often call this LTE, like this. OK. Now, in local thermodynamic equilibrium, we make the assumption that our plasma is optically thin, which means that we can ignore the processes that involve photons being absorbed and emitted in terms of changing the levels. So we have no-- I guess I'll call it radiative processes. The radiative processes still occur. We can still observe the lines coming from them and do spectroscopy on them. But they don't dominate the balance of what determines what states are excited and what are not. In fact, that balance is then obviously dominated by the non-radiative processes. So these are collisional processes. The collisional processes dominate. So this is collisional processes which excite from a lower state to an upper state, but also collisional processes which de-excite from the upper state back down to the lower state. So our occupation of the upper and lower states is set purely by these collisional processes. However, we don't need to know the cross-section for these processes, which we would do in our more detailed full model. Remember, this sigma ij is the cross-section for a process that takes us from state i to state j. But they are unimportant in LTE, due to the balance. So we keep the feature of a thermodynamic equilibrium that each process balances its opposite process. So the actual cross-section of the process is not important. That's what makes it a local thermodynamic equilibrium. And if you look at what this means and you work through the mathematics of your population balance, you come up with the famous Saha equilibrium. So a Saha equilibrium, named after its discoverer, has a form that says that the electron density times the ionization density for state z in its ground state over z minus 1 in its ground state-- so this is the density of atoms in ionization state z in their ground state, over atoms in ionization state z minus 1 in their ground state, times by the electron density-- is equal to, most importantly, a factor to do with the ionization potential. So this chi z here is the ionization energy to go from z minus 1 to z. So the change in energy there over the temperature. And then there's a few other factors out front that come from the fact that we're going to be assuming the populations are Maxwellian. There's an electron mass times temperature over 2 pi h-bar squared, all to the 3/2 power. And then there's also a factor that's to do with the degeneracy of these levels. And we talked about the degeneracy as being different ways to arrange electrons and still get out the same energy.
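As a minimal numerical sketch of that Saha ratio (my own illustration, with the statistical-weight factor set to 1 for simplicity and hydrogen's 13.6 electron volt ionization potential as the only physical input):

```python
import numpy as np

def saha_ratio(T_eV, chi_eV, g_ratio=1.0):
    """Return n_e * n_z / n_{z-1} [m^-3] from the Saha equation.

    T_eV    : temperature (electrons and ions assumed equal) in eV
    chi_eV  : ionization potential from stage z-1 to z in eV
    g_ratio : statistical-weight factor 2 g_z / g_{z-1} (set to 1 here for simplicity)
    """
    m_e = 9.109e-31        # electron mass [kg]
    hbar = 1.055e-34       # reduced Planck constant [J s]
    e = 1.602e-19          # J per eV
    T_J = T_eV * e
    prefactor = (m_e * T_J / (2 * np.pi * hbar**2)) ** 1.5
    return g_ratio * prefactor * np.exp(-chi_eV / T_eV)

# Example: hydrogen, chi = 13.6 eV. Combined with quasineutrality and a known total
# density, this ratio fixes the ionization fraction at a given temperature.
for T in [1.0, 2.0, 5.0]:
    print(f"T = {T:4.1f} eV: n_e n_H+ / n_H0 = {saha_ratio(T, 13.6):.3e} m^-3")
```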
So this is gz, for the z ionization state, 1 over the degeneracy of the lower ionization state. So the Saha equilibrium doesn't tell you about how the excitation states are distributed within each ion, but it does tell you what the ionization is. And so this tells you, for a given temperature, how many particles you expect to be in each ionization state. And because of this exponential factor here, which comes from a sort of Boltzmann-like argument, we're going to end up with most of our particles still being in lower ionization states. OK. Any questions so far? Yeah? STUDENT: [INAUDIBLE]? JACK HARE: The radiation doesn't have to be the same because it's optically thin. And we don't care about it. But the ions and the electrons are the same temperature, yes. STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so this notation here, this lower number here, is like if you have a series of levels within an atom, that atom has a charge of plus z. And then I've got these levels, which I could label 1, 2, 3, 4, and so on like that. This 1 refers to the energy level with an atom with charge plus z. And the z here refers to that charge. So these are atoms with different ionization states, which means, in general, their energy levels will have different energies. But here, we're taking the ratio of the atoms of ionization state z in the ground state, level 1, and the atoms of ionization state z minus 1. So they have one more electron. And they're in their ground state, 1. And the energy of that ground state will not, in general, be the same as the energy of the ion with one fewer electron because all the energy levels will shift when we reduce the number of electrons. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yeah. STUDENT: [INAUDIBLE]? JACK HARE: Ah, yes. Yeah, but you could skip that by just taking ratios of this equation with an ionized argon. And then you get out the ionization to get the fifth ionization level. So you don't actually have to do it recursively. You can do it all in one go. You could also do it that way, if you wanted to. But yeah, in general, these ionization potentials are tabulated. They're things that people know from experiments. It's 13.6 electron volts for hydrogen. It's less for every subsequent element. And so you need to put in the temperature, of course, of your plasma. But then you can calculate this pretty quickly. So this is a nice and simple-- relatively simple equilibria. And so for example, if you're doing spectroscopy and you see a line which has an energy which corresponds to the z ionization state-- you see a line that corresponds to argon 5+, then you know you have to be at least the temperature that produces argon 5+. You can't be at a temperature where argon is completely neutral or only singly ionized. So straight away, this sort of equilibrium is like if I see that line, it must be at least this hot. And that's often good enough for spectroscopy. We'll talk about more advanced tactics for using spectroscopy later on. OK. Other questions? Any online? OK. Next equilibrium is something called corona equilibrium. Coronal equilibrium is called coronal because it was first derived in the case of the solar corona. Sometimes, this is also called a collisional radiative equilibrium. And that name, radiative, is much more descriptive. And I wish we used it more. But you'll still find most people calling it coronal equilibrium instead. 
So it's called a collisional radiative equilibrium because we consider the excitation going from a lower state to an upper state only collisional processes. But for any de-excitation going from an upper state back down to a lower state, we consider only radiative processes. So this is a slightly seemingly contrived setup that actually occurs a great deal of the time. It's a setup in which our system is optically thin. And so we are very unlikely to have a photon wander by and suddenly excite our system because the photons mostly just stream through the plasma without being absorbed. So this is optically thin. The, photons escape. And yet, it's also so collisionless-- the collisions happen so rarely-- that, although they are responsible for excitation, before you have chance to have a collision de-excite your system, it will spontaneously emit a photon. So it instead will have spontaneous emission taking us back down. So this works very well for low density plasmas. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yes, exactly. But we're not dealing with any radiative processes that rely on there being a large number of photons like stimulated emission. Those will never happen. It's just imagine you've got your ion floating around. Lots of photons stream by. They will never interact with it. Occasionally, a collision will happen. It excites. And so now, it's an excited state. Then the clock is ticking. Two things could happen now. Another collision could de-excite it and nonradiatively take that energy away. Or spontaneous emission could occur. And we've chosen-- or this system works in a regime where the density is low enough. Remember, these processes, the collisional processes, all depend on density, like that. We've chosen a density which is low enough that there will never be a second collision more quickly than we have spontaneous emission here. This occurs a lot in tokamaks and other plasmas. And it's also relatively easy to solve for. So this is a very favorite model in plasma spectroscopy. OK. So if you have that, we can also write down the occupation of the excited states. So this is some upper excited state, number of atoms in that state, compared to the ground state. Everything is being driven back to the ground state by spontaneous emission. Even if it takes multiple steps, you'll eventually get the ground state. And you'll get there long before any collision occurs. So we are going to end up mostly in this ground state here. And this ratio here is going to be equal to the number of electrons doing the collisions to pump it up to an excited state times by the cross section to go from the ground state to this excited state averaged over the velocity. So that's our standard reaction rate here. And it's going to be divided by the rate at which any process takes it from the upper state down to the lower state here. And this could be by a chain of multiple processes. So for example, if this is the upper state, and there's some intermediate state, and this is the lower state, we'd have to add up the processes that go straight down and the processes that go via some intermediate. And any chain of these processes will happen more quickly than other collisions. So this is the balance between these two here. And we can write it very easily for this collisional radiative equilibrium. But the up processes are no longer balanced by the down processes. They're completely separate. That's a big break from our local thermodynamic and our thermodynamic equilibria. OK. Questions on that? Mm-hmm? 
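A minimal sketch of that coronal balance, using made-up order-of-magnitude numbers for the rate coefficient and decay rate rather than real atomic data:

```python
def coronal_ratio(n_e, excitation_rate_coeff, decay_rate):
    """Excited-state fraction n_upper / n_ground in coronal equilibrium:
    collisional excitation up, spontaneous (radiative) decay down.

    n_e                   : electron density [m^-3]
    excitation_rate_coeff : <sigma v> for ground -> upper collisions [m^3/s]
    decay_rate            : total radiative decay rate out of the upper state [1/s]
    """
    return n_e * excitation_rate_coeff / decay_rate

# Hypothetical numbers, just to show how small the excited fraction is at low density:
n_e = 1e19            # m^-3, a tokamak-edge-ish density
sigma_v = 1e-15       # m^3/s, made-up excitation rate coefficient
A = 1e8               # 1/s, typical allowed-transition decay rate

print(f"n_upper / n_ground ~ {coronal_ratio(n_e, sigma_v, A):.1e}")
```

The approximation holds as long as the collisional pumping rate, n_e times the rate coefficient, stays well below the radiative decay rate; that dimensionless comparison is what decides whether the model is valid.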
STUDENT: [INAUDIBLE]? JACK HARE: That won't happen in this case because, before the second collision happens, it will have decayed, right? Because we're saying that these collisions are so infrequent. Yeah, really, we just have collisions going up. If the collision only took us up to here, then, before a second collision could occur, we'd have spontaneous emission and decay. If it goes up to here, then there could be multiple pathways back down, yeah. OK. STUDENT: [INAUDIBLE]? JACK HARE: I'm sorry? STUDENT: [INAUDIBLE]? JACK HARE: Right. So this is one of these things, like when is this model valid, right? And so then there will be a dimensionless parameter, which will be the ratio of those time scales. And you would like that parameter to be very large or very small, depending on whether you write it one way or the other way up. What is very large and very small? 10 is a pretty large number. 3 can sometimes be large enough, right? So if you're using this model in a situation where the ratio is only 3, it will work roughly. It won't be completely wrong. There'll be a nice, gentle transition from being right to being wrong. And you will be somewhere in the middle there. And so a lot of the time, we will use this model. And if you go and look in Hutchinson's book-- I'm not covering it in the lecture today-- actually, there's a limit to the energy levels for which CRE works. And so we find that transitions between the lower energy levels tend to be very rapid. They're very favorable because of the overlap between the upper electron wave function and the lower electron wave functions. And so you can actually use this coronal equilibrium to describe the first n energy levels. And then you have to use a different equilibrium, like maybe LTE, to describe the upper energy level. So you actually have a transition between which models are valid even within the same atom with the same ionization state. So this stuff can get complicated. Yeah, that was probably too much information, but still. Any questions online? OK. And then the final thing, after we've done coronal equilibrium, is really just having to actually solve those equations. So this is what we call non-local thermodynamic equilibrium. For many years, I thought there was a hyphen here and this was non-local thermodynamic equilibrium. So we had a thermodynamic equilibrium, which mediated non-locally. It turns out I was wrong. This is just not a local thermodynamic equilibrium, which is a much less descriptive name, but is much more valid. And you'll often see this written as lowercase nLTE. So in general, you have to solve all of the different processes, exciting and de-exciting. But for a specific nLTE model for a specific plasma you're interested in, you may still be able to restrict what those processes are. So what you need to do is actually consider all the relevant processes because, even in these models, other processes are happening. It's just we think the rate is very small. And so those processes are not relevant. So if you're doing an nLTE model, you simply have to calculate what processes are relevant and what aren't. So I'll give you an example here, if we're dealing with a photo-ionized plasma. For example, I've got a gas cell, a chamber filled with gas. And I've got windows on it, which are transparent to X-rays. And I have an X-ray source outside. Those X-rays will stream in. And they will photoionize the gas. The photons will be absorbed. And we'll have photoionization. 
And then the occupation levels will be set by those photoionization processes. We won't have to consider collisional processes because the photon field, the strength of the X-ray drive, is so strong that the photons are much more likely to be involved than any collisions. And so for a photoionized system, for example, we can neglect collisional processes. So that reduces the number of equations that you need to solve. And so you may be able to get away with a smaller set of equations. But if you calculate it and all your processes are roughly equal, then you really do have to solve that big equation I put up on the board last time that was effectively 0, because we're in steady state, equals the sum over all processes. And as I mentioned, people spend their lives writing code to calculate these coefficients and to solve this balance. And it can get very, very messy indeed. But that's all I'm really going to say on nLTEs. So in general, if we're trying to do spectroscopy, there was a workflow that we might need to do. So this is modeling spectroscopy. So first of all, we're going to model the spectra from a blob of plasma, a little cube like this. It's got some mass density, which is what spectroscopists often work in, rho. And it has some temperature. Occasionally, your spectroscopists might work in terms of ion density instead. Obviously, these are the same up to the ion mass here. Then you need to determine what equilibrium is appropriate. This could be a Saha-type equilibrium or a collisional radiative equilibrium. So maybe you know this by back of the envelope calculations. Maybe you go through and calculate a load of rates and determine what's most important. But you have to determine what your equilibrium is. So then you need to calculate the ionization states. What ionization states are present in your system here? So these are the density of ions with charge z, all the different z's. In some cases, you may be very weakly ionized. You may have a lot of neutrals. And you may have just a few atoms which have an ionization of 1. In other cases, you may go all the way from neutrals to fully stripped. For something like tungsten, which has 74 electrons, that's an awful lot of energy. But you may well be able to get a plasma [INAUDIBLE]. So then you'd have to take into account all the other ionization states. Then you have to calculate the excited states. And you have to calculate the excited states for each nz. So for all nz, you need nz, and then whatever the excitation is. Then you need to calculate the line strengths. So these are the coefficients A ij. And you'll remember that these had some hand-wavy relation to the overlap integral with the dipole operator between the upper and the lower state. So this is the dipole operator, which is radiating away a photon of energy, h-bar omega ij. And that process has some probability to occur. And so this tells you how many photons with that energy you're going to see. And you need to do that for every transition between every set of excited states for every ion in your system. So this is hard. Once you've done that, you can now predict the spectra. So this is predicting the intensity of radiation with frequency omega for the very specific density and temperature that you started out with. Now, if you're very, very lucky, your system is optically thin, so tau less than 1. Life is good.
If tau is greater than 1, now you need to consider radiation transport, or the fact that some of the radiation will be absorbed as it moves through your plasma. And actually, for both of these as well, you'll have to consider things like reflections. So the fact you've got metal walls on your tokamak, or whatever it is, means that some light going the other way will bounce back. I've got my plasma here. Some of the light from this bit of plasma is going to be emitted this way. And then it will bounce back into your detector. And although it's being emitted from here, it will look like it's being emitted along the line of sight here. So if you don't take into account the reflections, you will be thinking you've got lines from one bit of plasma when they're, in fact, from another bit of plasma. You'll also need to model the response of your detector, because there are no perfect spectrometers out there. And they, in general, have a response curve to them, which is like a response as a function of omega. They're not flat. They treat different frequencies differently. They absorb infrared photons more than visible photons. And then finally, you need to compare this to your experimental data. So for example, if you had a spectrum that has two lines in it like that, and you've done all of this process, and you get out a spectrum that looks like this, you'll have a choice. You can either say, it's close enough for today. Or you iterate and you do all of this again. And you pick a new density and temperature. And you do some sort of gradient descent type thing until eventually your model spectra matches up with your experimental spectra. And then you say, aha, now I finally know the density and temperature of my plasma. It's a fair bit of work to do spectroscopy. Questions? Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so you can. I will show you some tricks to narrow this parameter space in a moment. This is the most general case. I had a whole page of notes on this, and then my iPad hasn't updated them. But I wrote the word line ratios. And I'm going to try and reconstruct from memory what exactly I was going to say. So we'll see how it goes. Yes? STUDENT: [INAUDIBLE]? JACK HARE: Well, other diagnostics can be expensive, or they can be limited to certain parameter ranges. If you're doing astrophysics, you can't do Thomson scattering on your black hole corona. So you're stuck with this sort of thing. Spectroscopy does provide extremely detailed information on all of the atomic kinetics. This is what people call this sort of distribution of different states, the atomic kinetics. So if you're far from local thermodynamic equilibrium, say you're looking at fast electrons in a tokamak, the spectrum may provide information on those fast electrons that something like Thomson scattering doesn't, because Thomson scattering may only be sensitive to the bulk of the plasma. So this is a time-honored diagnostic. We have a lot of knowledge about how to do it. It's just that it turns out to be very hard. I've presented this in a slightly flippant way. But this is what you need to do if you actually want to solve all these things. Some of these steps have already been done for you, right? I mean, a lot of this is now included in computer packages. And so if you know what your equilibrium is-- for example here, going from this rho and T to some spectra-- there are large codes. PrismSPECT is a commercially available code, for example.
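As a toy illustration of that iterate-and-compare loop (a sketch of the general idea only, not of PrismSPECT or any other real code), the snippet below scans a small grid of densities and temperatures, evaluates a stand-in forward model for the spectrum, and keeps the best chi-squared match to the measured data. The forward_model function here is entirely hypothetical.

```python
import numpy as np

def forward_model(rho, T_eV, omega):
    """Hypothetical stand-in for the full spectral model: two Gaussian lines on a
    continuum, with widths and relative strengths that depend on rho and T_eV."""
    line1 = np.exp(-((omega - 1.0) / (0.02 * np.sqrt(T_eV)))**2)
    line2 = 0.5 * (T_eV / 10.0) * np.exp(-((omega - 1.2) / (0.02 * np.sqrt(T_eV)))**2)
    continuum = 0.05 * rho
    return continuum + rho * (line1 + line2)

def fit_spectrum(omega, measured, rhos, temps):
    """Brute-force grid search: return (rho, T, chi2) minimizing chi-squared."""
    best = (None, None, np.inf)
    for rho in rhos:
        for T in temps:
            model = forward_model(rho, T, omega)
            chi2 = np.sum((measured - model)**2)
            if chi2 < best[2]:
                best = (rho, T, chi2)
    return best

# Fake "measured" data generated from the same toy model, just to show the workflow.
omega = np.linspace(0.9, 1.3, 400)
measured = forward_model(1.0, 12.0, omega) + 0.01 * np.random.default_rng(0).normal(size=omega.size)
rho_fit, T_fit, chi2 = fit_spectrum(omega, measured, np.linspace(0.5, 2, 16), np.linspace(5, 20, 16))
print(f"best fit: rho = {rho_fit:.2f}, T = {T_fit:.1f} eV, chi2 = {chi2:.3f}")
```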
You give a code like that a density and a temperature and the size of your plasma, and it will do radiation transport through it and tell you what the spectrum is. You run that for a few different rhos and T's, and once you've created a table of spectra for different rho and T, you can very quickly fit that to your experimental spectra. So none of this stuff is impossible. It's just, if you're doing it from scratch, it can be very hard. Yeah, I'll just take Nicola's question. STUDENT: [INAUDIBLE]? JACK HARE: Yeah, that's a great question. So what we talked about here is that there is some photon with some energy, right? So if we think that that is true, our spectrum should look like a set of delta functions, right? And those delta functions should sit at omega ij. And I know that delta functions are infinite. But they should have a height that is related to the strength of the transition here. What we'll talk about next lecture is line broadening. So that is when these lines are broadened by other processes. And we may be able to, in some circumstances, learn more about the plasma from the broadening. For example, if there's Doppler broadening, we'll learn about the temperature. If there's Stark broadening, we'll learn about the density. And then there are other effects-- Zeeman splitting and things like that-- that may even be related to the magnetic field. But we'll get on to that later. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: I would say yes, in general. And the worst case of this is the fact that this is line integrated. So we're going through the plasma here when we're collecting all of that light. And so if the plasma-- for example, rho or t-- changes along that curve, then you can't say exactly how that density changes because there are lots of different solutions that give you the same result. So definitely not unique. People do inverted spectroscopy and very cool techniques like that, where they measure lots of lines of sight and have symmetry assumptions. But that's quite tricky. Any questions online? OK, I'm going to try and do the line ratio thing. Let's see how this goes. So you may have a spectrum with some very well-isolated lines. So these lines could be ones that are some distance apart from any other lines. They stand by themselves. It's very clear what their wavelength or frequency is. There's no confusing it as anything else. An example of a poorly separated line would be if you have three lines all on top of each other, like this. Because these lines are probably adding together in some way, you can't exactly tell what the intensity of each of these is. But if you have nice, well-separated lines, then you have some intensity coefficient for this one, some intensity coefficient for this one here. So for this line ratio technique, you need to have strong, isolated, and identified lines. You need to know what these lines correspond to. So identified, in this case, means we need to know what the ionization state is for it. And we also need to know what upper level and lower level it corresponds to within that ionization state. So we need to know the-- I guess we technically need to know the density of the upper state here. And so an example of this would be if I have a line that comes from aluminum that is twice ionized and aluminum which is three times ionized or something like that, these two ions will have very strong lines for certain transitions.
You'll often have a very strong transition from some upper state, the lowest lying upper state, back down to the ground state because, in general, that's where you're going to end up. And also, in general, you have a large energy gap here. So this is pretty favorable. So maybe you've got some well-isolated lines that you know correspond to emission from an upper level down to the ground level here. And you can measure their intensity very well. So we now know the intensity of omega 1. And we know the intensity of omega 2. And this is what I mean by strong. You don't want to be doing this on little lines that are down here in the noise because we can't measure their intensity very well. So what you do with these lines is you say, OK, I know that the intensity of this line-- so the intensity at some frequency corresponding to the transition from the upper state to the lower state here is equal to the number of ions in the upper state times by the probability of transition from the upper state to the lower state. And then because this intensity often has units of energy here-- this is like a power-- we obviously need to have an energy unit here. So this is h-bar omega upper lower. So if we have two lines here, we can take the ratio of these. So we can have intensity of omega z upper lower over intensity of omega z minus 1 upper lower like that. And we can see then that we're going to get out n z upper over n z minus 1 upper, times A upper lower for z over A upper lower for z minus 1-- I'll put little superscripts on these, so we remember that these are different coefficients for our different ionization states. And then the h-bars will cancel, so we'll also have an omega z upper lower over omega z minus 1 upper lower. But these A's and omegas are things that we can calculate or look up in our atomic database. The intensities are things that we have literally measured because our spectrometer is well-calibrated. So this line ratio we've also just measured. So that means we can infer the ratio between the number of atoms in one ionization state and another ionization state. Here, I've done them as neighboring ionization states. This could be like z minus 2, z minus 3. It doesn't really matter. You don't generally get that because, for a certain temperature, you only have a few ionization states active at a time. And because you've now inferred this quantity, you can then use an equilibrium, such as Saha. And you can get out the temperature. So you can say your Saha equilibrium says, at a certain temperature, I should have twice as many ions in the lower ionization state as the upper ionization state. That's what I see. Therefore, that is my temperature. So just simply from looking at line ratios without doing all of this complicated modeling, you may be able to get a handle on the temperature. And that could be a first step that you then feed back into your algorithm where you say, I know what the starting temperature roughly is. So let's only look in the solution space surrounding that. So this is quite a powerful technique. And often, this is as far as people go. They'll do a line ratio. They'll get a temperature out. And I mentioned earlier the even simpler thing to do is just be like, hey, I've got a line that corresponds to aluminum 3+ ions. That only happens when the temperature in the plasma is like 10 electron volts, maybe 12, maybe 8. But that means my plasma is probably 10 electron volts. Perfect. And that's good enough for a lot of purposes. OK. Questions on this? Yes? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so no, so absolutely.
I say you're measuring it. But you know it, right? It's close enough. Yes, there may be some Doppler shift involved to it or some of these other line broadening mechanisms, which means you don't know exactly where this is. You may also have uncertainties in your atomic code, especially when you're working with X-rays. The X-ray spectroscopy models can be quite inaccurate to within 20%. And that's still pretty good. And so you'll say, OK, I thought it was going to be here. There's nothing anywhere near here, apart from this line. So that is probably this line. So making a line identification is very tough. And especially in a plasma in the laboratory, you think my plasma is made out of aluminum because that's what you've made it out of. But then you forget that the lenses are coated with magnesium fluoride. And so there may be magnesium fluoride absorption lines when the X-rays from your experiment photoionizing. Or your electrode is made out of copper. And now, you've got copper lines as well. Or someone left their thumbprint on one of the electrodes. And now, you have hydrocarbons and salt. So you have sodium lines on there as well. All of this has happened to me very recently. So actually, it's a nightmare. You think, my plasma is made out of this. And the answer is no. Your plasma is made out of whatever crap there is in the machine, so yeah. OK. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yes, and preferably negligible, right? So yeah, this is definitely a technique that works best if it's optically thin. You could use it if you have very strong priors about what the density and temperature are so that you can calculate the opacity. But if you have those strong priors, then you don't need to do this technique. You can use it to confirm post-hoc afterwards that your technique was right, yeah. STUDENT: [INAUDIBLE]? JACK HARE: Well, I mean, that's really tricky. If you're able to do an experiment where you can increase the depth of your plasma, that would be one easy way to do it. But of course, if you're in a tokamak, you can't just make the plasma twice as big just to see what would happen to the lines. And so in some experiments, you could use that technique. Otherwise, yeah, you'll have to use maybe other diagnostics to give you ideas of density and temperature that you can then check how optically thick the various lines are, and then iterate from there. Or simulation as well gives us some hints as to what the answer is. Any other questions? What are we doing? Ah, X-ray spectroscopy, very good. So the reason I'm talking about X-ray spectroscopy in particular is that quite a few people doing plasma physics end up using X-ray spectroscopy. And also, it's got its sort of niche nomenclature that I wanted to introduce you to so that when people start talking about it, you have some vague idea of what's going on. Of course, all forms of spectroscopy are valid. A lot of people do visible spectroscopy, infrared, ultraviolet. But if you're working with plasmas at fusion conditions, which are like kiloelectron volt plasmas, then you're going to be getting kiloelectron volt photons, right? And so that means they're in the X-ray regime. And X-ray spectroscopy can be a little bit different. So X-ray spectroscopy has very strong characteristic lines. So these lines tend to have a high transition probability, aij. And they tend to be narrow. So they show up very, very strongly. They're easy to identify. They're easy to work with. 
And these lines correspond to transitions often between some of the lowest energy levels inside our atom. So here, this 1 is not referring to some abstract concept here. This is literally n equals 1. So this is the lowest energy level down there, the energy level that your ground state hydrogen atom occupies here. And these are the other excited states. Because we're dealing with relatively high energy systems when we're doing X-ray spectroscopy, an ion may be almost completely stripped. There may be only one, two, or three electrons left here. And people start referring to these ions as helium-like or lithium-like. So a helium-like, unsurprisingly, has two electrons. And a lithium-like has three electrons. What they mean is this could be something like tungsten, or aluminum, or anything else like that. But it's so ionized that there's only two or three electrons left. And that means that you can use all of the spectroscopy, all of the quantum physics that you do on helium, which is a relatively simple system to solve compared to something more complicated. And all you have to do is replace the atomic charge, the nucleus charge, from 2 to whatever your actual nuclear charge is. So you just take z from 2 to-- I don't know-- 13 for aluminum. But it means that you have a pretty good idea of what these energy levels are because the quantum physics of low atomic number elements is pretty well understood. So these characteristic lines will occur when there is some vacancy in a lower energy level. So for example, if we have an electron up here in the n equals 2 level and there's space in the n equals 1 level to drop down, it will emit a photon here. Now, why would there be space in this lower energy level? I'd updated my notes, and that part was also not updated. This may be because of collisions, or photoionization, or something else like that. If we're in a very hot plasma, it's very reasonable to have hot electrons flying around that collisionally ionize us or, indeed, just collisionally excite us up to some very high energy level, like n equals 7. And that leaves a hole here. And because this is energetically unfavorable, to have a lower energy level unfilled, one of these electrons in one of the higher energy levels will very rapidly drop down into here. And these different transitions down to the ground state from 2 and 3 and so on have different names. So first of all, we have what are called the K-shell transitions. These are transitions down to n equals 1. We have the K alpha, which is from n equals 2 to n equals 1. That's the one I've drawn here. You have K beta. That's n equals 3 to n equals 1. And so on, K gamma, all that sort of thing. Now, it turns out that the K alpha is always the strongest transition. So you'll typically get an electron dropping down from 2 to 1 more quickly than from 3 to 1. And this line will show up more strongly. And it's strong, again, because of this overlap between the lowest energy state and the second lowest, n equals 2, energy state. These wave functions look most similar. As you start getting out to high n, these wave functions start to look really weird. And when you do the overlap integral, it's not particularly large. So quantum mechanically, it's much easier for the electron to transition between these two wave functions here. So the K alpha is always strongest. We also then have the L-shell. This is transitions down to n equals 2. And you'll have L alpha, which is 3 to 2, L beta, which is 4 to 2.
Then you also have the M shell and so on. But we tend to focus on the K shell because these lines are the highest energy atomic lines you can get from a system because you're dropping down to the lowest energy level from somewhere else. So these lines are very bright. And they're very good signatures of what the plasma conditions are. So just to give you an example, this is energy here and this is intensity. And we have something like copper. The copper spectrum will have some Bremsstrahlung component background here that you can never get rid of. And then at sufficiently high energies, it will have two peaks, like this. This one is the K alpha. This is the K beta. The K alpha, in this case, is 8.1 keV. And the K beta is 8.9 keV. And these lines are so narrow and so well-defined that they form a very strong spectroscopic signature. So if I'm looking inside my tokamak plasma that I want to be hydrogen, and I look at these high energy kiloelectron volt ranges, and I see peaks there, I can say straight away there is iron in my plasma, there's tungsten in my plasma, there's molybdenum in my plasma. And of course, all of those impurities are radiating very strongly because these transitions are very strong. And so that really ruins your power balance in the reactor. So a lot of spectroscopy, early spectroscopy work, at least in magnetic confinement fusion, was looking for these signatures of impurities in the plasma here. OK. Questions on all of this? The nomenclature thing is basically knowing that K means transitions to n equals 1, that alpha and beta mean from the level above and the level above that, knowing that the K alpha is much stronger than the others, and maybe knowing that sometimes people are like, this is a K alpha transition in helium-like aluminum. And you're like, ah, cool, yeah. But questions? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, absolutely. Bremsstrahlung is producing those wavelengths as well, in general, but not lines. So that's why they're so unique. And maybe in some system, you could have synchrotron radiation or something like that. But in general, these are very specific. So yeah. And they're slightly different energies. So K alpha for helium-like is at a slightly higher energy than the K alpha for lithium-like. And so you may also then have the-- say these ones are the helium-like. And this is the K alpha and the beta for the lithium-like. And then because you know exactly what these lines are because they're so specific, you can do your line ratio trick that we used before and work out what the temperature is because this is a lower ionization state than this one. Yeah. Obviously, if you are fully stripped, if you have got to a temperature where there are no electrons left in your system, you no longer have the line. You would only get a line from recombination. But the recombination has a spectrum, as we talked about, with an edge, but still a spectrum. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: No, I believe that is separate. Yeah, actually, it's been a while since I reminded myself what Lyman alpha was. That's the one that's 121 nanometers, right? STUDENT: [INAUDIBLE]. JACK HARE: Oh, OK. Any astrophysicists? No? OK. Yeah, that's like a UV line. Is there a chance that it is related to these? Yeah, I think it is an X-ray line, but I can't remember. The most common one... it makes me think-- STUDENT: [INAUDIBLE]? JACK HARE: It's called the Lyman series or the Lyman-- there's a Balmer series. What's the Lyman series? STUDENT: [INAUDIBLE]? JACK HARE: OK.
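As a small aside connecting the hydrogen Lyman series to these K-shell energies, here is a screened-hydrogenic (Moseley's law) estimate, an approximation for illustration only rather than a substitute for tabulated line positions. With one remaining 1s electron screening the nucleus, the K alpha energy is roughly 13.6 electron volts times (Z minus 1) squared times three quarters; for hydrogen, the same 2-to-1 transition is Lyman alpha at about 10.2 electron volts, or 121 nanometers.

```python
# Screened-hydrogenic (Moseley's law) estimate of K-alpha energies: an
# approximation for illustration only, not a replacement for tabulated values.
RYDBERG_EV = 13.6

def k_alpha_eV(Z):
    """Estimate the K-alpha (n=2 -> n=1) energy for atomic number Z,
    with one remaining 1s electron screening the nucleus."""
    z_eff = Z - 1
    return RYDBERG_EV * z_eff**2 * (1 - 1/4)

# Hydrogen: no screening, so this is just Lyman alpha.
lyman_alpha_eV = RYDBERG_EV * (1 - 1/4)
lyman_alpha_nm = 1239.8 / lyman_alpha_eV   # E [eV] * lambda [nm] ~ 1239.8

print(f"Lyman alpha: {lyman_alpha_eV:.1f} eV ~ {lyman_alpha_nm:.0f} nm")
for Z, name in [(13, "Al"), (29, "Cu")]:
    print(f"{name} K alpha ~ {k_alpha_eV(Z)/1e3:.2f} keV")
```

The copper estimate comes out close to the 8.1 keV quoted above, which is why these screened estimates are handy for a first-pass line identification.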
So maybe in hydrogen, they have separate names that correspond to these. And I think, historically, people were doing spectroscopy on astrophysical bodies, which were the first bodies we could look at that had these temperatures. So people were not doing spectroscopy on copper straight away. But if you point your spectrometer at the sun, then you can see these alphas-- these Lyman series and Balmer series lines. I'll look that up. Good question. Any questions online or comments if you know anything about Lyman series and Balmer series? OK. We can use this in reverse to design filters for our X-ray systems. So we often want to be able to measure just the light in a certain region here. So this would be like taking a picture of just the X-rays. And so we want to be able to filter out all this other light and just use the light in this region. So this is looking at X-ray filters. So a filter is a solid object. It's usually a thin foil made out of some metal or some non-metal here. And the idea of the filters is that we still have these different energy levels here-- n equals 1, n equals 2, n equals 3 and 4. But now, this is for z equals 0. So this is for some solid, like a lump of tungsten or something like that. And then if we have a photon coming in, if that photon has enough energy, h-bar omega, to excite an electron from one of the inner shells-- from 1, or 2, or 3-- then that process is now favorable. It's likely that photon will be absorbed. If it doesn't match with any of these bandgaps or any of these transitions, then the photon will go through. But if it does match, then it will get absorbed. So what we see in our absorption plot for a filter-- so this is that quantity, alpha, the opacity, as a function of frequency-- is we see a series of what are called edges here. So this is the L edge. And this is the K edge. And what you get is absorption at frequencies corresponding to K alpha, K beta, and so on, all the way up to the K edge itself, which is ionizing the n equals 1 electron up into the continuum. And same here, you get-- I'm trying to space these parabolically-- L alpha, L beta. So effectively, you have a sudden increase in absorption whenever it is possible to do this ionization. If you have done the homework and you've looked at some of the other filters that you can generate on the Henke website, you will have noticed these really weird absorption features. Henke normally gives you transmission, so you will have seen the inverse of this-- high absorption shows up as low transmission. But you will have noticed these really sharp edges. That's not a numerical problem. When you first see it, you think, oh, the code is broken here. No, these correspond to these very strong absorption edges here. So if you have a spectrum that has some lines in it and you put that spectrum through this filter, this filter would cut out, for example, this highest energy line here. And you'd just be left with the spectrum in this region. So this is how you do X-ray filtering. Oh, OK. Questions on that? And the position of these edges depends on the element. So if I use an aluminum filter, or a tungsten filter, or a plastic filter, they'll have edges in different places. And so by using different filters, I can image different parts of the spectrum. So we'll talk about X-ray imaging in a moment. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Not at all, yeah, just photo-absorbed, yeah. STUDENT: [INAUDIBLE]?
JACK HARE: Yeah, so that's a bit that doesn't-- yeah, yeah. That's the full quantum treatment, treating the photon as a wave-- well, you need a quantum treatment of it in order to get this roll-off here. Yeah, my explanation only covers why there's a sharp edge. I haven't seen a nice intuitive explanation for this roll-off afterwards. But all materials at high enough energies, no matter what, their absorption goes down to 0. So gamma rays will get through anything eventually, yeah. STUDENT: Professor? JACK HARE: Oh, sorry, I'm just going to answer another question first, Jacob. And then I'll get back to you. So the question was, why do the edges get taller and taller? I think it's due to the likelihood of this interaction, but I'm not sure, yeah. Sorry, Jacob. Your question? STUDENT: How do you design a material to specifically change or take out a certain-- like the K edge or the L edge? JACK HARE: You don't design a material. You have a look and see what materials are available. STUDENT: Yeah, I mean, how do you choose a material? JACK HARE: How do you choose a material? Well, I choose it by going on the Henke database. And I generate these curves. And I see what they are. And I guess, if you're very experienced, you just know, for example, that this K edge is at 1.6 keV for aluminum. And if you're trying to measure photons with energies below that 1.6, this aluminum is quite a good filter. But you can't engineer a material to have a K edge wherever you want. It's an atomic process. So it really depends on the elements. If, for example, you have a filter made out of a mixture of two elements, then it's just the combination of the absorption for those two different elements. So yeah. STUDENT: And you have to get the L edge to do the K edge, right? Or can you just get-- JACK HARE: Yeah. No, no. The L edge will always be there as well, and the M edge, and so on, but down to lower and lower energies, right? So they may not be interesting for your bit of the spectrum. STUDENT: All right, so you'll still see the bottom ones. But you just know that they are too low of an energy to-- JACK HARE: If you think your spectrum looks like this and you know that there are no interesting features down here-- it's just Bremsstrahlung with very bright lines up at high energies-- then you don't really care what the structure is down here because when you multiply the transmission by your spectrum, there'll be like little fluctuations down here that you don't care about because your spectrum is going to be dominated by these lines, you think. STUDENT: OK, makes sense. JACK HARE: Other questions? OK, let's talk about X-ray imaging, so why we might want to use a filter. Obviously, if we're doing spectroscopy, we don't really care about the filters because we can just do spectroscopy. And we can see where the lines are. But if you don't want to do spectroscopy and you want to make an image, you're going to need some of these filters so that you know what energies you're looking at. So this is X-ray imaging. So one of the facts about X-ray imaging is that there are no lenses. And that is a consequence of the fact that the refractive index is roughly 1 for X-rays. And lenses work on having a difference in refractive index from air. So if your refractive index of all your materials is about 1, you can no longer bend the rays. So the simplest thing that you can do for X-ray imaging is called pinhole imaging. Pinhole imaging is what the first cameras for visible light used as well.
You have your object, which, in this case, I'm drawing as like a triangle with very little symmetry for reasons that might become apparent later on. You have your pinhole, which is a little opening of diameter d in some plate, which is otherwise completely absorbing. And then this is just geometric optics. So you trace the rays of light from the corners of your object through the pinhole. And if you get it right-- and I'm probably going to get it wrong-- you get an object which is inverted from the one that you started with. So it's upside down. And it's flipped up-down and flipped left-right as well. That's not usually a big problem. You can usually work out what's going on. The object will also be magnified. So there's some distance, u, between your object and your pinhole and some distance, v, between your pinhole and your detector. It could be film or some camera. And so this is also magnified by the ratio of v upon u. And you can check that out by drawing some lines for yourself. And you'll see very quickly, from geometric arguments and similarity of triangles, that you get out this magnification here. But this will also be blurred. So you'll not get a perfect image. And it will be blurred by two effects. The first one is geometric blurring. The geometric blurring is a function of the fact that you have to have some finite size pinhole here. Ideally, this pinhole would be infinitesimally small. And then all of your rays of light would have to go through the pinhole to make it through the detector. In practice, if your pinhole is very, very small, then you don't get any light through at all. And you don't get a very bright image. So you have to increase the size of your pinhole to get enough light through to make an image. The geometric blurring is effectively-- if you think that your object consists of two little bright blobs, like this-- so you're looking at it. And it's just two little LEDs, like that. The projection of those dots, say a dot here and a dot here, through the pinhole-- I draw one of these for the top side of the pinhole and one of them the bottom side of the pinhole. So if this is just a missing light uniformly in 4 pi steradian, then it will project a circle through here. So we will end up not with two dots, but with two circles. So this is purely geometric. I'm just using ray optics here. I don't need to think about waves whatsoever. And the diameter of these circles here is going to be equal to M plus 1 times the pinhole size. So we can change how much this blur occurs by changing our magnification and also by changing the pinhole. And in the limit that our pinhole size becomes very small, our blur also becomes very small. And we can resolve point objects again. But like I said, we need-- well, if we have a very small pinhole diameter, then we have no light. So there's a trade-off here. And if you do photography and you're aware of f numbers and things like that, some of this may be very familiar to you. But for those of you who don't, I think this treatment is valid. There's another limit, which is the diffraction limit. And this is where we need to take the wave nature of light into account. So we have our little pinhole again here. But we know, from Huygens principle, if we got some wavefronts coming in like this, every point on the wavefront is a source of spherical wavefronts. Normally, when there's no pinhole here, that doesn't matter because our next wavefront is then made out of the sum of all of these little spherical wavefronts. 
But when we put our pinhole aperture in the way, we block out some of those wavefronts. And instead, we start to get diffraction. And so we start to get spherical waves coming out from this. And so if we initially had a point source, it's now going to be imaged into a blurry source here. And so the-- which one's-- OK, I can't remember. I think it's the Rayleigh criterion. The Rayleigh criterion says that if you have two dots like this, for example, that are some distance apart, then we can only distinguish them if their blurs are separated, and this separation, which sets the resolution, is 1.22 lambda over d. That's in angle space. So multiply it by v to bring that angle into a real position on our detector here. So this 1.22 comes from zeros of the Airy function. These are actually like little sincs. And we're trying to make sure that the first zeros are separated between these two here. So you may have seen this if you've done diffraction theory before. So the geometric blur doesn't care about the wavelength of the light. It doesn't care about the energy of the photons. The diffraction blur does care about the wavelength of the light because it's a wave effect here. And so we can make the diffractive blur go away by having a bigger pinhole or by having a shorter wavelength. So if we go to a bigger pinhole, then our geometric blur will be worse. And so there's actually a trade-off between these two. There'll be a sweet spot that gives you the best resolution, where you minimize the sum of the diffraction and the geometric resolution. But you need to think about this when you're designing pinholes. OK. Oh, the other thing about this is if you have a source that is emitting lots of light in lots of different wavelengths, then the short wavelengths will be diffractively blurred. And they'll be smeared out over your detector. And the long wavelength won't be. But that means that you'll have this blur over your whole detector from the short wavelengths, which reduces your signal to noise. And so this is especially where you want to have a filter. So you want to filter out the low wavelengths so that they don't blur out your image. And you want to just have the high wavelengths, like these ones here, make an image just in some of the K alpha emission from the very hot parts of your plasma. OK. Questions? Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yes. STUDENT: [INAUDIBLE]? JACK HARE: Yes, I'm not sure how that comes into this geometric optics framework. STUDENT: [INAUDIBLE]. JACK HARE: OK. Yeah, I see where that's coming from. But this is for a pinhole. It's always going to be inverted. I guess, theoretically, if you put two pinholes-- no, that doesn't work. Nope, it's always inverted. Yeah, so maybe it's negative in someone's sign convention. But not in what I'm using here because it's always true. Yeah, good question. STUDENT: [INAUDIBLE]? JACK HARE: Yeah. Well, what I've missed out here is, in fact, that, you know, a thin bit of metal foil is actually opaque to visible light. So if I put a very thin bit of aluminum foil in there, I will definitely block out the light. And you need like 200 nanometers. So you can just deposit it on some basically transparent plastic. And then you will also get the filtering from the aluminum K edge, for example. So yeah, we haven't gone down to very low energies here-- this is kind of like an X-ray regime treatment of it because, of course, you need X-ray photons in order to even get the L edge.
In reality, this goes down and some other stuff happens down at this visible wavelength. Yeah, but any metal that is sufficiently thick-- and it doesn't need to be very thick-- will block all the visible light out. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yes, you're right. I just constantly get confused between wavelength and frequency. Filter out long wavelengths, which is visible light, generally. OK, good question. You also do imaging-- and I'm not going to talk about it at length. Pinhole imaging is quite limited. You can also do imaging with bent crystals. So if you've ever looked at diffraction of X-rays from crystals, you'll know that the crystals will diffract light of certain wavelengths. And if you have a bent crystal, you can also do focusing with it. So people spend a lot of time very gently bending bits of crystal. And that makes a reflective lens. So you can't have a transmissive lens. But you can have a bent mirror, effectively. So there are two ways to focus light. You can have a lens like this that your light goes through. Or equivalently, you can have a mirror, like this, that the light reflects off. And because the mirror is bent, they will get focused to a different point. And as you adjust this mirror, it will depend where this gets focused on. Now, even a crystal is not a mirror for X-rays. It's a mirror for a certain wavelength of light. And so if you put your detector here, you might have the K alpha focused onto your detector. But for example, the K beta, which is coming in the same direction, might get reflected or diffracted off to a different point, which is off your detector. So this is a way of doing monochromatic imaging. So effectively, your crystal here acts as both the filter and the focusing element. So you can make beautiful images. You don't need to have a tiny pinhole. So you can get lots and lots of light. And you can get images with just a single wavelength, which can be very powerful for saying this is a very hot bit of my plasma that's producing K alpha. So bent crystal, you get what's called monochromatic imaging. But bending crystals is extraordinarily expensive. So pinholes are very, very cheap. And you'll find a lot of people using pinholes because they work pretty well. And of course, the bent crystals only work for individual energies. And so if you want to try and measure more broadly what the radiation is from the plasma, you'll miss out on information. OK. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: My last set of notes for today, I promise. Sorry, what? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, diodes are pretty good, yeah. STUDENT: [INAUDIBLE]? JACK HARE: No, no. It's nice silicon, 50 nanometers thick, perfectly good for X-rays up to about 10 kV, so nothing wrong with diodes. OK. So a lot of these detectors are simply film. If you went to the dentist a decade or so ago, they would have used film in their X-ray things. And in fact, most of X-ray film was developed for medical applications. And film is really good. OK. So film has a huge drawback. It's time-integrated. So you have no sense of what the time is. You have an integral of the intensity over time. So if you have some filtering, maybe you have still frequency information left in your image, because you're like, oh, this is coming all from the K alpha because I put a special filter in. But you don't have any time. So that could be a really big problem if you're doing something with a rapid implosion, like in inertial confinement fusion. But it doesn't have to be a big problem. 
For example, if you've got a signal that is very bursty in time, like an ICF implosion-- so this could be a few hundred picoseconds across. And you have an X-ray diode looking at it. And that diode has time resolution, but no spatial resolution. Then you could use that diode signal to say, well, all of my really bright X-rays are coming within 100 picoseconds. So although this is time integrated, I actually know that the time that this was integrating over was only t to t plus 100 picoseconds. So you can use a combination of diagnostics to localize your signal in time. That might be OK. Film is really high resolution. Film grains are very small. So it can be much higher resolution than a detector, like something that has pixels. You obviously do then have to scan it at very high resolution, but we can do that. It's also very well characterized. So although it may not be linear, we know like this amount of darkness corresponds to this much energy deposited. So film is actually really precious. And unfortunately, a lot of the film manufacturers, like Kodak, have stopped making X-ray film because there isn't a market anymore. I believe Omega has one of-- the Omega facility at Rochester has one of the largest stockpiles of remaining medical film in a fridge because they can't buy it anymore. And no one else has any left. So people keep begging Omega for some of their film from time to time. So it's good stuff. But a lot of people have now moved to something called image plates. Image plates, at first glance, look an awful lot like film. It's a bit of plastic, plastic-looking stuff. It's time integrating. Again, there's no electronics to it. What happens with image plates, which is neat, is the X-rays excite it up to a metastable state. So you have the X-rays exciting it up to this metastable state up here. And because it's metastable, it doesn't actually decay straight away. This may be-- ah, sorry, yeah it doesn't make sense unless I do it like this. The X-rays excite the atoms in the plastic up to some upper state. And then via some non-radiative decays, because of the peculiarity of the plastic, the system decays down to a metastable state. And metastable, in this case, means that there is no permitted transition down to this lower state here. This doesn't happen. So this is a forbidden transition. Now, even though the transition is forbidden, there will be some other mechanism, mostly to do with collisions, which will eventually de-excite the atoms in your detector again. So you've got like a few hours to get this bit of image plate out and put it into a scanner. What the scanner does, very cleverly, is that it uses a laser to excite the image plate back up to some upper level. And then it looks for the fluorescence as it decays back down again. So you can read off the number of stored photons in each little bit of the plastic using this little laser and do this scan. And then you can reuse the image plate, which you can't do with film. So image plate is pretty nice. But you need an expensive scanner. So it has a laser readout. So both of these are time integrated. You can get time gated cameras. So these are cameras, for example, which have some gate on them, like this. And maybe that gate is 5 nanoseconds or something like that, where you're really getting a very, very fast burst of X-rays. So that avoids this problem. You know exactly when the X-rays were coming out. But of course, a time gated camera is very expensive because now it's got lots of electronics.
These are often things with names like Hybrid-CMOS. These cameras can be very, very expensive. And they also may have relatively low resolution because each pixel costs a lot of money. So if you can get away with using film or image plate, you might be tempted to do that. Finally, there's something called a streak camera. A streak camera is a slightly odd device. And when you first see a streak camera image, it's very hard to understand what's going on. What you get out of your streak camera is actually something that has got a lot of intensity, position on one axis, and time on the other axis. So if you have something like an ICF capsule that is roughly circular and you put your streak slit here, initially, if this little circular blob is emitting nice and uniformly, you would see nice, uniform emission in all of x. And as this ICF capsule gets smaller, your slit stays the same. And so the region that's being lit up on the slit gets smaller as well. So you'd see a streaked image that looks like this, where this has got more light on it and this region has no light on it. So this is a very clever technique for monitoring like the evolution of a 1D profile of intensity in time. And the reason these streak cameras are so popular is that they're extremely fast. And they're actually based on the same technology that our CRT television uses with a beam of electrons being swept across here. They are very old, though. So everyone who has one, they keep breaking all the time. OK. Any questions on detectors? That's all I have for you today. Yes? STUDENT: [INAUDIBLE]? JACK HARE: Normally, what you actually have is a bit of film that sits behind the streak camera window. And so the film gets exposed. And it has this pattern on it. And you'll have some timing dots from a laser that will tell you how to calibrate your time axis. And you might have some fiducial that tells you how to calibrate the spatial axis here. But because it's something like film, it does have a dependence on the intensity. So it's not binary. But the intensity may be very non-linear because to get this image, you're generating an electron beam and sweeping it. And there's, like, magnets, and electron bunching, and all sorts of things like that. So interpreting these in a very quantitative way can be very, very hard. Yeah? STUDENT: [INAUDIBLE]? JACK HARE: Yeah, so there's something called a-- often something called a microchannel plate, which has some voltage across it. When the photons come in, they ionize. The photoelectric effect, they release an electron from the surface. That voltage sweeps the electron and accelerates it up. There'll be magnets, which bend the electron for a certain part of the time axis. The electron will collide with the phosphor screen. The electron excites the phosphor. The phosphor decays and emits green light. The green light is recorded by [INAUDIBLE]. Yeah, so they're very fun. But they are very complicated, which is why you shouldn't-- if you see a region that is twice as bright here than it is here, that doesn't necessarily mean that it's emitting twice as many X-rays because it's very, very non-linear. And they're very sensitive to voltage. STUDENT: [INAUDIBLE]? JACK HARE: Yeah, it's a pulsed magnetic field that's ramping up in time. So as the magnetic field ramps up, it sweeps the electrons to different places. That's exactly how a cathode ray works. And we were able to do cathode ray TVs to make moving images at 30 frames a second, so yeah. STUDENT: [INAUDIBLE]? 
JACK HARE: Yeah, but all the electrons from the photoelectric effect are coming out with very similar energies. And then they're accelerating. So maybe they've got an energy range of a few eV. But they're being accelerated through a kilovolt or more, which is also why you should not mess around with a CRT, because inside it there is an electron beam that has kiloelectronvolt energies, yeah. Any other questions? Anyone online? OK. Next lecture, we are going to talk about line broadening. And we're going to finish off spectroscopy. So see you on Tuesday.
Lecture 22: Thomson Scattering (Collective)
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So once again, we are going to be discussing coherent scattering. And we're going to derive Thomson scattering for the third time. Huzzah! Huzzah. Thank you. Good. Well done. A grade. Coherent scattering-- so just as a quick review of what we discussed at the end of the last lecture, we are looking at scattering when the wavelength of whatever the mode is that we're scattering off is greater than the Debye length here. So this wavelength is 2 pi upon the size of k, where k is ks minus ki, like this. So we're not talking about the wavelength of our laser. Though, this wavelength may be similar to the wavelength of our laser, but we're talking specifically about the wavelength that we get from drawing this diagram where we have some laser beam coming in with a wave vector ki. We have some scattered light with a wave vector ks. And then we go, OK, ks minus ki. Aha, this is the k that we're talking about. We're talking about the wavelength of this. And the reason we're interested in things larger than the Debye length is on scales larger than the Debye length, the wavelength of this scattering vector is going to be sampling the collective motion of the plasma, or the coherent motion of plasma. So you might see this called collective Thomson scattering, as well as coherent Thomson scattering. So modes which exist in the plasma that fulfill this condition, they have a frequency which is much, much less than our laser frequency in general. And this is a consequence of the fact that we're doing relatively low energy scattering. So our photon energy is much, much less than the kinetic energy of the particles in our plasma. But the size of the k vector may well be on the same order as we see in this sort of vector diagram as the k vector for our free space mode. And so we still have modes which carry a lot of momentum, even if they carry relatively little energy. And these two facts together mean that our phase velocity, which is defined as omega upon k, is much, much less than the speed of light. And the modes that we identified that fulfill this were the ion acoustic waves. And these have a dispersion relationship, omega equals the square root of z te plus ti over mi-- could you mute your microphone, please?-- times k. And we also have the electron plasma waves. And these have a dispersion relationship of omega squared equals omega p squared plus 3 vte squared k squared. So these are the sorts of modes we're going to be scattering off. This is a very low frequency mode. And this is merely a low frequency mode. So when I say scatter off, it would be reasonable to ask why am I so sure that these modes exist in my plasma. Is there something in my plasma that is launching ion acoustic waves or electron plasma waves? And the answer is my plasma, just by virtue of sitting here, is a bath of these different modes, these fluctuations going on all the time. We have in every single direction ion acoustic waves flying off from every part of the plasma to every part of the plasma. So if you try and fire your laser beam through here, you are certain within the region of plasma you're interested in to find a wave going in the direction you want here. And you will get scattering. And we will be scattering off, as I've drawn several times now-- we'll be scattering off ion acoustic waves going in this direction at the sound speed here, but we will be observing them over here with our spectrometer, looking along ks.
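To keep the spoken relations straight, here they are in symbols, using the notation of the lecture (the ion-acoustic form is written exactly as quoted above; textbook versions differ in order-unity thermal factors):

$$
\mathbf{k} = \mathbf{k}_s - \mathbf{k}_i, \qquad \omega = \omega_s - \omega_i, \qquad \lambda = \frac{2\pi}{|\mathbf{k}|} > \lambda_{De},
$$

$$
\omega_{\mathrm{IAW}} \simeq k\,\sqrt{\frac{Z T_e + T_i}{m_i}}, \qquad \omega_{\mathrm{EPW}}^2 \simeq \omega_{pe}^2 + 3\, k^2 v_{te}^2 .
$$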
So I think this is one of the things that confuses people when they first start learning about collective Thomson scattering. They're wondering, why are these waves here to start with? The fact is there is always a possibility of there being a wave. And as you scatter off it, you transfer more energy and momentum to that wave. So you actually transfer energy from your probe laser to the wave. This is sort of like a Landau damping process. And you will grow that mode. And so that mode will definitely be there for you to scatter off. And the intensity of the scattering-- so again, we can write this intensity as the amount of power scattered into a solid angle scattered into a certain frequency will be proportional to the fluctuation strength for the number of modes-- fluctuation strength. Or if you want to think of this in a quasiparticle picture, you can think of it as like the number of phonons, or something like that, which are available. So the spectrum that we expect to get if we do all the mathematics correctly-- this is omega. This is what our spectrometer measures. If we expect to see some peaks at very low frequencies, there's 0 in the middle here, corresponding to the ion acoustic waves. And we'd also expect to see some peaks at slightly higher frequencies, corresponding to the electron plasma wave, the EPWs. And just remember I'm using this omega here, which is the difference between the scattered light and the incident light. So if I didn't have any plasma and I scattered off a lump of metal, I would just have a very big spectra right here in the center, omega equals 0. When there's plasma, I scatter off these modes inside the plasma. And there'll be two possibilities. I can scatter off a mode, which is going this way, or I can scatter off a mode that's going this way. And they both fulfill that condition that omega equals omega s minus omega i and k equals ks minus ki. So I'll see both of these peaks here. Now, in reality, they're not going to show up as little delta functions, even though if you look at the dispersion relationship, it looks like there's only one solution for a given omega and a given k. In reality, they'll be thermal effects, which broaden these. So what we will see is some sort of peak like this. It's good, because we don't like singularities. They don't really occur in nature. This heuristically is what we expect to get. But now we have to go out and do the mathematics and actually go and get it all. So any questions before we launch into that? Yes. AUDIENCE: [INAUDIBLE]. JACK HARE: Scattering gets chosen by where we put our detector, our spectrometer. So if you would like to think of it as a fiber-coupled spectrometer, I would have a little fiber optic bundle here. And that would go to my spectrometer, which disperses the light in the frequency space like this. There'll be some dispersive element. And so for example, if I have a lens here, I would collect light from this region here. And this spectrometer, therefore, defines the k vector, ks. Now, you bring up a good point, because for any realistic system, you actually collect a range of ks. You can't just collect along a single vector, because different angles of light will be collected by your lens. And that will also contribute to this broadening as well. But fundamentally, you can make that very, very small. And you still wouldn't get these delta functions. You're still going to get some broadening due to thermal effects. 
AUDIENCE: [INAUDIBLE] JACK HARE: Well, actually, you don't really choose the magnitude of k. You only choose the direction of k. The magnitude is set by the dispersion relationship and the frequency relationship here. So when I see a wave with a certain frequency, I can use the dispersion relationship to work out what k was going to be at. So really, this is just setting the angle. So it's setting the vector relationship here. It's the size of k. Remember, we had this nice expression for the size of k, which was the size of k is equal to 2 ki size of ki sine theta upon 2. And we got that from saying that we're doing elastic scattering. So the size of ki is roughly the size of ks. So what you do is once you've chosen your angle and you know what your incident wavelength is, you have effectively chosen the size of this k vector straight away. And then when you choose the angle, you've chosen the other part of the vector, which is its angle. So this is the magnitude. And this equation determines theta and this is the magnitude. And then you've set everything about k. And so if you look at a certain frequency and you see a wave show up at that frequency, you know what your k is. And therefore, from the frequency you measure, you can work out what this term has to be. So for these peaks here, I can work out that this frequency shift is going to be proportional to square root of z te plus ti-- I don't know why I made a temperature vector there-- ti upon mi. Other questions? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: At this point, this is like, yes, I guess this is for the k in the center of our lens here. And then one side, we're going to have k's, which are a slightly larger angle, and the other side, we'll have k's at a slightly smaller angle. And you can see from this that corresponds to slightly bigger and smaller magnitudes of k, which in turn will, for the same temperature in the plasma, correspond to slightly bigger and smaller frequencies. So this corresponds, for example, to theta 0. And on this side, this is theta 0 minus delta theta. And this is theta 0 plus delta theta, where we're talking about this being the angle theta. And then this being the angle delta theta between the sort of main angle of your thing, and the spread of angles there. AUDIENCE: [INAUDIBLE] JACK HARE: No. It's the multiple effects that are going on. But the angle is one of them that would also stop you getting singularities as well. And you do need to think about it, because a good trick-- we talked about how few photons we have. So it makes sense to have a bigger solid angle to collect more scattered light. The trouble is that each of the photons are coming at a different angle. They actually have a different spectrum. So it'll also broaden your spectrum, which will make it hard to measure things. Other questions? Anyone online who has a question? Let's do some maths. This is a two-step process. Our first step is calculate the scattered power spectrum, which, again, we're writing as d 2 p d omega d nu. And we want to know how this relates to density fluctuations. So fluctuations in the density, delta ne of r and t. Then we're going to quickly Fourier transform these and actually be looking at them in terms of a spectrum of density fluctuations with certain k vector and omega, because everything is going to be Fourier transformed here. So we relate this to this quantity, a relatively simple formula. 
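As a quick numerical aside on the geometry discussed above, before the two-step calculation continues below, here is a short sketch that turns a chosen scattering angle into the size of k, a scattering parameter, and the expected ion-acoustic frequency shift. The laser wavelength and plasma parameters are illustrative assumptions, and the ion-acoustic speed is written exactly as quoted in the lecture.

```python
# Sketch: scattering geometry numbers for collective Thomson scattering.
# All laser and plasma parameters are illustrative assumptions.
import numpy as np
from scipy.constants import e, m_p, epsilon_0

lam_i = 532e-9                            # probe laser wavelength [m], assumed
theta = np.radians(60)                    # scattering angle, assumed
Te_eV, Ti_eV, Z, A = 100.0, 50.0, 1, 1    # plasma parameters, assumed
ne = 1e24                                 # electron density [m^-3], assumed

k_i = 2 * np.pi / lam_i
k = 2 * k_i * np.sin(theta / 2)           # |k| = 2 |ki| sin(theta/2), elastic scattering

lam_De = np.sqrt(epsilon_0 * Te_eV * e / (ne * e**2))
alpha = 1 / (k * lam_De)                  # 1/(k lambda_De); > 1 means collective scattering

c_s = np.sqrt((Z * Te_eV + Ti_eV) * e / (A * m_p))   # sound speed as quoted in the lecture
d_omega = k * c_s                         # ion-acoustic frequency shift [rad/s]

print(f"|k| = {k:.3e} 1/m, alpha = {alpha:.2f}, "
      f"IAW shift = {d_omega / (2 * np.pi) / 1e9:.1f} GHz")
```

The frequency you read off the spectrometer therefore fixes the phase velocity omega over k, which is the quantity the dispersion relation ties back to the plasma temperatures.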
And then the more complicated bit is calculate what these density fluctuations are, delta-- well, we're just going to go call them any k and omega in a plasma. So this goes back to my previous point that there are always these fluctuations in a plasma. So we just have to work out the size of these fluctuations. And we're going to do that using a test particle treatment and a little bit of Landau damping along the way. And one thing I want to say is that this treatment here is going to reproduce the incoherent case as a limit. So our treatment now-- sort of derivation includes what we've done previously. It just includes it as a limit. And I'll try and point out where in our derivation we've exceeded what we did previously. But you will be able to get back all of the Thomson scattering spectrum, not just the collective part, from the equations at the end here. So we want to recall a couple of things. We had our Fourier transformed scattered electric field. So it's in terms of frequency, which was equal to the classical electron radius over the distance to the scattering volume. And we were doing a Fourier transform on this. And we had this pi dot Ei0, which was just the thing that gave us the shape of our scattering. So it's got some angular dependence. Actually, I can just write this as Ei in terms of r and t prime. And we were Fourier transforming it. So we had an exponential of minus i omega s and e. And we replaced our t with the t prime, because that's what the electric field was. And when we replaced this t with a t prime, this becomes t prime. And we get a second term that was minus ks dot r at here. And now our integration is respect to d prime instead of t. So this was from a single particle what our scattered electric field was going to be. So now we want to know the scattering from a distribution of particles. And we're going to take as our distribution function the Klimontovich distribution. So I'm going to use a capital F here. And this is a distribution function in space and velocity and time. And in some sense, it's a very dumb distribution function, because we just say it's the sum over the position of every particle, which we write as a delta function, r minus r sub j of t, and another delta function, which is the velocity of every particle. So sort of like an obviously true statement, but the Klimontovich distribution is not usually very useful to work with. But we'll get back to why we do it later on. You may have seen this in particle kinetic theory. And we're going to say what we want now is the scattered electric field. We want the scattered electric field from every particle in this Klimontovich distribution. So we're going to call this scattered electric field Es total. And we're going to do an integral over all of the plasma, all of the particle velocities, our Klimontovich distribution, and our electric field from a single particle here. So this is the total field. The key difference in what we did in the incoherent case, I believe, is that now in the incoherent case, we kind of handwaved our way over this integral over space. And I think we effectively assume that all particles are in the incoherent case. All the particles j were at r equals 0. And so now we're going to do the coherent case, and we're not going to make that assumption. So r equals rj. That's what the delta function up here does. 
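In symbols, the objects just introduced are, following the spoken definitions (with the primes on the emission time suppressed, as the lecturer does):

$$
F_e(\mathbf{r},\mathbf{v},t) = \sum_j \delta\big(\mathbf{r}-\mathbf{r}_j(t)\big)\,\delta\big(\mathbf{v}-\mathbf{v}_j(t)\big), \qquad
n_e(\mathbf{r},t) = \int F_e \, d^3v, \qquad
\mathbf{E}_s^{\mathrm{tot}} = \int\!\!\int F_e\, \mathbf{E}_s \, d^3r \, d^3v .
$$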
And that means that if we have our spectrometer, our observer over here, and our origin here, and we have two different particles at positions R1 and R2, the electric fields coming from these, because they're scattering waves in this direction, they may be a phase difference. And so we may get interference from them. Previously, because we assumed that all the particles were in the same place, we couldn't get a path difference. We couldn't get any interference. And so we just summed up their contribution. Now, when we do this integral, we're going to very carefully take into account the difference in position of each of the particles inside Fe. And so when all the waves gather up together, we're going to be interested in how they interfere with each other. So this is the key difference in this derivation. A couple of things to note, of course, is that our electron density, ne is simply the integral of our distribution function over all velocities. That's still true. Just to be clear, this is ne as a position of space and time. So we've lost the velocity component. We've integrated over it. And so-- do we have space to write this? No. This total scattered electric field, where has it gone? That one up there. It's going to be equal to re on R. There's going to be this boring phase factor out front, iks dot R, which will vanish very soon. This is the phase that all of the waves pick up by virtue-- this is kind of like the average phase. It's the phase that they pick up from going from this point to here. So it's going to be a very boring phase that will cancel out. But we're still interested in the additional phase from the exact locations of each particle. There's going to be a factor of this pi dotted into the polarization. Again, it's just a shape function. But now, we're going to be an integral over pi and an integral over volume of the density of R and T prime. So I've carried out the velocity integral and replaced this with the density. And now I have the factor from the electric field. I didn't write it down again, did I? OK. I shall put it here. Our incident electric field is equal to Ei 0. I'm going to do it this way. Cosine exponential of i ki dot r minus omega i t prime. This is the incident electric field. So if I now substitute in the electric field, I'm going to get that exponential term here. But that's going to be exponential of i ai dot r minus omega t prime. And then I'm also going to have this exponential here. I'm just going to write directly below, exponential of minus i omega s t prime minus ks dot r. So far this looks a great deal like what we did before. But if you want to evaluate this electric field carefully, and then work out the scattered power very carefully, you have to pay an awful lot of attention to exactly-- sorry, I've forgotten to put in the limits here of this. d t prime d3 r. So integrating over time and space. To do this properly so that we can take into account the interference requires you to very carefully go through this and pick out some important terms. And I had it in my notes from last year. And it was not a very educational exercise to do on the board. So what I'm going to say is if you are interested in seeing the mathematics behind this-- this is Hutchinson's 7.3.8 to 7.3.12. 
And I'm simply going to quote the result now instead, which is that the total scattered power into some solid angle, into some frequency here is equal to the classical electron radius squared, the power of the laser over the cross-sectional area of the laser, this shape function pi dot i hat squared times ne times the volume that we're scattering from trying to find all of this times something called s of k omega. This is the important quantity called the spectral density function. And it's important because all of the information about this spectrum is held inside there. This is where the k and omega dependence are that tell us what the spectrum looks like. These are all just scaling constants that tell us the intensity. And often, because we don't have absolutely calibrated spectrometers, we can't really measure this. So we don't really care. But we can measure the spectrum, the relative intensity. And that's all hidden within this function. And this function is equal to 1 upon the time of our integration, the volume of our integration, and then the average of the Fourier transformed electron density squared over the average electron density. So these are the fluctuating term, and this is the sort of steady state ne 0 electron. I should really make that square very clear. It's very important. Question? Yeah. AUDIENCE: So why are we the r instead of [INAUDIBLE]? JACK HARE: Yeah. This is a generic Klimontovich distribution function. We'll be using it. Yeah, you're right. When it goes into here, I did put t prime in a Klimontovich distribution. So you're right. That's just the Klimontovich distribution. But you're right. We're evaluating a Klimontovich distribution at time t prime. So this should be t prime here. And this should be function of r d and t prime here. I haven't generally been using primes on the r's. Though, Hutchinson does to remind us that it's r evaluated at t prime. So he writes r prime is equal to r at t prime. I tend to drop the prime because I remember. But maybe I shouldn't drop it. Other people don't use that notation. So just if you note, if you go into Hutchinson. In the syllabus, I recommended a book by Froula and Sheffield called Plasma Scattering of Electromagnetic Radiation, which is very good and very thorough. And I've kind of blended their treatment with Hutchinson's treatments to get some of this. So not all of this looks exactly like how Hutchinson does it, but I do think that Hutchinson's treatment is pretty good for most of this stuff. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: This one? AUDIENCE: [INAUDIBLE] JACK HARE: So it comes in as part of this term. So let's be clear. So this, it actually should be back here as well. I have missed it out. So there's a times e to the i as dot r factor in there as well. And where it comes from is originally this is exponential of minus i omega s t dt. And when we substitute out t equals t prime with a load of other bits, one of the other bits becomes ks dot r, and the other bit comes ks dot capital R. And we can take that outside. But yeah, I forgot to write it down here. It should have been in here as well. And what that represents, that relatively boring term, is just the fact that all of the electromagnetic fields from all of the plasma have to travel some distance r to our detector. But some of them travel r. So if this is the r vector here, some of them travel r plus delta R or minus delta R, or whatever else like that. So we're kind of accounting for the overall phase that they all pick up as the waves travel. 
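For reference, here is the quoted result in symbols before the discussion continues. In this restatement, $\boldsymbol{\Pi}$ is the s-hat cross s-hat cross geometric factor from the earlier lecture acting on the incident polarization (an assumption about which unit vector the spoken "i hat" refers to), $P_i/A$ is the incident power per unit area, and $V$ and $T$ are the scattering volume and the integration time.

$$
\frac{d^2 P_s}{d\Omega\, d\omega_s} = r_e^2 \, \frac{P_i}{A} \, \big|\boldsymbol{\Pi}\cdot\hat{\mathbf{e}}_i\big|^2 \, n_e V \, S(\mathbf{k},\omega),
\qquad
S(\mathbf{k},\omega) = \frac{1}{T V}\,\frac{\big\langle |n_e(\mathbf{k},\omega)|^2 \big\rangle}{\bar{n}_e}.
$$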
And then we're trying to calculate these very small differences. And those are the ones which interfere and give us the interesting coherent effects. Other questions? Yes. AUDIENCE: [INAUDIBLE] JACK HARE: That's the relativistic treatment, the relativistic treatment where we keep terms in v upon c. And we've dropped those. So this is the non-relativistic treatment. Good point. Any other questions? So we've now done step one. We have calculated, to some extent, the power spectrum in terms of density fluctuations. So now the question is, step two, what are these density fluctuations? What we're going to do is we're going to consider the response of the plasma to a test particle. So we imagine we have some plasma. And we put some test particle through it. And then we ask ourselves, what does the rest of the plasma do in response to this test particle? The test particle, in this case, what we're going to do is we're going to pick each electron in the plasma and each ion in the plasma, work out what the rest of the plasma does in response to each electron and each ion moving. And then we'll sum all of those responses together. So at the moment, this could be an electron or an ion. Remember, we're not interested in the scattering off the ions. But if the ion motion perturbs the cloud of electrons, and then we can scatter off that cloud of electrons, we will see that light here. So this cloud here is a sort of Debye cloud here. These are the particles which are shielding our test particle. If our test particle is an electron, then the shielding particles are an absence of electrons. The ions can't move out of the way fast enough as an electron goes through, but the electrons will move out of the way. And you can imagine that as this particle goes through, and the electrons move out of the way, it will cause a perturbation to the density. It'll be a negative perturbation. So it'll actually scatter less. But that means that these electrons have to end up somewhere else. The density will be higher there. They'll be scattering off those. If the test particle is an ion, then it turns out that half of the shielding that builds up around this ion charge is due to electrons being attracted. And the other half of the shielding is due to ions being repelled. So it's slightly different from the argument for the electrons, where only the electrons could do the shielding. Some of the shielding comes from drawing electrons in. The rest of the shielding comes from pushing ions out. And that can happen because the other ions can move as fast as the test ion. And so we can see that shielding here. So what we get is a perturbed charge density, rho 0, at some position r and t, due to our test particle, which is q delta of r minus vt. We're just saying our test particle's perturbed density looks like this. As it moves through, it's at some position r, moving at some velocity v in a straight line. And that means that its Fourier transformed contribution to the charge density is equal to q times 2 pi delta of k dot v minus omega. So this is the Fourier transform. Now, we're going to take some equations from electromagnetism and talk about this in terms of a polarization and a polarization charge density. So it'll be free and bound charges. So we have a polarization vector, which is equal to the susceptibility times the permittivity of free space times the electric field. And this is also equal to chi upon epsilon, where epsilon is just the permittivity, times the D field.
And the D and P fields are related to the charges like this: the divergence of the D field is equal to rho 0, and the divergence of the P field is equal to minus rho e. So this is the free charge. And these are the bound charges. And in our case, our bound charges are our shielding charges. I ran out of space to write shielding. There we go. You get the idea. And we can solve these equations to work out what the size of our shielding charge is. And our shielding charge, rho e, is equal to minus chi e upon epsilon times by this Fourier transformed test particle charge, q times 2 pi delta of k dot v minus omega. What this is effectively saying is as this test charge moves through, or as it exists as a mode in Fourier space, it induces some bound charge to shield it. And that bound charge has a charge density which is to do with the susceptibility over the permittivity here. Then we can-- this board-- say that our electron density, ne of k and omega, is going to be equal to a density which is due to the electrons, and a density which is due to the ions, which have been perturbed in this system. So for the electrons, we sum over j electrons. And we have a 1. This is the test particle charge. So whenever we have an electron test particle, it's going to contribute 1 to the electron density here. And we're going to subtract off it the number of electrons which were repelled by that test particle, chi e upon epsilon. And both of these multiply 2 pi delta of k dot vj, for the electrons, minus omega. So this is the electrons. This is our test particle electron. And these are our shielding electrons. And then we also have a term that's for the ions. We sum over l ions. And here, we simply have a term chi e upon epsilon here. The ions have a charge of z, so it's z times 2 pi delta of k dot the velocity of the ions, vl, minus omega. This is the ion contribution here. Just to be very clear, this is the contribution of the electrons shielding the ions. We're only talking about the electron density here because we're only interested in electron density. So this is the contribution of the electron test particle and the electrons shielding the electron. And this is the contribution of the ion test particle. So we don't get a 1, because we don't add the ion density of the test particle-- it doesn't go into the electron density overall. So we merely have the response of the shielding electrons, these ones, which are following the ions around here. Again, we're only scattering off the electrons. So this, in case you're wondering where we got to in Hutchinson's book, is his equation 7.3.19. And we said that our spectral density function, s of k omega, was equal to 1 upon the average electron density, the time and the volume that we're integrating over, and then the ensemble average of the size of ne of k and omega, squared, like that. We can plug this in. We note that we've got two terms. So we're going to have cross terms, but the cross term between these two is going to be a correlation between the electrons and the ions. And there is no correlation. So those cross terms vanish. So we simply get this term squared and this term squared. Let me write some of this down. So we would get the electron contribution squared, plus 2 times the ion times the electron, plus the ion contribution squared. This cross term goes to 0, because we have uncorrelated electrons and ions. And we also use our dodgy identity from last week that the square of a delta function in frequency space is equal to T upon 2 pi times that delta function.
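Written out, the test-particle decomposition just described is (one term per electron test particle j and per ion test particle l, following the spoken form):

$$
n_e(\mathbf{k},\omega) = \sum_{j \in e}\left(1 - \frac{\chi_e}{\varepsilon}\right) 2\pi\,\delta(\mathbf{k}\cdot\mathbf{v}_j - \omega) \; + \; \sum_{l \in i} Z \, \frac{\chi_e}{\varepsilon}\, 2\pi\,\delta(\mathbf{k}\cdot\mathbf{v}_l - \omega),
$$

and the identity used in the next step is $\big[\delta(\omega)\big]^2 \simeq \frac{T}{2\pi}\,\delta(\omega)$ for a finite integration time $T$.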
So when we take the square of this term and this term, and we have a square of a delta function, we're just going to replace it with itself, with a T over 2 pi in front here. And so all of this gives us 2 pi upon the electron density times the volume, summed over j electrons, of 1 minus chi e upon epsilon, squared, delta of k dot vj minus omega. So this is the electron term, summing over j electrons, plus a term that looks like a sum over the ions of chi e upon epsilon, squared, delta of k dot vl, the ion velocities, minus omega. And I need to put some big ensemble average brackets around this. And I actually want to go back and put in some absolute value signs inside here, which is why we end up with these absolute value signs. We're about to convert this into something that makes more sense, but any questions at this point? The size of the density fluctuations in Fourier space, these can be complex quantities, because they've got a phase attached to them. So this is ne times ne star. It's the same as that. So we're taking the-- the power that we measure depends directly on s of k and omega. The power we measure has to be a real quantity. We can't measure an imaginary power. And so therefore, this needs to be a real quantity as well. So we're just making sure we've only got the magnitude of it, not its phase. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: What we're really doing is the cross terms would involve delta functions of ions and electrons times by each other. So if you can imagine, it's very unlikely that an ion and an electron are in the same place at the same time. These are delta functions that are very, very sharp. So really, what we're saying is not so much that they're uncorrelated, but that they don't occupy the same space. So they're in different locations. So if I did a little diagram of some x-coordinate here, I might have an ion here and I might have an electron here. But those delta functions will not overlap in general. Any other questions? Yes, we'll do this. This still does not look very useful because we're summing over all like 10 to the 23 or whatever electrons in our system. So this is a difficult sum to do. But you must have spotted this and thought, OK, if I'm doing a sum over a load of particles of delta of k dot v minus omega, this is looking an awful lot like my Klimontovich distribution again. And in fact, this looks like a sort of integral over space and over velocity of my Klimontovich distribution, r and v and t, times by some delta function, k dot v minus omega. Remember, this consists of a load of delta functions-- delta r minus ri, delta v minus vi. And so if I integrate this up over this delta function, this is going to pick out all the particles. So I can replace this by my Klimontovich distribution. And I'm going to end up with a term that looks like the volume from our d3r integral, a 1 upon k that I pick up from doing the v integral against the delta function, which sets v equal to omega upon k, and the Klimontovich distribution function evaluated at omega upon k. And here, this is like an ensemble average of lots of particles. The Klimontovich distribution function is very spiky. And we don't really want to work with it. So I wave my hands and say, this is V upon k times f of omega upon k, some nice, smooth distribution function that you can actually work with. So I'm not going to claim that's a particularly rigorous argument. But you can have a look in Froula's book and in Hutchinson's book if you want to see maybe a slightly better argument for this.
So we're going to basically replace these sums over a large number of particles-- this should have been vi here-- with the distribution function, which is kind of, obviously, what it is. If you have a sum over a load of particles, it's going to be some distribution function. But this distribution function is now being evaluated in terms of omega and k. So I'll write that down, and then we can talk about what it is. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Well, our test particle treatment we talk about a single particle moving through. And that's what starts introducing all of these delta functions. And so I think that that is why we do it, because when we've done the test particle treatment, we end up with all of these, and it's easier to get rid of them by going through a Klimontovich. So I think we started with a Klimontovich, knowing that this would happen later. Then we take out the Klimontovich later on. There may be other ways to do it. And it's not like I'm claiming that what I've done on the board is particularly rigorous. So there may be an equally rigorous way to do it without the Klimontovich distribution function. But I think if you want to do it rigorously, you have to put this. So if we do this substitution, we then end up with the very, very key formula for the spectral density function in terms of things that we actually stand a chance of evaluating. So s, k, and omega, 2 pi upon k times the average electron density. Then we have the brackets, the first term, 1 minus the susceptibility of the electrons over the permittivity of the entire system squared. Now, the distribution function of the electrons in the direction of k, so this is our incoherent scattering thing again, where we're only taking a slice of the distribution function in one direction. So a one-dimensional distribution function here. Plus a term that looks like susceptibility of electrons over the permittivity of the entire system squared. And then, formally here, we have to sum over every single ion species. So now we're doing a sum over different ion species. So if you've got carbon and hydrogen, things like that, in a quasi neutral plasma, you need to sum over the charge of each of these ion species and the distribution function for each of these ion species. But again, if everything is Maxwellian, this becomes much simpler. But this is the fully general formula here. And if you didn't want to write this down, this is Hutchinson's 7.3.22, or you can check in case I've made a typo here. So I'm going to talk through what each of these terms physically means, physically motivated. So this here, the distribution function at omega k is the number of electrons with a velocity v equals omega upon k. So the number of electrons in this distribution whose velocity is equal to the phase velocity of the mode that we're scattering from. Remember, we've been setting omega and k all this time. So this tells you how many scatterers there are which are electrons. And this tells you the response of the plasma to each of these electrons. So each of these-- there may be more than one. Each of these electrons are flying around as a test particle. And the plasma is going whoa and moving out of the way in order to accommodate them. So this is the response of the plasma. We've got a contribution from those electrons. That's the one. And then the minus is all the electrons getting out the way of those electrons. So that's the total amount of electrons you have to scatter off. 
We have the density of those electrons, but we also have a depletion of density of other electrons getting out of their way. This over here is the number of ions with a velocity omega upon k. And this is that response again. But note that we don't have a 1, because we don't scatter off ions. Their scattering is tiny. So all we get to see when the ion is moving-- the scattering we get is just the scattering of the electrons, which are pulled into the ion. Note there's a difference in sign here. Well, I guess-- yeah, it doesn't matter. When we square it, we get rid of it anyway. But these are the ions which are drawn into the region where the-- sorry-- these are the electrons which are drawn into the region where the ion is. And so we're scattering off those. So we only look at the response of electrons to these ions here. This is what your total scattering spectral density function looks like. Where does this function have peaks? Where am I looking in this for that spectrum what I drew of r here? Where do we get peaks in s of k and omega? There will be a singularity, shall I say? I'll try and make it very clear. Where will I get singularities in s of k and omega, which will look like nice, sharp, defined resonances scattering off specific waves? We will get peaks, singularities for epsilon equals 0 in the denominator. So whenever epsilon tends to 0, this function will be extremely large. Epsilon is also-- by the way, this is chi of k and omega. This is also epsilon of k and omega. So there's k's and omegas hiding everywhere inside here, not just within these fusion functions. These depend on the speed of the test particle as well. And so if the test particles are going at a certain speed, they will induce a certain response to plasma. That's a wave. So this is equivalent to saying, epsilon equals 0 is equivalent to finding the normal modes of the plasma, just like you do with determinants of matrices and stuff like that. So are we done? Can you plot me a Thomson scattering spectra from this? You can? You were nodding. We don't know anything about chi and epsilon. So now we're going to have to go do some [INAUDIBLE]. Don't worry. We're almost there. But in principle, if you did know these for some reason and you did know your distribution functions, you could work out what the scattering spectra was. But we need to know what these chis are. And just-- yeah. AUDIENCE: [INAUDIBLE] JACK HARE: So again, still, we are never scattering off the ions. But people will talk about scattering off the ion feature. So these are actually-- these two terms tend to be large at different frequencies. This term is large, around the ion acoustic waves. This term is large around the electron plasma waves. And so people call this the ion feature. But just, it's important to remember that no ions were scattered off in the making of this spectrum. Any other questions on this before we keep going? And I don't think I said it explicitly anywhere before. But your permittivity is always equal to 1 plus all of the susceptibilities in your system. This is just true for some electromagnetic system. It could be dielectrics and stuff like that. So in this case, our permittivity is equal to 1 plus the electron susceptibility plus the ion susceptibility. And technically, because I've written this in terms of a load of ion species, it's the sum over all of the ion susceptibilities. So what we're heading out to find is the electron susceptibility and the ion susceptibility. And that will give us epsilon straight away. 
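Collecting the pieces so far in symbols (this is the form of Hutchinson's 7.3.22 being described above, with the one-dimensional distribution functions taken along k-hat and normalised to the species densities; with that normalisation the ion charges appear squared, which reduces to the more familiar single factor of Z for a quasineutral, single-ion-species plasma):

$$
S(\mathbf{k},\omega) = \frac{2\pi}{k\, n_e}\left[\left|1-\frac{\chi_e}{\varepsilon}\right|^{2} f_{e}\!\left(\frac{\omega}{k}\right) + \left|\frac{\chi_e}{\varepsilon}\right|^{2}\sum_{j} Z_j^{2}\, f_{ij}\!\left(\frac{\omega}{k}\right)\right],
\qquad
\varepsilon = 1 + \chi_e + \sum_j \chi_{ij}.
$$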
And we'll also find that when epsilon equals 0. Remember, epsilon is some function of k and omega. So this will be some function of k and omega equals 0. And all of a sudden, out will pop omega equals csk like that. So that will be a solution to epsilon equals 0. And it'll be the dispersion relationship for the ion acoustic waves, or the electron plasma waves, or whatever wave you've got in your system. Questions? Run out of tea. Oh, no. There's a bit left. Well-- yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Please. AUDIENCE: [INAUDIBLE] JACK HARE: Yes. AUDIENCE: So when we get a measurement, we get a direct [INAUDIBLE]. JACK HARE: Yeah. We effectively do. Do I have it anywhere? What we measure is this quantity. We measure watts into some solid angle into-- at some frequency range here. That depends on the cross-section, the intensity of our laser, how well we focused it, if we put our spectrometer in the right place to actually see the scattered light, how many scatterers there are, how big the volume is I'm scattering off, and s. This is the only thing that has a shape. This is just a number. All of these together for a spectrometer in a certain place, this is like two. Whereas this s is actually this function that looks like it has peaks at different frequencies and stuff like that. So that's omega. This is s, like that. So this is the only interesting thing. If we had absolutely calibrated our spectrometer so that we could calculate this value, we can, in principle, get the density of our plasma as well. It turns out the absolute calibration of a spectrometer is very hard. So very few people use it to get this density. It also turns out that the electron plasma waves carry information about the density. So you're usually better off just measuring the frequency shift of these. So this is like the spectrum, what we used to think [INAUDIBLE] all of our lines and our k-shell, and all that sort of stuff. And this is just some constant factor that we probably won't be able to do anything with. So it does tell us that if we want to get more light onto our spectrometer, we should use a more powerful laser. We should focus it more closely. We should put our spectrometer in the right place. We should make our plasma denser. And we should collect the light from a larger region. So it's sort of useful information to know how to do Thomson scattering better. But this is the sort of thing that you actually spend your time fitting to. So you alter the temperature and density of your model plasma until your model sk omega matches the data that you've recorded. Yeah. Cool. But good question, yeah. Thank you. AUDIENCE: [INAUDIBLE] JACK HARE: We have not assumed it's Maxwellian yet. AUDIENCE: [INAUDIBLE] JACK HARE: Yes. If we did assume it was Maxwellian, then we could get a temperature. We'll go through it. We will derive-- I can't even remember which board we're on now. We would derive these susceptibilities from an arbitrary distribution function. And then we would discuss it in the case of a Maxwellian distribution function, which is a pretty common case. Any other questions? We're getting there. What shall we erase? The refractive index of a medium is simply equal to its permittivity to the half. So if we were dealing-- and for example, for a plasma, a cold plasma, where the thermal velocity was much less than the phase velocity, we derived, for example, [INAUDIBLE] mode, this was something like omega p squared on omega. So I'll write that as n squared. 
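In symbols, the cold-plasma result being recalled here is the standard unmagnetized, collisionless one,

$$
n^2 = \varepsilon = 1 - \frac{\omega_{pe}^2}{\omega^2},
$$

which is only valid when the thermal speed is negligible compared with the wave's phase velocity; the warm-plasma correction is exactly what the Landau calculation that follows supplies.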
So if we were dealing with a plasma like this, it would be very easy to find the permittivity. But we're not. Remember, we're dealing with plasmas where the dispersion relationships we're interested in look like omega squared omega pe squared plus bk bte squared. So that means that our phase velocity now is on the order of the thermal velocity. So we can no longer do a cold plasma treatment. We need to do a warm plasma. And because we're doing a warm plasma, that means we're going to have to think about Landau [INAUDIBLE]. This is because our distribution function, something like this, previously, for this case, for the cold plasma, our wave was all the way up here-- v phase much, much larger than vte. There were no particles which had a velocity close to the phase velocity. And so therefore, from the point of view of this wave, the plasma was just cold and stationary. What we're doing now is we're looking at the phase on the order of vte. And so we have to consider how the wave interacts with and damps on these particles. So you will recall that we start with the Vlasov equation, where we're looking at the time derivative of some distribution function, plus the velocity dotted into the spatial derivative of a distribution function, plus the force-- the acceleration from the electric field, q of fe dot the change in the distribution function in velocity space. And we're going to set this equal to 0. So this is explicitly a collisionless system. Now, it turns out you can derive Thomson scattering for collisional systems, but it also turns out that most Thomson scattering systems are collisionless. I'm going to spend 30 seconds on this, and I hope I don't confuse you too much. You may be doing Thomson scattering in a system, which is collisional. A collisional system is one in which the length scale is larger than the mean free path. But when you're doing Thomson scattering, the length scale we're interested in is lambda. Remember, this lambda is 2 pi upon k. And that lambda may be on the order of the Debye length. If we're down towards the incoherent limit, this is just the Debye length here. And on these length scales here, the particles never collide. The mean free path is huge. And so these waves never see a particle collision. And so it's still legitimate to use this collisionless treatment, where lambda mean free path is much, much larger than our scattering wavelength. So even in a very collisional plasma, I can still do collisionless Thomson scattering. And that's good. If I wanted to do collisional, I'd have to put some collision operator in here. And if any of you have done plasma kinetic theory, it's a pain. But people do it. There's some really nice theory out there on Thomson scattering in collisional plasmas. We are going to assume that we have some distribution function that looks like some background distribution function plus a perturbed distribution function, which is oscillating, so some Fourier decomposition here, so e to the i k dot x minus omega t. When we substitute that back into this equation, we get minus i omega f prime plus k dot v f prime plus q on m e dot df dv all equal to 0. And we can define a current, which is equal to this perturbed distribution function times the charge of the particles we're talking about times the velocity integrated d3v. And we can rearrange this to be an equation for f prime, substituting into this. 
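Restating the linearisation just set up, in symbols (one species of charge q and mass m, with the perturbation varying as $e^{i(\mathbf{k}\cdot\mathbf{x}-\omega t)}$ as stated above):

$$
-i\omega f' + i\,\mathbf{k}\cdot\mathbf{v}\, f' + \frac{q}{m}\,\mathbf{E}\cdot\frac{\partial f_0}{\partial \mathbf{v}} = 0,
\qquad
\mathbf{j} = q\int f'\,\mathbf{v}\, d^3v = \sigma \mathbf{E},
\qquad
\chi = \frac{\sigma}{-i\omega\,\varepsilon_0}.
$$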
And we get j equals q squared upon i times the mass, times the integral of velocity times the electric field dotted into df dv, over omega minus k dot v, dv, like this. We then say this current here, j, is equal to some conductivity times the electric field. And if we have a conductivity, we know that the susceptibility is equal to the conductivity over minus i omega epsilon 0. So this is just from electromagnetism here. We have taken all of the complicated plasma physics, and we're going to chuck it inside this sigma, calculate it, and then we're going to say, from electromagnetism, we know the link between the conductivity and the susceptibility here. So this chi is equal to q squared over epsilon 0 m k, times the integral of partial f partial v over omega minus k v, dv. So now for some distribution function f, we can calculate this chi here. And I need to make this f in the direction of our k vector here, because along the way, I've stopped this from being a tensor. And I've just made it into a scalar. And we're taking the longitudinal component because these waves are all longitudinal waves. So this is the zz component of that tensor, which would otherwise be a 3 by 3 tensor. So we're just interested in the component of this along the electric field, which is causing all of this in the first place. When you do this, you remember that there is a singularity in the denominator here. And you have to be careful. There's a pole at v equals omega upon k. And then you do your integral in the complex plane. And you close it so that you don't go around this pole. This is called the Landau contour. I assume that you have all seen this at some point. And I'm just reminding you exactly what's going on here. So now we have-- if we do this integral properly using the Landau contour, we have chi. And then we also have epsilon is equal to 1 plus chi e plus chi i, where these subscripts here are going to come from us putting in the distribution function for these. So this is chi j. And this is f in the k direction for the j-th species. Any questions on that? Hopefully you've seen this in more detail. So now let's consider a Maxwellian distribution. So that was an arbitrary distribution function. If your plasma is not Maxwellian, you'll get some complicated looking spectrum you can, in principle, calculate. But if we specify a Maxwellian such that our f for the j-th species in the k direction-- so it's a one-dimensional distribution function-- as a function of velocity is equal to the density of particles, over some constants to make sure that this is normalized so we get the density out when we integrate over velocity, and the thermal velocity of the j-th particle, electrons or ions, times the exponential of minus v squared upon that thermal velocity squared. So this is just the Maxwellian. Then we find that chi, in this case, the susceptibility for the j-th particle, looks like omega p of the j-th particle squared upon k squared times the thermal velocity of the j-th particle squared. I've still got this 1 upon square root 2 pi hanging around. Then we have this integral of minus v upon vtj times the exponential of minus v squared upon vtj squared, all over omega upon k minus v, dv. Now, finally, we get back to alpha because we recognize that this term here is just the Debye length for the j-th particle squared. So we can say that chi j is equal to 1 upon k lambda Debye squared. That is effectively 1 upon alpha squared, our coupling parameter.
So this is why this has come back in, and why I didn't find [INAUDIBLE] 2 pis, because at the moment, it's [INAUDIBLE]. And then out in front here, we're going to have a term that looks like the charge of the j-th species-- for electrons, this would just be minus 1, but for ions, it could be some bigger number-- times the density of the species over the electron density, times the electron temperature over the temperature of the j-th species. And we've got a squared on the charge there. We've pulled a load of terms out because what I'm about to write next is a different function, but I just want to point out, if you stare at this a little while, this is equal to 1 for electrons. For electrons, this term just vanishes here. It's kind of obvious-- electron density over electron density, electron temperature over electron temperature. This all cancels out. But for ions, it will be a different number. And that will be mostly due to the fact that the ions may have different temperatures than the electrons, and due to the fact that the ions will have different charges. And then the thing that is left is this function, which we call w. And it's a function of some dimensionless parameter, squiggle of j. This squiggle parameter-- I don't remember how to pronounce it, so I may as well be honest-- is equal to the phase velocity over the thermal velocity. Remember, the phase velocity is omega upon k. So this is 1 when the thermal velocity is the same as the phase velocity. And it can be less than 1 or greater. This function here is the-- I think it's related to the w-of-z function, the Faddeeva function. And it's simply a function which is defined to involve the solution to this integral in a dimensionless way. And this function w of squiggle is equal to 1 minus 2 squiggle e to the minus squiggle squared, integral from 0 to squiggle of e to the x squared dx, plus a term, which is the residue due to the Landau contour, i pi to the 1/2 squiggle e to the minus squiggle squared. Obvious-- take it home to meet your parents. This is a very ugly function, but because we use Maxwellians so often, this is a function which can be tabulated. So we can just calculate it for various values of epsilon. We can make a table of-- sorry, not epsilons-- of squiggles from very small numbers to very large numbers. And we can just calculate this up. Remember, it's a complex number, a complex value here. But we can have a lookup table here. And there is something called the plasma dispersion function. I think it's normally written as Z. That is related to this function here. It's closely related-- up to sign conventions, this w of squiggle is just 1 plus squiggle times Z of squiggle. I prefer to work with this thing, because this is a thing that you'll find in Python, and Julia, and Matlab, because it shows up in other contexts as well. And so there are very nice, fast implementations of this function in all of these languages and many other languages. Fortran as well, no doubt. So it's almost always better, if you're writing your own code, to write it in terms of this function, rather than the plasma dispersion function. But you may have come across the plasma dispersion function in other Landau damping contexts. And there they're very, very similar. This is also related to the error function, the complex error function, and a related family of functions that crop up in all sorts of other interesting contexts as well.
The link between them all is that they have these fun integrals and e to the something squared inside them all.
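Since the lecture points out that fast implementations exist in Python, Julia, and Matlab, here is a minimal Python sketch of how you might evaluate the function w of squiggle and the Maxwellian susceptibility numerically. It assumes the relation w(xi) = 1 + xi * Z(xi), with Z the plasma dispersion function, and uses scipy.special.wofz (the Faddeeva function) for Z. The thermal-velocity and factor-of-2 conventions follow the exp(-v^2 / v_t^2) normalization used above, so check them against your own definitions before using this for real spectra; the example numbers at the end are purely illustrative.

```python
import numpy as np
from scipy.special import wofz  # Faddeeva function: wofz(z) = exp(-z**2) * erfc(-i z)

def w_func(xi):
    """w(xi) = 1 + xi * Z(xi), with Z(xi) = i sqrt(pi) wofz(xi) the plasma dispersion function."""
    return 1.0 + 1j * np.sqrt(np.pi) * xi * wofz(xi)

def chi_maxwellian(k, omega, n_j, T_j_eV, q_j, m_j):
    """Susceptibility of species j for a Maxwellian: chi_j = w(xi_j) / (k * lambda_Dj)**2.

    Uses v_tj = sqrt(2 T_j / m_j), consistent with f ~ exp(-v**2 / v_tj**2),
    and xi_j = (omega / k) / v_tj, the phase velocity over the thermal velocity.
    """
    eps0, e = 8.854e-12, 1.602e-19
    T_j = T_j_eV * e                                   # temperature in joules
    v_tj = np.sqrt(2.0 * T_j / m_j)
    lambda_Dj = np.sqrt(eps0 * T_j / (n_j * q_j**2))   # Debye length of species j
    xi_j = omega / (k * v_tj)
    return w_func(xi_j) / (k * lambda_Dj)**2

# Illustrative example: electron susceptibility for a 100 eV, 1e19 m^-3 plasma.
chi_e = chi_maxwellian(k=1e6, omega=5e11, n_j=1e19, T_j_eV=100.0,
                       q_j=1.602e-19, m_j=9.109e-31)
print(chi_e)
```

In practice you would vectorize this over the grid of scattered frequencies when assembling a full spectrum, but the lookup-table role of w is exactly what scipy.special.wofz provides.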
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_18_Neutral_Particles.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: Today, we are going to be discussing neutral particle diagnostics. So the fact of the matter is, we have some plasma, like a tokamak, and it has some flux surfaces. And we want to know what's going on inside the core, because this is where the fusion is happening. In particular, we want to know what the distribution function of the ions is. It's probably Maxwellian-ish, but the "ish" is the interesting bit. There may be some lovely tails of particles here, and those are the ones that might be doing most of our fusion reactions. So we'd like to be able to measure this distribution function f of i. The trouble is, the particles in the core are hopefully very well-confined. If they're not confined, we haven't done a very good job. So we can't exactly wait for them to wander out and see what they're up to. And so we need some technique of measuring this core distribution function f of i Vi. Now, we have some ideas based on some of the other diagnostics we've looked at already. For example, we could use electron cyclotron emission. That gave us information about what's going on in the core. But this gives us a measurement of Te. And Te does not necessarily have to be the same as Ti. So that's not a very good diagnostic. We could do something using laser-induced fluorescence, or two-photon absorption laser-induced fluorescence like this. If you remember, that's where we had an upper level and a lower level and we scanned our laser wavelength in frequency space here so that we could address particles which are Doppler shifted towards us and Doppler shifted away from us. And by looking at the intensity of light coming out, we could measure this distribution function here. So this looks like a pretty good technique for measuring the core. The trouble is that in the core, all of our particles are fully ionized. And so there are no bound electrons here. So we can't use these techniques because there are no ions with an electron in them in order for us to be able to address it using our laser. And thirdly, we could also use Thomson scattering, which we haven't talked about yet, but we will do in detail soon. The trouble with Thomson scattering is that it tends to give us information-- scattering-- about the electron distribution function, fe Ve, like that. And again, that doesn't have to necessarily be the same as the ion distribution function. So we're going to need a new technique in order to measure the distribution of these core particles. So the idea is, well, what if, spontaneously, some of these core ions turned into neutrals? Well, if some of these core ions turned into neutrals, then they would no longer be confined. They would be able to wander freely out. And perhaps we could stick a detector on the outside of the plasma and measure these neutrals, which will carry with them some information about the ion distribution they came from. And that will allow us to indirectly measure the ion distribution. So what's a good way to summarize this? They're going to-- these neutrals would give us information on the distribution function. So there are two problems that come to mind straight away. Why exactly would an ion suddenly turn itself into a neutral in the core? And does it stand a chance of actually escaping from the core to the outside, where we can detect it? And so we'll talk about those two issues one after the other. So first of all is the neutral production. And I guess we could call the second problem neutral transport.
So in terms of neutral production, there are two major processes that we could have. We could have a charge exchange. So I'm just going to write that as CX. We talked about charge exchange before. This is where a neutral collides with an ion, and the neutral loses an electron and the ion gains an electron. So we could have charge exchange with edge neutrals. And they could be neutrals coming in from the edge of the plasma. And that would look like some neutrals coming in like this, and I'll use n sub a to be some density of neutral atoms. These are going to be wandering in from the edge all the time. Maybe a few of them make it to the center, and then they charge exchange with our core ions. And then those core ions, which have now become neutrals, make it back out again and we can detect them. That's one possible mechanism. A more sort of active mechanism would be taking a neutral beam and putting a neutral beam into the center of the core here. This is neutral beam injection. We use neutral beam injection for all sorts of things, like current drive and heating of the plasma. And so we often have this process happening automatically. The whole point of the neutral beam here is that as these neutral particles come into the center, they'll charge exchange with the ions and become ions. But that means that the ions they charge exchange with become neutrals. And so those neutrals may come out of the plasma here, where we can detect them. So we have charge exchange with edge neutrals and charge exchange with neutral beam injection. Neutral transport is asking, can the neutral make it from the center of the core-- yeah. Can that neutral make it out along some path? And so there's going to be a series of processes which are trying to re-ionize this neutral as it tries to leave the plasma. So these processes are going to be things like-- I'm going to give them their cross-sections, sigma sub e. This is electron impact ionization, which we talked about before. This is going to be-- we can use sigma subscript p. This is ion impact ionization, because we can also have collisions of ions causing ionization here. And finally, we can have sigma C, the cross-section for charge exchange with ions. So if we've charge exchanged once in order to get our neutral in the first place, it's not at all impossible for us to charge exchange again as we head out, and then our core neutral that was heading out and giving us information about the core will become an ion and it'll be trapped again, and now we'll have an ion from further out that's become a neutral, and that will try and leak out instead. So this charge exchange process is particularly problematic because what we're ending up with is something that looks like a hydrogen neutral plus a hydrogen ion going to a hydrogen ion plus a hydrogen neutral. Now, the beautiful symmetry in this equation here means that this is resonant. And so it's quite likely to happen. So this sigma C is large. So that's an overview of what we're going to try and do. I'm going to talk a little bit about the transport, estimate the size of these different coefficients, and work out how likely it is we can actually get any neutrals from the core back out again. And then we'll talk about the neutral production. I'm going to go 2 and then 1. But that's an overview. If you have any questions before we get going, this is a good time to ask. AUDIENCE: [INAUDIBLE]. JACK HARE: OK, yes. You should take a closer look at Hutchinson's book for a more formal definition.
But quantum mechanically, this is nice and likely to happen because we're starting-- or ending up with something very similar to what we've started. And so this process is likely to occur. Yeah. OK. So again, we've got some blob of plasma here. We've got some particle, which is born at a point A. And we want to know if we can get to a point B, like this. So this is the core, and B is where we have our detector. If you look at this, you say, this looks an awful lot like our radiation transport problem. And so we could say that there's a probability of the particle getting from A to B. Or this could be a power of particles as well if I multiply it by the particle energy per unit time. And that's going to look like the exponential of the line integral from A to B of some opacity, in this case more to do with scattering than it is to do with absorption, which varies along the path, integrated along the path, dl, and then there's a minus sign. So again, this looks like radiation transport. So we talked about three different processes over there which might be important. And the importance of those different processes will depend a lot on the distribution functions that we have here. So if this is v and this is f of v here, we'll have an ion distribution function. And that's just the ions making up the plasma. And that will be relatively small in velocity space here. So this is f of i. And we'll also have an electron distribution function, which is much broader, and that's because the electrons are less massive. So for a similar temperature, they have higher velocities. And we'll also assume that our neutrals coming out here have some distribution that is relatively narrow and shifted like this. So this sits at around v T a, like this, for the neutrals. And so we're assuming here, to make our calculations, that the ion velocities are all much less than these core neutral velocities. And this means that v i minus v a is about v a. That's useful when we're working out these cross-sections, which depend on the difference between the two velocity populations. And then we can write down that our attenuation coefficient alpha here is going to look like 1 upon v a times the electron impact cross-section, sigma e v e, evaluated at v a, times n e, plus the ion impact ionization, sigma p v, evaluated at v a, times n i, plus the charge exchange cross-section, sigma C v, evaluated at v a, times n i, like that. And we'll also now assume that our beam velocity is much less than-- or, sorry, the neutral atom velocity-- the neutral is trying to get out of the plasma-- is much less than the sort of average thermal velocity of the electrons so that we can have this nice ordering where Vi is much less than VA much less than Ve. And that allows us to further simplify this. We'll simplify it in two ways. The first one is that from the point of view of the electrons, the ion-- sorry, the neutrals are basically not moving. So we can set this to 0. And from the point of view of the neutrals, the relative velocity is approximately constant. It's a small value, and it's approximately constant. And so we can just take this velocity out. Sorry-- we can evaluate it as just being v a and we can take it outside, where it will nicely cancel with this 1 upon v a. So this gives us an expression that looks like 1 upon v a times sigma e v e evaluated for effectively zero neutral velocity, times n e, plus just these two cross-sections, sigma p and sigma C, times n i. And we see that the first term goes as 1 upon v a and the second term doesn't.
So if we're dealing with energetic enough neutrals coming out of the core, then we can imagine that this term could be relatively small. It will depend exactly on what sigma e, sigma p, and sigma C are, but we can imagine, for reasonable values, this term will be negligible. So we end up with an expression that just has this. So this, again, is just roughly alpha, neglecting the electron term for large v a. OK. So then, you can put this back up into our probability of the particle getting through, and we just get exponential of minus sigma p plus sigma C integral of ni dl, like this. And you can go into Hutchinson's book and see graphs of sigma p and sigma C, and sigma e, as well, for different plasma conditions. And so if we just use an example that maybe we have 10 keV neutrals streaming out of the core, because that's the sort of temperature we'd like in a fusion reactor, so 10 keV neutrals, we've got a density of around 10 to the 20 per meter cubed. So this is on the dense side for an MCF reactor, but that's the sort of density we might need for reactor-relevant conditions. Then, using this, we will get a length scale in this exponent, which is effectively the mean free path of the neutrals trying to leave the plasma. And that is going to be about 10 centimeters. So for this relatively hot, relatively dense plasma, we're not going to see very many neutrals more than 10 centimeters away from the core. And probably, our reactor is going to have a diameter, a, of about 100 centimeters or so. So we're not going to see very many of these particles, which means that in order for these neutral diagnostics to work at all, we're going to require-- AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, OK, thank you. So we're going to require that the line integrated density here is going to be less than about 10 to the 19 per meter squared here. And if you do that, then you get lambda A more like a meter or so. So we need "low" density. I put it in quote marks because low means different things for different people. But we need a relatively low density tokamak or magnetic confinement fusion device in order to get neutrals escaping from the core. OK, questions on this? Yes, Audrey? AUDIENCE: [INAUDIBLE] JACK HARE: This is just, like, a feed of the neutrals coming out here. I realize, actually, drawing this, I don't think this is right at all. I don't think this is a delta function, because we're talking about a distribution. That's the whole thing we're trying to measure here. So I think that, actually, I should be drawing it like this and call it f of A. So this is the distribution of the neutrals. It's something Maxwellian-ish. But the average velocity here, which I could call VTa, which is maybe here, is much larger than VTi, but much smaller than VTe. So this is just the idea that for these neutrals here, from the point of view of the neutrals, the ions are basically stationary, and from the point of view of the electrons, the neutrals are basically stationary as well. And that allows us to simplify all of these angle brackets, which are actually averaging over the difference between the two velocity distributions. We can even set it to 0, or just take it to be the velocity of the neutrals. Yeah? AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, well, we imagine there's some temperature distribution inside this. So we're talking about, these neutrals here are neutrals that are in the core of the plasma, so we hope they're hotter than the ions in the rest of the plasma. AUDIENCE: OK. So that [INAUDIBLE].
JACK HARE: No, these are the ions out here or something like that, further out. Yeah, that's a good question. So if we have a sort of radius and temperature-type thing, we assume, for a good reactor, that it's peaked in some way. And so we're asking about this: the f of i Vi at r equals 0 becomes our f of neutrals, V neutral, at r equals 0. But they're streaming through some f of i Vi at r greater than 0, which in general is going to be a lower temperature. So the distribution is smaller. Yeah. All of this is very hand-wavy, of course. If you want to do this properly, you have to keep track and do all these integrals correctly. But just to give an idea of the most important effect, it turns out it's the ion impact ionization and the charge exchange. And the charge exchange dominates this pretty strongly, actually. So it's mostly just charge exchange with some of the other ions. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: We haven't talked about that. No. So what I'm saying is, imagine somehow you have created neutrals in the core. Would they actually be able to get out? Because if they can't get out, there's no point even trying to create them in the first place. So we're sort of working our way backwards through the problem. The next step is, OK, now that we know, if we create some neutrals, they can escape, we can probably measure them in some way and learn about the core ion distribution. Now that we know that that's the case, can we now work out how to create the neutrals? And we'll talk about some techniques for that. Yeah. Yeah. AUDIENCE: Why would we [INAUDIBLE]? JACK HARE: Yeah. I mean, this is very crude. You're quite right. I mean, this should be inside here. But if we assume that these don't vary too much as the particle travels from the core to the edge, then it allows us just to pull out this factor of ni dl and make it clear that it's the line-integrated density that's the important thing. In reality, it would have to stay inside. Yeah. Cool. Any questions online? All right. Let's go find out how to put some neutrals in the core. I'm trying to spell neutrals with an A there. OK. So again, let's have some flux surfaces, some plasma like this. So one of the ways the neutrals could get there, as we discussed before, is from the edge. So around the edge of the plasma, there's, for example, the scrape-off layer. And then there's going to be a region which is in contact with the wall, which is relatively cold. And so we can have neutrals being produced here. And these neutrals will wander in and they won't see the magnetic field lines, and so maybe one of them will only get so far here before it charge exchanges. And this one will get a little bit further. But you could imagine that maybe a couple of these neutrals wander into the center, they charge exchange, and then we can start getting the new neutrals, the ones that represent the core ions, start moving back out again. So one possibility is-- we'll call it edge fueling. And we know how to calculate the probability of that, because we just spent a long time trying to calculate the probability of the opposite process. So now, we know that it's basically PBA, which is the probability of going from the outside, B, into the core, which we called A in our previous diagram. So we can estimate how likely it is that any of these particles get through. And the answer is that this actually does create a fair amount of neutrals in the core. Just because there are so many neutrals in the edge, some of them will get through.
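To put rough numbers on the attenuation estimate above, here is a minimal Python sketch. The cross-section values are assumed placeholders of about the right order of magnitude for ~10 keV hydrogen neutrals (for real work, use the tabulated curves in Hutchinson or an atomic database); with these assumed numbers it reproduces the roughly 10-centimeter mean free path quoted earlier.

```python
import numpy as np

# Assumed, order-of-magnitude cross-sections for ~10 keV hydrogen neutrals [m^2].
# These are illustrative placeholders, not tabulated data.
sigma_p = 1e-20    # ion impact ionization
sigma_cx = 1e-19   # charge exchange (resonant, so the largest)

n_i = 1e20         # ion density [m^-3]
L = 0.5            # path length from the birth point to the edge [m]

mfp = 1.0 / (n_i * (sigma_p + sigma_cx))             # neutral mean free path
P_escape = np.exp(-(sigma_p + sigma_cx) * n_i * L)   # exp(-line integral), uniform density

print(f"mean free path ~ {100 * mfp:.0f} cm, escape probability ~ {P_escape:.1e}")
```

Dropping the density (or the path length) by an order of magnitude brings the exponent back towards order one, which is the "low density" requirement stated above; the same exponential also gives the probability for an edge neutral to make it into the core.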
So this technique by itself will create a detectable flux of core neutrals coming back out again. So PBA, which is edge to core. Another thing you might say is, well, perhaps, just by some atomic processes in the core here, we could get neutrals developing here. So we could have radiative recombination, because, after all, there's a load of electrons flying around. These electrons are up above the ionization energy, so they're free. This electron could be caught by the electric potential of the ion and become a bound electron. And then we would get out our recombination photon. That process could happen. But of course, the other process we've already discussed is when we have an electron come in and collide with one of the bound electrons, and we end up with two electrons coming back out again. And this is electron impact ionization. And there'll be a balance in the steady state between these two processes, and that balance will look like the density of electrons, the density of ions, the reactivity or the cross-section for the radiative recombination, times the electron velocity. And that will be balanced by the electron density, the density of neutrals here, sigma e for electron impact ionization, Ve, like that. So that means that we can look at this and go, OK, what is the steady-state neutral fraction? That's going to be nA over ni. These cancel, so it's just going to be the ratio of these reactivities, sigma r Ve over sigma e Ve, like that. And at 1 keV, which is a reasonable temperature for you to have in your core, this is about 10 to the minus 8. So the point I'm trying to make here is although you might occasionally have some recombination process, because the other processes which lead to ionization, again, are so rapid, you quickly will re-ionize. And so if you have 10 to the 20 ions per cubic meter, you'll only have 10 to the 12 neutrals. And that's not really enough to detect. So this is negligible. So edge fueling. Yes, it could be a way of getting neutrals in the core. Radiative recombination is not a way. And you kind of know this because if this was an effective process, then you would end up with a large neutral fraction in the core of your tokamak, and that's not something that we generally think about and doesn't generally happen. And then finally, there are beams. And I'll talk a little bit more about beams in a second, as an active version of the diagnostic, but I just want to talk about how we actually measure these neutrals coming out in this passive version where we just let edge fueling do the work. So passive CX with edge neutrals. So we've got some tokamak vacuum chamber, some MCF vacuum chamber, with some plasma like this. We have all of the particles coming in from the edge, and then charge exchanging with particles in the core. And they're producing, for example, a hydrogen neutral that's coming out from the core. And we put that hydrogen neutral into something called a stripping cell. And it comes out as a hydrogen ion again, but a hydrogen ion with the same velocity, or effectively, the same distribution function, as the neutral coming in. The stripping cell has a very electropositive gas, so it wants to take off the electrons, and it will charge exchange without really changing the trajectory of this very much. And then we have, for example, a large magnetic field in this region that bends the particles according to their velocity.
And then we have a detector like this with bins in it, and the bins have different sensitivity to different velocities here. And so by counting out a number of particles in each bin, we get out that distribution function, like that. And we hope that, during their travels from the core through our stripping cell onto our detector, the distribution function has not been changed very much. So we really are measuring the distribution function of the particles in the center there. So if we look at that on our detector, I plot it on a logarithmic scale. The log of intensity, which is equal to log of particle number versus particle energy here, but not velocity but energy, which is 1/2 mv squared, we see that we have a region that is approximately straight, and that has a slope of 1 over Ti, because our distribution function for Maxwellian is exponential of minus e over Ti like that. And then we might have some interesting features down at the bottom here, where it looks non-Maxwellian. And so this can be the tail. And this will correspond to fast ions. And measuring fast ions is very important in magnetic confinement fusion. These fast ions might be due to things like rf heating, or they might be due to runaways being produced by inductively driven electric fields, or other cool physics. So it's nice to be able to measure those fast ions as well. The main problem with this technique, of course, is it's line-integrated. If I have my detector set up like this, I'm collecting all the particles in this region that come down here. And so you don't really know exactly where those particles have come from. And so we have the same problem we have with all of our other line-integrated diagnostics. But it's pretty free. We don't actually have to put any effort into making neutrals in the center there. We just let the fact that we have a crappy vacuum chamber do the work for us. So this is a nice way to do it if you don't have a neutral beam. Any questions on that before we put a neutral beam into the system? Mm-hmm. AUDIENCE: [INAUDIBLE] JACK HARE: Oh, you would just have a magnetic field here. And frankly, the magnetic field of your tokamak does a pretty good job of that. But yeah. AUDIENCE: How does it depend on the material of the [INAUDIBLE]? JACK HARE: How does what depend? AUDIENCE: Like, how many does [INAUDIBLE]? JACK HARE: Yeah. I mean, we tend to be thinking about-- we still tend to be thinking about these neutrals coming from the edges being hydrogen. So when we have the hydrogen plasma-- by hydrogen, I mean deuterium and tritium, our fuel. So when we have our fuel in contact with the wall, there will be a layer of low ionization state, partially neutral gas around the edge, just because it's in contact with the wall. And so most of this will be coming from. I don't know what happens if you start looking at this and you start looking at all the neutrals as well. Presumably, you now need many more coefficients to calculate. And probably, if you're doing it properly, you should take into account those. But I think, although impurities can dominate things like line radiation, this really depends on the ion number. So we're going to be thinking, probably, about-- the impurities probably don't have a higher number density than the fuel, or if you've done something very wrong, that's happened. So I think you're still going to be dominated by your fuel ion. Yeah, cool. Yes? 
AUDIENCE: So if you start with the neutral at the edge and then it goes into the plasma for charge exchange, then the new neutral that leaks out was an ion in plasma, and then you're measuring that neutral that leaks out. How do you know that you're only measuring both [INAUDIBLE] ion and the neutrals at the [INAUDIBLE]? JACK HARE: Yeah. So the question was, how do we know that none of our signal is due to the neutrals at the edge? Well, the neutrals at the edge are very cold. So their energy is very low. So we probably will have a spectrum that looks like this or something, with some noise up here, or some spurious signal that's due to other neutrals in the system. So I'll just start my fit at some nice high energy, which can't be due to any of these cold ions-- or cold neutrals from the edge. It has to just be due to the core ions instead. Any other-- yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. So I mean, that's what's very challenging about this here, because in fact, this will be, like, the sum of a load of lines which will be weighted by their density. So I think it is difficult to interpret the passive version of it. Of course, if you go high enough out in energy and you can still fit a bit of a straight line, you can be like, there is some part of my plasma-- because you'll have different slopes. And there will only be one slope that still extends all the way out here, because each of these will eventually sort of drop off. So if you're like, aha, there's still some signal out here from some region, then that's probably your core ions. So you might have a chance of identifying them. But in general, your signal will look more complicated, which leads me very neatly on to using a neutral beam to do this instead. So this actually is very similar to some of those active spectroscopy diagnostics we talked about before. We've, again, got our vacuum chamber and our plasma like this, but now, we have a neutral beam going through the plasma. And if we have our detector oriented perpendicularly to this, now, we know that the majority of the neutrals are being born by charge exchange between the neutral beam and this small core region here, which overlaps the field of view of our detector and the neutral beam here. And so now, we know very well, when these particles come out, that we are detecting f of i Vi at r about 0 from the core here. So this allows us to actively probe that central region here-- it localizes the measurement. And this is often convenient because we often have neutral beams. As we said, these neutral beams are already being used for heating and for current drive. Of course, the neutral beam has the same problem that we found out when we were talking about how far these neutrals can penetrate. If you have a density that's high enough inside your machine, you're not going to be able to use a neutral beam to do your heating or your current drive. So neutral beams tend to be limited to lower-density machines, which is why we've never had much use for them on devices like Alcator C-Mod, where the magnetic field and the density are very, very high. But for other devices, you may well have a neutral beam already there. So this is, again, kind of a freebie to have this as a diagnostic. Let's talk about some other things you can do with a neutral beam. So there's a very nice technique which is called-- where's my green? There we go. Charge exchange spectroscopy. This technique has developed a number of acronyms in the literature, which are all identical in meaning.
So some people call it charge exchange recombination spectroscopy, and some people just call it charge exchange spectroscopy. So if you see an acronym that looks something like this, they're probably talking about the same diagnostic. These are not two separate diagnostics. They are the same thing here. So the idea of charge exchange recombination spectroscopy is, within our plasma, we will have some neutrals. Sorry, we will have some ions which are impurity ions. Sorry. So impurity, like A. We'll just use the symbol A. I'll make it clear what it is in a moment. These impurities will be things like oxygen or carbon or boron. And although these impurities have many more electrons than the hydrogen, they are, in the center of the machine, probably fully stripped. They've lost all of their electrons. So they no longer can produce any light, so we can't detect their presence in the machine. So we don't know where these impurities are. So the idea of charge exchange recombination spectroscopy is, we put our neutral beam through the machine, and we end up charge exchanging on these impurities. So we have some neutral beam and some impurity in a state Z plus, where it's ionized Z times, and this charge exchanges and gives us a neutral beam ion plus an impurity in a state Z minus 1 plus. And most importantly, it's likely that the impurity will end up in an excited state. So then our impurity, which is in some excited state, will spontaneously decay down and we'll get out a photon like that. And because this photon will have a characteristic energy that we're looking for with our spectrometer, we can say, aha, this is from an impurity. And now, the invisible impurities are visible again. So there are a few things that we can measure from charge exchange recombination spectroscopy. We can measure the density of impurities from intensity. So you need some atomic models and you need an absolute calibration for your detector, but you can work out the density of these impurities. That's pretty useful. We can also measure the ion thermal velocity-- and hence Ti-- and V flow from the Doppler broadening and the Doppler shift, respectively. So as we discussed in the spectroscopy a few weeks ago-- or a week ago-- when we were talking about different broadening mechanisms, the impurities are effectively acting as a tracer for the flow velocity-- the bulk motion of the entire plasma around the torus-- and for the temperature. And again, the reason is that maybe you could rely on an impurity that was very high Z and still had some electrons left. But if you've done a really good job of reducing all of the tungsten and nasty things in your machine and all you're left with is oxygen, carbon, and boron, then you need to effectively give them back an electron so you can do spectroscopy on them. And again, this is a nice active diagnostic, because you can localize the region the particles are sitting in by putting your spectrometer perpendicular to the line of sight of your neutral beam. And again, you can localize it to that region here. So this gives you these two, and they're all localized measurements. There can be problems with this. So one issue is, you get charge exchange with edge impurities. I know I said this measurement localizes this, but you could imagine that you could get light reflecting from here and going back down into your detector. And you can also just have background, so something like Bremsstrahlung radiation, at this photon energy.
And because there's not so many impurities, we hope, and not so many of them are charge exchanging with the neutral beam, the signal could be relatively weak. So there's Bremsstrahlung at the photon energy of the charge exchange with impurities. That could be a significant amount. So we may not be able to see this signal very clearly. So a really neat trick that you can do is to modulate the beam. So you take the intensity of your neutral beam and you just turn it on and off like that. And then when you look at the intensity at this specific wavelength that you're on the lookout for, you should see that there's maybe some background, but there's also a component that is being modulated along with the neutral beam. And then you can do tricks like phase-locked loops and all your other favorite Fourier transform techniques, and you can isolate out which part of this signal is actually due to charge exchange and which part of the signal is due to Bremsstrahlung. That's pretty neat. OK, questions on this? Yes. So theoretically, if all the particles were stationary and the temperature was 0-- all the particles moving at the same speed-- we have just a single line like this, at omega 0. If all the particles are moving in the same direction, this will be Doppler shifted, and that Doppler shift is proportional to the flow velocity. And we get some broadening on top of that because we've got non-zero temperature-- although the particles are all moving in one direction on average, some of them are moving back a little bit and some are moving forwards. And then the width of this feature, full width at half maximum, is proportional to VTi, which is square root of temperature. So you measure the shift of the feature and the broadening of the feature. That gives you the flow velocity and the temperature independently. Any other questions? Any questions online? OK, next topic. There are yet more fun things you can do with a neutral beam. Who would have thought it? Next fun thing you can do with a neutral beam is something called beam emission spectroscopy, BES. The idea here-- once again, vacuum chamber, plasma, neutral beam. As the beam goes through the plasma, remember, our beam is going to be hydrogen or deuterium or something like that. The neutrals inside here-- and these are all neutrals, so I'll just put a little 0 to remind ourselves they're neutrals-- will have some glancing collisions with the electrons and ions. Some of the collisions will ionize them, but some of the collisions will just be enough to excite them. And so we'll end up inside our beam with excited neutral hydrogen and excited neutral deuterium or something like that. And we can then put our spectrometer down here or our detector down here, and we can look for the characteristic-- we actually have some drawings of this. No. OK. We can look for these characteristic decays here. And if you remember before, we talked about the Lyman-alpha and -beta, and the Balmer series, as well. So the energies of the photons coming out are going to be very well-defined. Now, the neat thing about this is the intensity of these lines is proportional to-- or is-- I'll make it stronger. It's directly proportional to the number of electrons in the plasma, because this is a collisional excitation process and we know that that is proportional to Ne, and then whatever the relevant collisional excitation cross-section is. OK. But the main thing is it's proportional to the density.
So the neat thing is if I now have density fluctuations-- so this is-- we've got some background density, but the density is fluctuating on top of that. So n e 0 plus delta n e of t. Then that's going to give me an intensity fluctuation as well. So by carefully measuring the intensity of these lines-- and again, we can localize this relatively well because I have a specific field of view for my detector and I know where this beam emission is coming from. So I can localize it to a little region. I could have multiple detectors. And I could have another one looking here. I can measure density fluctuations in very precise regions here. And density fluctuations are very important, especially in magnetic confinement fusion, because these fluctuations lead to transport-- turbulent transport. So we really, really want to be able to measure them. So beam emission spectroscopy, I think this is the only slide I have on this. Yes. Well, I have another one that's related. This is a very cool technique. The trouble is that what I just said here about just being proportional to density here, this is rather simple. It's not actually true in reality. So what we actually have to do is include all relevant processes. This is going back to spectroscopy again, where if you're not quite in a simple equilibrium, you need to include lots more processes. And you also need to include the beam's attenuation, because of course, the point of launching a neutral beam into your system is not to hit the far wall of your vacuum chamber and melt it, but it's actually to charge exchange with the plasma and heat it and drive current. So this beam is going to be getting less and less intense as it goes through the plasma. And so if you want to do all this properly, you also need to track the beam attenuation, because this density is also proportional to the number of beam particles as well-- the number density of beam particles. So we need to track the beam attenuation. But if you can do that, this is potentially a very powerful diagnostic for measuring density fluctuations, which are otherwise quite challenging to measure. Questions on beam emission spectroscopy? AUDIENCE: Professor? JACK HARE: Yes. AUDIENCE: So do you need a specialized neutral beam for these kinds of readings, or is this something you can do while using the neutral beam as a heating source already? JACK HARE: Yes. The nice thing about all of these things is you tend to just be able to use your standard neutral beam. So some people do have specifically made diagnostic neutral beams. So on ITER, they have several neutral beams specifically designed for heating, and they also have diagnostic neutral beams. So the heating beams are up at megaelectronvolts, and the diagnostic beam is at a lower energy. And I believe that lower energy is because it gives you better cross-sections for all of these processes. So I'm not sure. An amusing example of that is that there was a diagnostic neutral beam developed for the National Compact Stellarator Experiment at Princeton, which was canceled. And that diagnostic neutral beam then went on to be the heating neutral beam for a much smaller device. So you can build a diagnostic neutral beam, but at the end of the day, it's still a neutral beam. It can do whatever you normally want. And this is going to be coming in at 300 keV or whatever energy you can generate. And all these processes are going to be happening.
And the fact that the beam slows down and converts into ions and does all the heating relies on all of the same atomic processes that we then use to observe beam emission spectroscopy and charge exchange and all that sort of stuff. So those processes have to happen for the beam to work. And now, you're just observing the obvious consequence of them occurring. Yeah, good question. AUDIENCE: Great, thank you. JACK HARE: Other questions? Now we're going to be done early. The final thing, as we discussed it the other day, is the motional stark effect, which I'd forgotten was to do with this. And in fact, I was looking through my notes today and I was like, oh, I actually wrote several pages of notes on the motional stark effect and then told people I had no idea what it was. So that just shows how good my memory is. So now, you will have the pleasure of learning about and then instantly forgetting about the motional stark effect. No, it's actually really cool. This is often abbreviated as MSE. So we already talked about the stark effect. And in the stark effect, we said that there was an electric field. And I'm going to continue writing electric with this weird curly E's because we're going to have some capital E's for energy later and don't want us to get confused. So this is the electric field, which shifts the energy levels of some atom. So we could do this just in a laboratory with a strong electric field and a neutral gas. And we then considered what this does inside a plasma. So in a plasma, we have lots of electrons moving around, as we said. And those electrons will be colliding with our ions, so our atoms that have bound electrons. And as those electrons get close, they will be shifting the electric field, and therefore, the energy level in the vicinity that atom. So we saw there that we got a change in the energy of the photons being emitted, which was proportional to the electric field. And that electric field was proportional to density for the 1/3. And this was just an electric field 1 over r squared argument. This should be 2/3. There we go. And this was from collisions. But this whole picture here required a thermal plasma with lots of particles moving in all sorts of ways here. So this was with the distribution f of V Ve that was exponential of, I don't know, minus V squared over V thermal e squared. But in this case here, for a neutral beam, we have a very peaked distribution function. So our distribution of neutral beam ions is roughly a delta function. So they're all at VB. So we don't expect to see a broadening. What we would expect to see, maybe, is some sort of shift in a single direction here but where is the electric field coming up from in a plasma like this, where we're just talking about our particles moving? Because here, we argued this was the electric field from particles close in within the Debye sphere. But on large scales, we don't see this electric field. But what we do have is a magnetic field. We have a magnetic field B, like this. And so we have some beam velocity V cross B. And in the frame of the particle, that creates an electric field, which is equal to minus V cross B. So this is our Lorentz transform. So in the frame of the lab we see particles moving with velocity V beam, and we see a magnetic field. But in the frame of the particle, when we do the Lorentz transformation, the particle, of course, doesn't know it's moving within its frame. And instead of seeing a magnetic field, it sees this electric field instead. 
And so that means that now, we're expecting to get energy shifts delta E, which are, again, proportional to the electric field, which are now proportional to V cross B. And I just want to point out-- and Hutchinson goes into this in some detail-- we've assumed here that we're talking about the linear Stark effect. And depending on exactly what's going on, you may need more complicated things like the quadratic Stark effect. But this is the linear Stark effect. It turns out the linear Stark effect is good when this delta E is large. So as long as you have sufficiently large magnetic fields and beam velocities, then you end up with this linear regime, which is much easier to analyze here. OK. I'm going to erase this because I want to keep that on the board. Now, some voodoo magic happens, and we find out that each energy level splits into multiple energy levels. So this is reminiscent of the Zeeman splitting again. So each energy level splits into levels whose shift is given by 3nk-- I'll explain what all of these symbols mean in a moment-- times the electric field over Z e upon 4 pi epsilon 0 a0 squared, where a0 is the Bohr radius, all of this times the Rydberg energy, which you all remember is 13.6 electron volts. And this electric field is the modulus of V cross B. OK, so n here is the quantum number, n equals 1, 2, 3. And k here, for each quantum number n, can take multiple values, k equals 0, plus or minus 1, plus or minus 2, up to plus or minus n minus 1. So for n equals 1, k can only take the value 0. At n equals 2, it can take three values. At n equals 3, it can take five values like this. And because these energy levels are all split-- and I'll draw some now so that you get some feel for what's going on. Let's see if I've got space to do it here. We start out with n equals 1, n equals 2, n equals 3. The n equals 1 level will just stay as a single level. The n equals 2 will be split into three levels. And the n equals 3 will be split into five levels. And each of these levels is going to have an energy spacing, delta E, between them, which is given by this formula here. And so by measuring this splitting, we can measure the magnetic field, because most of the stuff in here is just a constant. We should know our beam velocity because we're the ones injecting the neutral beam. So the only thing we don't know is the size of the magnetic field. So we could try and use-- so again, this is still a neutral beam, so we've got one electron. So we could try and look at transitions from here down to here. And there should be a triplet of transitions all coming out with slightly different energies. What is the problem with using this? First of all, does anyone remember the name for this transition, and what's the problem with using it? AUDIENCE: Is it a forbidden transition? JACK HARE: It's not forbidden, but good question. In hydrogen, the transition from the first excited state down to the ground state is the Lyman-alpha. What's the wavelength of the Lyman-alpha? Yeah, it's 121 nanometers. So this is what's called vacuum ultraviolet. And the reason it's called vacuum ultraviolet is it only propagates in a vacuum. So it's a real pain to deal with. So we don't want to work with this transition here. Instead, what we want to do is use a transition from n equals 3 down to n equals 2. What's the name of this transition? AUDIENCE: H-alpha, Balmer. JACK HARE: Yeah, H-alpha, the Balmer-alpha.
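Before getting to the selection rules, here is a rough numerical sense of how big this splitting is, as a minimal Python sketch of the linear Stark formula just quoted. The beam energy and magnetic field are assumed, illustrative values (not taken from any particular machine), and the Zeeman energy scale is computed alongside for comparison.

```python
import numpy as np

e, eps0 = 1.602e-19, 8.854e-12
a0 = 5.292e-11        # Bohr radius [m]
E_ry = 13.6           # Rydberg energy [eV]
m_D = 3.344e-27       # deuteron mass [kg]
mu_B = 9.274e-24      # Bohr magneton [J/T], for the Zeeman comparison

# Assumed, illustrative beam and field values.
E_beam_eV, B = 80e3, 2.0
v_beam = np.sqrt(2 * E_beam_eV * e / m_D)
E_lorentz = v_beam * B                     # |v x B| for perpendicular injection [V/m]

def stark_shift_eV(n, k, E_field, Z=1):
    """Linear Stark shift of the (n, k) level: 3 n k * E / (Z e / (4 pi eps0 a0^2)) * E_ry."""
    E_atomic = Z * e / (4 * np.pi * eps0 * a0**2)   # the atomic-scale field in the denominator
    return 3 * n * k * (E_field / E_atomic) * E_ry

dE = stark_shift_eV(n=3, k=1, E_field=E_lorentz)    # shift of one n = 3 component
zeeman = mu_B * B / e                               # Zeeman energy scale [eV]
print(f"E ~ {E_lorentz:.2e} V/m, Stark shift ~ {1e3 * dE:.2f} meV, "
      f"Zeeman scale ~ {1e3 * zeeman:.2f} meV (ratio ~ {dE / zeeman:.0f})")
```

With these assumed numbers the Stark shift comes out roughly ten times the Zeeman scale, consistent with the comparison made in a moment.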
This is where the forbidden transitions that were mentioned come in here. So these have an angular momentum quantum number m of 2, 1, 0, minus 1, minus 2 for the n equals 3 levels, and 1, 0, minus 1 for the n equals 2 levels, like this. And the selection rule for these that tells you whether this transition is forbidden or not is delta m is equal to-- have I got it written down here? Yeah. 0 and plus or minus 1. So from any given level here, for this one, for example, the m equals 0 in the n equals 3, we can have a transition down to 0, down to minus 1, and down to plus 1. We get a triplet of lines from that. We'll also get another set of lines-- you can see this starts to get quite complicated quite quickly-- like this. There is no transition down to minus 2 because it doesn't exist. And then finally, we'll have-- well, the minus 2, you could guess, can go to minus 1. And the m equals 1 can go down to m equals 0. And m equals 1-- I'm running out of colors-- and the m equals 2 can go down to here. Sometimes. So you see there's all sorts of different lines. But you notice that the spacings of these levels are the same as the spacings of these-- no, they're not, because n is different. Yeah, you're going to get clusters of different lines at slightly different energies. So you need a very nice spectrometer to be able to distinguish between these lines. So as you increase the magnetic field, the energy gap gets bigger, and so you need a less and less nice spectrometer. So your resolution requirements go down for a bigger magnetic field. But in general, you're going to see a forest of lines. You're going to see, instead of just your h-alpha line that you had originally, you're going to get-- I'm not going to try and draw this accurately-- a set of lines like this. And then you might get another line over up here, and you might get another line down here, and all sorts of things like that, and other sets of triplets and doublets in different places. That drawing is not quite right-- it's not complete-- but you get the idea. OK, cool. Why are we bothering with this? This looks an awful lot like Zeeman splitting. Couldn't we have just done Zeeman splitting? So the main thing about this is this delta E is much, much larger than we get for Zeeman. By "much, much larger," I mean about 10 times larger. So it's much easier to distinguish between the lines. And once again, we can use our neutral beam that we already have to do this. So this is a very nice use of a neutral beam. Now, we talked about, with Zeeman before, it's not actually super useful to be able to measure the strength of a magnetic field in a tokamak, because we know what it is. It's the toroidal magnetic field. Basically, the poloidal magnetic field is very small. So this is our condition that the poloidal is much, much less than the toroidal that we have in a tokamak. And the nice thing about this is that each of these different transitions has a different polarization. And this is going back to what we were talking about before, where we had these pi polarizations and sigma polarizations. And I'm running out of room to draw it. But if you go back to the notes that we had on Zeeman spectroscopy here, this polarization, for example, will come out. We've got some magnetic field here, which is the toroidal field plus the poloidal field. We'll have the pi polarization coming out of the plasma perpendicular to B. And we'll have the sigma polarizations coming out parallel to B.
So by having two spectrometers with two different polarizers crossed with respect to each other and measuring the relative strength of these two lines, you can actually determine the angle of the magnetic field. And then from that, you can work out the size of this B poloidal component. And from the B poloidal component, you can start to guess at the distribution of current inside your tokamak, which is very, very important, especially when we're doing things like bootstrap current, where we need very specific current profiles in order to induce it. So basically, the motional Stark effect here is very useful for magnetic reconstruction. Questions? Yes. AUDIENCE: Is [INAUDIBLE] always there [INAUDIBLE]? JACK HARE: So the Zeeman effect requires certain electronic configurations to actually occur. And I believe that you don't get Zeeman splitting in hydrogen, normally. So this motional Stark effect is therefore more useful in that case. Yeah. And this motional Stark effect is particularly big because we're talking about very fast-moving neutral beam particles. So this isn't useful in a plasma-- in just looking at the thermal ions, because all these velocities are in different directions and they're relatively small. So all that will give you is a broadening. The point is that this V is large and all in one direction. So you get a big splitting and it's consistent. Every single particle is emitting light with the same spectra. And so you can actually stand a chance of measuring it. Any questions online? All right. You can have a quarter of your-- quarter of an hour of your lives back. Do whatever you want. And happy Thanksgiving.
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_10_Refractive_Index_Diagnostics_VI_Faraday_and_Reflectometry.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So as you remember, in the last lecture, we were looking at waves in magnetized plasma. So I'm going to give a quick recap of what we learned, and then I'm going to go on and show you how to measure magnetic fields using the Faraday effect in a plasma. So the geometry that we were using, we had our z-coordinate like this, our y-coordinate, and our x-coordinate. And we're looking at systems where there's some static background or slowly varying background magnetic field, B, which is in the z-direction. And we realized that we can just rotate our coordinate system so that the k vector for our wave lies in the z-y plane like this, and then it's only specified by this angle, theta, with respect to the z-axis and, therefore, with respect to the magnetic field. So we don't have to worry about the components in x. And we churned our way-- well, we didn't actually churn our way through a lot of algebra, but we could have churned our way through a lot of algebra. And you can go have a look at it. And we found that we had to solve the determinant of this big matrix here, omega squared minus c squared k squared times the identity-- I'll write it with two arrows, OK-- plus c squared times the dyad formed by two k vectors, plus i omega over epsilon 0 times this tensor, sigma, which contains all the information about the conductivity of our plasma. And it's a 3 by 3 matrix because the plasma is anisotropic due to this magnetic field. So we want the determinant of this to be equal to 0. And that will give us all of the modes or the eigenvalues of our waves, and we can substitute them back in and get out the eigenmodes. So in general, this is very, very complicated to solve. If you want to do it at some arbitrary angle theta, it's a lot of work. But we decided we would simplify down and focus on two angles, which are the ones you're most likely to come across, and they demonstrate the physics very nicely. So first of all, we looked at theta equals pi upon 2. This is a particularly good one if you're working, for example, with tokamaks, because if you've got your cross-section of your tokamak like this with the magnetic field, B toroidal, predominantly coming out of the page, and you are probing from, say, the outside or the inside of your plasma, you tend to be in a situation where your k vector is perpendicular to your magnetic field. So this applies quite a lot of the time for tokamak diagnostics or stellarators or things like that. And we found that in this case, there were two modes. There was a mode, which we called the O-mode, where O stands for Ordinary. And this had our standard dispersion relationship, N squared equals 1 minus omega p squared over omega squared. And we've seen this several times before from unmagnetized plasma. This corresponds to a mode where the electrons don't actually feel the magnetic field, and so we just get the unmagnetized plasma result. And we also had the X-mode, which had a rather more complicated dispersion relationship, N squared equals 1 minus omega p squared over omega squared times 1 minus omega p squared over omega squared, with all of that second term over 1 minus omega p squared over omega squared minus capital omega squared over lowercase omega squared, where that capital omega is the cyclotron frequency for the electrons.
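As a quick numerical companion to these two dispersion relations, here is a minimal Python sketch that evaluates N squared for the O-mode and X-mode at a given density, field, and probing frequency. The plasma parameters in the example are assumed, illustrative values, not tied to any particular machine.

```python
import numpy as np

e, m_e, eps0 = 1.602e-19, 9.109e-31, 8.854e-12

def refractive_indices_perp(n_e, B, omega):
    """O-mode and X-mode N^2 for propagation perpendicular to B."""
    X = n_e * e**2 / (eps0 * m_e * omega**2)   # (omega_p / omega)^2
    Y = e * B / (m_e * omega)                  # Omega_ce / omega
    N2_O = 1.0 - X
    N2_X = 1.0 - X * (1.0 - X) / (1.0 - X - Y**2)
    return N2_O, N2_X

# Assumed example: 1e19 m^-3, 2 T, 140 GHz probing beam.
omega = 2 * np.pi * 140e9
N2_O, N2_X = refractive_indices_perp(1e19, 2.0, omega)
print(f"N^2 (O-mode) = {N2_O:.3f}, N^2 (X-mode) = {N2_X:.3f}")
```

A negative N squared would indicate that the wave is cut off at that frequency, which is exactly the sort of check you would make before choosing O-mode or X-mode for an interferometer or a reflectometer.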
And we said that if we want to choose one of these modes for doing, for example, interferometry, we need to select the polarization, because these two modes, when we substitute these eigenvalues back in, we get out different eigenmodes, and they've got different polarizations. And so you can select X or O with your polarization. If you don't know your polarization, you might get the two confused, and that leads to some further ambiguity when you're doing something like interferometry. So this is very interesting. We will come back to this a little bit when we discuss electron cyclotron emission. It doesn't actually help you measure magnetic fields necessarily, though you can see there is a bit of a magnetic field effect here. So maybe if you had an interferometer that measured both O-mode and X-mode, like two interferometers with crossed polarizations, the difference in refractive indices here might give you some measure of capital omega, and therefore the magnetic field. I don't know whether people generally do that. The one that people generally do is the other mode, which is where theta equals 0. So we're going to take theta equals 0 here. And here, we found that we have, again, two modes, which we called plus and minus. And because they both have a very similar dispersion relationship, I'm just writing the two of them together, so N plus squared or N minus squared is equal to 1 minus omega p squared upon omega squared, where that omega p squared upon omega squared is over 1 plus or minus capital omega over lowercase omega, like that. So in the case where capital omega goes to 0, where the magnetic field goes to 0, you just recover your unmagnetized dispersion relationship, the same as your O-mode. But otherwise, we see that these two modes in the plasma have different refractive indices, and so they have different phase velocities. And the phase velocity also depends on the sign of the magnetic field. And so we would think that by measuring something to do with these waves, we should be able to get out the magnitude and the sign of that magnetic field. Now, the neat thing about this dispersion relationship that we started talking about last lecture is it actually works for a wide range of angles, not just theta equals 0. It works for quite a large range of angles where theta is less than pi upon 2. So this parallel dispersion relationship is approximately correct for angles all the way up towards the point where the perpendicular one takes over. And the formal criterion is that capital omega over lowercase omega secant of theta must be much, much less than 1. But if you have a system where your cyclotron frequency is much less than your probing frequency, which for some plasmas is very easy to arrange, then you're going to end up fulfilling this criterion for a huge range of theta. It doesn't give you exactly this dispersion relationship-- you have to modify it slightly. Normally, this capital omega is defined as e B over m e, our standard cyclotron frequency. Instead, we have to use B parallel, which is B in the direction of our probing laser, which is B0 cos theta, looking back at that geometry over there. And then the capital omega that we actually use is equal to e B parallel over m e. So you get the same dispersion relationship, but you don't have an omega which corresponds to the total magnetic field. You have an omega which corresponds to the projection of the magnetic field in the direction that your wave is propagating. And we substitute this back in, and we find out what the modes are here.
And we find out that the modes are such that the polarization in the x-direction and the polarization in the y-direction are related by a factor of plus or minus i. So that means the electric field in x and the electric field in y are 90 degrees out of phase with each other, either leading or lagging. And we talked about using these things called Stokes vectors, which allow us to write the electric field in x and the electric field in y just as a compact vector here. And for the right-hand polarization, which maybe is the plus 1 here, this Stokes vector was 1 and i. And for the left-hand polarization, this was 1 and minus i. So this is plus and negative, like that. Now, these are two basis vectors. But obviously, when we prepare laser beams or other probing radiation, we tend to just have linearly polarized waves rather than circularly polarized ones. And these linearly polarized ones have Stokes vectors that look like, for example, x polarized linearly in the x-direction is 1, 0, and y polarized linearly in the y-direction is 0, 1. But we can just rewrite them in terms of these right- and left-hand circularly polarized waves as R plus L over 2 and R minus L over 2i, like that. And if we have some arbitrary polarization-- so if we're just dealing with some arbitrary polarization P-- we could write that as some constant times R and another constant times L. And equivalently, that would be a different constant times X plus another different constant times Y, which is just to say that we could write some arbitrary vector as a sum of some other basis vectors. And it doesn't matter. We can switch between these basis vectors depending on which is more convenient for our calculations. And it turns out that we will start off using these X, Y basis vectors, but then we will write things in terms of R and L because those are the natural basis vectors for this dispersion relationship here. So that's where we got to last time. So any questions? Now, it's often the case that we can make an approximation to this dispersion relationship. And the approximation we'd like to do is the same one we do for interferometry. We'd like to have this dispersion relationship be linear in the quantities that we're trying to measure here. And so we can say, if we happen to be in the case where capital omega over lowercase omega is much less than 1 so that our cyclotron frequency is much less than our probing frequency, then we can say that our dispersion relationship is going to look like N plus or minus squared equals 1 minus omega p squared over omega squared times 1 minus or plus capital omega over omega here, where I've just Taylor expanded this again. Now, I want to be clear that this is not always true. You need to check whether this approximation works for your plasma. And for example, it may not work if you're using gigahertz radiation looking at a tokamak, because with gigahertz radiation here, lowercase omega is in the gigahertz range, and in a tokamak the cyclotron frequency is also in the gigahertz range. And so this approximation may not hold at all. This is a good approximation for the sorts of plasmas I work with. And it would be a good approximation if you used a higher frequency in order to make this measurement on a tokamak. But you'd have to think carefully about this when designing some sort of Faraday polarimetry system. So what we see from this dispersion relationship is two things-- that this quantity, capital omega over lowercase omega, it has a small effect on the overall phase.
So if you tried to measure this with interferometry, you're mostly measuring this term, and there'd be a very small change in this term. There'd be very, very small change to the overall refractive index caused by this small term here. So this is a very hard term to measure with interferometry. It's probably not going to work. But on the other hand, this term turns out to have quite a big effect-- [INAUDIBLE] effect-- on the polarization. And we'll show that in a moment. And so although we can't measure it directly with interferometry, we can measure it in terms of polarization and so then we can still measure the magnetic field. So let's have a look at what that looks like. We'll start by writing down the phase of a wave going through some plasma. So let me just draw a little diagram of what's going on here. We have some plasma like this. Inside it, there's some magnetic field, some density. And we've got our light going through. And the light could be right-hand polarized or it could be left-hand polarized. And what we want to do is calculate delta phi, that is, the change in phase from a reference beam that has gone around the plasma. So this is the quantity we're going to calculate here. So delta phi R or L is going to be equal to the line integral of the wave vector along our probing path. That's just by definition. That's how much phase we pick up. And that is also equal to the integral of omega upon c times the refractive index dz, here. OK. And we have an expression for N. Here, we're going to be using the form where we've done the Taylor expansion and then taken the square root to get N itself, just to keep things nice and linear. And so this is going to be approximately equal to omega upon c times the integral of 1 minus omega p squared upon 2 omega squared times 1 minus/plus capital omega upon lowercase omega dz, like this. And the approximate sign here is because we've made this approximation that capital omega upon lowercase omega is much less than 1. So we can see straight away that the phase of the right-hand and the left-hand polarized wave going through this plasma is going to be different. And it's going to be different by an amount that is to do with the magnetic field that we encounter inside this plasma. So the right-hand wave, we take the minus sign. And for the left-hand wave, we take the plus sign. So the left-hand wave is going to have more phase than the right-hand wave. OK. Everyone following so far? So this is where it's very convenient to go back and work with these Stokes vectors here instead, with X and Y and R and L. That's because, as we said before, if I prepare my wave so that it's polarized in x, so that it just has an initial electric field 1, 0, that is equal to R plus L over 2, after-- for example, this is what the wave looks like here. After the plasma, we want to know what the polarization state is, which I'll call X prime. So after the plasma, this is going to go to a polarization state X prime, which is equal to still R and still L over 2, but I've less space here because the R and the L waves have picked up phase factors. They've picked up this phase here. So the R wave has picked up e to the i phase R, and the L wave has picked up e to the i phase L. And these two are not equivalent to each other. So what is the consequence of the fact that these two are not equivalent to each other? What has happened to our polarization with respect to our initial polarization? Yeah? AUDIENCE: It's rotated. JACK HARE: It's rotated, yeah. But it's still linear because we can always write it as some linear polarization at any instant in time.
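(Here is a short numerical check of that statement-- my own sketch, not from the lecture, using arbitrary phase values-- showing that giving the R and L components different phases leaves the wave linearly polarized, but rotated by half the phase difference.)

```python
import numpy as np

# R and L circular basis vectors, written as (E_x, E_y) following the lecture's convention.
R = np.array([1.0, 1.0j])
L = np.array([1.0, -1.0j])

phi_R, phi_L = 0.30, 0.42   # arbitrary phases picked up going through the plasma [rad]

# Start x-polarized, X = (R + L)/2, and apply the two different phase factors.
X_prime = (R * np.exp(1j * phi_R) + L * np.exp(1j * phi_L)) / 2.0

ratio = X_prime[1] / X_prime[0]      # E_y / E_x after the plasma
print(np.imag(ratio))                # ~0: the two components stay in phase, so still linear
print(np.arctan(np.real(ratio)))     # 0.06 rad: rotated by |phi_L - phi_R| / 2
```

The sign of the rotation just depends on which handedness you call R; the size is always half the difference of the two phases, which is the angle alpha written down next.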
So you can just say that our initial polarization-- this is x, and this is y, and you're rather ill-advisedly staring at the laser beam as it comes towards you. This is your initial polarization X, right? And then you've now got a new polarization X prime. And that's being rotated by some angle alpha. For some reason, people in the literature use alpha for this angle. Just go with it. Know that it's not theta. It's nothing to do with the angle between the probing direction and the magnetic field. This is a different angle. OK. So then what is alpha? Can we actually work out what it is from X and X prime? Yeah, it's relatively simple geometry. We just have that alpha is equal to the arctangent of the y unit vector dotted into X prime over the x unit vector dotted into X prime. That's simply geometry here. We're calculating this length in y over this length in x and taking the arctangent of it. So that means you have the arctangent of minus i e to the i phi R plus i e to the i phi L over e to the i phi R plus e to the i phi L. And you can verify that by just going through these definitions of R and L and thinking about the fact they've got x and y components and picking out the x or the y components. So this angle here looks quite complicated. Alpha is the thing that we're going to be able to measure. And that alpha is going to be related to the magnetic field, and it's related to the magnetic field through these phase factors. And these phase factors have picked up some term through the magnetic field inside here. So it's not completely trivial to get the result here-- you have to crank through the algebra a little bit. And I'm not going to do it. But the result is that alpha is equal to the integral of omega p squared over omega squared times capital omega, all over 2c, dz here. We can write that in terms of some other quantities that we like a bit more, as e over 2 m e c times the integral of ne over nc, the critical density, times the magnetic field in the probing direction, like this. And we already talked about the fact that we're going to use the theta equals 0 dispersion relationship, even for theta not equal to 0, as long as we replace capital omega with the omega that's due to the component of the magnetic field in our probing direction. And that's just what this represents here. This is exactly the same as having B dot k hat here. I just want to note down here that this gives us a lambda squared dependence. Because it depends on the critical density, it matters what your probing wavelength is, and you will get different rotation angles depending on your probing wavelength. This is because, let's say, yeah, nc is proportional to 1 over lambda squared. OK, questions? Yes? Oh, sorry. AUDIENCE: Are we just rotating the perpendicular component of the polarization if we have some normal wave vector? Is the part that's parallel unchanged? JACK HARE: So for these modes, there is no parallel component. These are purely transverse modes. There is no electric field in the direction of propagation. AUDIENCE: OK. I thought that you said that you can have k not directly [INAUDIBLE]. JACK HARE: Ah. So this is the real subtlety that people trip up on. We have transverse modes, and we have longitudinal modes. We have perpendicular waves, and we have parallel waves. These words, although they sound like they're talking about the same thing, are talking about two different angles, right?
So here, when we're talking about this angle theta, and we're talking about components of the magnetic field, this is talking about whether our wave is parallel to the magnetic field or perpendicular to the magnetic field. But the electric field is going to be sticking out in some other direction like this. And it turns out for the X-mode, you actually have a small component of the electric field that is longitudinal, that is, along the k vector. But when you solve the dispersion relationship for theta equals 0, you don't pick up any of that. You don't have any electric field in that direction. It's purely like an electromagnetic wave in a vacuum. It's purely transverse. So there's only the electric field perpendicular to the direction of propagation. And so when I draw diagrams like this of the electric field, this is all of the electric field that there is, all completely perpendicular to the direction here, which I've chosen k here. Now, if your question is, do you pick up a very small longitudinal component when you go away from theta equals 0, but you're still using the theta equals 0 dispersion relationship because you fulfill this condition, probably you do have a very small electric field coming up here. But we don't measure that on our detector. So we're only able to measure the transverse polarization. And also, when the wave comes out of the plasma, it's going to have to couple back into the vacuum modes. And the vacuum modes don't have any longitudinal electric field. So actually, by the time it got to the detector, there won't be any longitudinal electric field, because that can't propagate in a vacuum. AUDIENCE: So if there is a third one [INAUDIBLE] will it get deflected back into the plasma-- JACK HARE: Maybe. AUDIENCE: --or something crazy. JACK HARE: The plasma deals with it somehow, yes. [INTERPOSING VOICES] JACK HARE: [LAUGHS] It's a good-- that is a good question. I hadn't really thought about it before, but yeah. Certainly, by the time you get to the camera, you can only measure these R and L, which have no electric field in the direction of propagation. Yeah? There was another question? OK. Any questions online? OK. So what are some implications of this? I mentioned one of them already. So alpha is proportional to-- which looks like an alpha as well, I guess. I'll use this symbol instead-- lambda squared. So if we want to measure a large rotation angle, because presumably measuring bigger rotation angles is easier, then we want to use a longer wavelength. Alpha also goes like the integral of B dot dl, like this. So we pick up the component of B along the probing direction. So it's B along. So we don't measure the vector B. We just measure a projection of it. And that means that if the magnetic field is misaligned with our probe, we will still get some rotation, but we don't know necessarily what the orientation of the magnetic field is. OK. So that means that this, like all other line integrated measurements, requires a little bit of interpretation. And the final thing is that alpha is still proportional to the integral of n e dl, just like we had for interferometry. So we're not actually measuring just the magnetic field with our rotation angle. That's the thing we want. We're measuring the magnetic field weighted with the electron density. So we'll get a bigger contribution to the signal in regions where the density and the components of the magnetic field along z are high.
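(To make that weighting concrete, here is a small numerical sketch-- my own, with a made-up Gaussian density profile and a uniform parallel field, not any particular experiment-- that just evaluates alpha = e/(2 m_e c) times the integral of (n_e/n_c) B_parallel dz, and repeats it at two wavelengths to show the lambda-squared scaling.)

```python
import numpy as np

e, m_e, c, eps0 = 1.602e-19, 9.109e-31, 2.998e8, 8.854e-12

def n_crit(lam):
    """Critical density [m^-3] for vacuum wavelength lam [m]."""
    omega = 2 * np.pi * c / lam
    return eps0 * m_e * omega**2 / e**2

def faraday_angle(z, n_e, B_par, lam):
    """alpha = e/(2 m_e c) * integral of (n_e / n_c) * B_parallel dz  [rad]."""
    integrand = n_e / n_crit(lam) * B_par
    return e / (2 * m_e * c) * np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(z))

# Made-up profiles: a ~1 cm Gaussian density feature in a uniform 10 T parallel field.
z = np.linspace(-0.01, 0.01, 2000)         # path coordinate [m]
n_e = 1e25 * np.exp(-(z / 3e-3) ** 2)      # electron density [m^-3]
B_par = 10.0 * np.ones_like(z)             # parallel field [T]

for lam in (532e-9, 1064e-9):
    alpha = faraday_angle(z, n_e, B_par, lam)
    print(f"{lam*1e9:.0f} nm: alpha = {np.degrees(alpha):.2f} degrees")
# Doubling the wavelength quadruples alpha, because n_c goes as 1/lambda^2.
```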
So you could imagine a system where the density is high and the magnetic field is low, and you might actually end up amplifying that low magnetic field due to the weighting with the high density here. All this to say that it's a line-integrated measurement, but it's sort of doubly line integrated. So it makes life quite hard. At the very least, in order to have even a slight chance of measuring B, you need to have an inline interferometer. And that inline interferometer, its sole job is to measure this term so that you can divide it out of alpha and try and recover the magnetic field. So let me just give you an example of numbers because one thing you might be thinking is, well, alpha is an angle, so that means we can probably only measure it modulo 2 pi, or even worse, maybe only modulo pi if we think about polarizations. So do we have to worry about alpha being an absolutely huge number? So let me give you an example here. If we have a system where our density, electron density, is 10 to the 19 per cubic centimeter, which is 10 to the 25 per meter cubed-- so this is pretty dense-- and we have a magnetic field of 10 tesla, which is not nothing, and we use a wavelength of 532 nanometers-- so this is a green laser, second harmonic, of neodymium YAG-- and we have a plasma that is 1 centimeter long-- so this describes some of the plasmas I work with quite well. When you get through all the math here, you find that the rotation angle is a gigantic 3.7 degrees. So even with this plasma, which is very dense and has a very strong magnetic field, we are not actually measuring a very large rotation angle here. So this measurement is extremely difficult. We're certainly not in any danger of having any ambiguity about alpha. It's unlikely that it's ever going to be getting up to pi. But if you are worried about the ambiguity, you should spend a little bit of time thinking about how you would heterodyne this. It's very cool. So you can temporally and spatially heterodyne this. You have to find a way to modulate the polarization in space or time and carry your polarization signal on a modulated polarization. I don't think anyone's done it. It's almost done. I think I know how to do it in time, but it's really hard to work out how to do it in space. Anyway, so how do you do this practically? How do we actually make this measurement of alpha in a real system? Well, we'll have our setup with our plasma here. We'll have our probing radiation, which we'll prepare in some linear polarization state, let's say X polarized like this. We will set up a standard Mach-Zehnder interferometer around this. That interferometer, its job, as I said, is just to measure n e dl, like that. But now we've got probing radiation coming out of the plasma with polarization state X prime. And then we split that. And we send some of it through the beam splitter and through a polarizer, and the other beam also goes through a polarizer. And both of these beams go on to some sort of detector. These detectors, again, we could be doing a temporally resolved measurement, in which case you could think of a diode which is outputting a voltage trace onto an oscilloscope. Or we can have a spatially resolved measurement, in which case, you can think about a camera taking a snapshot in time. Or of course, you could try and combine those two if you had a fast enough camera. OK. So either of these could be functions of time or functions of x and y.
So these polarizers here, what you want to do to make a nice measurement of your system-- what we're doing here is a differential measurement. So I keep talking about differential measurements because they're very good ways to make good measurements. You set these polarizers up so that they're at a slight angle, beta. This is some angle in degrees. You set these up so there's a slight angle beta to the extinction angle for X. That is to say that in the absence of any plasma, these two polarizers are perfectly crossed with the polarization of X. And so you see no signal whatsoever. And you set one of the polarizers up to be a plus beta and the other polarizer up to be a negative beta. And so these two will see different things depending on the sign of the magnetic field. So say the magnetic field points in this direction here. One of these detectors will see a signal that goes darker, and the other one will see a signal that goes brighter as the polarization is either rotated towards extinction or away from extinction. If we reverse the sign of B here, we would reverse which detector goes brighter and which detector goes darker here. So this is a differential measurement. You see a brightening of one channel and a darkening on the other channel. And you know that's due to magnetic field, not due to anything else. And so we call these signals intensity signal on the plus channel, intensity signal on the negative channel, like this. And these two intensities, I s plus or minus, are just equal to whatever our initial intensity was times sine squared of the rotation angle induced by the plasma plus or minus beta, which is this slight rotation angle here. So again, if there's no plasma in the way, alpha is equal to 0, there's no rotation. And these two detectors would just measure sine squared of beta. Beta is normally going to be a small angle on the order of a couple of degrees. So these are going to be measuring a signal very, very close to 0. As alpha increases, on one of these detectors alpha minus beta is going to be close to 0, so that detector will measure nothing. It will measure darkness because sine of 0 is 0. On the other detector, you'll have alpha plus beta. That will be some larger number. And so you'll measure a larger signal. So you should see, for example, that this is a function of time-- actually, yes. If this is a function of time, and we, for example, have the magnetic field going up, one of these detectors is going to measure a signal that looks like this, and the other one is going to measure a signal that looks like that. So you get brightening and darkening differently on the two detectors. And the reason you do that is because, as well as the signal that you're measuring from your probing radiation like a laser beam, you also have some background radiation because your plasma is bright. And so it's always pushing out a copious amount of light in every direction. And this light is unpolarized. And because it's unpolarized, some of it will get through your polarizer. The amount that will get through your polarizer is, on average, 1/2. This just comes from thinking about what unpolarized light does. If you haven't come across this fact before, it's worth looking up. Unpolarized light doesn't have a polarization, but still, some of it will get through a polarizer here. And so what you can do with these two signals, you can see that they differ in terms of how they treat the polarization of the radiation, but they're the same in terms of how they treat the self-emission.
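(Here is a minimal numerical sketch of that point-- my own construction with made-up numbers, and not the exact expression from the Swadling paper quoted below-- simulating the two near-crossed channels with an unpolarized self-emission background and recovering alpha from their difference, which cancels the background. The probe intensity I0 is assumed to be known, for example from a calibration shot.)

```python
import numpy as np

I0 = 1.0                       # probe beam intensity (assumed known from calibration)
I_bg = 0.2                     # unpolarized plasma self-emission, half onto each channel
beta = np.radians(3.0)         # small offset of each polarizer from extinction
alpha_true = np.radians(1.2)   # Faraday rotation angle we are trying to recover

# What the two detectors record: Malus's law plus half of the unpolarized background.
I_plus = I0 * np.sin(alpha_true + beta) ** 2 + I_bg / 2
I_minus = I0 * np.sin(alpha_true - beta) ** 2 + I_bg / 2

# The difference cancels the self-emission exactly: I+ - I- = I0 sin(2 alpha) sin(2 beta).
alpha_rec = 0.5 * np.arcsin((I_plus - I_minus) / (I0 * np.sin(2 * beta)))
print(np.degrees(alpha_rec))   # 1.2 degrees, despite the background on both channels
```

A single channel could not tell this apart from the plasma simply getting brighter, which is the point made next.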
So this immediately suggests to you that you can combine them in some way to make a differential measurement of alpha, which becomes 1/2 of arcsine the intensity on the positive branch minus the intensity on the negative branch, and all of that times the tangent of beta over 2. Again, I've skipped an awful lot of mathematics to get there. But effectively, when you take the difference of these two signals, you cancel out that self-emission that's unpolarized, and you get a signal-- you'll get an alpha which depends on the-- it has a sensitivity on whatever initial offset polarization angle we chose and on these two measurements of the intensity for the two different channels. And if you want to know more about this, there's a good paper by George Swadling in RSI in 2014. And that explains the mathematics behind this and a nauseating amount of detail on sensitivity analysis as well. It turns out that [INAUDIBLE] propagation, this gets quite complicated. So that's Faraday rotation effect. Any questions? Yeah? AUDIENCE: I'm trying to understand exactly why we're implementing this beta scheme. I guess my initial guess when we were thinking about this was, oh, I'm going to have two polarizers which are 90 degrees out of phase that I can look at two different components. Is this because the expected angle shift or polarization is so small? JACK HARE: Yeah, exactly. Right. So if you did the 90 degrees out of phase, you would be very well set up for measuring sort of 90 degree-style deviations here. But if you're trying to measure alpha plus or minus 90 degrees here, and alpha is very, very small, you're going to be measuring very similar intensities. You're going to be measuring something that's close to having an intensity of, like, 1 plus or minus a tiny amount. And so you're not going to have very much sensitivity. Down here, you're very close to the null, where you've almost got no light, and so that means any change in intensity is very, very measurable. So Swadling's paper talks about exactly how you choose beta. But rule of thumb-- if beta is about the size of your expected signal but a bit larger, then it's good. The reason you want it to be a bit larger is if beta is smaller than your maximum value of alpha, then this line goes through I equals 0, only, of course, it doesn't it bounces back up, and then we have our phase ambiguity again. So you're trying to keep beta sort of just a little bit bigger, so beta is greater than alpha, like that. Yeah. If you had 90 degrees, then you might want to think about a different scheme. But the main reason not to just use a single channel, because the other thing you might think of is, I'll just use a single channel, is the self-emission problem. And then you'll see brightness on your detector, and you'll think, ah, that must be magnetic field. But no, actually it's just the plasma glowing. And you have no way to tell the difference between those two. If you think that the self-emission is really small because you've got an incredibly bright light source and it just overwhelms the self-emission, then maybe you don't need this stuff. But in general, it seems to be very helpful. Other questions? Yes, Sean. AUDIENCE: I've heard of people using Faraday rotation to make astrophysical measurements. How does that work if you're not controlling the light source, if it's all unpolarized? JACK HARE: Yeah. 
So Faraday rotation in astrophysical measurements, I believe you need to find a source that has polarized light, something that is emitting polarized light like synchrotron light, from some specific type of magnetized object. And then you look at the rotation of that through the intervening medium. So you have to look for a source and then wait for that source to be between you and the object you're trying to study. So generally, people are studying the intergalactic medium or an interstellar medium, so it doesn't really matter where the source is. But it is the same effect. I mean, you still get a rotation that's to do with these two factors. I don't know in their case whether they are worried about phase ambiguity. Obviously, these can be quite long in the universe. So I don't really know how they do that. I do think that they use multicolor techniques. So they can probe lots of different wavelengths, and so they get a measure of alpha for different wavelengths. And that helps reduce some of this ambiguity, like we talked about getting rid of vibrations and neutrals in interferometry, if you have multiple measurements of different wavelengths, that's also interesting. Yeah. There was a polarimeter on C-Mod. I don't really know how it works. Like, I don't know whether they had problems of alpha being close to pi or something like that or exactly how it works, but yeah. OK. Any other questions? Any questions online? Yes, Nicholas. AUDIENCE: Why would we do-- why would we measure B with a diagnostic such as this instead of just [INAUDIBLE]? JACK HARE: Well, this measures the magnetic field inside the plasma. So it's very hard to stick a probe, a magnetic probe, inside many plasmas. Yeah, exactly. And a second thing I'd say is that magnetic probes are perturbative. You stick them in the plasma, you will change the plasma. Whereas this technique shouldn't change the plasma. So there's two reasons. The disadvantage is that this technique is line integrated. So we don't get the local magnetic field. And in fact, we talked about Abel inversion. If you have a cylindrical system, like a z pinch that has a poloidal magnetic field around it, there are very complicated formulas for doing Abel inversion of both the density and the magnetic field at the same time. So you can still make progress if you know something about the symmetry of your system. But if you have no idea about the symmetry of your system, this is very, very hard to actually use in practice because you don't know whether B dot dz is changing, whether B is changing, whether ne is changing, along your line of sight. You have many different places where you don't know what's going on. OK. Any more questions? Yes, go on. AUDIENCE: [INAUDIBLE] we use magnetic diagnostic [INAUDIBLE] general question-- are we trying to actually map the whole [INAUDIBLE] field everywhere in space? Or is this more so you get boundary condition somewhere so then with a [INAUDIBLE] equation or something like that, we can figure out the rest? Are we [INAUDIBLE], or are we [INAUDIBLE] trying to map the field everywhere? JACK HARE: So the question is, are we trying to measure the field everywhere, or are we just trying to measure it in some point for some boundary conditions, do some reconstruction? So on a tokamak experiment, for example, where, because we're in a low beta regime, our magnetic measurements are pretty good, we can measure at the boundary. and that's very close to what the measurement is inside the plasma. 
I don't think you would need this to help with the Grad-Shafranov type measurements. This might be useful for getting higher temporal resolution in some cases, for example. On the experiments I do, obviously you're never going to get the three-dimensional field, because it's line integrated and you only measure a component of the magnetic field. But you can certainly measure the magnetic field as a picture. So you can take pictures of the magnetic field. You expand your laser beam up and take an image. That is pretty useful. But of course, you can only interpret-- what you measure is the polarization. That's true. You know what that is. What you're trying to get out of the polarization angle alpha is the magnetic field. That requires some sort of model or some sort of intuition or some sort of assumption. So you're never going to get the full three-dimensional structure. And particularly, like I said, the fact that you only get the components of the vector is not great. There are things like the motional Stark effect, which I've never fully understood, which does give you some way of measuring the local magnetic field. But never quite been able to work out what's going on with that. Any other questions? Anything online? OK. We're going to go on to a new topic, refractometry-- or reflectometry. And then that'll be all. So the idea of reflectometry is that we launch some radiation from one side of the plasma, and we assume that the plasma has some sort of density structure like this. So maybe this is a coordinate like r. Could be something like a tokamak. And this is density. And we know that somewhere, as this radiation propagates in, it's going to hit a surface where the density is the critical density. And at the critical density, N-- and we'll call that point-- I guess I'm using x [INAUDIBLE] my note. We'll call that point xc. And so the density at xc is just critical density. And the refractive index at the critical density is 0. And so at that point, as we know, the wave is going to reflect. So effectively, we have a system where the refractive index of our wave is 1 in the vacuum region. And then as it comes in, the refractive index goes down, goes to 0. And the wave would be evanescent propagating forwards. And so to in order to conserve energy, the wave has to reflect back out like that. And if we have a measuring device, we can measure how long it takes for the wave to bounce back. Or we can measure the phase shift between the ingoing wave and outgoing wave, and maybe we can learn something about the position of this critical surface here. So this looks a little bit like radar, right? We send out a burst of radiation, we wait for it to bounce off our reflective object, the critical surface, and then we measure that radiation coming back. And from the phase lag, the time of flight, if you will, we get some information about where xc is. Now unfortunately, this is not radar. Why is this more complicated than radar? Yeah? AUDIENCE: With radar, you dealing with bouncing stuff off of solids, versus this is [INAUDIBLE] fluid. JACK HARE: OK. So the answer was, in radar you're just bouncing stuff off solid, but this is a plasma. But in this picture here I've drawn, it looks quite similar, doesn't it? We've got some radiation coming in, we bounce off a reflector, we come back out. Seems like it should work. I mean, the plasma is acting as a perfect reflector. So you're getting there, but that's not quite it. Yeah, Adam? 
AUDIENCE: The speed of the wave should be changing as it goes through the plasma due to the [INAUDIBLE]. JACK HARE: Absolutely. So the answer was, the speed of the wave changes as it goes through the plasma. And so in radar, the speed of the wave is constant throughout the entire system because we've just got air, it goes at the speed of light, and it bounces back. In this system, the speed of the wave changes. And so although it's going to reflect off this point, the details of what happens in the region before it reflects are very important. So delta phi depends on the density for x less than xc. So it depends on whatever the density is doing in this region here. Doesn't depend on what the density is doing afterwards, because we've reflected. But this region here, it is very important. OK. So let's look at that in more detail. Let's say delta phi is equal to the phase of the wave at B, this point here, minus the phase of the wave at A, where it enters the plasma. That, as before, is equal to omega upon c times the integral from A to B of the refractive index dx, like that. This is just what we've done before. So now, we're just dealing with the phase of the wave going in. We will double this when we want to come back out again because we'll assume the plasma is stationary on the time scales of this light being reflected. But there's a big problem with this equation I just wrote down here. Does anyone know? I've made a huge assumption in all of the work I've been doing that some of you picked up on before that isn't valid for some region of this plasma. Especially people working with RF waves might spot something here. So we've been using a WKB approximation. But that WKB approximation does not work when the refractive index gets close to 0. And one maybe intuitive, maybe not way to think about that is as the refractive index gets to 0, the wavelength of your wave gets very, very large. And the WKB approximation assumes that the wavelength of your wave is much smaller than any gradient scale length in your system. And so as N gets to 0, the wavelength gets long, even this gentle gradient in density starts to look too sharp, and the formulas that we've been using don't work. So we cannot simply double this formula here. It turns out if you do all of the math appropriately, apparently you also pick up a factor of minus pi upon 2 for this reflection. Don't ask me where that comes from. OK. So actually, your total phase, your phase going in, bouncing, and coming back out, is going to be equal to 2 times this-- times that. But we only pick up the pi over 2 once at the reflection here. OK. And that's because you can't really put your detector here. You're always going to have to do a double pass in this system. So it looks a little bit like a Michelson problem. OK. And this integral-- I probably should have just written this on a new line. OK, I'm going to get rid of that. So let's just say that the phase is actually going to be 2 times omega upon c. And the reason I wanted to rewrite this is I want to replace these limits, A and B, with a being where the wave started and xc being where the wave reflects, times the refractive index dx minus pi upon 2. And that pi upon 2 is due to reflection. And the point to make here is that if we measure delta phi, the phase lag between a wave going into the plasma and the wave coming back from the plasma, it isn't just equal to something times xc.
xc appears in the argument of an integral-- sorry, in the limits of an integral, and the argument of an integral depends on all of the plasma before we get to xc. So this is mathematically a little bit why this isn't radar. If it was radar, you'd just have delta phi equals some constant times xc, and that constant would be a distance [INAUDIBLE]. OK. So there are ways around it. And the way around it is you use multiple frequencies, use many omegas, like this. So for example, you have measurements of delta phi, and you measure it at lots of different wavelengths or, equivalently, different frequencies. Maybe it looks something like this. And this could be you're doing the measurements simultaneously with lots of different wavelengths, like a two-color, three-color, five-color measurement. Or we'll talk about some other techniques to do with chirping it later on, where you have a continuously ramping frequency and you listen for the reflected wave coming back, like that. But the reason you do this is that after a little bit of mathematics that you can find in Hutchinson's book, you can find out how this phase changes with the frequency you put in. And you get an equation that looks like d phi d omega is equal to 2 upon c times the integral between lambda and infinity of dxc d lambda p-- and I'll define some of these in a moment-- times lambda p d lambda p over lambda p squared minus lambda squared, all to the 1/2. This looks very odd. Don't worry. It's going to make slightly more sense soon. Lambda p is a slightly odd quantity. It is the wavelength that a wave with a frequency of omega p would have in free space. Obviously in a plasma, a wave at a frequency of omega p has infinite wavelength. But this is just the equivalent in free space, and so it's just 2 pi c upon omega p. And lambda is just 2 pi c upon omega. So we've gone from this equation, which implicitly, of course, has N squared is equal to 1 minus omega p squared upon omega squared-- all the way through this, I'm assuming the O-mode. All the other modes are very, very complicated to work with, but there are definitely a few different modes. And you can do it with other modes as well. So we've taken this equation. We've differentiated it with respect to omega, and then we have taken the bizarre step of, although we've got omega on the left-hand side, we've written everything here on the right-hand side in terms of lambdas. Anyone have any idea why we did that? It's a mathematical trick we're about to employ, and you've seen it in this class. AUDIENCE: An Abel inversion? JACK HARE: It's an Abel inversion. Well done. Has no right to be, but it is an Abel inversion. So for those of you who are not following, you remember we had the Abel transformation. So this is the one where you go from your radially distributed function, f of r, to your line-integrated function, F of y. And that was equal to 2 times the integral from y to a of f of r r dr over r squared minus y squared, all to the 1/2. And so I can make the identification of this term with this term, of lambda p with r and lambda with y, like that. This means I can use all of the mathematical tricks that I had from the Abel inversion to do the inverse Abel inversion. Remember, this is what we've measured. This is what we want. So we can now do the inverse Abel inversion. And again, a fair bit of mathematics follows. So if you don't follow all of this, it's because I'm not showing you every step.
But then we have the location of the critical surface as a function of frequency is equal to a minus c upon pi times the integral from 0 up to the frequency that you want to know xc for of d phi d omega prime, d omega prime, over omega squared minus omega prime squared to the 1/2. I want to point out that as we change frequency, we obviously change the location of our critical surface, right? If we use a higher frequency, our critical surface will be at a higher density. And so for a monotonic density profile, it will be further in. What this equation is saying is that we can know the location of the critical surface for every frequency if we know the phase shift for every frequency omega prime, where omega prime is less than omega. So if we have measured the phase shift for every smaller frequency coming up to the frequency we're interested in, then we can know the location of the critical surface for that frequency. So this is a very powerful technique because if we know the location of the critical surface, we also know the density at that location because it's just the critical density. And so we can build up-- as a function of x, we can build up the density here. This is the critical density at x1. And then we can build up the critical density at x2. And you see soon enough, we can start building up the entire density profile as a function of x, which again, in a tokamak, could be something like the minor radius. So this is a way of measuring the actual density profile. To see how much more powerful this is than interferometry, which only gives you line-integrated profiles-- this gives you the actual local density profile. So this is a very, very cool technique. Any questions on this so far? Yeah, [INAUDIBLE]. AUDIENCE: [INAUDIBLE] use Abel inversion [INAUDIBLE] do we need to make assumptions about the symmetry-- do we need to say something [INAUDIBLE]? JACK HARE: No, we made those assumptions in order to derive this formula. Now by analogy, we notice the formulas are the same. And so there is-- I don't know if there's something deeper under here that I don't understand, but there has not been any assumption about symmetry in our plasma. I will say we've made an assumption that our plasma has a monotonically increasing density profile. Or rather, should we say that if your profile is like this, you can only measure up to here. You can't measure the other side. And that's a pretty good approximation for a tokamak. But if you have some funky system which has a dip in the center, you won't be able to measure that. Hutchinson says you can't look over the hill, which I think is a nice intuitive way of putting it. You can only measure up to a maximum. So yeah. Just to say that this is an analytical solution for the O-mode. If you happen to decide to do this with the X-mode because you want to use that polarization, you can, but you have to do all of this numerically instead. And it doesn't come out as nicely as this. But it's completely possible. You just have to know what your dispersion relationship is as encoded by the refractive index N. Any other questions on this before we move on to a few practicalities and then finish up, actually, quite soon? Any questions online? Yeah? AUDIENCE: You expressly drew this almost as it looks like a microwave horn. I guess this is a question I can answer myself in the sense of just computing the range of critical density of plasmas of interest to know what range of frequencies to use. But on machines like tokamaks, is this normally done in the microwave range? JACK HARE: Yeah, yeah.
So I did draw it as a microwave horn, and that is to suggest microwaves because this is a technique that is done on tokamaks. Because of some of the practicalities I'm going to talk about in a moment, for a system which is very short lived, it's very difficult to make a measurement at lots of different frequencies. And so for the plasmas I work with, this is completely impractical. You would also, for the plasmas I've worked with, need to generate radiation that goes up to the critical density. And that would be sort of like soft X-ray lasers, which are also very, very hard to get hold of. Like, with visible radiation, we're in that nice regime where n e over nc is very, very small in the sort of plasmas I work with. So that's great. But if I wanted to actually work near the critical density, I'd have to get something with much shorter wavelengths. And that technology doesn't exist. So although Hutchinson's book is very good at emphasizing the principles of diagnostics rather than the practicalities, a lot of the time the practicalities of what radiation sources we have and what detectors we have really dictate what techniques we can use in different plasmas. And then when new technology becomes available, that enables us to do new things that we weren't able to do before. So OK, so let's talk about a practical system for making this measurement, which is the phase lag at a range of different frequencies. And can I just point out something that probably some of you have spotted already? I drew this very deliberately as a nice, discrete measurement, that is, a discrete set of omegas. And so you're going to have numerical noise when you differentiate that. And once again, that actually matters because I've written these little small d's here, but of course, in reality it's always going to be capital deltas. And so you're going to have to think about, can I fit this with some basis functions nice and smooth? Can I do all sorts of techniques that I normally do to avoid amplifying up this noise? And of course, I'd be mostly amplifying noise very close to the frequency that I actually want to measure. So noise here is going to be very important for measuring the location of the critical surface at this frequency. This is like the Abel inversion, where noise near the center of the image was very, very important. So there are some nice analogies you can draw here. OK. So a surprising number of plasma physics techniques actually started off being ionospheric measuring techniques. So if we've got the Earth down here, and we've got some sort of radar dish, we have, floating above it, the ionosphere, which is a low-density plasma, like this. And you know, Sputnik is up here having a good time flying around. And so when people invented radar technology during World War II, they suddenly had this ability to produce intense bursts of quite specific frequency radiation that was being used for radar. And they, therefore, were able to use that technology for civilian purposes and start doing interesting measurements. It turns out that radar is well matched to the sorts of densities that you get inside the ionosphere. And so if you want to do these reflectometry measurements and try and build up a picture of the density in the ionosphere, you can start sending out different pulses of radiation. And these different pulses will reflect off, and they need to be detected, I mean, in principle, by another detector, maybe something very close to the original one here.
And you can build up a map of what the density looks like without even having to fly a satellite up there. The ionosphere is relatively slowly evolving, right? It doesn't change very much. So you can send a pulse at one frequency, wait for it to bounce back, and then send the next pulse. And because the ionosphere is evolving slowly, the properties of the ionosphere don't change during the seconds that you're doing this technique, where you're waiting for the reflection and analyzing the data from it. So this works very well in the ionosphere. For a tokamak, this doesn't work very well. If you have to wait a significant amount of time, the plasma is going to have changed by the time you get around. So what they do in a tokamak is a slightly more complicated system. So let's have our tokamak, nice D-shaped cross-section, diverted plasma like this. The density is constant on a flux surface, so these flux surfaces I'm drawing are also surfaces of constant density here. And again, I'll have some sort of hole and I'll be launching radiation into the plasma. With this, I'll also have some hole collecting radiation coming from the plasma. But what I'll do is I'll have a source of radiation, and I will also split some of that radiation that's going into the plasma and send it to a heterodyne mixer, where it adds a small amount of frequency. Remember, this could be our rotating wheel or our moving mirror or our acousto-optic modulator or something like that. And then I mix that back in. I effectively do temporally heterodyned interferometry on the returning wave, where I have omega and omega plus delta omega, like this. But at the same time, I sweep this initial frequency here. So for example, I have on this plot here, I think-- I can't remember which tokamak I got it from-- there's a system which sweeps from 40 to 70 gigahertz, and it does that in 15 microseconds. But that's kind of like your sweep period, and then it drops back down and does it again. So if you want to capture dynamics, you can capture dynamics on timescales which are greater than 15 microseconds. If there's something small and fluctuating on a shorter timescale, you won't be able to capture it with this system. And here, we're not measuring the delay directly as we did with this system, where you're literally counting seconds between sending it out and coming back. Now instead, you're measuring the phase shift with this heterodyne system, as we did for interferometry. So just a couple of final notes on this technique and some of its limitations. So one big hope of this technique will be to measure fluctuations. So fluctuations are very important in magnetic confinement devices because they are associated with turbulence, and turbulence is associated with anomalous transport and, therefore, bad energy confinement and, therefore, uneconomical reactors. So we'd like to understand what is going on with fluctuations inside here. And so Hutchinson says, a lot of people were trying to use this technique for a very long time to measure fluctuations, but it is very, very challenging to do this. And one of the challenging things is that it's sensitive near the critical density, so near xc, the location of the critical density. And that's kind of what I was talking about here, where this formula has this singularity in it. And so it's going to be very sensitive to contributions very close to the critical density. But there are also still contributions from the plasma at x less than xc.
So if it was just sensitive to fluctuations in xc, this would be great. You could think about it like a little vibrating mirror, and we could measure the position of that mirror really sensitively. But unfortunately, it's not. There are also these contributions from elsewhere. And Hutchinson says, rather dismissively, that most people forget about this second problem when analyzing the data and just assume that it's just measuring fluctuations in the location of the critical surface. If you include the fact that there are these other contributions from elsewhere, you can imagine the fluctuations are, maybe, out of phase or doing something else, and it's going to make your measurement really hard to interpret. And also, a lot of the picture we've been using here has been one-dimensional. But reality is very three-dimensional, and so there might be waves which are scattering off at slightly odd angles. There might be reflections off the walls, things like that. And that makes this measurement very, very tricky in general. So as far as I know, people mostly use it just for measuring the sort of slowly evolving density profile, and not these small-scale fluctuations. And the final thing I want to do is just contrast reflectometry with interferometry. So reflectometry, you measure near the cut-off. So you're measuring near this critical density here. And you can't measure behind the location of the critical surface. So this was the "looking over the hill" problem here. So you're choosing your radiation so that you are near the critical density. That's completely different from interferometry, where you usually work in a regime where the densities you're measuring are much less than the critical density because that means you don't have to worry about refraction and all sorts of other effects. And so you choose your probing wavelength so that it's sensitive to these densities much lower than the critical density, and we make very small measurements there. And so this means that we avoid the cut-off, but of course, our measurement is line integrated in interferometry in a way that, although it is complicated in reflectometry, you still can actually get the profile along the probing line of sight, which you don't do for interferometry without some symmetry arguments, which could lead to something like Abel inversion. OK. Any questions? Yes? AUDIENCE: How exactly does the [INAUDIBLE] receiving [INAUDIBLE]. Why would it not just bounce back into the [INAUDIBLE]? JACK HARE: So apparently, you do need a two-horn system. People tried to do it with a one-horn system, but the reflections are very complicated to deal with. And so it's been found to be better to have a launcher and a receiver. There's a few more details on that in Hutchinson's book. AUDIENCE: OK, thank you. JACK HARE: I will say-- so when I did this class two years ago, there were several people in the group who worked on reflectometry. And they told me that there may be some more up-to-date stuff than what Hutchinson had in his book. But they weren't able to point me to any review papers or even any readable papers about this. And this is a huge problem with diagnostics. So if there's not a paper, I can't teach it. So if anyone is here and they work on reflectometry, and they're like, this is ancient, we haven't done this in 20 years, shout out. I'd like to learn. It's just like, I've been unable to find any resources on how this is done better.
I know, for example, on ASDEX, there is a reflectometry system that is trying to measure density fluctuations and correlate them with the temperature fluctuations measured using correlation ECE, which we'll talk about later. So that sounds like an incredible system. And it sounds like they are measuring density fluctuations. But I have no idea how, because, again, Hutchinson, he says this is basically impossible. So there must be a way to do it, but don't know what it is. Was it a technological advancement? Was it a conceptual advancement? I'm not sure. So yeah.
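(As a numerical footnote to the reflectometry discussion above-- my own sketch with a made-up linear density ramp, not any particular machine-- here is the whole chain in one place: compute the swept O-mode phase from the WKB integral with the minus pi over 2 correction, differentiate it, and feed d phi/d omega into the Abel-type inversion to recover the cutoff position, measuring x from the plasma edge so the offset a is zero. For a linear ramp the recovered positions should match the exact cutoffs almost perfectly, which makes it a handy self-test.)

```python
import numpy as np

c = 2.998e8

# Assumed profile: omega_p^2 rises linearly over L = 0.1 m up to (2*pi*100 GHz)^2.
L = 0.1
wp_max = 2 * np.pi * 100e9
wp2 = lambda x: wp_max**2 * x / L                 # omega_p^2 at depth x from the edge
x_cut_exact = lambda w: L * (w / wp_max) ** 2     # exact cutoff depth for this ramp

def phase(w, npts=2000):
    """phi = 2*(w/c) * integral_0^xc sqrt(1 - wp^2/w^2) dx  -  pi/2   (O-mode, WKB)."""
    x = np.linspace(0.0, x_cut_exact(w), npts)
    N = np.sqrt(np.clip(1.0 - wp2(x) / w**2, 0.0, None))
    return 2 * (w / c) * np.sum(0.5 * (N[1:] + N[:-1]) * np.diff(x)) - np.pi / 2

# "Measure" the phase over a frequency sweep and differentiate it numerically.
w_grid = 2 * np.pi * np.linspace(2e9, 90e9, 600)
dphi_dw = np.gradient(np.array([phase(w) for w in w_grid]), w_grid)

def x_cut_recovered(w, npts=400):
    """x_c(w) = (c/pi) * integral_0^w (dphi/dw') dw' / sqrt(w^2 - w'^2).
    Substituting w' = w*sin(t) removes the integrable singularity at w' = w."""
    t = np.linspace(0.0, np.pi / 2, npts)
    g = np.interp(w * np.sin(t), w_grid, dphi_dw, left=0.0)
    return (c / np.pi) * np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(t))

for f in (20e9, 50e9, 80e9):
    w = 2 * np.pi * f
    print(f"{f/1e9:.0f} GHz: recovered {100*x_cut_recovered(w):.2f} cm,"
          f" exact {100*x_cut_exact(w):.2f} cm")
```

Replacing the exact phases with noisy ones in this sketch also shows the practical point made above: the numerical derivative amplifies noise, especially near the highest frequency you are trying to invert.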
[Lecture 13: CECE and Bolometry]
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: Well, welcome we few, we happy few. Let's do a little recap on electron cyclotron emission, and then we will go on to a few other things. So we had a look at the physics of electron cyclotron emission. We didn't actually derive the emissivity of it, but we gave ourselves a hand-wavy reason why there may be multiple peaks here. So if we have our emissivity as a function of frequency, we expect to have multiple different peaks, even from a single particle. And these peaks are going to be occurring at frequencies-- I'm going to switch into angular frequency units, omega m, which were equal to this cyclotron frequency, which was just our normal gyro frequency modified by a relativistic parameter here, and times by this relativistic rest mass term, which always makes the frequency smaller, divided by a relativistic Doppler term, which might make the frequency smaller, might make it larger, and, most importantly, all this multiplied by some integer m for some natural number N. And so, we have these peaks here. And we can say this is m equals 1, m equals 2, m equals 3. And they're evenly spaced. And this is just for a single particle. And then, when we put this all together, first of all, we looked at what a distribution of particles would do, and we agreed that these peaks would, in general, be broadened in some asymmetric fashion. But the exact shape will depend on exactly how big these two terms are with respect to each other. And we said the neat thing about this is that these frequencies depend only on the magnetic field. So this frequency here, eB over m e-- admittedly, there's a gamma factor inside there. But if we neglect that, the frequency depends only on the magnetic field. And so, if you see some emission at some certain frequency, then you know that it's been emitted by a region of plasma which has this magnetic field. And that was particularly useful when we considered a tokamak because, if we have something like a tokamak-- --or a stellarator-- hello? STUDENT: Oh, sorry, my bad. JACK HARE: No worries-- which has some magnetic field that goes 1 upon R, then different regions of our toroidal device are going to be emitting electron cyclotron emission with different frequencies. And those frequencies are different because the magnetic field is dropping off nice and monotonically throughout our device here. So we said that, for example, we might have three different regions, which we could label positions R1, R2, and R3. And each of these then have different magnetic fields, B1, B2, and B3. And each of these magnetic fields will then produce a spectra of lines at these different harmonics here. Now, the neat thing was, if we consider just the lowest harmonics, so the m equals 1 and often the m equals 2 harmonics, we said that these tend to be in an optically thick regime. So that is the region of plasma that's emitting is emitting as a black body. And that means the black body spectrum I of nu, is equal to T nu squared upon c squared. And what we would expect to get out if we put our little microwave horn and collected the different frequencies coming out from this plasma, is we'd expect some sort of spectrum where different parts of the spectrum correspond to different regions of this plasma. So, for example, we might have a region down here corresponding to lowish frequencies at low magnetic fields. So I've got this the wrong way around. R3 is the lowest magnetic field. 
We might have another region where the frequencies correspond to the center of the plasma and a higher frequency part of our spectrum which corresponds to the high field part of the plasma. And at each of these points, if we measure the intensity, we then know straight away what the temperature is corresponding to the black body spectrum for that frequency. And, therefore, we know what the temperature is inside our plasma. So we can go back and we can say, OK, using these measurements, maybe we have a temperature profile that looks like this where, again, we can identify three specific points inside here that we've been using as part of this example. But, in general, you can get it all the way through the plasma. And so, what we're doing here is we're saying that we've got some mapping from radius, via the magnetic field, to frequency of maybe the first harmonic here. We have some intensity as a function of frequency, which is proportional just to temperature because of our black body case. And then we can back this temperature out. First it's temperature as a function of frequency. Then we can say, well, that frequency corresponds to a certain magnetic field. And then we can say that magnetic field corresponds to a certain spatial coordinate. And so, by looking at the spectrum for lowish frequencies, the first or maybe the second harmonics of this cyclotron emission, this is a technique which will let us work out what the temperature is as a function of space. So this is why this is a powerful and popular diagnostic in MCF. And we also discussed a little bit about accessibility. If you remember, this is when we got into talking about the O mode and the X mode. And I'm not going to recap that now. And Sean posted an interesting way of looking at this using something called a CMA diagram on Piazza if you want an alternative view on how this works. So that was just a quick recap. Any questions on ECE before we go into correlation ECE? OK. Ugh, a bad chalk. This is correlation ECE, often called CECE. So what is the problem we're trying to solve with correlation ECE? Why isn't this technique sufficient as it is? Well, what we want to do, we want to measure very small temperature fluctuations. We want to measure temperature fluctuations within the plasma that are maybe on the order of 1% of the baseline temperature, delta T upon T of about 0.01. So if you have a small temperature fluctuation at some part in the plasma, that means you'll have a small intensity fluctuation. If the mean temperature is just Te, and we have some fluctuation around 1%, this intensity will also fluctuate by about 1%. And that 1% is actually extremely hard to measure. And this is because the noise is just too high on these systems. There are lots of different contributions to the noise. But, in general, they all add up to make it very hard to measure these very small fluctuations. So ECE, as it stands, is very good for measuring the broad average temperature profile with some error bars. But if you want to measure small fluctuations within those error bars, you don't stand a chance with this system. And you want to measure these fluctuations because these fluctuations are related to turbulence. And, as we've discussed before, turbulence is one of these key properties that we want to understand in plasmas so that we can build an economically viable fusion reactor. So it'd be very nice to be able to know. Now, the fact that the noise is too high does seem like a big limitation.
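(Here is a short sketch of the bookkeeping for the standard ECE measurement just described-- with made-up, roughly ASDEX-sized numbers that are my own assumptions, not values from the lecture. Each channel frequency maps to a major radius through B = B0 R0 / R and the cyclotron resonance, and in the optically thick limit the measured intensity maps straight to a temperature through I = nu squared T over c squared.)

```python
import numpy as np

e, m_e, c = 1.602e-19, 9.109e-31, 2.998e8
B0, R0 = 2.5, 1.65     # assumed on-axis field [T] and major radius [m]
m = 2                   # harmonic number being observed (assumed second harmonic)

def radius_of_channel(f):
    """Major radius [m] seen by a channel at f [Hz], from f = m * e * B(R) / (2 pi m_e)."""
    B_res = 2 * np.pi * f * m_e / (m * e)   # field at which this frequency is resonant
    return B0 * R0 / B_res                  # invert B(R) = B0 * R0 / R

def T_from_intensity(I, f):
    """Blackbody (Rayleigh-Jeans) limit used above: I = f^2 * T / c^2, T in energy units."""
    return I * c**2 / f**2 / e / 1e3        # convert J to keV

for f in (110e9, 125e9, 140e9):              # hypothetical channel centre frequencies
    print(f"{f/1e9:.0f} GHz -> R = {radius_of_channel(f):.2f} m")

I_meas = 4.3e-11                             # hypothetical spectral intensity [W m^-2 Hz^-1 sr^-1]
print(T_from_intensity(I_meas, 110e9), "keV")         # about 2 keV
print(T_from_intensity(1.01 * I_meas, 110e9), "keV")  # a 1% hotter plasma is only 1% brighter
```

The last two lines are the problem: the 1% fluctuation of interest shows up as only a 1% change on a single channel, which is exactly what the noise swamps.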
But there are some clever tricks that we play where we use correlations. And I'll talk now about what exactly these correlations are and how they provide us with information that allows us to get a signal out despite the overwhelming [INAUDIBLE]. So our setup here is borrowed from ASDEX Upgrade. And, in fact, I referred to Alex Creely's PhD thesis, which you can find online if you want more information, from 2019, which is a pretty good summary of this. A top tip, if you're ever looking for an accessible description of some of the physics or a diagnostic, you should go find someone's PhD thesis. Usually much better than any of the papers that have been written about it, because there's not really any space restraint. And, of course, the person writing the thesis is desperate to tell everyone about all the cool stuff they just spent the last six years of their life working on. So it can get very detailed. So in ASDEX Upgrade, we don't really have a circular cross-section plasma, but I'm just drawing it like that-- it'll be a nice D-shaped plasma-- got our plasma inside here. And our system, at first glance, looks an awful lot like the system that we sketched out up here. We're going to have some sort of special lens. It turns out, you can make lenses for microwaves. I didn't know this-- but some sort of plastic HDPE lens that then couples into a waveguide here. And that lens is going to collect light from a region like this. So I exaggerated it slightly, but there'll be some region over which we can collect light from relatively small volumes inside here. So we can have a series of volumes inside here. And, once again, our magnetic field is going as 1 upon R, all the standard stuff that we had previously. Now, specifically for this ASDEX Upgrade system, but generically for C-Mod and other devices where you might have this, the first thing that's done is a band pass filter. So this is applying some filter to it that in frequency space looks a little bit like this, some sort of top hat. It's centered around the frequency where most of the electron cyclotron emission is. It's around about 110 gigahertz in ASDEX Upgrade. And this has got a bandwidth of 10 gigahertz. So we've cut out an awful lot of the radiation in bands that we're not interested in. We are no longer going to study those, any bremsstrahlung, any higher order things. This is going to capture all of the information in, say, the first harmonic within some relatively small window. And that's the important thing about correlation ECE. We're not trying to measure the temperature profile throughout the entire plasma. That's very challenging. We want to measure it inside some very small region. And the reason is that our turbulent eddies, our little fluctuations inside here, these are very small as well. So the size of our turbulent eddies, r turb, is on the order of 100 microns. That's the width of a human hair. So we are trying to measure turbulent eddies inside a tokamak on this scale of a human hair here. And so, that means, we're not trying to cover this entire frequency range. We're zoomed in on only quite a small frequency range. Once we've got our bandpass filtered signal here, 110 gigahertz is still too fast for us to digitize. This would be an extremely expensive digitizer. And so, what we actually do is we downmix it with a signal at 100 gigahertz. And so, then we get out our beat signal, which we can digitize, which is at 10 gigahertz here. So 100 gigahertz mixing was this.
Our beat frequency is about 10 gigahertz plus or minus 5 gigahertz. So this is-- actually, I will write this as 0 to 10 gigahertz. This is the sort of signal that we can actually digitize now. And what we do is we then amplify it. And when we split it off, as we did with our standard electron cyclotron diagnostic, and we split it through a series of different bandpass filters-- so there might be another low, a lower filter, medium filter, the high filter. And then we put these through to a set of detectors. And so, each of these channels, now that we've split off, is looking at frequencies in a very narrow range here. These are about 100 megahertz in width now. And the spacing between these different bins is 125 megahertz. It's important that these bins do not overlap in frequency space, which means that these volumes do not overlap in real space. They are each sampling a separate discrete part of the plasma inside here. So let me just write that down. So each channel samples a non-overlapping region in frequency. And, therefore, in real space, because, again, we've got this very strong link between our magnetic field and our spatial position and the frequency at which we're sampling. So each of these represents a measure of the power that's being emitted by a very small region. And these regions are clearly distinguished. And you can tell that these regions are very small because we're dealing with 100 megahertz bandwidth. And we originally started with 110 gigahertz here. And so, you see we've gone down by a factor of 1,000. If 110 gigahertz was roughly enough to cover this entire region here, then we're now dealing with regions which are 1,000 times smaller than the radius of our tokamak. And that's how we're able to get down to this 100 microns or so. And we digitize these signals. And, just for reference, the reason why we downsampled these is that digitized is now much less expensive. We're doing it something like 4 mega samples a second. And that is quite an affordable digitizer compared to the ones you would need to digitize this signal up at hundreds of gigasamples a second. So I have not yet told you anything about how correlation ECE works. I'm just giving you an outline of exactly how these measurements are made with an example from ASDEX Upgrade. But there are other similar devices on other tokamaks. Any questions on this before we get to the meat of what we're trying to achieve here? Yeah. This wave is too damn fast. We can't digitize it. We mix it with 100 gigahertz. We did this before when we were heterodyning our signals for interferometry. We get out two frequencies. We get out a frequency at omega 1 minus omega 2, and we get out a frequency at omega 1 plus omega 2. Omega 1 minus omega 2, that signal is about 0 to 10 gigahertz. We can digitize that. This one, whatever, 200 gigahertz, we still can't. On the integration time of our detector, the sampling time here, that will have many, many oscillations. And so, it will average out to-- well, because it's power, it will average out to 1 or something like that. So it's some DC offset that we can subtract off after. So all this is doing here is mixing the signal down so it gets to a regime that we can effectively digitize. Yeah, another question. STUDENT: [INAUDIBLE] JACK HARE: So if we were doing geometric optics, which we're not, then you would have a lens like this. That lens could collimate your beam, as in we'd have a load of rays coming out into our horn. 
And if it collimated that beam, that would mean there'd be a focus point at some distance f away-- if this lens has a focal length f, then it will diverge afterwards. So, because of reciprocity, that means that, as opposed to launching rays this way and seeing where they focus, we have rays coming from this way. Now, we don't ever actually get to infinitesimal point like we predict in geometric optics or Gaussian optics tells us that there's a beam waist. And our beam looks like this instead. And that was what was trying to draw up here. STUDENT: [INAUDIBLE] JACK HARE: Yes. STUDENT: [INAUDIBLE] very long. JACK HARE: Yes, exactly. Exactly, and you can choose properties of your lens that will give you a narrower waist or not. It depends on your size of your lens and the wavelength of light and all sorts of other [? goods ?] like that. So this is not exactly a picture-perfect sketch of it. But the idea here is that there was some region over which, in the transverse direction perpendicular to your collection volume, you have a very narrow scale, which means that you can actually collect from a very small region on the order of 100 microns. Any other questions? Anything online? OK. STUDENT: Professor? JACK HARE: Yes. STUDENT: I have a question about how you split the signal to each of the harmonics. JACK HARE: You mean this splitter here physically or-- STUDENT: Yeah, what's the physical method of splitting it? JACK HARE: I don't know how microwave splitters work. But there is something that you can buy from DigiKey or something like that that will do it for you. Anyone know how micro splitters work? Looking at you? STUDENT: RF. JACK HARE: The answer I got was RF, which I don't think is much more satisfying than my answer. So, yeah, there are circuits which will split microwaves. STUDENT: OK. JACK HARE: Thing called a rat race, but I don't think that's for gigahertz. Have you ever come across a rat race? If you're doing lower things, it has ports which are spaced by wavelengths of your microwave and then there's constructive and destructive interference in different places? Yeah. STUDENT: [INAUDIBLE] JACK HARE: I don't think that's how-- I think they are literally splitting it and then having separate band passes. But, you're right, you could come up with some clever analog way of doing the splitting. What they were actually doing with this system is all of these were very tunable. And so, if they wanted to look at turbulence in the edge or turbulence near the center, they could actually tune all their bandpass filters to do that. But this is getting way beyond what I was hoping to talk about on this. So let's get on to correlation ECE see what that does. So now you've got these n different channels up here. And each of those channels is measuring some signal. And we're going to write this little tilde here to remind us that this signal is some oscillating quantity or time varying quantity. And that signal has two components to it. It's got some fluctuation, which is due to temperature fluctuation. This is the thing that we want to measure. And then, it's also got some noise. This is the thing that we don't want to measure here. And so, we are making the assumption here that our noise is much, much bigger than our temperature fluctuation signal. If it's not the case, then you don't have a problem and you don't need to do this technique. But it is the case in most of the applications that we're going to see it's used for. 
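Stepping back to the down-mixing step mentioned a moment ago: multiplying the fast ECE signal by a local oscillator produces sum and difference frequencies, and a low-pass filter keeps only the slow beat that an affordable digitizer can handle. Here is a toy numerical check with all frequencies scaled down purely so it runs quickly; think of the units as arbitrary, standing in for 110 GHz, 100 GHz, and the 10 GHz beat.

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Scaled-down heterodyne demo: "ECE" signal at 110 (arb. units), local
# oscillator at 100, so the beat we want sits at 10.
fs = 4000.0                       # sampling rate, arbitrary units
t = np.arange(0, 2.0, 1.0 / fs)

f_ece, f_lo = 110.0, 100.0
sig = np.cos(2 * np.pi * f_ece * t)     # incoming (bandpass-filtered) ECE
lo  = np.cos(2 * np.pi * f_lo  * t)     # local oscillator

mixed = sig * lo   # contains components at f_ece - f_lo and f_ece + f_lo

# Low-pass filter: keep the difference frequency, reject the sum term
b, a = butter(4, 30.0 / (fs / 2))       # cutoff well above the beat
beat = filtfilt(b, a, mixed)

# Check the dominant frequency of the filtered signal via an FFT
spec = np.abs(np.fft.rfft(beat))
freqs = np.fft.rfftfreq(len(beat), 1.0 / fs)
print("dominant beat frequency:", freqs[np.argmax(spec)])   # ~10
```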
And so, the signal, as a function of time, is going to-- what we want it to do is look like, again, our temperature signal, which might be some nice and smooth function like this corresponding to some turbulent eddy. But what it actually looks like is some messy noisy thing which, even with some very aggressive filtering, is still going to be extremely noisy instead. So what we want to do is find some way of extracting this temperature from the noise here. And what we do is we pick two adjacent channels. So we'll call them S1 and S2. These channels are adjacent in frequency space, which means that they are measuring from adjacent parcels of plasma in real space or inside the tokamak. So, again, if we have our tokamak cross-section here, we have the plasma, we zoom in right at the edge of the plasma here. No, that's not going to work. We would have two volumes very close together but not quite touching. And these volumes we could call 1 and 2, like that. And what we do is we say, we arrange-- we have some simulation or some prior, which tells us that the size of a turbulent eddy, the size of one of these swirls, is larger than the separation between these two points. So we say, our term is greater than delta R. Which means that, if there's a temperature fluctuation associated with this little vortex, it is the same temperature fluctuation in S1 as in S2. And, again-- oh, I don't need this to be much greater than. I just need it to be greater than or on the order of. And, again, these are really, really closely spaced, 100 microns or so apart. This is really, really tiny. But, the nice thing is now that these two signals are carrying the same components that we're trying to measure. And this is where the correlation comes in. We are going to correlate these two signals. And we're going to find that the noise is uncorrelated at random, but these two signals will correlate together, and we'll be able to measure it. So we have-- just to be clear, we have chosen the frequencies omega 1 and omega 2 such that delta R 1 2 is less than the length scale associated with turbulence. And we can choose these frequencies because we have control over all of these bandpass filters and funky things like that. So you might need to do this experiment a few times to get it right. But once you get it right, your signal will leap out. Because what we find is that the temperature-- well, we find that the temperature is correlated. So if do S1, S2, and we do a correlation operation on these. And there are a few different ways to do correlations, and I'm not going to go into them, but I will give you a citation at the moment. We get out a term here that looks like the temperature term squared, the thing we want, the cross-correlation between the two noise terms, plus two cross-terms between the noise and the temperature signal from channel 1 and the noise and the temperature signal from channel 2. And because this noise is just random, when we do some sort of averaging, this could be in time or best to think of it is a short time integral, then these terms are all going to drop out. And we'll just be left with a correlation signal that is proportional to the temperature squared here. And, again, I've been a bit hazy about what these angle brackets are doing here. You could think of it as equal to some sort of short time integration of S1 and S2 from T to T plus delta T, like that. That would be a time correlation. But there are actually lots of different ways of doing this. 
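Here is a toy numerical version of the cross-correlation argument above: two channels share the same small temperature fluctuation but carry independent noise that is much larger, and averaging the product of the two channels pulls out something close to the mean square fluctuation while the noise cross-terms average away. The amplitudes and sample count are invented; note that the averaging time needed grows rapidly as the noise-to-signal ratio gets worse.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 4_000_000                                   # samples in the average

# Shared temperature fluctuation (the same eddy seen by both volumes)
dT = 0.05 * np.sin(2 * np.pi * np.arange(N) / 5000.0)

# Independent noise in each channel, much larger than the fluctuation
noise1 = rng.normal(0.0, 0.5, N)
noise2 = rng.normal(0.0, 0.5, N)

S1 = dT + noise1
S2 = dT + noise2

print("variance of one channel :", np.var(S1))         # dominated by noise
print("<S1*S2> (cross term)    :", np.mean(S1 * S2))    # close to <dT^2>
print("true <dT^2>             :", np.mean(dT**2))
```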
And the citation here for you is a review paper by Watts, 2007. I don't know if this is an example of nominative determinism. Someone called Watts goes around making power measurements. [LAUGHTER] There we go. Yeah, and this technique is incredibly powerful because it's enabled people to measure, again, delta Te upon Te on the order of 1%. And they've done this at 13-- on ASDEX Upgrade-- 13 radial locations. So 13 positions they can measure these temperature fluctuations. And they've done this with a time step of 100 kilohertz. So 100,000 times a second they've been able to measure these temperature fluctuations. So on very small scales, very fast time scales, we can now measure temperature fluctuations in a tokamak. And this is something that has revolutionized our understanding of turbulence and its importance in tokamaks, because now we can finally characterize it. That was a lot. Any questions? We're going to move on from ECE [INAUDIBLE]. Yes. STUDENT: [INAUDIBLE] JACK HARE: I think the idea is that this is so noisy that if you try and just-- that's effectively a bandpass filter at that point. Yeah, I don't think you can get this out of an autocorrelation. So this is more sensitive than that. Yeah. STUDENT: [INAUDIBLE] JACK HARE: Yeah, so the idea is that we have positioned these two volumes, which are producing frequencies omega 1 and omega 2, we've chosen omega 1 and omega 2 so they're very close together. And we think that that distance is smaller than the size of our turbulent eddy. And within that turbulent eddy, the temperature should be the same going up and down. And if you do this correlation and you get out nothing, that probably means your volumes are too far apart because there is-- this T correlation would just go T1 T2. And there's no good reason to believe those temperature fluctuations are correlated, because they'll be part of a different turbulent eddy. And so, that would just go to 0 as well. Yeah. STUDENT: [INAUDIBLE] 13 times [INAUDIBLE]. JACK HARE: Yeah, so the question was, what is the actual spatial range of this? Do you have pairs and then a gap and pairs? I believe that, for this, if I remember correctly, they literally just had 14 channels side by side, and that gave them 13 points of correlation between each adjacent pair. And so, they were interested in measuring transport in the edge. But I believe, in a separate shot, you can then tune all of this slightly differently and move slightly further in depending on where you think transport is important. But if you're looking at transport in the pedestal region, you have some idea where that is. And so you focus your measurements in the pedestal. But, yeah, you're right, this does not get you a huge spatial range. It gets you about a millimeter. But that might be enough for your measurements. STUDENT: [INAUDIBLE] would you have [INAUDIBLE] in this localized [INAUDIBLE]? JACK HARE: Yeah, so the idea here is that your eddies are sort of like-- there are eddies like this, or maybe, at some other point, there is an eddy like this. And so, it will depend a little bit on how you do your time integration step here. So this is part of the art of it as far as I understand, is that you need to pick pairs because they're the only ones that could possibly be close enough to correlate. If you try and pick this one and this one, it will go to 0. But, as well, the eddies are going to be moving in time. 
So you may end up with a situation where, at some point, your eddy has moved across, and these two are no longer correlated because the next two are correlated. So it-- yeah. I think we're reaching the limits of my knowledge. Yeah. [LAUGHS] STUDENT: [INAUDIBLE] correlated, [INAUDIBLE] second and the third one are [INAUDIBLE]. JACK HARE: I mean, there's no reason to believe that each of the fluctuations is the same size. And there's no reason to believe they're stationary in space with time. So this is going to be a time-evolving system here. Yeah, exactly. Cool. Any other questions? Anything online? Now we're going to go to bremsstrahlung. Some people these days just call this braking radiation, which is just the translation from the German, and that's a perfectly reasonable term to use. But if you haven't come across bremsstrahlung as a word before, that's what it means. And so, what we're dealing with is heavy ions, and all ions are heavy compared to electrons. And we've got some electron whizzing by. As the electron whizzes by, it feels a force of attraction to the ion. And so, it is briefly an accelerating charge. And so, it is going to emit photons as it's deflected from its trajectory. Now, this electron isn't then just going to sail off and never see an ion again. In fact, it's going to see one very, very soon. And so, effectively, our entire plasma is full of electrons which are gently being deflected and braking and emitting these photons. So the main thing we can say in a classical treatment is that the bremsstrahlung is going to be isotropic. It's going to be the same in every direction. That's actually quite different from electron cyclotron emission, even though we didn't really look at the anisotropy of that in any detail. There are lots of different ways to deal with bremsstrahlung to do the actual calculation. Remember, we said we wanted to have v and v dot, that was our first thing, by solving the equation of motion. And then we want to integrate over the distribution function f of v d cubed v. That would give us the emissivity that is the input that we're looking for here. So there's lots of different ways of doing this, and Hutchinson lists a few of them. There's a purely classical picture. That classical picture looks an awful lot like what I've drawn here with the ions and electrons as point particles. There's a semi-classical approach where you start bringing in some quantum physics and treat, I think, the electron as a wave. And then there's a full quantum approach. What's remarkable about all of these approaches is they all give the same answer with just a very slightly different coefficient. So we get a small change in coefficient. The scalings are the same. So, in some sense, although it's important to get the exact coefficient, it doesn't matter exactly which one of these techniques you use. And that's why I'm not going to go through any of them in class. They're also, I think, quite complicated derivations. So we'll talk about it in a moment, but this j has some scaling, which is n squared T to the minus 1/2 e to the minus h nu upon T. And then there's a load of stuff out the front here. And these coefficients change from 1 to 2. So it's not really a big difference. I mean, it makes a huge difference if you design your reactor. Because [? if it's ?] 2, you could be a very long way off from breakeven. But, from the point of view of this course, it makes no difference. Yeah, I think it's kind of remarkable that it doesn't make any difference.
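As a small sketch of the scaling just quoted: with the dimensional constants and the Gaunt factor set to one, the bremsstrahlung continuum is just an exponential in photon energy, and the slope of ln(j) against h-nu gives the electron temperature directly. That slope-fitting trick is one way continuum spectra get used to infer Te in practice; the numbers below are invented.

```python
import numpy as np

# Bremsstrahlung continuum in arbitrary units:
#   j(h nu) ~ ne * ni * Z^2 * T^(-1/2) * exp(-h nu / T)
# with the Gaunt factor and all dimensional constants set to 1.
ne, ni, Z = 1.0, 1.0, 1.0
Te = 2.0e3                              # electron temperature [eV] (invented)

hnu = np.linspace(100.0, 2.0e4, 200)    # photon energies [eV]
j = ne * ni * Z**2 * Te**-0.5 * np.exp(-hnu / Te)

# "Measure" the temperature back from the spectrum: the slope of ln(j)
# against h nu is -1/Te, independent of the messy prefactor.
slope = np.polyfit(hnu, np.log(j), 1)[0]
print("recovered Te =", -1.0 / slope, "eV")   # ~2000 eV
```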
So, again, if you want the full treatment, go have a look in Hutchinson. And there's also a long treatment in Jackson of this same problem. I'm just going to quote some results. I'm going to quote results at you. I kind of already spoiled it now. It's here. For the Maxwellian average, because, again, we can have all sorts of different distribution functions, but our plasma tends towards a Maxwellian. So a Maxwellian averaged bremsstrahlung emissivity, this is equation 5.3.40 in Hutchinson. This looks like 4 pi j nu the emissivity. I'm just putting 4 pi here because it's isotropic. So j is normally in terms of per steradian. But, because it's isotropic, I can just multiply. I can integrate over the solid angle, and I'll just get a 4 pi here, and it won't change the j at all. This is ne, ni, z squared, T to the minus 1/2 e to minus h nu upon T. And then, there's a factor called G bar, the Gaunt factor, which we'll talk about in a moment, and lots of other constants. And these constants are things like e, the electron charge, and the electron mass, and epsilon 0, and h bar, and c, all arranged in some way to make all the dimensions work. So we're not going to go in too much detail. This, therefore, has units of watts per hertz per meter cubed. Yeah. STUDENT: [INAUDIBLE] JACK HARE: Yeah, it should be minus 1/2 of my notes. That should be. So this Gaunt factor here, this is the bit where you can spend a lot of time refining your treatment, and you'll get different values of the Gaunt factor. And you'll also find this Gaunt factor varies very weakly with the temperature of the plasma here. So the Gaunt factor G bar goes from about 0.3 to-- in a range where the photon energy compared to the plasma temperature goes from 0.1 to-- I'm not writing this very well-- where the photon energy compared to the plasma temperature goes from 0.1 to 10. So this is a pretty weakly varying function. We can change it by two orders of magnitude in this parameter here, but this only changes by one order of magnitude. And so, in general, it's reasonable to just treat G as being a constant and then, for this calculations we're doing, where we're going to drop the absolute intensity, we can just drop G with all the other constants as well. So we can drop [INAUDIBLE]. But if you want to go back and do this properly for some measurement that you're doing, then you'd have to include this. And so, this emissivity, as a function of photon energy, is a very simple function because it just decays exponentially with photon energy. So that is the spectrum coming out of a plasma that's emitting bremsstrahlung radiation here. And that is all you need to do Problem Set 3. Any questions about this? Yeah. STUDENT: [INAUDIBLE] JACK HARE: You mean you take the bremsstrahlung into account? Well, so the thing about bremsstrahlung is, it's always there. It is the irreducible minimum amount of emission from your plasma. Cyclotron emission is a very specific frequency. So although, when we were drawing it before, we drew it over long range. Inside your plasma, the cyclotron emission might be just at some very narrow window here. But this is everywhere inside your plasma at all frequencies, like a black body kind of spectrum here. That goes down to very low frequencies and goes up to very high frequencies. We're going to talk about lots of other effects which produce emissivity which is higher than the bremsstrahlung. 
But, often, when we're doing power balance calculations for a tokamak, we will just use this, and that represents our most optimistic take. So, I guess, you always need to worry about this, even if you managed to clear out all the impurities, you don't have any lines, you don't have any electron cyclotron emission, you don't have any recombination, which we'll talk about next. This is still going to be there. STUDENT: So [INAUDIBLE]? JACK HARE: It's going to be on top of this. You're going to have this on top of. Now, the nice thing is, over a short frequency range, this does not change very much. So if I'm doing my ECE and I zoom in on this region here, I'm going to have i frequency here, this bremsstrahlung is going to be basically constant. And, on top of that, I'd have my ECE lines or, in reality, my black body ECE spectrum or something like that. But this bremsstrahlung may be quite small. But even if it's not quite small, it is constant because it varies only slowly as a function of a frequency. So for a small frequency window, it looks constant. And then you can just subtract off some background intensity and look at the actual signal you're interested in, like you see there. Yeah. STUDENT: [INAUDIBLE] JACK HARE: Ooh, ha, yeah. I mean, it should couple into-- So I mean, so the important thing to realize is what mode it couples in actually depends on where you're looking from in some sense. So if this is being emitted isotropically, some of it's going along the magnetic field, some of it's going perpendicular to magnetic field. If I'm observing perpendicular to magnetic field, I must be observing it coming out as O mode or X mode. I'm observing it along the magnetic field, I see it coming out as R mode or L mode, right or left-handed circularly polarized light. So, actually, in a way, as the observer, we choose what mode it goes into. But this emission is isotropic. So you can imagine you have a blob of plasma that emits. And then the wave has to ask itself, well, what sort of wave am I? At this point, well, if I'm going perpendicular to the magnetic field, I'm going to be O mode or X mode. And, in reality, there'll be some emission in O mode, some emission in X mode. And the exact coupling between those will be related to the polarization of the bremsstrahlung. And the polarization should be pretty random as well. So I would guess, without thinking about it very much, you get roughly equal into both of those. And if you're then-- if your wave that's being emitted along the magnetic field, you'll be roughly equal into the R and the L. But I haven't thought about that too much. Yeah, and then there will be differences in speed of propagation and things like that. But the bremsstrahlung is being constantly produced, so you probably won't be able to notice that very [INAUDIBLE]. Yeah, good question. Other questions? Sorry? STUDENT: [INAUDIBLE] JACK HARE: If it's optically thick. So, yeah, we haven't really talked about that. So from this j, you can now calculate alpha, the opacity, because we had j upon alpha is equal to nu squared upon c squared T. That was Kirchhoff's law here-- that's a terrible j. So I can now calculate alpha. And might look at this and I might go, ooh, interesting alpha is actually quite high for low frequencies. And so, that means for any reasonable distance of plasma that I'm looking through, that will be absorbed, and it will become black body. 
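A sketch of the step just described, in arbitrary units: take the bremsstrahlung-like emissivity, get the opacity from Kirchhoff's law (j / alpha = nu^2 T, with c set to 1), and ask where the optical depth tau = alpha * L of a uniform slab crosses 1. Below that frequency the slab radiates as a black body; above it, it is optically thin. The slab length and all magnitudes are invented; only the frequency scaling is the point.

```python
import numpy as np

# Arbitrary-unit sketch: opacity from Kirchhoff's law and the resulting
# optical depth of a uniform slab of plasma of length L.
T = 1.0                       # temperature (arbitrary energy units)
nu = np.logspace(-2, 1, 300)  # frequency, in units where h*nu has the units of T

j = np.exp(-nu / T)           # bremsstrahlung-like emissivity, constants dropped
alpha = j / (nu**2 * T)       # Kirchhoff: j / alpha = nu^2 * T  (c set to 1)

L = 10.0                      # slab length (arbitrary units)
tau = alpha * L

nu_thick = nu[tau > 1.0]
print("slab is optically thick (black body) below nu ~", nu_thick.max())
print("and optically thin above that frequency")
```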
And so, in reality, for some plasma, we might have a spectrum that looks like this, where in this region here tau is greater than 1, and in this region here, tau is less than 1 or much less than 1. So it goes back to being optically thin here for the high energy photons. But the low energy photons will get absorbed as well. There's actually another effect, which is in Hutchinson's book, which I haven't covered here, which is, of course, there is no mode in a plasma which propagates below the plasma frequency. And so, the spectrum will be even further modified because that wave will be evanescent. But we're skipping over that this year. But it's in there if you're interested. Yeah. STUDENT: [INAUDIBLE] JACK HARE: We're not going to look at synchrotron, yes. STUDENT: [INAUDIBLE] JACK HARE: I think we had-- Sean's not here this time. Sean asked me that a couple of classes ago, I said I didn't know. And then he came and told me the next class. So cyclotron is when the particles are spiraling around the magnetic field. Synchrotron is when the particles are following the magnetic field in a curve. So there's two types of magnetic field doing acceleration, but they are distinct ways of producing light. Yeah, I think someone told me that maybe for a high field tokamaks synchrotron could start being significant. But I actually have no idea how big a deal it is. And I don't know if people are using it as a diagnostic. I've not heard of someone using it. But synchrotron light is used as a source of X-rays for diagnosing many other things. So it's interesting in its own right. But I don't know whether people use the synchrotron radiation from a high field tokamak if we've ever built one high enough for it to be a problem as a diagnostic of something. And I don't want to be a diagnostic of, because I think it depends mostly on the magnetic field and the radius. And those are two things in a tokamak you already know. So you probably-- but maybe there's a really clever diagnostic you can do, like fast particles or something like that. So worth thinking about. STUDENT: [INAUDIBLE] JACK HARE: Right, but I'm not interested in [INAUDIBLE]. This is a diagnostics course. [LAUGHS] Any questions online while we pause? So this is bremsstrahlung radiation, and often people call this free-free. And the idea here with free-free is that your particle starts off free, and it ends up free. So it hasn't changed. This is in contrast to some of the other types we'll be dealing with, like bound-free, free-bound, and bound-bound, you get the idea. Electron cyclotron is, of course, also free-free. So there are multiple types that people often just call bremsstrahlung free-free radiation. Now we're going to do free-bound radiation or recombination radiation. So recombination, otherwise known as free-bound. So our particle is going along, our electron, [? past ?] this ion here. And this particle is going along. And it's got some velocity to start with. And it's emitting, let's say, a single photon of energy h nu, like that. That entire acceleration yields one photon, one energy. And so, initially, we have a kinetic energy, a 1/2 mv squared. Then, afterwards, we're going to have an energy 1/2 mv prime squared minus h nu. So I have e, that, e prime, that. This is the energy here, and this is the energy here, [INAUDIBLE]. --b prime. Now, if h nu is less than the initial kinetic energy here, then we still have some kinetic energy left. So v prime is greater than 0, our particle is still free, and it can continue. 
That's the case we've just considered, bremsstrahlung. But, of course, there's another case where this photon takes away so much energy that this starts-- I guess it becomes imaginary. So it's sort of pointless to continue at this point. But it's clear that the electron no longer has any kinetic energy. And, indeed, it's going to have to pay back any debt it has in some other way. And it's going to do that by becoming bound. And you can do that because we're going to switch to a slightly quantum model of the atom-- though, to be honest, the Bohr model works for this as well-- where we have a range of different discrete energy levels that the electrons can occupy. So these energy levels are labeled by the principal quantum number n. n equals 1, 2, 3, 4, and so on. And, up here, infinity, this is ionization. If your electron gets this much energy, it becomes free again. And the energies of these levels are given in terms of this unit, Ry, the Rydberg: z squared of our ion over n squared, times the Rydberg. Just to be clear here, n is this principal quantum number. It is not the density anymore. We're dealing still with the single-atom picture, one electron, one ion. The ion has a charge z here. Anyone remind me what a Rydberg is? What its value is? 13.6 electron volts-- good number to know. So that's the ionization potential of hydrogen. So that means that when we have an electron coming in, that's initially up here, if it wants to become a bound electron, it has to drop down to one of these energy levels. And these energy levels only have discrete values. And so, what we're going to see in our spectrum is that this is only allowed if the electron energy fulfills this equation, which involves these discrete energy levels. And that equation is that the photon that is emitted-- the thing that we're going to see with our spectrometer-- is going to be equal to 1/2 mv squared plus z squared upon n squared times the Rydberg. We're going to get out different photons. Now, of course, these are not going to be at completely discrete lines because it's going to be broadened by our distribution function. There's a range of different electron velocities available. So each of these photons has some sort of variance around it. It's going to be some value, which is strictly greater than the Rydberg energy. So what does this look like? If we integrate over a Maxwellian-- so, again, this is Maxwellian averaged-- we get out two terms here. So we have-- do I want to say two terms? Well, I'm going to write it as the emissivity per principal quantum number n, so the emissivity per each of these discrete levels. And then we can sum them all up in order to get the total spectrum. So that just has our familiar bremsstrahlung type coefficients, electron density times the ion density times z squared, T to the minus 1/2, e to the minus h nu upon T, times a constant. But now we have an additional term, which is a new Gaunt factor for level n. But, don't worry, these Gaunt factors are all about 1 again, so it doesn't really matter that much. And then we have a term z squared Rydberg energy upon T, times 2 upon n cubed. So the strength of this emission drops very fast with n. So it's going to be strongest for n equals 1 and less strong for the higher principal quantum numbers here. And then, there's going to be a factor of exponential z squared upon n squared Rydberg energy upon T, like that. If you're wondering where the energies are canceling out here, the Rydberg is in electron volts, and we're putting T in energy units.
So that can be in electron volts as well. These will cancel out quite nicely here. So you end up something that looks a lot like the bremsstrahlung but with an additional emissivity for each of these principal quantum numbers. And then, to get the total j, you have j brems plus the sum over n of jn here. And the spectrum, therefore, looks like we had something like this for bremsstrahlung before, and now we have a spectrum that contains a series of edges, like this. And this lowest edge here-- I should draw this as a straight line-- this is n equals 1, 2, 3, 4, and all the way back up here, where, again, this is our brems result, and this is our recombination. STUDENT: Professor? JACK HARE: Yeah. STUDENT: So if the photons released are greater than the kinetic energy, but it's not equivalent to one of the discrete energy levels, what happens then? Does it just not get bound? JACK HARE: Sorry, could you rephrase that? STUDENT: Yeah, so earlier you mentioned that it needed to be equivalent to one of the discrete energy levels. JACK HARE: It has to fulfill this equation here, where the photon that's emitted is going to have an energy equal to the initial kinetic energy and plus the energy that the photon gains by virtue of the electron occupying one of these energy levels. So if you're asking, what happens if you can't fulfill that, it just doesn't happen. Quantum mechanically, that would be a forbidden transition. STUDENT: Oh, I see. So it either becomes-- it either doesn't recombine if it doesn't have that and it's still free, and then that's just bremsstrahlung radiation, or it does recombine at these certain locations? JACK HARE: Yeah, so you could imagine that you-- I feel like for every electron there should be, at some velocity v, there should be some wavelength that it can emit at. I mean, this is a spectrum of solutions, it's not a discrete number of solutions. If I put in some velocity v and I occupy some principle quantum number n, then I will get out a photon here. There's no forbidden solution to this. I think what's being reflected here with these sharp edges is, these correspond the sharp edge here, corresponds to v equals 0, and this corresponds to v greater than 0. And if you get up to a point where your energy is so high that you can then access the next level, then that becomes very favorable and, in fact, there are some-- these ones down here, it might continue from the 2 level or it might start occupying this level instead. STUDENT: Yeah, I think I understand. So I was just wondering, because it's not the condition that there are photons that are released with greater than 12 mv squared the kinetic energy. It has to be either it's not released with greater than, and it's not-- JACK HARE: So see, it's the photon energy that's-- yeah, maybe we're doing this backwards in some sense. STUDENT: I see. JACK HARE: So if you see a photon energy with less than the kinetic energy of an electron, then that photon will have been emitted and the electron will remain free. You see a photon energy, how you would know what the velocity of your electron is, I don't know. But if you're in this single particle picture, if you had all the information, if you see a photon that's released with less than 1/2 mv squared, then the electron still has some residual kinetic energy left. And so, therefore, it must be free. But if you see a photon being emitted with more than 1/2 mv squared, then the electron has over-emitted. 
And the only way it could have done that is if it fell down into one of these principles. STUDENT: Oh, I see. OK, we just did it in reverse. I understand. JACK HARE: Yeah, I think maybe I motivated this the wrong way around. Yeah. Other questions? Yeah. STUDENT: [INAUDIBLE] JACK HARE: Yes, exactly. So this long tail here is due to the fact that we have electrons not just with v equal 0 but much more than that, yeah. STUDENT: [INAUDIBLE] JACK HARE: Hmm, it's a good question. Now, are they actually sharp or is it that we want f of distribution of v squared, the distribution of v squared, like that? So perhaps these are actually like this. I have to admit, in the book, it's drawn with very sharp lines. But maybe my eyesight is failing me and I should look more closely. I'll have a look into that. Good question. Any other questions? Yeah. STUDENT: So [INAUDIBLE]. JACK HARE: [LAUGHS] It's quantum, it's worse than that. [LAUGHTER] STUDENT: [INAUDIBLE] JACK HARE: Right, there was a probability that this process happens, and that is related to the overlap in the wave function between the wave function and the free electron, like a plane wave, and the wave function of a bound electron there. So the probability of that happening is related to the overlap integral between those with a dipole operator, the dipole operator being the thing that emits the photon in between them. So you could write it as Psi prime dipole operator, like that. That's the probability of this thing occurring. Yeah, but-- STUDENT: [INAUDIBLE] JACK HARE: I mean, this is where, again, our quantum picture and our classical picture are irreconcilable, because if we're treating the electron as a plane wave delocalized over the entire volume. So, yeah, this is-- you can't really-- [LAUGHS] even in a single particle picture, this drawing is not reconcilable with quantum [? theory. ?] So, yeah. STUDENT: [INAUDIBLE] JACK HARE: Yeah, it depends on the ion charge. So if you've got some helium with 2 times ionized helium, and you've got some hydrogen, then it will have different recombination lines here. And that's because, fundamentally, the energy levels are in different places inside here. And the energy levels shift by the ion charge here. Yeah. Any other questions? We'll go into some ways of actually making some use out of all this stuff in a moment. Yeah. STUDENT: So an electron gets bound by an ion, then, presumably, at a later time, [INAUDIBLE] that absorbs some [INAUDIBLE]. JACK HARE: Ooh. STUDENT: Wouldn't this balance out, electricity going in and out of ions? JACK HARE: We'll get to that. There's a very good question. We're going to get to that and the idea of detailed balance and thermodynamic equilibrium and things of that ilk. Yeah, I'm not going to skip ahead. But, yeah, we have a section where we talk about all these inverse processes. But these are the two simplest ones, free-free and free-bound, that I wanted to discuss before we did some diagnostics. But, yeah, you're right, there are inverse processes for all of these that will affect the spectra that you get. Any other questions? Let's keep going. So one use of this is a diagnostic called bolometry. In fact, bolometry is, in some sense, a very simple diagnostic. It cares not at all about the detailed spectrum of what the emission is. It just wants to know how much power is being radiated by the plasma, so bolometry. 
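Before the bolometry discussion continues, here is a sketch tying the last two sections together: a toy continuum built from the bremsstrahlung exponential plus the free-bound edges quoted above (hydrogen-like ion, Gaunt factors and all prefactor constants set to one), and its integral over photon energy, which is the frequency-integrated power per unit volume that a bolometer actually responds to. The temperature and energy grid are invented.

```python
import numpy as np

# Toy free-free + free-bound continuum for a hydrogen-like ion (Z = 1),
# in arbitrary units, with Gaunt factors and dimensional constants set to 1.
Te = 20.0        # electron temperature [eV] (invented, low so the edges show)
Z  = 1.0
Ry = 13.6        # Rydberg energy [eV]

hnu = np.linspace(0.5, 200.0, 4000)           # photon energy grid [eV]
dE = hnu[1] - hnu[0]

prefac = Te**-0.5 * np.exp(-hnu / Te)         # common ne*ni*Z^2 factor dropped
j_ff = prefac.copy()                          # bremsstrahlung (free-free)

j_fb = np.zeros_like(hnu)                     # recombination (free-bound)
for n in range(1, 6):
    chi_n = Z**2 * Ry / n**2                  # binding energy of level n [eV]
    edge = hnu >= chi_n                       # photon must carry at least chi_n
    j_fb[edge] += prefac[edge] * (2.0 / n**3) * (chi_n / Te) * np.exp(chi_n / Te)

j_total = j_ff + j_fb

# A bolometer ignores all this spectral structure and measures the integral:
P_rad = np.sum(j_total) * dE
print("integrated power (arb. units) :", P_rad)
print("fraction from recombination   :", np.sum(j_fb) * dE / P_rad)
```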
And to motivate this, let's go back to our 0D power balance from a fusion plasma, where we have alpha heating, we have some external heating, and in steady state this is balanced by conduction losses and what we, at the time, we assumed were bremsstrahlung losses, but in general could be any sort of radiative losses here. So I'll write this as S rad here. And so, bolometry is focused on measuring the total power that's being emitted. And so, that is going to be the integral d nu of j of nu. So we take our beautiful spectrum like this, and we integrate over all of this. We lose all of this detail. But, of course, the detail matters when we're doing the integral because you can see the recombination increases the amount of emission above that baseline bremsstrahlung value. So there'll be more emission here. And the reason, of course, we want to measure this is we want to know how much power our plasma is losing by radiation. That's a quantity that, in general, we want to minimize. Maybe we want to know where it's losing that power. And so, we can say, OK, well, that part of the plasma is clearly radiating too much. What can we do about it? So this is what bolometry is trying to measure. And a really simple way to do bolometry is we have some radiation coming out of the plasma. We'll call it P rad here because it's an integral over S rad over the distance of the plasma here, whatever. It's going to have different units. And we have a little sensor sitting at the wall of our vacuum vessel. And that sensor is just a resistor. And we'll call this resistor M. It has a resistance M. And that resistance M is a function of temperature here. And we apply some voltage, V0, over this resistor. And we also put another resistor that is shielded from the plasma that we call R, and we measured the voltage over this resistor R. Effectively, what we're trying to do is measure this volt-- this resistance M of T. We'll choose R to be less than or roughly equal to M so it doesn't dominate this measurement. And the voltage that we measure, VR, is equal to whatever voltage we used across all of this system R divided by M. So by measuring this voltage VR, by knowing the voltage V0, by knowing the reference resistor R, we can measure the voltage M. This is a very simple resistive divider just drawn in a complicated way. And the reason why this is interesting is that, in general, M is going to change with T. We're going to have some resistivity as a function of temperature. And so, as this resistor heats up from absorbing the radiation, its resistance is going to change. And so, we can have-- yeah. STUDENT: [INAUDIBLE] JACK HARE: Have I broken this? Yeah. Thank you. It was right in my notes and I decided to innovate-- it's a terrible mistake. Never do that. Thank you. Right, that looks better. Good. So it's a voltage divider. We're dividing V0 between M and R. And we're measuring the voltage across R, which then, of course, tells us the voltage across M, which then tells us what resistance M has here. So if we're measuring the resistance M, we can then use some table of rho of T to get out the temperature. And, from that temperature, we can make an estimate of the radiation power incident upon it. And we'll talk a little bit more about how to do that in the next step. But this, effectively, serves as a very, very simple system. And you can be digitizing this as a function of time. And so, this is also a function of time and temperature. 
And so, we could get out some graph of our fusion reactor's radiation power as a function of time like this just by looking at the resistance. Do I want to introduce this equation yet? OK, I'm going to do this, but I might regret it. So the way that you link these back to each other is you say, for some radiation power incident on my resistor-- and I'll make it explicitly a function of time-- that's going to cause some change in temperature with time. And all of this is time some heat capacity. And, in general, this resistor is going to be connected. It has its own heat capacity c, but it's also going to be connected to some heat sink, which is unavoidable. Even if you don't connect it to a heat sink, it'll be connected to a heat sink through these wires. And that heat sink will have some thermal transport time tau. There's actually a second term here, which is delta tau delta T upon tau, like that. So there's two different ways that the radiation can affect this. And so, by measuring delta T up here, the change in temperature in our resistance from the change in resistance, we can invert this equation and we can back out the radiation power. So this is just a really simple way of using a resistor to measure the power coming out of the plasma. And bolometry was one of the first diagnostics that we had on many MCF devices. Questions on this? STUDENT: [INAUDIBLE] JACK HARE: [INAUDIBLE], we'll talk about that in a moment. I worked on the bolometers for [INAUDIBLE], so neutrons were the bane of my life. Now, this itself is a very simplistic system, and it does not work because it is very susceptible to noise. So we have to, as always, come up with a clever system, which is noise resistant. And think I'm just going to-- ugh-- get myself another board. What time are we on? Oh. OK. So what we do in-- to make this measurement more precise is first we do two things. First of all, we do something called a Wheatstone bridge, which many of you have come across before. A Wheatstone bridge is an arrangement of four resistors. So there's a resistor here, another resistor here, a resistor here, and another resistor here. And we connect these resistors up. And the clever thing, in a bolometer, is we allow two of the resistors to see the plasma. And these are the ones we call the measurement resistors M1 and M2. So you can imagine these resistors just having a little window that allows them to see the plasma, and the other two resistors cannot see the plasma. They have a large lump of metal which gets in the way. And these are the reference resistors, R1 and R2. I think I'm just going to call these R2 and R1. And, in a Wheatstone bridge, what we now do is we still have a potential drop, V0, over the entire bridge. But we now measure the potential difference across these pairs here. And that is called V bridge. So these measurement resistors, they see the radiated power P rad, and their temperature is equal to the temperature of the reference resistors plus a change in temperature due to radiation, whereas the reference resistors just have a temperature which is equal to the reference resistors. The point of this setup is your entire bolometer is going to heat up because the entire vacuum vessel is going to get hot. And so, what you want to know is not just how hot they are, but how hot they are relative to the vacuum vessel. And so, this is why we have this system where we're trying to measure a very small change between R1 and R2, which would be identical, and M1 and M2. 
And it's this small quantity here, delta T, that's due to the radiated power that we're really trying to measure. And we do this because-- or the way that we do this is we notice that V bridge is the difference between two potential dividers, one potential divider that goes on M1 R1. And so, the potential that this floats at here, Va, is going to be different in general from the potential that this floats at, Vb, where the two resistors are arranged in the opposite way around. So V bridge is going to be equal to V0 R upon R plus M. That's just a potential at a here-- minus the potential at b, V0 M over R plus M. And if you look at this, quite simply, you can see that it's proportional to R plus-- or R minus M. So it's a differential measurement. We're just measuring a voltage which is proportional to this small quantity. And that is entirely due to the change in temperature. So we're now able to isolate the change in temperature very, very precisely. And, in general, we don't just use this Wheatstone bridge, we also use a phase locked loop measurement, which is a form of heterodyning again, my favorite technique. And so, we actually have V0 is oscillating at around 50 kilohertz or so. And so, we can see that our signal, delta T, or the change in temperature as a function of time, is a slowly varying signal which is embedded on top of this 50 kilohertz. And then, using heterodyning techniques, we can measure it very cleanly without noise. This may be an experimental technique you're familiar with. What the bolometer head actually looks like is we have some large lump of gold here. And by large, I mean it's about 100 microns thick and about 1 millimeter long. This is what faces the radiation. The radiation is coming in from this side. This is often made out of gold. Gold is chosen because it absorbs all wavelengths relatively evenly. So you don't need to worry too much about the spectral response of this. This thin layer of gold has been deposited on top of a substrate. And it's on the back of the substrate that you have your resistor. And your resistor is literally a little zig zag of gold deposited on the back side of this. So this is M or R. And depending on whether it is M or R, you either have this open to the plasma or you have a thick block some distance in front of it so it can't see the plasma. It's the heat capacity of the absorber, c, that goes into our previous equation. And it's the heat transport kappa grad T that gives us the time constant for thermal conduction through the substrate from the absorber to the resistor. And I'll just write that equation again because it's disappeared now-- that we can work out the radiated power by looking at how the temperature of the measurement resistor M changes in time. And that change in time is due to heating up and heating down from direct heating and then also a time lag phenomena delta T on tau. And, effectively, this tau here sets the timescale at which we can measure. So the larger tau is the slower our measurement of the radiated power is going to be. And if tau gets very large, because we've got a very thick substrate here, or it doesn't have very good heat transport, then we're going to have a very poor time resolution of our radiation power. And, of course, we'd like to have a nice time resolution of that. I'm just going to finish up with a couple of details. Then I'll take some questions and we'll leave it there. 
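A minimal sketch of how the relation above gets used: write the foil power balance as P_rad = C (d deltaT/dt + deltaT/tau), synthesize a temperature response to a step of radiated power, and then invert the "measured" deltaT(t) back to P_rad(t) with the same equation. The heat capacity, cooling time, and power level are invented numbers, and this is only one common way of writing the foil power balance.

```python
import numpy as np

# Toy bolometer: foil obeys  C * d(dT)/dt = P_rad(t) - C * dT / tau
# (absorbed power heats the foil, conduction to the heat sink cools it).
C   = 2.0e-4        # heat capacity of absorber [J/K]   (invented)
tau = 0.2           # thermal time constant [s]         (invented)
dt  = 1.0e-3        # sample spacing [s]

t = np.arange(0.0, 2.0, dt)
P_true = np.where((t > 0.5) & (t < 1.2), 5.0e-3, 0.0)   # 5 mW burst of radiation

# Forward model: integrate the foil temperature rise
dT = np.zeros_like(t)
for i in range(1, len(t)):
    dT[i] = dT[i-1] + dt * (P_true[i-1] / C - dT[i-1] / tau)

# Inversion: differentiate the measured dT(t) and apply the same equation
P_rec = C * (np.gradient(dT, dt) + dT / tau)

print("max recovered power [mW]:", 1e3 * P_rec.max())   # close to 5 mW
```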
So the trouble with these, as someone asked, when it comes to radiation, first of all, they do actually sense, basically, all energy coming out of the plasma or all power coming out of the plasma. It's not just radiated power, but they will also sense neutrons and particles. So if you've got ions or electrons coming out and hitting these and being absorbed, that will also heat up the absorber. This will measure those as well. So it's very hard to tell the difference between those. But, also, the neutron damage leads us to use much thicker substrates, which gives us a longer time response and so, therefore, a worse bolometer. So, ironically, the bolometers that will be used on [INAUDIBLE] are significantly worse than the bolometers used on existing devices, not because they couldn't work out how to make better ones, but because the radiation forces you into a regime where you can't use a good bolometer anymore. And I'm sure that [? Spark ?] will have exactly the same problem. Almost everyone in the world uses this design of bolometer that they pioneered on ASDEX Upgrade in the '80s. No one has come up with a better system yet.
Lecture 12: Electron Cyclotron Emission
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So just a quick recap of what we covered on Tuesday, and then we will continue and look at electron cyclotron emission. So first of all, we had a look at the radiation transport equation. And we wrote this radiation transport equation in terms of the intensity of light-- spectral radiance, to be correct-- coming out of the plasma and how it changes along the path through the plasma. So we've got a little plasma here and we've got some path going through it that we parameterize by s. The change in this spectral radiance is equal to a quantity j, the emissivity, minus alpha times the spectral radiance. Alpha is the opacity here. And it's convenient to rewrite this equation in terms of a dimensionless parameter tau by dividing through by alpha. And then we get this compact form of equation, dI d tau is equal to j upon alpha minus I. And what we've done is we've defined this parameter tau as being equal to the integral of alpha ds here. And again, this is an indefinite integral. It's only when we start putting limits on this that we know whereabouts we're integrating from and integrating to. And the other thing to say is that this equation is true for all frequencies. Maybe we'll write these in terms of cyclic frequency or rads per second or wavelength or energy, whatever it is. Just some way that we're measuring the difference in the spectrum here. And this equation then can be solved separately for all the different frequencies in your system. In our assumption here, we're not allowing the frequencies to interact in any way. There's no second harmonic generation. There's no absorption and fluorescence and things like that. And this tau parameter is a nice dimensionless parameter, so of course the first thing we want to do is work out what it means for that dimensionless parameter to be small and what it means for that dimensionless parameter to be large. In the case where the parameter is small, we find a plasma which we call optically thin, which means the radiation streams through it without being absorbed and there's minimal emission. In the opposite case where tau is greater than one, our plasma is thick and incident radiation is strongly absorbed and the plasma is also strongly emitting. And in fact, we found out that in this thick case, our plasma is going to be so strongly absorbing that it's going to approximate a black body. The black body has a spectral radiance that looks like nu squared upon c squared times the temperature. So it doesn't depend on any particular properties of your material. It just depends on its temperature. This is the Rayleigh-Jeans approximation. The full black body is more complicated, but most of the time we can use this Rayleigh-Jeans approximation. And the neat thing about that is we found that there was a correspondence between this equation here in the optically thick limit and the black body equation, which is Kirchhoff's law. And Kirchhoff's law lets us set the emissivity over the opacity equal to the frequency squared over the speed of light squared times the temperature. This effectively tells us that the emissivity and the opacity are linked together. And so once we've calculated one, we automatically have a calculation of the other. And simultaneously, if we have a region with high emission at some frequency, that's also a region with high absorption as well. So these two quantities are linked. OK.
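A short worked consequence of the transport equation just recapped: for a uniform slab with constant j and alpha and no radiation entering from behind, dI/dtau = j/alpha - I integrates to I = (j/alpha)(1 - exp(-tau)), which reproduces both the optically thin and optically thick limits discussed above. A quick numerical check with invented values:

```python
import numpy as np

# Uniform slab, nothing shining in from behind:
#   dI/dtau = j/alpha - I   =>   I(tau) = (j/alpha) * (1 - exp(-tau))
j, alpha = 3.0, 0.5          # emissivity and opacity, arbitrary units (invented)
source = j / alpha           # in the thick limit this is the black-body level

for tau in [0.01, 0.1, 1.0, 10.0]:
    I = source * (1.0 - np.exp(-tau))
    print(f"tau = {tau:5.2f}:  I = {I:6.3f}   "
          f"(thin limit {source * tau:6.3f}, thick limit {source:6.3f})")
```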
And then we talked about how to actually calculate j, and therefore alpha, so that we can use this radiation transport equation to see how plasma allows electromagnetic radiation to propagate through it. And we did a sort of pictorial representation and Tucker sent around a very good web demo that you can play with, illustrating that pictorial representation of why accelerating charges radiate. And we also wrote down a very long equation and then immediately dropped the near field term. But the remaining term, the far field electromagnetic radiation from a moving charge, was equal to something with a bunch of constants and a few new variables related to where we are with respect to the particle. But the key part of it was that we started getting these cross products here that have within them some properties of the particle itself, namely its velocity crossed with its acceleration. So it's clear that in order to estimate the radiation from a moving charge, we're going to have to know these two properties. And so the recipe that we're going to follow is we're going to firstly calculate v and v dot. And we're going to do that with the equation of motion. And then secondly, now we're given the radiation from just a single charge. We then need to integrate this over the distribution function f of v d cubed v to get out the emission from all of the charges in the plasma together. And that is going to give us the emissivity at a single point. And then of course, because they're linked, also the opacity. So this is what we're going after. Any questions on that before we move on to an example of this? Yes? STUDENT: Can you explain again how we got the near field in the electric field? JACK HARE: Yeah. So the near field term-- which I haven't written down here. The near field term goes as 1 upon r squared whereas the far field term goes as 1 upon r. So when you calculate s is equal to e cross b-- [INAUDIBLE] The magnetic field is going to have the same radial dependence, the same drop off in distance as the electric field. And so the Poynting vector is going to go as one upon r to the 4. And that means that if you go far enough away from the particle, that electric field and therefore the radiation from it is going to drop off faster than 1 upon r squared. And so you're not going to see it from any reasonable distance away. In fact, the distance you need to be away is a few wavelengths of the radiation, and so that's usually quite short. So we don't have to worry about this. Effectively what your electric field looks like-- and I had a little chat with Sean about this after the class, so hopefully I remember it. The electric field that you as an observer would see as some function of distance has a 1 over r component, which is the far field, and a much more steeply dropping 1 over r squared component, which is the near field. And so you don't, as an observer, get to see the difference between these two. They look the same. What you'll end up seeing is just some field that looks like this. The near field component won't show up at any sensible distance away from the plasma so we never need to-- we do not need to worry about it in plasma physics as far as I know. But maybe there is some special case you do need to think about it. We're going to drop it in these lectures. Yeah, Sean?
STUDENT: It makes sense that we don't need the near field for trying to observe the radiation that's emitted, but if you're trying to couple the emitted radiation back to the plasma for the absorption, shouldn't the near field contribute to that because you do have particles that are near the emitting particle? JACK HARE: OK, so to the benefit of people online, the question is, doesn't the near field perturb the plasma? I don't know. I could handwave and say something about the Debye length. The plasma is very good at shielding out electric fields. Right? And so maybe it will depend on the wavelength of your radiation compared to the Debye length. But I'm just spitballing here. I don't know the answer to that. Yeah. Thank you. You're going to love my next-- I went to talk to Paul Bernoulli and I'm like, hey, Paul, is this right? And he's like, yeah, that sounds right. So you're going to love my next approximation in a little bit, so. OK. Any other questions? Anything online? OK. Let's keep going. So we're going to be looking at electron cyclotron emission or ECE. This is a key diagnostic in magnetically confined plasmas. This is a very, very important diagnostic, which is why we're going to spend a significant amount of time on it. This diagnostic works on the basis that we have magnetic field lines inside our plasma and we have particles, electrons, which are spiraling around those magnetic field lines. There is also ion cyclotron emission as well, which we are not going to cover here. I believe that's because it has a much harder time propagating out of the plasma. We'll discuss a little bit about that later on. At the moment, we're just focusing on the electrons here. And so what we want to do, as we said before, is write down an equation of motion. And that equation of motion is going to be simply that the time rate of change of the momentum of our electron is going to be equal to the force on our electron. So this is e v cross B here. We're assuming in this formula that there are no electric fields. So we're just dealing with the particles gyrating in the magnetic field. You've seen this before. You've solved this before. The one thing that Hutchinson puts inside here that we didn't have previously is this factor gamma. So this is the relativistic gamma here, and that is 1 minus v squared upon c squared, all to the minus 1/2. The gamma is always greater than 1. And this is a factor that effectively in this case, you want to think about it as increasing the effective mass of our particle, and this is because our particles tend to be moving in an MCF plasma where the electrons are at 10 keV or 20 keV. That's a non-negligible fraction of their rest mass energy, and so there will be some relativistic effects on top of this. So in this case here, we know that this-- because we've solved this equation before, we know that this electron is going to oscillate. I guess I can be precise. I can say it's going to gyrate at a frequency omega c, the cyclotron frequency. And this is just ever so slightly different from our normal gyrofrequency because it's different by a factor of gamma here. So the gamma changes it a little bit, and so this is just e B0 over the electron mass times gamma. Again, if you just keep the gamma sticking with the electron mass you won't go too badly wrong inside this. Because the electron is gyrating at this frequency, it is reasonable to believe that there is an electric field generated which is also oscillating at that frequency. So this means we expect to have radiation at this frequency as well.
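As a rough sense of scale for that gamma factor, here is a short sketch; the field strength and electron energies are hypothetical, MCF-like values rather than anything quoted in the lecture.

```python
import scipy.constants as const

def gamma_from_kinetic_energy(E_keV):
    """Relativistic gamma for an electron of kinetic energy E (keV)."""
    rest_keV = const.m_e * const.c**2 / const.e / 1e3   # ~511 keV
    return 1.0 + E_keV / rest_keV

def cyclotron_frequency(B, gamma=1.0):
    """Electron cyclotron (angular) frequency omega_c = e B / (gamma m_e)."""
    return const.e * B / (gamma * const.m_e)

B = 5.0  # tesla, hypothetical on-axis field
for E in (0.0, 10.0, 20.0):
    g = gamma_from_kinetic_energy(E)
    f_c = cyclotron_frequency(B, g) / (2 * const.pi)
    print(f"E = {E:5.1f} keV: gamma = {g:.4f}, f_c = {f_c/1e9:.2f} GHz")
```

Even at 20 keV the downshift of the cyclotron frequency is only a few percent, but that is a measurable shift for the radiometers discussed below.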
And that's true. We do have radiation at that frequency. But the thing I've been struggling to find an intuitive explanation for-- I went to go talk to Dr. Bernoulli about this just before this class-- is that we don't just get radiation at this frequency. When you go into the mathematics of it, you find out that you get radiation at integer multiples of that, at harmonics of that. So we also have this factor of m here where m is some natural number. Now mathematically, I can tell you where this comes from. It comes from, for example, Hutchinson's equation 5.2.7. What this is is there's an exponential of some horrific quantity and Hutchinson and everyone else says, well, fortunately there's a really great trigonometric identity that converts the exponential of this horrific quantity into a sum over an infinite number of Bessel functions. And those Bessel functions are all oscillating at harmonics here, and so therefore, you've got all these harmonics. Right? So mathematically it all makes sense. No one is upset about that. The trouble is that it doesn't make any physical sense. As far as I can tell, there's no reason why electrons should have to care about Bessel functions any more than I do. So something is going on physically, and so I went to go talk to Paul and ask about it. And the thing we came up with is whenever you have harmonics in a system, in some sort of simple physical system, it's usually due to some nonlinearity. So we need a nonlinearity in our system where the system acts back upon itself in order to do so. And our best guess is that the electric field that you produce at the fundamental frequency here, omega c, is going to go back and couple back into the electron equation of motion. So effectively the electron is going to be forcing itself slightly out of phase, and that is going to give rise to these other harmonics here. So this is sort of a way of trying to make the system nonlinear because, of course, this electric field depends on v dot and v, and that's the thing we're solving for over here. So we think that it's a nonlinearity. So we think that these harmonics are due to nonlinearity. We looked at Stix. We looked at Chen. We looked at [INAUDIBLE]. We couldn't find anyone who was willing to tell us what the answer is. So if anyone has any insight into why it's the case and if it's better than just waving my hands and saying nonlinearity, then please go for it. But the alternative is you just have to treat this as a mathematical exercise and churn through all the algebra. And I assure you if you go and have a look at this, you'll see that it really is an awful lot of algebra. So I'm trying to keep the mathematics out of this as much as possible. But if we do this properly, you find out-- and this is equation 5.2.16-- you're going to get a series of peaks in your emission, and those peaks are going to be at frequencies omega m, which are equal to m omega c, and they're actually going to be modified due to relativity. They're going to be multiplied by 1 minus beta squared to the 1/2-- I'm going to explain what beta is in a moment-- over 1 minus beta parallel cosine theta. I'm going to explain all of these terms in a moment here. So we've still got this m inside here. It's just that there's also-- m is the harmonic number. It's just that we also have some modification due to relativistic effects here. So what are these betas? Some of you have come across these before. This is absolutely not the plasma beta. It has nothing to do with the plasma beta.
The beta is defined as v upon c. So it's the ratio of the particle's velocity to the speed of light. And we also have beta parallel, which is equal to v parallel upon c. That parallel is with respect to the magnetic field, B0 here. I don't think it comes up, but you can also define beta perpendicular if you want to as well. OK. There's something else I wanted to mention in all this. Hm? STUDENT: [INAUDIBLE]. JACK HARE: What? STUDENT: [INAUDIBLE]. JACK HARE: Oh. That was the one. Thank you. Yes, I forgot to draw a diagram earlier on. So theta, in this case, is the same theta that we had when we're talking about X mode and O mode and all that sort of stuff. It's the angle between the observer and the magnetic field here. So this is the term that is largest when you're looking along the electron trajectory towards you and smallest when you're looking perpendicular. And so you can see straight away this is a signed quantity. It's going to end up looking a little bit like a Doppler shift. So that is the Doppler shift that you were expecting to see. But it's a relativistic Doppler shift here. So for a single particle, just some electron moving on a magnetic field, we would have an emissivity that looked like a series of these peaks. OK? And these are delta functions in the theory. So you just have these very sharp emission lines and they're evenly spaced. STUDENT: Are heights the same? JACK HARE: The heights are not the same. If you want to get the heights, you have to go look at all those nasty Bessel functions, which I'm hiding from you. But if you want the height, it comes from the m-th Bessel function. And the thing that goes into the Bessel function is some complicated factor of all these other things here. So again, I'm not going to explain it, but they do not have to have the same height. I've drawn them a little bit like this. And if that particle is moving particularly fast, then all of these will be shifted in one way. If you're looking at it from a certain angle, they'll be shifted another way. But they will all be shifted in the same way because this is just for a single particle. OK. Any questions on that before we find out what happens when we have more than one electron? [INAUDIBLE] OK. So the next thing we do is we look at many particles. Many electrons. And for this, it's useful to split our three dimensional distribution function f of v, the vector, d3v into a distribution function in beta parallel and beta perpendicular. It would be like this. So this is looking at particles which are streaming along the magnetic field. This is looking at the component of velocity along the magnetic field and the component of velocity perpendicular to the magnetic field. That's because this problem is symmetric. It doesn't matter how I rotate my axes around the magnetic field. The velocity overall is going to be the same. So I don't have to actually deal with three dimensions here. I've just turned it into a two dimensional thing. And if you do this integration, you'll find that your betas are different for different particles, right? Remember your Maxwellian. If I just take a one dimensional slice through it, it has lots of different velocities available. And so that means some particles are going to be going with a velocity which makes this denominator bigger than one, and some with a velocity which makes it smaller than one. So we can see that we're going to have broadening of these peaks, and this is the Doppler broadening that we expect.
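Putting the pieces of that peak-frequency expression together for a single electron is straightforward. A sketch only: the electron energy, pitch, viewing angle, and field below are hypothetical, and omega c is taken here as the non-relativistic e B over m e, so that the square root of 1 minus beta squared carries the whole relativistic correction in the formula.

```python
import numpy as np
import scipy.constants as const

# Hypothetical single electron: 15 keV kinetic energy, half of it in parallel
# motion, observed at 80 degrees to the magnetic field in a 5 T field.
B = 5.0
E_keV = 15.0
theta = np.deg2rad(80.0)

rest_keV = const.m_e * const.c**2 / const.e / 1e3
gamma = 1.0 + E_keV / rest_keV
beta = np.sqrt(1.0 - 1.0 / gamma**2)          # total v/c
beta_par = beta / np.sqrt(2.0)                # equal parallel and perpendicular energy

omega_c0 = const.e * B / const.m_e            # non-relativistic cyclotron frequency
for m in (1, 2, 3):
    omega_m = m * omega_c0 * np.sqrt(1 - beta**2) / (1 - beta_par * np.cos(theta))
    shift = omega_m / (m * omega_c0) - 1.0
    print(f"m = {m}: f = {omega_m/(2*np.pi*1e9):7.2f} GHz  (fractional shift {shift:+.3%})")
```

The fractional shift is the same for every harmonic for a given particle, which is exactly the statement above that all the peaks from a single electron shift together.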
So let's just draw a single peak. It was a delta function at m omega c. It doesn't matter which harmonic it is for our purposes. This delta function is now going to be broadened out. So the delta function was always rather unscientific, unphysical, and so it's going to be broadened out by these natural broadening processes. And there's two of them. There's this top process and the bottom process here. I'll just write them out again down here. So we have a factor of 1 minus beta squared to the half over 1 minus beta parallel cosine theta. So let's start with the numerator-- the denominator here, the term on the bottom. For some symmetric Maxwellian like distribution function, what does this term do to this line? How does it broaden it? Is it symmetric, asymmetric? Imagine we're looking at a fixed angle because we are. We don't have eyes all the way around the tokamak, so you can just fix theta to be some angle. Pick an angle where cosine theta isn't zero, or otherwise it gets very boring. What will this term, 1 over 1 minus beta parallel cosine theta, do? Yeah? STUDENT: It'll broaden it symmetrically. JACK HARE: Broaden it symmetrically, right? This is going to look like-- say I look along the field line. Cos theta equals one, theta equals zero. This is going to be 1 plus or minus v upon c. The plus or minus there is saying for every particle going along the field line towards me, there's another particle going away to the other side. And this is also going to be very small because the particles are not-- we're not dealing with actually ultrarelativistic particles here. And so this is going to look like 1 plus or minus delta, and that means that it's going to broaden this peak symmetrically. So you can call this bottom term the Doppler term. What about this term at the top? What does this term on the top do? Anyone online? Is it always bigger than 1, always less than 1, sometimes bigger than 1, sometimes less than 1? Yes? STUDENT: It's always going to be less than 1 since beta squared is always positive. JACK HARE: So this is always less than 1 because, as you say, beta squared is always positive. So it doesn't care about the sign of v. So it's going to be asymmetric here. So this is the relativistic mass term. And that will broaden the peak further, but it will only broaden it in one direction like that. So your peaks end up looking like this. If you want to know which of these two terms is larger, then you're asking the question, when is beta parallel cosine theta greater than beta squared? That's the case when cosine theta is greater than v T upon c, where v T is the thermal velocity of the electrons. And that's the case for theta less than but almost up to pi upon 2. For almost all angles, this Doppler term dominates. STUDENT: [INAUDIBLE] JACK HARE: Yes. You're right. Almost all theta apart from theta exactly equal to pi upon 2, the Doppler term dominates. So you'd see mostly symmetric lines with some small asymmetric shift. Apart from if you're exactly pi upon 2, which is a place you're often at when you're looking inside a tokamak. So this could happen quite a lot, in which case you're going to see a line that is very asymmetric indeed. OK. Questions on this? Yes? STUDENT: What is the j in the single particle graph? JACK HARE: That is the emissivity. The thing we're trying to calculate. Hutchinson writes it in a sort of differential form, which is nice for keeping track of the units. I'm just going to call it j. Sometimes there's some steradians and stuff lying around that you have to deal with.
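To see roughly when each of the two broadening terms wins, here is a small estimate for a thermal electron population; the temperature is hypothetical, the Doppler term is taken to scale as v T cosine theta over c, and the relativistic mass term as v T squared over c squared, as in the comparison above.

```python
import numpy as np
import scipy.constants as const

Te_keV = 10.0                                    # hypothetical electron temperature
vT = np.sqrt(2 * Te_keV * 1e3 * const.e / const.m_e)
beta_T = vT / const.c

print(f"beta_T = v_T / c = {beta_T:.3f}")
for theta_deg in (0, 45, 80, 89, 90):
    theta = np.deg2rad(theta_deg)
    doppler = beta_T * np.cos(theta)             # ~ symmetric Doppler width
    rel_mass = beta_T**2                         # ~ one-sided relativistic downshift
    winner = "Doppler" if doppler > rel_mass else "relativistic mass"
    print(f"theta = {theta_deg:3d} deg: Doppler ~ {doppler:.3e}, "
          f"relativistic ~ {rel_mass:.3e}  -> {winner} dominates")
```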
But roughly this corresponds to your-- this corresponds to your intuition of brightness. Right? This is how bright the plasma is with this magnetic field and this velocity for this single particle. And then of course, all of these lines would be broadened by these terms. OK. Yeah? STUDENT: I'm a little confused in the picture with the asymmetric peak. JACK HARE: Yes. STUDENT: Right? The orange equation on the right is saying that slower particles will have a-- like less reduced. I'm just trying to think about the way that you drew the-- JACK HARE: The relativistic mass correction, or the-- STUDENT: Yeah, yeah. JACK HARE: Well, actually this part, this is due to fast particles. STUDENT: OK. Yeah. JACK HARE: They're so fast, they're so heavy, their gyrofrequency has gone down. STUDENT: Oh, OK. JACK HARE: Yeah. Good question. Yeah. Yeah, you're right. That's really counterintuitive because actually for the Doppler shift, the particles down here are the fast ones. That's true in both cases. Yeah. It's the fast ones in both cases. Yeah. These ones up here are the fastest, but. Any questions online? OK. So we use this in practice by having a very good idea of what the magnetic field profile is inside our plasma. And a good example of a plasma where we have a very good idea of magnetic field profile is a tokamak, or maybe something like a stellarator. Something that's very low beta. So I'll show you what we do in that situation. So this tends to be used in magnetic confinement fusion with beta. That's the thermal pressure over the magnetic pressure-- I don't make the rules-- is much less than 1. Nothing to do with the other theta. And this condition effectively means that no matter what your plasma is doing, your magnetic field is pretty much the same. So you know your magnetic field profile. And a good example of this would be inside something like a tokamak where we have an axis of symmetry here and I have some cross-section of my tokamak like this, and maybe the magnetic field in this tokamak is going to be dropping off as 1 upon r. That just comes from Ampere's law, the fact that you have current flowing up through the magnets in the middle. It can hardly be anything other than 1 upon r, and that's true. And so the interesting thing here is that different regions of the plasma have different magnetic fields. Maybe some of you can see where this is going. So we can call these, for example, r3, r1, and r2. Make sense? r1. Three different radii here. And what we're going to do is we're going to observe this plasma with some microwave antenna, some sort of horn. That's going to collect the electron cyclotron emission, which is in this gigahertz range here, and that electron cyclotron emission is going to stream out in every direction. But one of the directions it's going to stream out into is our horn, so hopefully we can catch it. And then what we'll do with the signal that comes into this is we will split it a load of different times. I'm just going to draw n times here into n separate channels. We'll have a little bandpass filter on each of these which selects out for different frequencies. This is bandpass. And then each of these will go on to some detector. And that detector will measure the power as a function of time contained within this frequency band here. And so this is effectively a time resolved spectrometer kind of system. OK. Why is this helpful? It's because, as you said before, B of r is well known. We think of the intensity that's detected on our detector over here. 
We're going to have some emission that's coming from r1. r1 has the highest magnetic field, therefore, it has the highest frequencies. So its first emission is going to be, for example, at this frequency here, which is omega 1, like that. This is the frequency corresponding to position r1 and it's the first harmonic of it. In the same way, r2 corresponds to the middle of the plasma. The magnetic field is lower, the frequency is lower. Omega 2 and r3 corresponds to the lowest magnetic field. There's no good reason for these to be the same in general, aside from [INAUDIBLE]. But zigs around a little bit, so it's clear that they don't have to be [INAUDIBLE]. But of course, as well as that, you're also going to get the harmonics. You're going to get a harmonic. This is why I should have drawn these much closer together. It would have made it much less clear at 2 omega 3 and at 2 omega 2 and at 2 omega 1. And then of course at the third harmonic as well, now you can see, oh, dear, these are beginning to overlap. This is getting a little bit tricky. So maybe I shouldn't have done it like this. But they will. Eventually they will overlap. With high enough harmonics, you're going to have different harmonics all interacting with each other. So these are the third harmonics up here. But if you focus just down in this region, for example, every frequency corresponds to a specific magnetic field, which corresponds to a specific radial location. So if you're seeing emission at this frequency, you know that an emission is coming from this bit of the plasma. So this is extremely powerful because you have, as well as the spectral resolution and time resolution, you now have spatial resolution. OK. Questions on that? Yeah? Yes, question on that? STUDENT: Hi there, professor. So the spatial-- you can get spatial resolution because of the decreasing r field, but is there not going to be any overlap with the earlier harmonics in those? Couldn't you be closer to 1, then they'll overlap and-- JACK HARE: Depending on where your tokamak sits out here that gives you the gradient in magnetic field-- and that gradient in magnetic field is going to determine the spread of frequencies that you have that you're observing from. If your frequencies are closer together because, for example, your tokamak is further out in the 1 over r field, it's a large aspect ratio of tokamak, if those frequencies are closer together, you'll simply have to have a better spectrometer. There, of course, comes a point where all of the broadening that we talked about before is going to screw you over a little bit, and so that's going to fundamentally limit the spatial resolution you have. But broadly, you can take this signal and maybe deconvolve it and say, I've got some spatial resolution here. Yeah. Really, your spatial resolution is also going to be limited by the number of these detectors that you can field. So, like, people will do a 12 channel system, and then they'll effectively be able to resolve 12 different spatial locations across the plasma. STUDENT: And the radiation field-- JACK HARE: You're breaking up quite badly. I don't know if it's your microphone or what, but I can't really hear you very well. STUDENT: Can you hear me now? JACK HARE: Slightly better. Yeah, go for it. STUDENT: The detectors, the emissions, they're not happening isotropically, are they? Are they happening all over the tokamak, or? JACK HARE: They're happening in every direction. 
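The frequency-to-radius mapping just described is simple to script. In this sketch the machine parameters, the harmonic number, and the channel centre frequencies are all hypothetical.

```python
import numpy as np
import scipy.constants as const

# Hypothetical machine: on-axis field B0 at major radius R0, 1/R toroidal field.
B0, R0 = 5.0, 1.85          # tesla, metres
m_harm = 2                  # which cyclotron harmonic the radiometer is tuned to

def radius_from_frequency(f_GHz, m=m_harm):
    """Invert f = m * e * B0 * R0 / (2*pi*m_e*R) for the emitting major radius R."""
    omega = 2 * np.pi * f_GHz * 1e9
    return m * const.e * B0 * R0 / (const.m_e * omega)

# Hypothetical bandpass-channel centre frequencies (GHz):
for f in (230.0, 250.0, 270.0, 290.0):
    print(f"channel at {f:5.1f} GHz  ->  R = {radius_from_frequency(f):.3f} m")
```

Each bandpass channel therefore behaves as a probe of one major radius, which is the spatial resolution being discussed here.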
They're not happening isotropically, and we haven't derived the pattern that they make. And it's in Hutchinson as well. And you haven't asked about the polarization yet, and they've got different polarizations too. So it's definitely not isotropic, but you should definitely imagine that these are going out in lots of different directions. And if you're one of these sorts of people who likes putting things on the high field side of your tokamak, then you could also measure this just as well from that side as well. STUDENT: So you can put the detectors anywhere in the tokamak? JACK HARE: You can put the detectors anywhere. We tend to-- yes. Yes. You can put the detectors anywhere. Obviously this line of sight works particularly well because you're looking along the gradient in magnetic field. If you look from above, it's a little bit more complicated. But there was a very good paper just published on TCV, the tokamak at UTFL, where they had vertical electron cyclotron emission. Apparently it helps with some issues as well. So just saying, you can put it anywhere. It would definitely make more sense to put it in the direction or against the direction of the magnetic field gradients because you have this advantage that all the different frequencies then very clearly correspond to different places in the plasma. STUDENT: [INAUDIBLE] plane? JACK HARE: Yes. I think so. OK. Do you have a question? STUDENT: Yeah. I was wondering for the higher harmonics, you had the intensity increasing. Would the intensity of the higher harmonics be larger? JACK HARE: So I haven't really talked too much about what the intensity of the harmonics is. There are formulas to work it out. I think in general they drop off, but we're about to talk about opacity effects on these lines. And so what actually happens is the lower harmonics are absorbed more and the higher harmonics aren't. And so at least, I think, Hutchinson has a figure in his textbook where the second harmonics are the strongest and the third are actually stronger than the first and things like that. So it's complicated because there's a lot of factors at play in terms of what the final intensity is. Yeah. So here I'm just drawing them as different heights to show that they don't have to all be the same. But I don't mean anything by the specific heights that I chose for anything. Yeah. Shawn? STUDENT: On tokamaks people tend to neglect the poloidal field effect on the total magnetic field. JACK HARE: Yeah. So the question is, do you neglect the poloidal field? You do neglect it. Of course, you don't really neglect it. You would take it into account. But in this simple sketch here, I don't need to put the poloidal magnetic field in. But you could imagine that if you did know what the poloidal field was because you've got all your wonderful magnetic diagnostics and you can do a magnetic field reconstruction, then you know what the magnetic field is at every point in your tokamak and then you know what the frequency is and so then you can back this back out again. So it is possible to have poloidal field and still do this technique. It's just the sketch I'm giving you here doesn't include it. STUDENT: Then is it more complicated to get the theta, because it's now the magnetic field is not just straight around the torus? It has some-- JACK HARE: Well, you're still going to be-- even if you've got a twisted magnetic field line, if you're looking from the outside in it's still going to be at 90 degrees. 
It's just that it's going to be 90 degrees like that, not 90 degrees like this, because your magnetic field might be tilted up or down. It's still 90 degrees. If you start looking at another angle, not at 90 degrees, then, of course, you have to think about the geometry a lot more carefully. STUDENT: OK. JACK HARE: So if you're looking from-- yeah. If you're looking tangentially, right? If you're doing tangential viewing of ECE, electron cyclotron, like trying to look along the field lines, then the poloidal field will make a bigger difference. Here it doesn't make a huge difference. STUDENT: Thank you. JACK HARE: Cool. Lots of questions. OK, we'll go with Grant. STUDENT: I'm just a little confused on the spatial resolution. So is the idea we know the magnetic field so we can back out where the emission came from? Is that? JACK HARE: Yes. STUDENT: But that also relies on us knowing the velocity of the electrons to know their rest mass-- or not the rest mass. Their relativistic mass, right? JACK HARE: What you're saying is these peaks could be sufficiently broadened by these mechanisms that they overlap. Right? And of course, effectively, they will always overlap because I've just gone three spatial locations, but there's an infinite number of spatial locations. And so you're absolutely right. That broadening sets a limit on your frequency resolution. You're effectively convolved your signal with that asymmetric peak. And because you've set a limit on the frequency resolution, it sets a limit on how well you can resolve different magnetic field regions. And because of that, it sets a limit on how well you can resolve different spatial regions. STUDENT: Sure. But even just for a single particle, so there wouldn't be the broadening effect. If I gave you a set of peaks and a magnetic field profile, would you be able to figure out where those peaks came from? Would you also need to know what energy electron was-- like the mass was different to figure out? JACK HARE: I think this broadening is very small And so this is not a huge change to your signal. And then to me it seems like a deconvolution problem, which, they're hard, but not impossible to deconvolve signals. STUDENT: OK. Maybe I'm overcomplicating then. JACK HARE: I think you're thinking about reasonable complications, but not ones that are the biggest problem with this diagnostic. We're about to go on to the fact that most of these lines are optically thick and so you don't see them anyway. So yeah. But OK. Other questions. Yeah? STUDENT: What's the wavelength and frequency of these? JACK HARE: What it would be a typical frame-- yeah, like is there any-- STUDENT: Picking up other-- JACK HARE: Like 100 gigahertz or so. STUDENT: OK. JACK HARE: It depends strongly on the magnetic field. So in something like [INAUDIBLE] c model spark, it will be much higher than it is on other devices. STUDENT: If you get some other kind of radiation that kind of falls in the same spectrum, then-- JACK HARE: We will actually talk about that, yes, when we talk about the accessibility condition. This frequency may be close to other natural frequencies in the plasma, and that may modify what we can observe. So good question. Yeah. OK. Any other questions online or in the room? OK, let's keep going. OK. So because you know the emissivity or you can know the emissivity-- hopefully you believe me that it is possible to calculate with lots of algebra, the emissivity. We can also calculate alpha. 
That means that you can ask yourself what is the optical depth of one of these waves propagating through this plasma. You can solve tau is equal to the integral of alpha dx. And what you will find if you do that is that for the lowest harmonics m equals 1, m equals 2, often but not always, that tau is much greater than 1. So for these lower frequency harmonics, you are optically thick. Now that's bad because you can no longer use detailed structure inside these in order to work out clever things about your plasma. But it's actually incredibly useful because what we find in practice is that the intensity at these different frequencies is now just going to be the black body intensity. Right? So the intensity I of nu is just equal to nu squared upon c squared times T. You'll notice I continuously switch between omega and nu, and I hope you realize that omega is just 2 pi nu and are not too confused by this. Hutchinson uses nu because most diagnostics work in Hertz, or at least we think about things in Hertz when we're digitizing them rather than omega. But to be honest, I think you should be comfortable enough to switch between these unit systems. OK. So the neat thing about this is that you will then have, for this region of the spectra, a signal that maybe looks like this. And again, this is frequency here and intensity here. And that means that at each frequency-- what colors did I use? Actually, I will go like that. This is just the black body curve. I've actually gone and done the full thing, not just the nu squared. Do we want to say that? No, don't say that. Ignore that. That's not the scope. Right. You'll get a spectrum that looks like this. And that means for every frequency, you can measure the intensity. And so therefore for every frequency, you know the black body temperature that corresponds to it. So we have omega. This frequency codes for a very specific magnetic field, which codes for a very specific position in the plasma. And this intensity divided by nu squared and multiplied by c squared codes for a very specific temperature. And so from this, we get out temperature as a function of radius. So we can convert this spectrum into a plot of temperature inside our tokamak. Again, in this toy example we are measuring at these three frequencies. But we can, for example, get out the peaked temperature profile. So this is extremely powerful. If we look at just the optically thick lines down at low frequencies, the intensity of those optically thick lines depends only on the plasma temperature and not on any other complicated physics. Because of that, because we have this very strong mapping from frequency to magnetic field to position, we can also take the intensity, map it to temperature, and get out the temperature as a function of position. So this is why ECE is so incredibly important. Because of course, as I said before, it's also time resolved. You get temperature as a function of r and time. And then in tokamaks, because of the existence of flux surfaces, once you measure the temperature along a line, you also know the temperature along all the flux surfaces. So from your magnetic reconstruction, you've effectively got the temperature everywhere inside your plasma. OK. This is a little bit hard to get your head around, so I'll pause there, let you think, and ask any questions. STUDENT: If this is such a great property, why does Thomson scattering exist? What's the issue here? JACK HARE: Thomson scattering can be used on many more plasmas than electron cyclotron emission.
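Before going on, the last step described above -- turning an optically thick intensity into a temperature -- is just the Rayleigh-Jeans inversion. A sketch following the lecture's single-polarization convention I = nu squared T over c squared, with T in energy units; the machine parameters and the "measured" radiances are hypothetical.

```python
import numpy as np
import scipy.constants as const

# Hypothetical machine and channels, same 1/R field model as before.
B0, R0, m_harm = 5.0, 1.85, 2

def radius_from_frequency(f_Hz, m=m_harm):
    return m * const.e * B0 * R0 / (const.m_e * 2 * np.pi * f_Hz)

def Te_keV_from_radiance(I_nu, f_Hz):
    """Rayleigh-Jeans inversion T = c^2 * I_nu / nu^2 (lecture's convention, no factor 2)."""
    T_joule = const.c**2 * I_nu / f_Hz**2
    return T_joule / const.e / 1e3

# Hypothetical measured radiances (W m^-2 Hz^-1 sr^-1) at three channels:
channels = [(250e9, 3.0e-10), (270e9, 5.2e-10), (290e9, 6.0e-10)]
for f, I_nu in channels:
    print(f"f = {f/1e9:5.0f} GHz -> R = {radius_from_frequency(f):.2f} m, "
          f"Te = {Te_keV_from_radiance(I_nu, f):.2f} keV")
```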
So even in the nonexistence of tokamaks, we would still have Thomson scattering. Why does Thomson scattering exist? Why is Thomson scattering used in tokamaks? Well, this requires plasmas of sufficient temperature to make high enough frequencies to be detected. So it will not work in the edge, and edge Thomson scattering is a big, important tokamak. This Thomson scattering is defined along a chord by your laser beam. And so you can get truly local measurements of temperature, whereas this diagnostic is kind of a little bit more global. We'll talk in a moment about fluctuations and the fact that this default diagnostic is not good at seeing small temperature fluctuations. Feel like that's enough. There are probably other good reasons. Oh, we'll talk about accessibility in a moment. So the fact that, in fact, sometimes it's very hard to do electron cyclotron emission because your waves as opposed to exiting the plasma and hitting your detector here get cut off and get reflected by density, regions above the critical density inside the plasma. So we'll talk about accessibility. Yeah. I think those are good reasons. Yeah? STUDENT: Do we do anything with the higher harmonics, or? JACK HARE: In this course, I'm no longer talking about the higher harmonics because it gets quite complicated. If you go to Hutchinson you can apparently find out things about high energy electrons in the tail of your distribution from the high harmonics. And in general, your emissivity contains lots of information about the distribution function unless you end up in this optically thick regime. And so if you look at the shape of the high harmonic lines, you might be able to say something clever about the electron distribution function. So if you want to study runaway electrons or something like that, you can do that. But I had to cut something out of course and that's what I cut out here. But yeah, go to Hutchinson's book if you want to know more. Yeah. Other questions? Anything online? Gone so quiet I can hear the cryopump compressor. OK. Good. Well, actually, yeah, the only thing I will say here is that this signal will not, of course, be nice and smooth. It'll be very noisy. And the point is that in general noise limits your measurements of delta te over te. These are the temperature fluctuations. And the reason why you might be very interested in measuring temperature fluctuations is because, as we discussed I think a little bit ago with reflectometry, if you have fluctuations in density and temperature driven by turbulence, that will cause anomalous transport of particles and heat out of your plasma and limits its efficiency as a fusion reactor. So a ongoing problem is how to measure these density fluctuations, temperature fluctuations so that we can understand turbulence better. And so one thing we'll now talk about is correlation electron cyclotron emission, and that is a technique which is specifically designed to measure very small temperature fluctuations inside. Is that what I'm going to talk or am I going to talk about accessibility? I'm going to talk about accessibility. And all that run up. STUDENT: What is the noise in that [INAUDIBLE]? I'm used to thinking of thermal noise [INAUDIBLE] like you're looking at a big [INAUDIBLE]. JACK HARE: Obviously there's fluctuations in the temperature, which will produce a different intensity, but there are also thermal fluctuations in the number of photons being produced, this sort of shot noise type thing. 
There's also noise from the fact that your detectors are in a noisy environment and they're not perfect and things like that. So yeah, we'll talk a little bit more about the sources of noise when we talk about correlation electron cyclotron emission, which, again, I thought I was going to do now, but it turns out I'm doing accessibility. So we'll get back to that, but yeah, think about just in general, whenever we make a measurement, we're going to have some noise on it. In this case, it just is very difficult to make-- these detectors are very difficult to work with, and at 100 gigahertz or so you tend to have a lot of noise. Yeah. Any other questions? OK. OK. So the question of accessibility can be phrased as a question of can the electron cyclotron emission reach the detector? If it can't, we can't detect it, so all of this beautiful technique is useless. Now in general, when we're doing something with an MCF device, we're going to be measuring perpendicular to the magnetic field. That's just a simple geometric constraint that our device, whether it's a stellarator or a tokamak, looks like this, and therefore, you'll tend to have magnets that look like this. And so the only gaps are to look in this direction here. You, of course, can design other geometries. I'm just using this as an example here. Because the magnetic field is predominantly the toroidal magnetic field, our angle theta is equal to pi upon 2. So what modes are propagating-- what modes is the electromagnetic radiation, the ECE radiation propagating as inside our plasma? We discussed four modes. The relevant ones here are the X mode and the O mode, and we'll start with the O mode. The O mode is easy. So once again, we think about our plasma as a circular cross-section and a magnetic field, which falls off as roughly 1 upon r. And so what I'm going to plot is a drawing with the r-axis and the frequency of different modes within the plasma here. And I'll plot this r axis. I'll have zero. This is going to be r over r0. r0 is the coordinate of the middle of our circular cross-section. And so the center of the plasma is at one here. That's zero. Call that 2. The plasma is going to have some boundaries here and here. A very tight aspect ratio we're talking about. OK. And so for the O mode, we know that the dispersion relationship is just n squared equals 1 minus omega p squared upon omega squared, like that. If we think about a typical tokamak, we know that the density is going to be peaked. So we're going to assume that we have a tokamak where we've achieved good particle confinement in the core. We know the density has to go to zero at the edge anyway, otherwise that wouldn't be the end of the plasma. And so we're going to assume that we've got some density profile that's a little bit like this. And that means that at the edges of the plasma, the plasma frequency, omega p, is zero. Remember the plasma frequency squared goes as n e times a few other constants. And that means in the middle it's going to peak. It's going to be the highest possible frequency. That plasma frequency is important because waves below the plasma frequency are evanescent. So if you have a wave that's propagating in this region of frequency position space, it's going to be evanescent. It won't propagate. It won't reach us. OK. What about the cyclotron harmonics? Well, they go as the magnetic field, so they also go as 1 upon r. So this is maybe the first harmonic, the second harmonic, the third harmonic. Let me just label these a bit better. Omega, 2 omega, 3 omega, like that.
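A sketch of the O-mode accessibility bookkeeping that this picture sets up: for each emission radius, check whether a wave at m times the local cyclotron frequency meets a region with omega less than omega p on its way out to the low-field side. The profile shapes and numbers are hypothetical, and the density is deliberately chosen high enough that the blocking discussed next actually appears.

```python
import numpy as np
import scipy.constants as const

# Hypothetical tokamak: R0 = 1.85 m, minor radius a = 0.57 m, B0 = 5 T on axis,
# and a parabolic density profile peaking at n0 in the core.
R0, a, B0 = 1.85, 0.57, 5.0
n0 = 3e20                                        # peak electron density, m^-3

R = np.linspace(R0 - a, R0 + a, 400)
ne = n0 * np.maximum(1 - ((R - R0) / a) ** 2, 0.0)

omega_pe = np.sqrt(ne * const.e**2 / (const.epsilon_0 * const.m_e))
omega_ce = const.e * B0 * R0 / (const.m_e * R)   # 1/R toroidal field

# O-mode: emission at omega = m*omega_ce(R_emit) reaches the low-field side only
# if it never meets a region with omega < omega_pe on the way out, where it
# would be evanescent and reflect.
for m in (1, 2):
    blocked = np.zeros_like(R, dtype=bool)
    for i, R_em in enumerate(R):
        omega_em = m * omega_ce[i]
        outward = R >= R_em
        blocked[i] = np.any(omega_em < omega_pe[outward])
    print(f"harmonic m = {m}: {blocked.mean():.0%} of emitting radii are blocked (O-mode cutoff)")
```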
So in the picture I've drawn right now, if I have a cyclotron frequency wave born at this point near the far edge of the plasma, and I follow the trajectory of that wave outwards, I don't pass through the evanescent region. This wave is going to make it to my detector. Of course, it may also be absorbed as we talked about and become black body, but the general idea is that this wave could propagate out without absorption. And certainly the second and third harmonics could all propagate out. But what if my plasma was a little bit denser than this? What if I increase the density in the core a little bit more? Then I could end up in a scenario where there is now an evanescent region. And in fact, almost all of this wave will get reflected back to the high field side-- the r towards 0 side-- and only this region will propagate. So this stuff will be reflected, and only this part will propagate. And the stuff that's born inside here can't even be born. There's no mode for that electromagnetic radiation to be emitted into. OK. So this can still propagate. So that means that for a sufficiently high density, if I draw my tokamak cross-section again, there is a region where the fundamental, capital omega, is less than omega p-- that's this region here-- where I will not get electron cyclotron emission from. So this effectively says that at high density there's no ECE from that region. And you could imagine the density could even get so high it blocks the second harmonic or something like that. So this is a challenge if you're designing a tokamak and you want to have some magnetic field and some density. And obviously, those are not free parameters because we have density limits and things like that. If you're designing some tokamak, you might end up in a situation where you don't have accessibility to this first harmonic and so you can't use it as a diagnostic like you wanted to. STUDENT: Just to ask again, what happens when it's emitted under the plasma frequency? JACK HARE: I don't think it can be emitted. There's no mode for it to go into. STUDENT: [INAUDIBLE] so you still have orbiting electrons. For example, the far field would get destroyed by the plasma effects or [INAUDIBLE] get reflected back [INAUDIBLE]. JACK HARE: It's a good question. For those online, the question is, what actually happens to the electric fields generated by gyrating particles in a single particle picture in this region? What happens if you try and launch a wave evanescently? Well, you were telling me the other day that if you're launching your waves from too far away from the plasma, they evanescently decay until they couple into the plasma, where there is a mode that they can fulfill. So what happens to the energy that is lost? Some of the energy is coupled into the plasma. What happens to the rest of the energy from when you do low field side launch? Is it reflected back into the antenna? STUDENT: Well, I think it's lost in the scrape off layer to some extent, but reflections also. JACK HARE: Well, so you're saying it's collisionless, but maybe this is one of those places where a small collisionality comes to the rescue and saves us. Right? So there's lots of cases where you're like, if it's collisionless, physics breaks. That probably means that something happens such that the collisionality becomes high enough to save the day again. What am I thinking of specifically? Back to the sheaths, actually, where you have-- you say my sheath is collisionless and then you violate all sorts of fundamental laws of physics.
And then you're like, well, perhaps it's slightly collisional, but not enough that I care about it, but just enough to save the day. So I don't know. That's a great question. I'll have a think about it. I'm probably not going to find the solution to it, but if anyone else does, feel free to pass along the answer or something like that. STUDENT: But radiation does need to be admitted or emitted at least, right? Because it's still a moving charge? JACK HARE: Right. So the question is like, does the particle lose energy? Does it just decide not to emit because it's moving? Yeah. I don't know. I suspect there are also some limitations to the WKB picture that we're using here and things like that. So I don't know the answer yet. STUDENT: I have an even more evil question. JACK HARE: OK, go for it. I see. STUDENT: [INAUDIBLE] towards the process that generates the harmonic components? JACK HARE: Oh, yeah. That would be good, wouldn't it? Yeah. No, so I guess I don't know the answer to that one either, but that's a great question. Yeah. Cool. Nikola? STUDENT: So we can look at it from the way we looked last class and when you entered the evanescent [INAUDIBLE]? Anyway, the point is you never get oscillations. You still get an energy release via the wave, but it's not a wave. It's just an exponential decay. JACK HARE: I don't think you do get energy released by the wave. The point of the evanescent wave is that that energy that you've lost gets reflected back. It's not absorbed inside the plasma. That's for sure. And that's true whether it's a plasma or a block of wax with some microwaves. Right? So that's just the property of whether the waves can propagate or not. But I don't think the energy gets dissipated in the medium. STUDENT: And that's what's spooky about when you have-- JACK HARE: Yes. That's what's spooky about having a gap that the wave just sort of goes through and then appears out the other side. Yeah. I mean, so in this case here, you could imagine if you had a small enough evanescent region-- oh, that's interesting. OK. If you have a small enough evanescent region, you could have some radiation that couples to the other side where there's a mode you can propagate into. Similarly, if you have an evanescent length scale which is long enough, then the plasma inside this region could couple to some modes outside, right? And evanescently produce radiation. I will say this is much more complicated than what Hutchinson has in his notes. I am not an ECE person. These are great questions. I'm going to keep going before someone asks another question. I've almost run out of tea as well. I'm so stressed. Good. Right. Let's do the X mode. So the X mode has n squared equals 1 minus-- you really shouldn't write this down. We've talked about modes before. I just want to write it on the board so that we are staring at it a little bit. OK. And just to remind you with the O mode example, this surface is where n equals 0. And so this is the cut off. So the question-- and it's pretty obvious that the cutoff, in this case, is when omega equals omega p. What's much less obvious for the X mode is where the cutoffs are. Right? So this is a complicated formula. We can go and rearrange it and find out what we get out here. And what we find out is that the wave is cut off for a wave frequency omega less than omega L-- I'll relate that to the left-hand cutoff-- and also for a wave frequency that is between what's called the upper hybrid and the right-hand cutoff as well. And I haven't defined any of those.
I will do that in a moment. But just to say there are now actually two regions in which we have cutoffs. Previously we just had this region with a cutoff for omega less than omega p. OK. So you can go to Chen and you can go look up all of these and you find that the upper hybrid is the combination of, well, upper hybrid squared is equal to the electron cyclotron frequency squared plus the plasma frequency squared. That's why it's called a hybrid frequency. It's like a sum of two other important plasma frequencies. And then these left and right cutoffs, which I'll write in one go, with left over right meaning there's going to be minus and plus signs-- for the left one you take the top sign, and for the right one you take the bottom sign. This is equal to one half times, minus-or-plus the cyclotron frequency, plus the square root of the cyclotron frequency squared plus 4 omega p squared. So looking at that you think, oh, someone had to solve a quadratic. Indeed, that's what happened. OK. Doesn't necessarily mean very much to you at the moment apart from these-- I'm just going to move this 2 down a little bit so it's closer to the symbol it's actually meant to be referring to. It's clear that we have a complicated set of different cut offs here. So what I'm going to do is draw again this diagram of r upon r0 against frequency of the mode inside the plasma-- 1, 0, 2. And we'll do the edges of the plasma here and here. Now it turns out that the lower cutoff here is going to look very similar to this O mode cutoff. And the reason is that if we go to a region where ne at the edge is equal to zero, then omega p is equal to zero and we have minus the cyclotron frequency plus the cyclotron frequency. Oh, that's zero. OK. And then we can also sketch this a little bit and go, OK, it's going to peak in the center. So this is the lower evanescent region down here. So far so good. And again, we can draw on the same 1 over r falling modes for the cyclotron emission. Now, what's very interesting is when we go and ask ourselves, well, where is this second region here? This is the region bounded by the upper hybrid and omega r. Well, let's do the same thing. Let's ask ourselves at the edge of the plasma where the density is zero and the plasma frequency is zero, the frequency of the upper hybrid is just going to be the frequency of the first harmonic here. And actually, conveniently, the frequency of the right-hand cutoff is also going to be the frequency of the first harmonic there. We can play the same game at the other side here and here. And if you stare at these for a little while, you'll realize that the upper harmonic is always slightly higher than-- sorry. The upper hybrid is always slightly higher than the first harmonic. And the right-hand cutoff is even slightly higher than that. So this is the right hand frequency and this is the upper hybrid. Of course, let me write it like that. It's all based on some excellent notes that Hutchinson made for 22.611, which is still floating around amongst the grad students if you can get a hold of them. These are very nice diagrams. They are in some form in the textbook, but the ones in his notes are better. What does this mean for accessibility? STUDENT: You need a minimum of the second harmonic. JACK HARE: Yeah. If we're going to observe from the high field side-- sorry, the low field side out at large R, which is where we tend to put our detectors, there's no way this wave is going to get through this evanescent region, evanescent coupling notwithstanding. Right?
Because if we emit here and it's got to travel in this direction up, it hits the region. If we emit here, it's going to travel up. It hits the region. So we have got a complete cutoff in accessibility here. And if I'd left myself slightly more space, I would draw it. I'll draw it directly underneath this one. So now, I, again, have my tokamak cross-section like this, and I effectively have an evanescent region that looks a little bit like a banana right in the middle here. This is the upper hybrid. This is the right-hand cutoff. So if I have any emission anywhere at the fundamental capital omega, that's going to reflect back. Say it. STUDENT: Say what? JACK HARE: I don't know. I thought you were going to tell me I should put my detector on the high field side. STUDENT: I was going to, but now I'm not. Is this helpful in the sense of because you now have no first harmonic electron emission you can very confidently do your black body thing? JACK HARE: Well, no, you can't because that's the emission which becomes black body. STUDENT: OK. JACK HARE: If that's blocked out you can't see it at all. STUDENT: OK. JACK HARE: Basically what it's saying is, there is a black body spectrum at some frequency that the plasma should be emitting here, but that frequency gets blocked. It cannot get through the evanescent region. So we can't see any of the radiation from this side on the far side. We can only see radiation from here. STUDENT: So yeah. High field side sounds like a great idea. JACK HARE: High field side is a good idea for this. Yeah. Exactly. Now, I didn't go into this at all, and it's slightly obscure exactly where it crops up in Hutchinson. But it turns out we had this conversation before about whether you have O mode or X mode. And of course, it's not at all clear to us which of these two modes the gyrating particle couples into, right? There's no reason-- we haven't done nearly enough mathematics to know that. It turns out that predominantly ECE is in the X mode by, like, a factor of 10. STUDENT: That's extremely unfortunate. JACK HARE: Extremely unfortunate for us. And yet, we soldier on. So yeah. If you have high enough density, this region, of course, could go like that. Block out the second harmonic as well. Completely possible, so. It depends an awful lot on the density of your tokamak or whatever system you have. All right. I want to do the correlation ECE, but I don't think we're going to-- I really don't want to start and not finish. Questions? Yeah? STUDENT: Do we have a general rule or intuition for the second harmonic intensity in terms of if we can completely block the fundamental frequency, if we go up to try to detect the second harmonic frequency, would that be possible practically, or is it just too low? JACK HARE: No, no, no. It's definitely possible. If you look in Hutchinson's book at his example spectra, you will see a spectrum with second harmonics in, and they're very clear. The trouble is they may be optically thick, which may be good. So that may be fine. You'd be happy if they're optically thick because you will see some emission from them. So it's just complicated, is the short answer. So they should be detectable, but they may have been attenuated by the optical thickness. They may have been blocked off because our density is too high. And so there's an evanescent region. So yeah. There's lots of things. You need to do the calculations for your plasma to work out whether you're going to see them or not. Yeah.
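Those calculations are easy to script for a given machine. Here is a sketch that evaluates the X-mode characteristic frequencies across a hypothetical profile and checks whether fundamental emission from the core can cross the outboard evanescent band between the upper hybrid and the right-hand cutoff; all numbers are illustrative assumptions.

```python
import numpy as np
import scipy.constants as const

# Hypothetical profiles: 1/R toroidal field, parabolic density.
R0, a, B0, n0 = 1.85, 0.57, 5.0, 1.5e20

R = np.linspace(R0 - a, R0 + a, 400)
ne = n0 * np.maximum(1 - ((R - R0) / a) ** 2, 0.0)

w_pe = np.sqrt(ne * const.e**2 / (const.epsilon_0 * const.m_e))
W_ce = const.e * B0 * R0 / (const.m_e * R)

w_uh = np.sqrt(W_ce**2 + w_pe**2)                                  # upper hybrid
w_R = 0.5 * ( W_ce + np.sqrt(W_ce**2 + 4 * w_pe**2))               # right-hand cutoff
w_L = 0.5 * (-W_ce + np.sqrt(W_ce**2 + 4 * w_pe**2))               # left-hand cutoff

# X-mode is evanescent for w < w_L and for w_uh < w < w_R.  Can the fundamental
# emitted at the magnetic axis cross the outboard side?
i0 = np.argmin(np.abs(R - R0))
w_emit = W_ce[i0]
outboard = R >= R0
blocked = (np.any((w_emit > w_uh[outboard]) & (w_emit < w_R[outboard]))
           or np.any(w_emit < w_L[outboard]))
print(f"core fundamental at {w_emit/2/np.pi/1e9:.0f} GHz blocked on the outboard path: {blocked}")
print(f"at the edge: w_uh -> {w_uh[-1]/W_ce[-1]:.3f} * Omega_ce,  w_L -> {w_L[-1]/W_ce[-1]:.3f} * Omega_ce")
```

The last line also confirms the statement above that, wherever the density goes to zero, the upper hybrid and right-hand cutoff both collapse onto the first harmonic while the left-hand cutoff goes to zero.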
Any questions online? OK. I think we will leave it-- oh, yeah. Go on. STUDENT: Why is it that the upper hybrid [INAUDIBLE] and-- JACK HARE: The left and right resonances. Yeah. STUDENT: Yeah. Why do they have to meet the cyclotron frequency? JACK HARE: Yeah, absolutely. No, it's a great question. It's not completely obvious. The upper hybrid one, imagine we're at the edge of the plasma here. So this is the plasma edge, and at this point, ne at the edge is equal to zero, which means that our plasma frequency at the edge is equal to zero. STUDENT: OK. Yeah. JACK HARE: And so then if you stare at this for long enough, you get-- you get the idea? Cool. But yeah. When I first saw this picture, I'm like, wait, that's too much of a coincidence. Is it always like that? And the answer is, yes, it is always like that. There's no combination of density and magnetic field you can come up with where that's not true, which is kind of cool. OK. I think we're good for today then. If anyone does have any questions, they can come and ask them. See you on Tuesday.
Lecture 23: Thomson Scattering (Advanced)
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: Welcome, everyone, to the final lecture. We will be discussing some of the ramifications of the collective Thomson scattering spectrum, which we derived last week. So just to remind you, we derived first of all that the scattered power into a given solid angle and into a given frequency seen by our spectrometer was equal to the classical electron radius squared, the intensity of our laser-- its power over the area. There was some shape function like this that told us where we should look to see the scattered light. It depends on the polarization of the electric field. It depends on the number of electrons we're scattering off and how many electrons are within our scattering volume, and then it depends on this interesting object, the spectral density function, which is what contains all of the actual frequency resolved information here. And this spectral density function, we found, depended on the time and the volume over which we were integrating. And it was an average over the absolute value of the Fourier transform of the electron density, squared-- this normalized by the electron density. So after we derived that, we said, OK. Now we need to find out what these electron density fluctuations look like. And we did that using a test particle formalism, where we let some electrons or ions flow through our plasma. And we looked at the response of the plasma to these electrons and ions, with the electrons being repelled from other electrons and ions being attracted and all sorts of things like that. And from that, we were able to write down a formula for the spectral density function for some arbitrary distribution of electrons and ions. And this was 2 pi upon k n e. And then there was a factor that looked like 1 minus chi e upon epsilon, squared, times the electron distribution function in the k direction, evaluated at the phase velocity of the wave, omega upon k, and then a component that looked like chi e upon epsilon, squared, times Z, the charge on the ions, and the ion distribution function in the k direction, evaluated at omega upon k, like that. And we said that this was the number of electrons at omega upon k, moving with that velocity, and that this was the response of the plasma to this number of electrons. And the same here-- we have the number of ions and the plasma response here. And so these we call, respectively, the electron component and the ion component. And as I've stressed several times before, we are only ever doing scattering from electrons. This turns out to be Thomson scattering from electrons looking at other electrons being repelled by them. And this is Thomson scattering from electrons which were following the ions around. So we're scattering off electrons in both cases, but some of the electrons tell us about other electrons, and some of the electrons tell us about ions. And we said that if we now specify that our distribution function is a Maxwellian, then we have, for the j species, the distribution function in the k direction as a function of velocity. That's going to be the number density of the j species, times factors to do with normalization-- 1 over square root pi, 1 over the thermal velocity of the j species-- times the exponential of minus v squared upon the thermal velocity of the j species squared, where this thermal velocity is equal to the square root of 2 Tj upon mj. So if we assume that this is Maxwellian for the electrons and the ions, which may have different temperatures. So we're still allowing Te to not equal Ti here.
We found that these susceptibilities, chi, were, for the electrons, equal to alpha squared times this mysterious w function of squiggle_e -- I'm going to define all of these in a moment -- and, for the ions, equal to alpha squared times Z Te upon Ti times w of squiggle_i. Here alpha is our collective scattering parameter, and it's defined as 1 upon k lambda_Debye. And if you're curious about which Debye length it is, it's the Debye length of the electrons. And the argument of this function, squiggle_j for the j species, is omega divided by k times v_Tj -- so it's the ratio of the phase velocity of our wave or mode to the thermal velocity of our distribution function. And this w of squiggle has a form that looks like 1 minus 2 squiggle e to the minus squiggle squared, times the integral from 0 to squiggle of e to the x squared dx -- that's the real part -- and then there's an imaginary part, i square root of pi times squiggle times e to the minus squiggle squared. And we often write this as equal to the real part of w of squiggle plus i times the imaginary part of w of squiggle, and we'll be breaking it down into real and imaginary parts several times in what's to come. OK. The physical meaning of this squiggle parameter is to do with what part of the distribution function you're interacting with. So if I draw a little plot here of velocity versus the log of the distribution function, and I say for a Maxwellian this is just going to look like minus v squared, so it'll be something like that -- this is the distribution function for the ions. Then if I'm looking at how modes with low phase velocity compared to the thermal velocity interact with this distribution function, that's maybe down here. So this is squiggle_i much less than 1. Or I could look at high frequency modes way out here, where there are very few particles with a velocity that matches the phase velocity -- this would be squiggle much greater than 1. And of course, I could be somewhere in the middle, squiggle about 1, where we're sampling in the middle of this distribution function. And it's worth noting that for roughly equal temperatures, the electron distribution function f_e is going to be much broader. And so that means that, in general, squiggle for the electrons is going to be less than squiggle for the ions when the temperatures are equal. So if you're asking how the ions interact with a mode of a certain frequency, then the electrons will be interacting as if that mode had an even lower frequency. This line here, for example, which was high frequency for the ions, is going to be low frequency for the electrons, and you'd have to go all the way out here to find a mode where squiggle_e is much, much greater than 1. So in general, the fact that the ions and electrons have very different masses will divide the spectra that we see into modes which interact with the ions and modes which interact with the electrons. And those are the ion-acoustic waves and the electron plasma waves that we heuristically discussed already. So that is my summary. What we're going to do next is take this rather complicated looking formula and take some limits of it in order to recover the modes that we know are hidden within all of this hideous mathematics. But before we keep going, does anyone have any questions? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: No. Well, yeah. You can solve it for a bit. I'm pretty certain it's positive. No. It's chi_e. So if you remember, again, we're just doing the electrons.
The term here, the 1, is just the test particle itself -- each electron in the distribution function. This term here is the other electrons repelled from the test particle. Over here, there would be a 1 if we had Thomson scattering off the ions, but we don't, so we don't see any scattered light from the ions themselves -- there's no 1 here. All we have is electrons attracted to the ions. One formula that maybe you remember, but I will write down just to be very clear, is that this epsilon is equal to 1 plus chi_e plus chi_i. So that's the permittivity, and it's related to the two susceptibilities like that. OK. Another question? AUDIENCE: Wouldn't it be 1 over [INAUDIBLE] JACK HARE: Yeah. The volumes cancel very nicely here. I think the time comes from the integration of the incident power. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. Once we rewrite it in terms of this, they've disappeared. I'd have to check my notes. I think it might be related to one of the conversions between f and density, yeah. No, this is like an average over an ensemble of particles. AUDIENCE: I don't think it's a time average. JACK HARE: Yeah. I can't remember off the top of my head. I can certainly check it later. OK. Any other questions? All right. So now we are going to approximate this w. Just to rewrite it, we have w of x -- I'm going to use x instead of squiggle because it's easier to write -- equal to the real part of w plus i times the imaginary part of w. And for x much less than 1, it turns out that the real part of w has the following expansion as an approximate solution: 1 minus 2 x squared times 1 minus 2 x squared upon 3, plus a load of other terms that you could keep going to, but we won't in this class. And then for x much, much greater than 1, we have the real part of w being approximately minus 1 upon 2 x squared, times 1 plus 3 upon 2 x squared, plus some other terms. And the way you get this is by carefully thinking about what's going on with the integral in the real part -- it's not completely trivial. And just to write down again, the imaginary part for all x is just equal to the square root of pi times x times e to the minus x squared. In general, we're very often going to take this imaginary part to 0 -- it goes to 0 for small x and for large x, which are the two regimes we're going to be working in. There is one place later on where I'm going to talk about its actual value. But we tend to work in these two limits because that's what makes these results analytically tractable. Of course, you can always just go and calculate the full S of k, omega using the full tabulated function that you can find in Python and Julia and MATLAB and all sorts of things like that. So for the first limit, we're going to pick the limit where alpha squared goes to 0. Yeah, I wish I had more boards. You can see the principal thing this is going to do: these susceptibilities both scale as alpha squared, so both of them are going to go to 0 -- chi_e and chi_i go to 0. Remember, the limit where alpha squared goes to 0 is the same as saying that the wavelength of the fluctuations we are scattering off is less than the electron Debye length. So this is scattering within the Debye cloud, and we expect that this scattering will therefore give us our incoherent spectrum, which we derived a long time ago under much simpler circumstances. But I did promise that this new formulation would recover that.
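Before taking that limit, here is what "go and calculate the full tabulated function" can look like in practice -- a minimal sketch in Python, using scipy's Dawson integral to evaluate w exactly and comparing it against the two expansions above (the function names here are my own, not anything standard):

import numpy as np
from scipy.special import dawsn   # dawsn(x) = exp(-x^2) * integral_0^x exp(t^2) dt

def w_exact(xi):
    # Re w = 1 - 2 xi exp(-xi^2) int_0^xi exp(x^2) dx ;  Im w = sqrt(pi) xi exp(-xi^2)
    return (1.0 - 2.0 * xi * dawsn(xi)) + 1j * np.sqrt(np.pi) * xi * np.exp(-xi**2)

def re_w_small(xi):   # expansion for xi << 1, first two terms
    return 1.0 - 2.0 * xi**2 * (1.0 - 2.0 * xi**2 / 3.0)

def re_w_large(xi):   # expansion for xi >> 1, first two terms
    return -1.0 / (2.0 * xi**2) * (1.0 + 3.0 / (2.0 * xi**2))

print(w_exact(0.1).real, re_w_small(0.1))   # agree to several digits
print(w_exact(5.0).real, re_w_large(5.0))   # agree to within about a percent

Once you have w, building chi_e, chi_i, epsilon and the full S of k, omega is just a few more lines, which is why fitting the full spectral density function is the usual approach in practice.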
So as these two go to 0, our permittivity, which is just 1 plus chi_e plus chi_i, just goes to 1 -- quite boring. Because it goes to 1 for all frequencies, we don't have any waves here: the permittivity is never 0, so there are no waves. And indeed, what we see is that S of k, omega, which is proportional to the magnitude of 1 minus chi_e upon epsilon, squared, times f_e of omega upon k, plus Z times the magnitude of chi_e upon epsilon, squared, times f_i of omega upon k -- when we send these chi_e's to 0, we just get that this is proportional to the electron distribution function, exactly as we found for the incoherent case. So in this case, we've dropped the response of the plasma. We've lost all information about the plasma response. We've just kept the scattering off each of the individual electrons. So: no plasma response, just individual electrons. Does that make sense? Any questions? I just want to clarify, when I say let's take alpha to 0 -- this is the thing that you have complete control over. Alpha is defined as 1 over k lambda_Debye, and this k is the size of the scattering wave vector, which is equal to 2 k_i sine of theta upon 2. You've chosen the wavelength of your probe. If you go to a longer wavelength -- actually, let me get this the right way around -- a longer probe wavelength means a smaller k_i, a smaller k, and therefore a larger alpha; so shorter-wavelength lasers tend to give lower alpha. Or we can do the same thing with theta: a larger theta gives a larger k and a smaller alpha. Taking this limit here is not some mathematical abstraction -- it's a choice that you've made as an experimentalist. Now, of course, you might be constrained in what your laser wavelength is, you might be constrained in where you can place the detector, and then you'll go, OK, well, it turns out my alpha is very low, so I guess my Thomson scattering spectrum is incoherent. But perhaps there's a chance to place your detector somewhere else or use a different wavelength, and then you could start looking at the collective scattering spectrum, which we'll discuss now. OK. Any questions on that before we keep going? Yes. AUDIENCE: Is measuring the electron distribution function in a plasma useful? JACK HARE: Very. Is measuring the electron distribution function in a plasma useful? Yes, extremely useful. Because this could include information about fast particles, like runaway electrons, particles which have been heated by ICRF, non-thermal particles driving chemical reactions in a low temperature plasma -- this is incredibly useful, yeah. AUDIENCE: [INAUDIBLE] JACK HARE: It depends what you're trying to do. In general, it's very hard to get both in the same experiment, but if you can, you can do some really cool things. Yeah. OK. Other questions? AUDIENCE: [INAUDIBLE] JACK HARE: Oh. What I mean -- sorry. Yeah, there's no mode in the plasma. What we found when we looked at S of k, omega before is we said, OK, if we do get epsilon going to 0, then there will be singularities in S of k, omega. There'll be resonances. And so our spectrum will contain resonances which are related to modes in the plasma, waves in the plasma. But I'm talking about plasma waves here, not electromagnetic waves -- I'm just being a little bit brief. You're quite right. AUDIENCE: [INAUDIBLE] JACK HARE: Yes. You just scatter off the individual particles. The wavelength of the scattering is smaller than the Debye length, and so we only see individual particles. We don't see the collective plasma effects due to Debye shielding.
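Just to put numbers on that choice, here is a small sketch of the alpha you end up with for a given probe and geometry -- every plasma and laser parameter below is made up purely for illustration:

import numpy as np

eps0, q_e = 8.854e-12, 1.602e-19
lam_i = 532e-9               # probe wavelength [m] (hypothetical)
theta = np.deg2rad(90.0)     # scattering angle (hypothetical)
n_e   = 1e18 * 1e6           # electron density [m^-3] (hypothetical)
T_e   = 100.0                # electron temperature [eV] (hypothetical)

k_i       = 2 * np.pi / lam_i
k         = 2 * k_i * np.sin(theta / 2)                  # scattering wave vector
lambda_De = np.sqrt(eps0 * T_e * q_e / (n_e * q_e**2))   # electron Debye length [m]
alpha     = 1 / (k * lambda_De)
print(alpha)    # roughly 0.8 for these numbers: marginally collective

Make the probe wavelength longer or the scattering angle smaller and alpha goes up; do the opposite and you push yourself into the incoherent regime, where the Debye shielding -- the collective response -- drops out of the measurement.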
And it's those collective plasma effects which give rise to the modes and the waves within the plasma. So that's a reasonable way of thinking about it, if a quick one. Yeah. Yes. What we're going to see very soon is that, yes, of course, you do get the electron distribution function back. But it's multiplied by this thing, which is a very complicated function of frequency. And so it's like your electron distribution function put through the mangle. And so you may not be able to work out what your original electron distribution function was from that -- again, because, as we're about to show, these resonances are present when epsilon goes to 0. And I should be clear: all of these here are chi_e of omega upon k and epsilon of omega upon k. So what this means is that for a specific omega upon k which makes epsilon go to 0, this function will really blow up. So maybe before, you had scattering just off the electron distribution function, and it looked like this. But now, as you raise alpha, the presence of these waves becomes more apparent. And these waves sit at a near-singularity, so the scattering there is absolutely massive. So now your spectrum looks like this, and you can't really see the electron distribution function at all -- it's just some busy background noise -- and all you're actually seeing is the scattering off the electron plasma waves and ion-acoustic waves. Any other questions? But it sounds like we should go on to the next bit, because we're getting ahead of ourselves. OK, cool. So now let's talk about arbitrary alpha squared -- alpha not necessarily going to 0. And I'm going to cover two cases here. The first case is the high frequency waves. So for the high frequency waves, we're going to say that squiggle_i is much, much greater than 1. These are waves with frequencies way out here -- they have phase velocities much higher than the ion thermal velocity, so there are very few ions around there. And this effectively means that w of squiggle_i goes to 0, which is the same as saying that there's no ion response. Just to be clear what we've done: we've taken the limit here as x gets very, very large. The leading term, minus 1 upon 2 x squared, goes to 0, and there's no 1 sitting out front of this expansion, so we just approximate the real part as 0. And the imaginary part will also go to 0, because for very large x the exponential of minus x squared kills it off as well. So we don't have to worry about what the ions are doing anymore -- they're irrelevant. What you do find with the electrons is that S of k, omega is equal to 2 square root pi upon k v_Te, times a term which is quite interesting: the exponential of minus squiggle_e squared, over something a bit more complicated -- 1 plus alpha squared times the real part of w evaluated at squiggle_e, all of that squared, plus alpha squared times the imaginary part of w at squiggle_e, all squared. This is just taking the previous expression I gave you and setting the ion susceptibility to 0. The reason this is interesting goes directly back to Grant's question. This piece here looks exactly like e to the minus v squared upon v_Te squared -- so this is your Maxwellian. You've got a spectrum that knows about the Maxwellian, but the Maxwellian is being modified by this horrific denominator down here. I think there should be another closing square bracket there.
Maybe I didn't need those, but there we go. So you're not just getting back the electron distribution function -- you're getting back the electron distribution function strongly modified. So now we want to see what modifications this interesting denominator gives us. We're going to be looking for resonances, as always, where the real part of the permittivity goes to 0. We're going to be using, again, the high frequency limit, so we're going to say that the electron squiggle factor is much, much greater than 1. Note we said the same thing for the ions, and it's not as big as for the ions -- squiggle for the ions is still much bigger than squiggle for the electrons, which is much bigger than 1. So we're not going to be as brutal in cutting off all of the terms for the electrons as we were for the ions. We're still going to take the imaginary part of w at squiggle_e to be 0 -- we're going to ignore the damping here. But for the real part of w at squiggle_e, we're going to keep the first two terms, and those first two terms are minus 1 upon 2 squiggle_e squared, minus 3 upon 4 squiggle_e to the 4th. And so if we're seeking solutions where the permittivity is 0, we end up with something that looks like 1 plus alpha squared times the real part of w at squiggle_e equals 0 -- because, looking at this denominator here, that is the permittivity. And that gives us a dispersion relationship, which is omega_EPW squared equals omega_pe squared plus 3 Te upon me times k squared. So we have recovered, from all of this complicated mathematics, the electron plasma waves. And so our S of k, omega -- I'm going to put some little crosses here to suggest that these are pretty high frequency compared to the other waves we'll be talking about -- is going to have some little peaks in it. Remember, all of our analysis so far has been for high frequency, so we don't actually know what the spectrum looks like at low frequencies here. But we do know what it looks like at large frequencies: it's got these very sharp resonances. The reason that they are not actual singularities is because there will be a small amount of damping. We've neglected it in our treatment here just so that we can get the dispersion relationship, but if you allow a small amount of damping back in, that will prevent the denominator from going to 0. And what you find is that these occur at this frequency, the electron plasma wave frequency, and at minus that frequency -- I think I'll cover that later on. And so the position of these peaks depends on the plasma parameters: this term depends on the density, and this one very clearly depends on the temperature. So the exact location of these peaks gives you information about the temperature and the density. And it turns out the broadening of the peaks -- this sort of thickness in frequency space -- is related to the temperature of the electrons as well. And you can get that out if you do this more thoroughly with the Landau damping term, the imaginary component. That imaginary component broadens these peaks, and the amount of broadening is to do with how hot the electrons are. OK. So that is the first mode we wanted to get. And this is the mode related to the electron plasma waves, or Langmuir waves, if you like. And these are what we call high frequency waves -- although, of course, their frequency is still much less than the frequency of the probe radiation. OK. Questions? Yeah. You can rewrite this so it looks like omega_pe squared times 1 plus 3 upon alpha squared, as in the sketch below.
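A quick numerical sketch of that rewriting -- the plasma numbers are again hypothetical, carried over from the earlier alpha example:

import numpy as np

eps0, q_e, m_e = 8.854e-12, 1.602e-19, 9.109e-31
n_e = 1e18 * 1e6        # [m^-3] (hypothetical)
T_e = 100.0 * q_e       # 100 eV in joules (hypothetical)
k   = 1.7e7             # scattering wave vector [1/m], set by the geometry

omega_pe  = np.sqrt(n_e * q_e**2 / (eps0 * m_e))
omega_epw = np.sqrt(omega_pe**2 + 3 * (T_e / m_e) * k**2)   # Bohm-Gross relation

# Same relation written in terms of alpha: omega_epw = omega_pe * sqrt(1 + 3/alpha^2)
lambda_De = np.sqrt(eps0 * T_e / (n_e * q_e**2))
alpha     = 1 / (k * lambda_De)
print(omega_epw / omega_pe, np.sqrt(1 + 3 / alpha**2))   # the same number both ways

For alpha of order a few, the temperature term still moves the peak noticeably, which is the point made next.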
And then in the limit where alpha is really big, so you're very collective, it turns out that the peak position really just depends on the density. And this is the limit that lots of people work in. So you'll find papers where they say this frequency shift is proportional to density and the width is proportional to temperature. But that's only true for alpha much greater than 1. If your alpha is like 3-ish -- and I've done experiments in this regime -- then the position depends on both of these parameters. And so you have to go find the width, which gives you the temperature, and then you substitute the temperature back out of the displacement of the peaks, and that gives you the density. In reality, you don't do any of those things: you fit. You do a least squares fit to the full spectral density function. But if you don't want to do a least squares fit, you can still estimate all these parameters. And that's what my old Russian professor did -- he would just point and go, ah, this is the frequency shift, and he would calculate temperature and density just from looking at the raw data. You can also do it using computers as well. Any other questions? OK. We're going to go to the low frequency waves. So just to remind you, we did arbitrary alpha squared and high frequency; now we're going to go looking for low frequency waves. For these low frequency waves, we know that squiggle for the electrons is going to be much, much less than 1, and so we're just going to say that the real part of w for the electrons is about 1. Just to remind you of the small-x expansion: this is 1 minus 2 x squared plus a load of other terms, so for very small x I can drop that term and just keep the 1. OK. So now we find that our permittivity, which is equal, as always, to 1 plus chi_e plus chi_i, is equal to 1, plus alpha squared times 1 plus i times the imaginary part of w at squiggle_e -- I haven't dropped the electron imaginary part, because it plays an extremely important role in damping these waves, so I've got to keep it -- plus Z Te upon Ti times alpha squared times the real part of w at squiggle_i plus i times the imaginary part of w at squiggle_i. And again, we seek resonances where the real part of the permittivity epsilon equals 0. And if you play around with this formula for long enough, you'll find that these resonances are damped -- meaning they don't appear -- for the condition that Z Te over Ti is less than 3. I haven't proved this at all; you can go and prove it to yourself by trying to solve this condition here. It's in Froula's book, and it takes a few lines of algebra, so I'm not doing it on the board. So we don't get any resonances if we have this condition -- damped, in this case, means they don't appear. So let's just quickly take a look at this. We're very used to having plasmas where the ion and electron temperatures are equal, and many of us are used to having plasmas where Z is equal to 1. In that case, this condition would be Te less than 3 Ti for a hydrogenic plasma -- so this would be ions hotter than electrons, which is not a plasma you necessarily find very often. So this is a slightly uncommon situation in which these resonances don't appear. In fact, most of the time, we're going to be in the regime where Z Te over Ti is greater than 3. And this will be possible even with the electron and ion temperatures being equal, if the charge on the ions is 3 or more. So this could definitely happen if you have a carbon plasma.
The electrons and ions are in thermal equilibrium, but because of the charge on the ions, this condition might be fulfilled. So this is relatively easy to find. And this greater-than-3 condition is, for our purposes, really the same as being much bigger than 1 -- because, as we all know, 3 is much larger than 1. So if you fulfill this condition, then the velocity of your ion-acoustic waves, which is roughly Z Te upon m_i, all to the 1/2, must be much larger than the thermal velocity of your ions, which is Ti upon m_i to the 1/2. Just by plugging the condition that Z Te over Ti is much greater than 1 into this, we can see that the modes that we're looking for -- and these will be the ion-acoustic waves -- are going to be moving much faster than the thermal velocity of the ions themselves. And so that means that we can now write down that squiggle for the ions is going to be much greater than 1, which means that the real part of w for the ions is roughly minus 1 upon 2 squiggle_i squared, minus 3 upon 4 squiggle_i to the 4th. Remember, for the electron plasma waves, this was the limiting form that we used for the electrons; now we're doing it for the ions. And we're again going to assume that there is no damping from the ions, and that's because these waves have phase velocities much faster than most of the ions in the system, so there are no ions for them to damp on -- we set the ion damping to 0. But remember, we have not set the electron damping to 0, so the reason that these resonances don't blow up is because they damp on the electrons. And then we can do the same thing we did before: we substitute all of this into the permittivity, and we find the resonances where this is equal to 0. And we find that we have resonances now at the ion-acoustic wave frequency, and this is equal to k squared times -- this next bit may surprise you -- alpha squared upon 1 plus alpha squared, times Z Te upon m_i, plus 3 Ti upon m_i. Why do I write it like that, and not just Z Te upon m_i? Well, nobody actually asked that question, but let me explain anyway. This looks a little bit like what you're expecting for ion-acoustic waves, but it looks most like what you're expecting if alpha squared is much, much greater than 1. Then this goes to Z Te plus 3 Ti, all upon m_i, because the alpha squared upon 1 plus alpha squared factor goes to 1. And that looks a fair bit more like the ion-acoustic velocity that you're familiar with. So now we know that this spectrum is going to have resonances which appear at this ion-acoustic frequency. Questions on that before I sketch what this spectrum looks like? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: There's no [INAUDIBLE] or radii in here. No, sorry -- what does it mean physically? If alpha is small now, this term just goes away. OK, I'm going to show you what it means physically in a moment. Actually, no -- that's not in these notes; that's something else. If alpha is small, although it looks like these resonances should still be there, they will disappear, because for small alpha the ions and electrons don't respond anyway, and we get back to our incoherent scattering spectrum. So if I made alpha smaller here, although it would look like this term would drop out and I'd get ion-acoustic resonances happening at 3 Ti upon m_i, in reality those ion-acoustic resonances wouldn't occur, because we'd just be doing incoherent scattering. Can I explain exactly why it's alpha squared upon 1 plus alpha squared? No, not at the moment. It comes out of how the modes and the alpha squared terms combine -- off the top of my head, I don't know.
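Evaluating that dispersion relation numerically makes the role of the alpha squared upon 1 plus alpha squared factor clear -- it is already within about ten percent of 1 by alpha of 3. A sketch with made-up parameters:

import numpy as np

q_e, m_p = 1.602e-19, 1.673e-27
Z, A  = 6, 12                 # e.g. fully stripped carbon (hypothetical)
T_e   = 100.0 * q_e           # [J] (hypothetical)
T_i   = 50.0 * q_e            # [J] (hypothetical)
m_i   = A * m_p
k     = 1.7e7                 # scattering wave vector [1/m]

for alpha in [0.5, 1.0, 3.0, 10.0]:
    omega_iaw = k * np.sqrt(alpha**2 / (1 + alpha**2) * Z * T_e / m_i + 3 * T_i / m_i)
    print(alpha, omega_iaw / k)           # phase velocity of the resonance [m/s]

c_s = np.sqrt((Z * T_e + 3 * T_i) / m_i)  # the alpha >> 1 ion-acoustic speed
print(c_s)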
We have derived this assuming that alpha is -- no, we haven't assumed alpha is large, because we've done it for arbitrary alpha. So how would we get back the incoherent result? OK. So the interesting thing is that -- I think I might have erased it, sadly, so let me just go back -- chi_e is equal to alpha squared times w of squiggle_e, and chi_i is equal to alpha squared times Z Te upon Ti times w of squiggle_i. So when we're doing the incoherent limit, we're just looking at these prefactors and taking them to 0. So no matter what w is doing -- and in this case, w has got all sorts of funky resonances in it -- mathematically, at least, that will just not be important, because whatever the response of the plasma is goes away, because we are scattering with modes smaller than the Debye length and so we can't see this part anyway. So it's like they're two separate arguments. We used the alpha squared going to 0 argument to make this whole term small, setting these susceptibilities to 0. If we fed this limit back through -- I'm trying to think what you would end up with -- I have a feeling you'd end up with a term that looks like alpha to the 4th for the strength of this resonance, and that is still going to go to 0. So we could feed all of this back through. Yeah, I don't know the full answer to it. I think we're talking about two separate things here: the limit that we took to get the incoherent scattering is separate from what we've done here. So although this resonance term would still exist in the incoherent limit, it would be inside this part of the equations, and whatever these resonances are doing is irrelevant because of that prefactor. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. We took this very brutally to start with, so we didn't even have to think about these terms. Once we restore alpha to some arbitrary value, then we can worry about them. AUDIENCE: It's not like [INAUDIBLE] JACK HARE: No, I don't believe so. There's nothing related to alpha inside w -- w really has to do with the distribution function, whereas alpha is more fundamental than that. Alpha can be evaluated regardless of the distribution function. Although, if you're trying to work out what the Debye length is for a non-thermal plasma, you also have problems -- so I'm not quite sure how this gets defined if you're very non-Maxwellian. We're going to move on. If you happen to be in the case back here, where Z Te upon Ti is less than -- we had 3 there, but I'm going to say less than 1 -- it turns out you get a spectrum, S of k, omega, at low frequencies (so we don't care what the electron plasma wave is doing) that actually looks like the ion distribution function. Now, this is because the ion-acoustic waves are strongly damped in this limit: the ion-acoustic waves would be somewhere here, but they are Landau damping on the ions when this criterion is fulfilled, and so the waves are completely damped. As far as I know, I've never seen a paper where someone has done this. This is actually a pretty strange place to be, where you have really rather hot ions and cold electrons. I assume that it is doable, but I've just never seen it done in an experiment. The next limit we could have is where Z Te upon Ti is roughly 1. That's in between -- the resonances are neither completely damped nor undamped. And so what we end up with is a spectrum that looks like a strange hybrid.
It almost looks like a flat top, like this, which is effectively adding some ion-acoustic waves onto your ion distribution function. These ion-acoustic waves are still pretty damped, but they're not as strongly damped here, so there is an ion-acoustic wave contribution. Finally, we have the case where Z Te upon Ti is much bigger than 1. This is the case we just derived, where we're going to have very strong ion-acoustic resonances. And in this case, your spectrum -- although it does have some contribution from f_i, as we've discussed before -- is dominated by the contribution from the resonances, and you get something that looks a little bit like that, with peaks occurring at the ion-acoustic frequency. So now we have a large contribution from the ion feature. And of course, if you write a code to implement all of these equations and you change the value of Z Te upon Ti, you will get these different shapes. I'm just describing to you, hand-wavingly, what shapes you get and why they end up looking like that. OK. Any questions on the ion-acoustic waves? Where do you want to be? It depends what you want to measure. This middle case is pretty irritating -- I ended up there once. You'd much prefer to be here, where you can learn an awful lot more. The hybrid case is hard to fit properly, and there are other things that can cause your spectrum to look like that as well. So there's probably a better place to be, but of course you don't get to choose this in your plasma. Unlike alpha, which we choose by moving our spectrometer and changing the wavelength of light, I have no choice whatsoever over the temperatures of my plasma -- that's the thing I'm trying to measure. So sometimes you just end up here by accident. OK. So we derived the ion-acoustic waves and the electron plasma waves. And a reasonable question is: in my spectrum, which of these is more important? Which do I see more light from? Because that might tell you which ones are going to be easier to detect. So again, we might have a spectrum where alpha is much bigger than 1 and Z Te upon Ti is much bigger than 1, and we might go, OK, we're going to have some ion-acoustic peaks here, and then, at some much higher frequency -- because they really are very, very different frequencies -- we're going to have some electron plasma waves, like this. And which of these contributes more to our spectrum? We get this by calculating what's called the total cross-section, which we call S sub T, as a function of k -- set by where our detector is -- and this is just the frequency-integrated S of k, omega. We can do this analytically for the ion and electron temperatures being equal; you have to make this assumption in order to make progress analytically. And this is called the Salpeter approximation -- Salpeter was the first person to come up with it, when he was looking at Thomson scattering from the ionosphere. And he showed that the total scattered cross-section looks like 2 pi upon 1 plus alpha squared -- this is the contribution from the electron feature, from this region -- and then there's another term, which looks like 2 pi Z alpha to the fourth power, upon 1 plus alpha squared, times 1 plus alpha squared plus alpha squared Z Te upon Ti. You're going to say, we assumed these temperatures were equal -- well, we assume they are approximately equal, so this ratio can still be slightly different from 1, and of course the Z makes a big difference here. And this second term is due to the scattering from the ion feature. So now we can look at how these two terms compare.
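Written out as code, those two Salpeter terms are easy to compare -- a minimal sketch, with function names of my own choosing:

import numpy as np

def s_total_electron(alpha):
    # frequency-integrated electron feature
    return 2 * np.pi / (1 + alpha**2)

def s_total_ion(alpha, Z, Te_over_Ti=1.0):
    # frequency-integrated ion feature
    return (2 * np.pi * Z * alpha**4 /
            ((1 + alpha**2) * (1 + alpha**2 + alpha**2 * Z * Te_over_Ti)))

for alpha in [0.1, 1.0, 3.0, 10.0]:
    print(alpha, s_total_electron(alpha), s_total_ion(alpha, Z=1))

Even for hydrogen with equal temperatures, the ion feature takes over somewhere around alpha of 2 and dominates strongly above that; with a higher-Z ion it wins even earlier.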
Well, depending on my alpha parameter, this seems to depend very, very strongly on alpha. So if I look at alpha squared going to 0 here, it's very clear that this term, the ion term, is just going to drop off, and we're just going to have the electron feature only -- which is what we've seen several times already: when we go to incoherent scattering, we're just scattering off the individual electrons. If we have alpha equal to 1 -- some intermediate alpha parameter -- then we get something that looks like pi times 1 plus Z upon 2 plus Z Te upon Ti. And there are two limits to this. There's the limit we already saw, Z Te upon Ti much, much less than 1 -- then we get both of these terms. So again, this is the electron term and this is the ion term; this gives both electrons and ions, and we actually end up with coherent electrons and incoherent-like scattering from the ions. This is a strange situation, as I've said -- it requires you to have hot ions and cold electrons, which we don't often encounter. The other limit would be Z Te over Ti much greater than 1, and then we get coherent electrons and coherent ions. And finally -- well, I'll create some space here -- we could look at when our alpha parameter is very large, alpha much greater than 1. Then the total cross-section goes towards 0 for the electrons, plus 2 pi Z over 1 plus Z Te over Ti for the ions, and so we just have the ions dominating. If we have a very large alpha, your spectrum would just have the ion feature, and then be flat out to here; in some of these other cases, we just have electron scattering and no contribution from the ions. And this is important because, depending on what alpha you have -- if you're like, I really want to go look at the electron distribution function directly, but then you calculate your alpha and it's large -- you know you're not going to see much of the electron feature at all. The electron feature is going to disappear, so you won't learn anything directly about your electrons; you'll only learn something about your ions. And this matters because, in general, the EPW frequency is much, much larger than the IAW frequency, as I alluded to here. So it's very hard to measure both on the same spectrometer -- we can't do the EPWs and the IAWs on one spectrometer. Because if you set up your spectrometer to have a nice, wide frequency range to get the EPWs, then the IAWs will just appear on a couple of pixels; and if you set it up to see the IAWs, then the EPWs will be way off to the side. And so that means you generally have to choose which of these you're going to measure, or you have to buy another spectrometer. So it is actually a pretty big practical concern, and you want to know which of these features is going to be brightest so that you can set up your spectrometer to look at that -- that's going to make life better. OK. Questions on this? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Yes -- we haven't talked about magnetic fields at all yet. Oh, yes. If we have time, we'll get there. But the very, very brief answer is that in certain conditions, the ion-acoustic feature gets a modulation, so it wobbles up and down, and that wobble is at the ion cyclotron frequency. And the electron feature gets a wobble, and that wobble is at the electron cyclotron frequency. But you have to be looking precisely perpendicular to the magnetic field, which is very hard -- if you look even half a degree off, this effect washes out completely. So it's a very challenging measurement to make.
I haven't seen anyone do it since 1972 -- I didn't see it myself, but that's the last time someone published a paper actually measuring it with Thomson scattering, and I've not seen anyone do it since. It's a really hard measurement. So people say, oh, Thomson scattering can measure magnetic fields, because you measure the cyclotron frequency and then you know the magnetic field -- but in reality, it's really hard to do. Other questions? I will show you something in a moment that can help you infer the magnetic field, though. OK. So now I'm going to talk about electron-ion drift. And as you probably realize, if your electrons and ions are drifting with respect to each other, that means you have a net current in your system. And so if we can measure how much the electrons and ions are drifting with respect to each other, we can measure the current locally within the plasma, which is pretty cool. So we're going to say that there is some drift velocity which is equal to the difference between the electron flow velocity and the ion flow velocity. Now, this is not the velocity inside your distribution function -- this is the bulk velocity of your entire distribution, a big capital V here. So your entire distribution of ions is moving with some V_i, plus the thermal velocities on top of that, and the entire distribution of electrons is moving with some V_e. And we define this drift velocity to be the difference between the flows of the electron and ion species. And we know, from all the other Thomson scattering that we've done, that we're only really sensitive to the component of this in the k direction -- we can only measure V_d dotted into k-hat, because everything we've done, we only measure along the wave vector of the scattering mode. And so that means that we can define a squiggle, which we call squiggle_d, defined as V_d dotted into k-hat, divided by the electron thermal velocity. Once again, this is analogous to the other squiggles, where we were using the phase velocity; now we've got the component of the drift velocity along k. And that means we're going to replace the argument of the w function, which was normally squiggle_e, and evaluate it instead at squiggle_e minus squiggle_d. So we're effectively looking at the damping from a different part of the electron distribution function -- this is our shifted Maxwellian. The reason this is important is because it was the electrons which were damping the ion-acoustic waves. Before, both ion-acoustic waves were damped by the same amount, and they had the same intensity. But if we shift the electrons, some of the ion-acoustic waves are going to see a different part of the electron distribution function than the ones going in the opposite direction. So we've broken that symmetry. Previously, for example, our electron distribution function was symmetric, and our ion-acoustic waves were, say, here and here, and they both saw the same df/dv, which is what causes the damping. If we now shift our distribution function, we can see that this wave now sees no df/dv, because it's at the maximum, while this wave sees a different amount. So we can imagine that the damping will be differently effective for the plus and minus frequency waves, and so they'll have different heights. Just to make that clear: the damping of the ion-acoustic waves is proportional to the gradient of the electron distribution function, evaluated at the phase velocity of the wave, omega upon k.
And that, it turns out, is proportional to this imaginary part of the w function that we've been carting around all this time -- it's proportional to the imaginary part of w at x, and that is proportional to x times the exponential of minus x squared. And you shouldn't be surprised by that, because it looks like v times the exponential of minus v squared, which is the derivative of our Maxwellian distribution function. So this is what we're saying: the amount of damping we have is directly proportional to the derivative of the distribution function. So if I now draw this again, but in terms of the damping, I have axes with squiggle_i along here and, up here, the log of the damping. Previously, I may have had the ion-acoustic resonances at these frequencies, minus omega_IAW and plus omega_IAW. The damping coming from the ions would have looked like this -- it's just the derivative of the Maxwellian plotted -- and the damping from the electrons would have looked like this. So the ions are barely damping the ion-acoustic waves, because of the small slope out here, and the electrons are the ones damping these waves. If we shift the electrons now -- I put the ion-acoustic waves in the same place, I put the ion damping (I'm going to try to make this one a bit more symmetric) in the same place, but I now shift the electrons over so that they've got some drift velocity -- we can see that we have increased damping here and reduced damping here. And what that means, in terms of the spectrum, is that we go from something that looks like this, in frequency space on our spectrometer, to something that looks like this instead: there's a very pronounced asymmetry between these two peaks. Now, unfortunately, there's no simple formula I can give you for the ratio of these two peaks that gives you the drift velocity straight away. This is pretty complex. In general, as far as I know, you have to fit the full S of k, omega, allowing there to be a drift of the electrons. The ratio of these two peaks depends on exactly what the drift velocity is, what Z Te upon Ti is, what the alpha parameter is -- and so we have to fit the full S of k, omega to our data. But once we've done that, we can get out this drift velocity. And then we know that our current density is equal to minus e n_e V_e plus Z e n_i V_i, which, using quasineutrality, is simply e n_e V_d in magnitude. So if we have some way of knowing the density -- for example, from interferometry, or from scattering off electron plasma waves on a separate spectrometer -- then, by inferring the drift velocity from the spectrum, we can get out the local current. Of course, it is the local current in the k direction: if your current is pointing upwards, but you've chosen your scattering geometry so that k is perpendicular to it, you won't measure anything. So this doesn't tell you the direction of J; it just tells you the component along k. If you know the direction of J from symmetry, and you have measurements of J at different positions -- for example, if I do Thomson scattering at several positions along x, with my laser beam going through the plasma and the scattered light collected at several places along here, and I calculate J at each of those points -- then, with some symmetry arguments, I may be able to solve curl of B equals mu_0 J and locally find the magnetic field. But that requires a lot of assumptions about symmetry.
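To get a feel for the magnitudes this gives you, here is a trivial sketch -- every number in it is hypothetical:

q_e = 1.602e-19     # [C]
n_e = 1e18 * 1e6    # electron density [m^-3], e.g. from interferometry (hypothetical)
V_d = 1e5           # fitted drift velocity along k-hat [m/s] (hypothetical)

J_k = q_e * n_e * V_d          # component of current density along k-hat [A/m^2]
print(J_k)                     # ~1.6e10 A/m^2 for these numbers

mu_0 = 4e-7 * 3.141592653589793
L    = 1e-3                    # assumed gradient scale length [m] (hypothetical)
print(mu_0 * J_k * L)          # crude field estimate from curl B ~ mu_0 J: ~20 T here

The field estimate is only meaningful once you trust the symmetry arguments just discussed.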
So it may not be completely general, but even getting the current is pretty cool -- and if you can get the magnetic field, that's really nice. OK. Questions on that? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: No. We're just going to work in the ion frame. So this V_d here -- I've basically set the ion flow equal to 0, and we can assume the ions just aren't drifting [INAUDIBLE]. You just choose a frame; I can always choose a frame with the ions stationary. This only depends on the relative drift between the electrons and ions -- it doesn't matter to the Thomson scattering which of them is actually moving. Yes, exactly. I don't think I've seen a case where the ions are drifting and the electrons aren't -- that would be unusual, because the electrons tend to be the ones that move. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Yes. AUDIENCE: Why do we have this? JACK HARE: Oh, I'm just saying in terms of solving the equation. Yes, it definitely is, and the solution only depends on the difference between the two. So all the Thomson scattering will tell you is the difference between the two of them. It's then up to you to say, I believe that the ions are stationary in my lab frame, or the electrons are stationary in my lab frame, or whatever else. But the drift velocity will be invariant, regardless of what frame you choose. Other questions? We talked a little bit about what you can learn from the electron plasma waves before. Just in terms of what you can learn from these ion-acoustic peaks: their frequency depends on Z times Te -- the charge of the ions times the electron temperature. The width of these turns out to depend on the ion temperature, and to some extent the ratio between the peaks and the center here is also sensitive to the ion temperature, so you can use both of those to get the ion temperature out. Because these are relatively low frequency modes, you can also see bulk shifts of the plasma: if the ions are moving, your spectrum will be shifted slightly, and that shift is proportional to the ion bulk flow velocity. So you can measure the whole plasma moving -- this ion velocity is the V in the MHD equations, because the ions carry much more of the momentum. And then finally, as we just discussed, from the ratio between these two peaks -- if they are asymmetric -- we can get V_d dot k-hat, which gives us J dot k-hat. So from the ion-acoustic spectrum you can measure an awful lot: the electron temperature, the ion temperature, the ion flow velocity, and the drift between the electrons and the ions. The only thing you can't get directly from this spectrum is the electron density -- for that, you need the electron plasma waves. But still, there's a remarkably rich amount of information we get out of scattering off these ion-acoustic waves. OK. So I have many pages left, and I knew I wasn't going to finish. We discussed magnetic fields already, to the extent that I said they were complicated. We vaguely discussed collisions. The only thing I want to say about collisions here: if you do include collisions, in the form of some collision operator acting on the distribution function inside your Vlasov equation -- so it becomes a Vlasov-Fokker-Planck equation -- the main result I want to tell you about is that collisions usually end up damping the coherent features. So we don't get these nice waves anymore; we just get the incoherent features.
And further -- you might think, OK, we just get the incoherent spectrum, where S of k, omega is just related to the electron distribution function. But it turns out that this gets multiplied by another factor to do with collisions. So if you have collisions present, they will modify what you're measuring, and if you don't include the collisions in your theoretical treatment, you will be interpreting the spectrum wrong. So collisions can be a problem. As we discussed before, to be collisional for Thomson scattering, you really need to have very short mean free paths, and most of the time people are not in that regime, even if their plasma is collisional in the sense of having a mean free path less than the size of the plasma. OK. Now, finite volume. A lot of the arguments I use here also apply to finite time measurements -- all of our measurements take place in a finite volume of plasma and over a finite time scale. We'll have some plasma like this, we'll focus our laser beam through the plasma, and we will collect the scattered light along the direction k_s. This is the collection volume of our spectrometer, so we'll be looking at light scattered from this region here. One thing that could trip you up is that your f of v -- and r, and I guess time -- can have spatial variations. So if, for example, the temperature on one side of your collection volume is different from the temperature on the other side, you're going to be collecting two different spectra, but they're both going to show up on top of each other on your spectrometer. That can be quite hard to untangle. If it's not temperature, maybe it's something like velocity. So if we have velocity shear -- a plasma where, across this collection volume, the velocity looks like this -- and I look at the ion-acoustic waves, then the spectrum coming from this point here would be Doppler shifted in one direction, the spectrum coming from the middle of my collection volume would look like this, and the spectrum coming from the other side would be Doppler shifted the other way. And what you see on your spectrometer is all of these averaged together. So maybe you end up with something like this, where the peaks are much wider than they were in the original case. If you didn't know that you had gradients in velocity within your collection volume, you would interpret this plasma as being hotter than it is -- Ti looks larger than it really is due to the velocity gradient. One way around this is by observing from multiple angles: if you have k_s1, but you also observe along k_s2, or maybe even k_s3 as well, the spectra that you get out will not all be broadened in exactly the same way, and so you may be able to infer from these different angles what the true temperature is, and also what the true velocity gradient is. But of course, that's expensive -- now you have to have multiple spectrometers, and you might not want to do that. This problem gets much worse if you have a plasma which is turbulent. Instead of having a nice velocity gradient, you have a plasma with lots of little hydrodynamic eddies inside it. If you collect your scattered light from over this turbulent region, then you're going to have all sorts of different velocity components inside, and what might start out as a nice ion-acoustic feature might turn into just a gigantic, featureless blur. And then it's very, very hard to infer anything from that.
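As a toy illustration of how a velocity gradient masquerades as temperature, you can just sum a few Doppler-shifted copies of the same peak and compare widths -- everything here is schematic, in arbitrary units:

import numpy as np

omega  = np.linspace(-5, 5, 2001)        # frequency axis, in units of the true peak width
width  = 1.0                             # 'true' thermal width of a single peak
shifts = np.linspace(-1.5, 1.5, 11)      # Doppler shifts across the collection volume

single = np.exp(-(omega / width)**2)
summed = sum(np.exp(-((omega - s) / width)**2) for s in shifts)
summed = summed / summed.max()

def fwhm(x, y):
    above = x[y >= 0.5 * y.max()]
    return above[-1] - above[0]

print(fwhm(omega, single), fwhm(omega, summed))   # the summed peak is noticeably wider

If you fit the summed peak with a single-temperature model, that extra width gets booked as a higher Ti.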
So in general, you've got to have a nice, small collection volume, so that you have smaller velocity gradients across it. But as you make the volume smaller and smaller, you get less scattered light, and so you get less signal and your signal-to-noise drops. So you can't really win this -- you'd have to get a more powerful laser, and that's a problem. We also discussed this before, but I'll just mention it briefly: the finite collection angle. Again, if we've got some sort of plasma with our laser going through it, we're generally going to be collecting the light that's coming out of here using a lens or something like that -- something with a finite solid angle; this is the d-omega we kept going on about -- before it goes on to our spectrometer. But that means that there are going to be several different k vectors. There's going to be k_s0 here, the one that goes to the center of our lens, but we're also going to have light going at an angle, k_s0 plus delta k, and, on the other side, k_s0 minus delta k. And these all have slightly different scattering vectors, which will have slightly different spectra. So, for example, you can imagine that one of your spectra looks like this; one of them is slightly less collective -- a smaller alpha than the other spectra -- and looks like this; and one of them is even more collective and looks like this. When you sum those three together, you're going to get a signal that looks like something you should be able to fit with your standard fitting techniques -- but you'll never be able to quite fit it, or, if you do fit it, you'll infer the wrong temperature and density. So the solution here is to make your lens as small as possible, because if you make your lens nice and small, then most of the light is being scattered with nearly the same k vector. And of course, if you make your lens small, then you collect less scattered light, so you have worse signal to noise, so then you need to buy a bigger laser. There are no ways to win in Thomson scattering. OK. I don't think I have anything else I desperately need to tell you. Any questions on that? AUDIENCE: [INAUDIBLE] JACK HARE: Yes. If your laser gets more powerful, then the plasma starts to absorb the laser power through inverse bremsstrahlung, and there are laser-plasma instabilities, like filamentation. So there's an upper limit on the laser power, too, as well as an upper limit on your budget for a laser. A laser to do Thomson scattering is like a $200,000 to $500,000 object, and bigger lasers are more expensive than that. So this already could end up costing more than the thing making the plasma along with [INAUDIBLE], yeah. Any other questions? Yes. AUDIENCE: [INAUDIBLE] JACK HARE: What's new at the moment? A lot of work on non-Maxwellian distribution functions. I've erased it from the board, but for a Maxwellian we have this nice function, w, that people have worked out and tabulated. But if you have an arbitrary non-Maxwellian distribution function, you have to do the Landau contour integration for every single omega over k -- for every point in the complex plane. Evaluating that is extremely slow, so you can't really do fitting with it. So people are doing machine learning, or other ways of speeding up those evaluations. People are also doing experiments where they observe the light from many, many angles.
And so the Thomson scattering experiment there looks like you've got a plasma, you've got some laser light going through. It gets scattered off in loads of different directions, and you put a convex mirror here. And that convex mirror focuses the light onto a spectrometer. And each different bit of the spectrometer corresponds to a different angle. So then you're recovering the full three-dimensional Thomson scattering distribution function, or at least the two-dimensional Thomson scattering distribution function there using these very clever optics. So those are two things people are doing. It's a very active area of research. OK. Good luck on your final, those of you who haven't submitted it yet. And as I said on Canvas, I'd be very grateful for any feedback from this evaluation.
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_19_Nuclear_Diagnostics.txt
[SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So we're very fortunate today to have Dr. Maria Gatu Johnson join us to give a guest lecture on neutron diagnostics. Maria is a principal research scientist at the PSFC. She did her PhD working on neutron spectroscopy on JET, in magnetic confinement, but now she works on inertial confinement fusion. And in particular, her work on the magnetic recoil spectrometer on NIF was key to understanding the record neutron yield that we got last year in the ignition shots. So we're very fortunate to have Maria here, and we're looking forward to hearing her talk. MARIA GATU JOHNSON: OK, thanks, Jack. So, an hour and a half on nuclear diagnostics for ICF -- that's kind of a lot to squeeze in, so I did a bit of a sampling. I want you guys to ask questions as we go through; if there's anything you're particularly interested in, stop me and we'll talk a little bit more about it, and I'll try to touch on the key things. Also, there are a lot of familiar faces today, so a lot of you know the details of what we'll be talking about, in some cases better than I do -- so feel free to chime in as well. So with that, we'll get started. As Jack said, we're going to be talking about nuclear diagnostics for ICF plasmas. This slide illustrates three facilities where we do ICF work: the National Ignition Facility in Livermore, California; the Z facility at Sandia National Labs in Albuquerque, New Mexico; and the OMEGA laser in Rochester, upstate New York. And this is actually a picture of the instrument that Jack just talked about in his introduction, the magnetic recoil neutron spectrometer installed on the NIF target chamber, which is the blue spherical part that you can see there in the background. But I thought, as part of this, we'd touch a little bit on -- if I can get this to move forward -- some comparisons to diagnostics for magnetic confinement fusion as well. These pictures here are the inside of the JET [INAUDIBLE] time-of-flight neutron spectrometer at the [INAUDIBLE], which I built for my graduate work. So we'll talk about that one a little bit as well when we get to that point. A brief outline -- I find this a little weird, switching between the computer and being behind the camera. We'll start by talking about implosion parameters and nuclear signatures: what are we actually looking for in ICF implosions, and what do we get from the nuclear diagnostics? That's the broad overview background. And then we'll go into the more technical stuff -- the nuclear diagnostics that we use. I've divided it into neutron activation, neutron spectrometry, neutron imaging, charged-particle spectrometry -- touching a little bit on other charged-particle measurements as well -- and then, finally, reaction-rate history. And finally, if we have time, I'll spend a few slides discussing the impact of nuclear measurements on the ICF program. So let's start with the implosion parameters and nuclear signatures. The nuclear emission from an ICF experiment carries information about the state of the fusion fuel. And this is actually what really excites me about nuclear diagnostics: they carry information directly about what's happening in the reactions. They are the products of the reactions, so they know exactly what's going on. We can count the number of nuclear products, and that gives us a measure of the number of fusion reactions that happened.
We can look at the energy spread of the fusion products, and that gives us a measure of the plasma ion temperature. We can look at the energy upshift of the fusion products to infer fuel velocity. Was there a comment online? Can you guys hear me OK? AUDIENCE: We can hear you great, thanks. MARIA GATU JOHNSON: Great. And we can look at a scatter or downshift of the nuclear products to study the areal density-- I'll discuss what that is-- of the compressed fuel and shell. We can also take an image of the nuclear emission to study the spatial burn profile, and we can look at the temporal evolution to determine how nuclear burn evolves in time. And actually, if you look at this, quite a few of these are also relevant for magnetic confinement fusion. We can obtain similar information by looking at the nuclear emission from a tokamak, for example. The exception is the areal density, which is a quantity specific to ICF. So I want to spend a few slides talking about why that's important. As I think all of you know already, ICF uses the inertia of a dense shell to confine the plasma before it blows apart under its own pressure. We can express a confinement time in terms of the sound speed, cs here. This is kind of an illustration of how it works. And if we take a-- [INTERPOSING VOICES] AUDIENCE: Should I turn this one round? And then you've got one fewer computer to look at. MARIA GATU JOHNSON: Trying to switch slides here. OK, that's good. AUDIENCE: There we go. MARIA GATU JOHNSON: We can take a mass average of the local confinement time. And that gives us the confinement time as the radius over the sound speed, with a factor of 1/4 coming from the r to the 4 in that integral. So I'm not going to go through that integral in detail, but it's a very simple one. Yeah, so this is the confinement time. OK, so then if we look at the standard figure of merit, number density times confinement time, and plug this expression from the previous slide in for the confinement time, it gives us this expression. So we see that number density times confinement time is a direct function of [INAUDIBLE], which is this areal density that we keep talking about, which is essentially a confinement parameter in inertial confinement fusion. And that's why we care about it so much. Oh yeah, even highlighting it there. High areal density-- or rho R, which is what you should refer to it as-- is required for a significant fraction of the fuel to burn before it disassembles under its own pressure. We can express the burn fraction, fb, as rho R divided by rho R plus 6 grams per centimeter squared. That's at an ion temperature of 30 keV. You can derive it at different [INAUDIBLE] temperatures. So that's kind of high compared to what we usually operate at. And this expression actually is derived from the fusion burn rate integrated over the confinement time. If we throw in some numbers on this, we can throw in that we want a burnup fraction of 25%, which really would be required for high gain. That means we need a rho R of 2 grams per centimeter squared. OK, just to put this in perspective, the best performing NIF implosions to date have had a burnup fraction of about 5%. So this is really high performance. And then if we assume solid D-T-- D-T ice has a density of 0.25 grams per cubic centimeter-- then we find that for ignition, we need a fuel mass of about 1/2 a kilogram. Again, this is with this expression, for solid D-T ice. So then the question is, as I'm sure you've seen this 1,000 times before in ICF presentations, can we really work with 0.5 kilograms of D-T fuel in a laboratory?
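As a quick numerical sketch of where that half-kilogram figure comes from, assuming only the fb = rhoR/(rhoR + 6 g/cm^2) scaling just quoted and solid D-T ice at 0.25 g/cm^3 (the helper names and printout are illustrative, not an official calculation):

```python
import math

def burn_fraction(rhoR_g_cm2):
    """fb = rhoR / (rhoR + 6 g/cm^2), the scaling quoted above (valid near Ti ~ 30 keV)."""
    return rhoR_g_cm2 / (rhoR_g_cm2 + 6.0)

# Require fb = 25% (high gain) -> invert for the rhoR that is needed
fb_target = 0.25
rhoR_req = 6.0 * fb_target / (1.0 - fb_target)          # -> 2 g/cm^2

# If the fuel stayed at solid D-T ice density, what mass would that take?
rho_ice = 0.25                                           # g/cm^3, solid D-T
R = rhoR_req / rho_ice                                   # cm
mass_kg = (4.0 / 3.0) * math.pi * R**3 * rho_ice / 1e3   # kg

print(f"required rhoR      : {rhoR_req:.1f} g/cm^2")
print(f"uncompressed radius: {R:.0f} cm")
print(f"fuel mass          : {mass_kg:.2f} kg")          # ~0.5 kg, hence the need to compress
```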
Of course, the answer is no. That gives us a little too much yield, which is not quite what we want. And that motivates-- that in order to achieve the required areal density, our confinement parameter, ion temperature, and confinement time without destroying the lab, we need to compress the capsule. We actually have to compress it quite a lot. We get these rough parameters, starting from about a 2 millimeter size capsule. We get down to a radius of 30 to 50 microns, density like 700 grams per cubic centimeter from the 0.25 that we started with, an areal density of 2 grams per centimeter squared, temperatures from 5 to 40 keV, and confinement times from about 20 to 200 picoseconds. So looking at these numbers, these implosion parameters really set the requirements on the different measurements. Typically, we want to achieve a 5% to 10% accuracy on these kinds of numbers. So that really tells you how well we need to be able to make these measurements. OK, so then we have a lot of different products that we can work with. And they really carry a wealth of information about ICF implosions. First of all, we have our primary products. All of you know that we primarily work with the D-T reaction, which gives us the alpha particle and the neutron [INAUDIBLE]. PROFESSOR: I don't think so unfortunately. MARIA GATU JOHNSON: OK, keep jumping. PROFESSOR: Yeah, sorry. You could use a board eraser, but that's only going to give you an extra foot or so. So it's probably not worth it. MARIA GATU JOHNSON: OK, so that's the primary one we work with. If you look over here, that gives us the yield, the ion temperature, the areal density. We can also use it for yield versus time and use it to infer the confinement time and the radius of the capsule. There's also another branch, which gives us a gamma, which is actually quite useful. And we'll look a little bit at that as well later in the class. In addition to D-T, we can also look at primary products from the D-D reactions. We have two, one that gives a [INAUDIBLE] proton, one that gives a helium 3 and a neutron. And actually, a lot of-- many magnetic confinement experiments have been working primarily with D-D to date. So you can use that neutron for a lot of measurements as well. And then the helium 3 we also work with, in a lot of surrogate experiments where we don't want to do D-T, and it's a different thing. And then actually, it's quite similar to D-T. We also get an alpha particle, and we get a proton instead of a neutron, with pretty high energy. OK, so those are the primaries. Counting the number of primaries generated gives us a direct measurement of how many fusion reactions we had, obviously. So that's a quick way of getting the yield for an implosion. Not necessarily always easy, but basically that's how it works. We can also have secondary products, which I actually won't be spending much time on today. But if you're interested in that, I'm happy to answer questions afterwards. So that's when one of the fast products produced in a primary reaction goes on to react with the thermal fuel and give a broader energy spectrum of reaction products. And then we have knock-on reactions. We'll spend a little bit of time looking at how this works. So that's when the fast neutrons born in the D-T reaction hit one of the fuel ions to give a faster fuel ion and a scattered neutron. We actually use this quite a lot. And we can also have similar reactions for the alpha particles, which scatter on the fuel ions and give [INAUDIBLE] fuel ions.
I won't spend much time on this today, but this is the signature that they can use to look at what the alpha particles are doing in the plasma-- in particular, when these fast fuel ions react again, in a tertiary reaction, which I think the example's in here-- yeah, to give these fast neutrons, which we call alpha knock-on neutrons. Think some of you have heard about that before. OK, yeah, so this is actually pretty cool, these knock-on reactions are the easiest way of measuring rho R. So we look at some examples of that as we go through. OK, we already talked about this-- counting the number of emitted primary fusion products in a set solid angle and scaling it up to 4 pi gives us a measure of the total yield. So this is basically the yield equation. I forgot to put the Y in front of it. But if you do the integral over volume and time, the densities of the reactants-- typically, it could be [INAUDIBLE] and [INAUDIBLE] for a D-T plasma-- and the reactivity-- and this is just a Kronecker delta. So if it's D-D, you get a factor of 1/2. That gives you the [INAUDIBLE]. And the reactivity is the integral over the reactant fuel ion distributions and the cross section. And then you can get-- this is actually an example. The reactivity is calculated according to that formula. That's written up in this paper, which if you haven't seen this paper before, this is really a key reference to go to if you want to look at how likely a reaction is. Obviously, D-T is the most probable. Then we have the D-D reactions. We have a probability. And you can go to lower probability, [INAUDIBLE]. OK, yes. Go ahead, John. AUDIENCE: How do you deal with the fact that you're technically not-- so, when you do this calculation of integrating over 4 pi, you're assuming some spherical symmetry. But we know for a fact-- MARIA GATU JOHNSON: Great question. So in ICF, actually, you can typically assume 4 pi. There are some variations, which we can look at-- will look at later in the talk. But in magnetic confinement fusion, you don't have 4 pi symmetry. So then you have to correct for it. And actually, this leads to the work that you are doing. This is an example from a paper in 2010 by Sjostrand, where he's using a neutron spectrometer to infer the total yield of neutrons from the [INAUDIBLE] tokamak. But he needs to correct for the emission profile by using the profile monitor so he can scale that single line of sight [INAUDIBLE] neutrons to what the [INAUDIBLE] emission would look like. Great question. OK. OK, so, thinking about what happens to the particles after they're born, most neutrons escape an implosion and can be counted. But some of them scatter. We'll talk more about that later. But for charged fusion products, like from the helium 3 reaction, stopping in the assembled fuel has to be considered. So the neutrons, they might scatter on their way out, lose some of their energy. But most of them escape directly. The scattering probability is relatively low. Any charged particles are going to lose energy as they traverse the assembled fuel. And in fact, as an example of what that can look like, we have the D-helium-3 proton spectrum, born at 14.7 MeV. This is a lower [INAUDIBLE] implosion, so it loses just a little bit of energy on its way out of the capsule. For high convergence implosions with high rho R, the ions will be fully stopped and can't be counted. So then we really can't rely on measuring primary charged fusion products for those experiments.
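Going back to the yield bookkeeping a few paragraphs up-- counting products in a small solid angle and scaling to 4 pi, or integrating n1 n2 <sigma v> over volume and time with the 1/(1 + delta) factor for identical reactants-- here is a minimal sketch. The function names and every number in the example calls are hypothetical, purely for illustration:

```python
import math

def yield_from_counts(counts, det_area_cm2, dist_cm, efficiency=1.0):
    """Scale products counted in a small solid angle up to 4*pi (isotropic emission assumed)."""
    d_omega = det_area_cm2 / dist_cm**2          # sr, small-angle approximation
    return counts * (4.0 * math.pi / d_omega) / efficiency

def yield_uniform_plasma(n1_cm3, n2_cm3, sigmav_cm3_s, volume_cm3, tau_s, identical=False):
    """Y = n1*n2*<sigma v>*V*tau / (1 + delta_12); delta_12 = 1 for identical reactants (D-D)."""
    return n1_cm3 * n2_cm3 * sigmav_cm3_s * volume_cm3 * tau_s / (2.0 if identical else 1.0)

# Hypothetical numbers, purely illustrative:
print(f"{yield_from_counts(1.0e5, det_area_cm2=10.0, dist_cm=50.0):.2e} total products")
print(f"{yield_uniform_plasma(2e24, 2e24, 1e-16, volume_cm3=1e-7, tau_s=1e-10):.2e} D-T neutrons")
```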
And actually, it turns out that ion stopping in plasmas is a rich research topic that our group has been working on. I have a few example references there, and there are a lot of other references. OK, let's see. Yes, so coming back a little bit to the knock-on reactions. So the neutrons, when they scatter off the fuel ions, they will also upscatter the fuel ions in energy. So fuel ions that start out in the thermal distribution, or basically cold, can be upscattered to much higher energy by these 14 MeV neutrons. The energy of the ion can be calculated according to that equation, where A is the mass number of the scattering nucleus, theta is the scattering angle, and En is the neutron energy. And this is an example of what it can look like. So this is the scattered spectrum for tritons, deuterons, and protons. And basically, the energy of a deuteron, triton, or proton that's scattered in this way is going to depend on the scattering angle. So this is what you get if you integrate over all scattering [INAUDIBLE]. OK, and we can use this fact. We often measure these products. And we use the number of those products to infer areal density, because the ratio scales with the areal density-- basically, the ratio of the number of knock-on ions to the neutron yield will be a function of the areal density. Does that make sense? The more fuel that's assembled, the more of the neutrons are going to scatter. So we can use that relationship to infer what the areal density of an implosion is. But this only works up to a certain areal density. Like, the knock-on deuterons, for example, would be fully ranged out [INAUDIBLE] 200 milligrams. And we talked about before, it might be of the order of 2 grams per centimeter squared. So this is really relatively low performing implosions that we're looking at. So we often look at the neutrons instead. And we can derive the energy of the scattered neutrons to see that the neutrons carry information about the symmetry of the assembled plasma. Because again, the scattered neutron energy will depend on the mass number of the scattering nucleus and the scattering angle. So we find, if we do that math, that for a detector at a set angle [INAUDIBLE] implosion, neutrons that end up at the detector in a certain energy range are going to be sampling a certain part of the shell of the implosion. So at the NIF in particular, we often infer the compression by looking at the energy range 10 to 12 MeV, which is a really clean energy range of the neutron spectrum. I think I might have an example later where you can see. There are no other sources of neutrons that might contribute in that range. So by looking at the number of neutrons in that range, you really get a measurement of only the neutrons that are scattered. OK, so if you do that, if you look at the range from 10 to 12 MeV, the neutrons scattered off tritons sample one part of the shell, and neutrons scattered off the [INAUDIBLE] sample a different part of the shell. And it's a little bit different because deuterons and tritons have a different mass. And you can also broaden the range-- and actually, we'll look a little bit at neutron imaging. Neutron imaging typically looks over a broader neutron energy range in order to get enough statistics, and then it samples a broader part of the shell. So OK, this looks nice and simple. In reality, typically, the source of [INAUDIBLE] is significantly broader. So this smears out. You're not going to have all the neutrons coming exactly from the center. But this is the basic idea.
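The recoil-energy equation referred to above is just two-body elastic-scattering kinematics. A quick sketch of the endpoints it gives for 14.1 MeV neutrons, assuming non-relativistic kinematics (the function name is my own):

```python
import math

def knockon_ion_energy(E_n_MeV, A, recoil_angle_rad):
    """Elastic n-ion scattering kinematics: E_ion = E_n * 4A/(1+A)^2 * cos^2(recoil angle)."""
    return E_n_MeV * 4.0 * A / (1.0 + A) ** 2 * math.cos(recoil_angle_rad) ** 2

E_n = 14.1  # MeV, primary D-T neutron
for name, A in [("proton", 1), ("deuteron", 2), ("triton", 3)]:
    print(f"max knock-on {name:8s} energy: {knockon_ion_energy(E_n, A, 0.0):5.1f} MeV")
# -> roughly 14.1, 12.5, and 10.6 MeV endpoints for the p, D, T knock-on spectra
```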
The unscattered energy spectrum-- the neutrons that go straight out, like the green arrow here, that don't lose energy on the way out-- will carry information about ion temperature and velocity of the assembled fuel. And actually, in principle, this will be true of any fusion products. When they're born, they will be born with this information. The problem is, if it's a charged fusion product, it's going to lose a lot of that information as it's losing energy on the way out of the capsule. It's going to be a lot harder to infer that information. This is the expression for the energy of a single fusion product. Then we can take the moments of this energy over the distribution of reactants to find that the width of the neutron spectrum is proportional to the square root of the ion temperature. There's a small ion temperature related peak upshift of the spectrum. And the peak will be shifted in the direction of flow of the emitting plasma. There's actually one piece of key information that we've gotten from neutron data at NIF. They found early on that the capsule, when it was pushed by the lasers, it ran off in one direction, basically, based on the neutron spectrum being shifted, which we had to correct for because that prevented efficient conversion of the compression energy into thermal energy of the capsule. So this is a non-relativistic expression. Really recommend you read this paper to get the relativistic math for how this works. I'm not going to go through it today, but this is an excellent reference which everyone that does neutron diagnostics for ICF uses all the time. And it actually, originally, comes from the magnetic confinement community. So it's another example of the connections. OK, so I think by now you've understood that the neutron spectrum provides information on areal density, ion temperature, and yield. And this is an example of what a neutron spectrum can look like. So the primary [INAUDIBLE] here that are unscattered [INAUDIBLE], the width of that primary spectrum is related to the ion temperature of the plasma. We can also-- not illustrated here-- but we can have an upshift that's related to the velocity. And then by counting them, we get the yield, scaling up to 4 pi. And then by taking the ratio of the neutrons in the down-scattered range to the neutrons in the primary range, we get a measure of the areal density. And actually, we often talk about a down-scatter ratio rather than an areal density, because we can measure the number of neutrons in this range compared to the number of neutrons in this range. And, yeah, go ahead. AUDIENCE: How do you distinguish between neutrons that have down scattered within the fuel versus neutrons that have down scattered in the lab? MARIA GATU JOHNSON: OK, so that becomes a technicality of the instruments. You have to collimate them really well in order to look at that. And it turns out the way that ICF is set up, the capsule is at the center of a large chamber. So the room return from the back wall of the chamber becomes a much, much smaller fraction of the neutrons that go down your line of sight. So basically, the way it's set up for-- Chris is actually looking at exactly this problem-- for the magnetic recoil spectrometer, which we'll be talking about later, the foil is 26 centimeters from target chamber center, and the detector is back almost 5 meters away. So the solid angle for scattered neutrons coming down the same line of sight is just so much smaller. It becomes negligible.
For the [INAUDIBLE], I don't think it's been [INAUDIBLE] I don't think it's been looked at in detail. But what they do is they take reference implosions, so that they know, if there's no assembled rho R, what the background in that range is. Yeah. And then Chris is looking at the concept of putting an MRS foil really far away. And I'm making him do simulations to see if that's going to work, or if we're going to have a problem with the returns [INAUDIBLE]. AUDIENCE: Why do you cut off the down scattered region at 4 1/2 MeV? MARIA GATU JOHNSON: OK, so that's actually just kind of random. What we typically do when we do this is we [INAUDIBLE] This is not the best spectrum to look at. We'll see if you can see this on the camera. We look at DSR, or down-scatter ratios, we call it. And that's the integral in the-- [INAUDIBLE] sort of see it? The integral in the 10 to 12 MeV range divided by the 13 to 15 MeV range. [INAUDIBLE] just get a quantitative number. So that's this range here divided by that range here. And the reason we look at that range is because we have contributions from neutrons from the T-T reaction contributing up to 9 1/2 MeV. We have the D-D neutrons contributing here. You can kind of see that peak here. And there's also multiple scatters that kind of break the correlation between rho R and the number of neutrons that you get. So it's the cleanest region to look at in the spectrum. So that's how that works. Any other questions? OK, so I mentioned that we can also use the fusion products to look at the spatial emission. So we can take images of primary and scattered neutrons, for example, to provide information on the burn region size and also the thickness of the high density shell. So in this case, this is actually a reconstruction, taking primary images in the 10 to 12 MeV range, down-scatter images in the 6 to 12 MeV range, and then doing a fluence-compensated image-- which gives us this artifact here-- which gives us the picture of the neutron source, which is the primary neutrons, and the high density shell, which is the scattered neutrons. And we can measure the nuclear reaction rate to get information about the confinement time and the bang time of the implosion. So what this is here is-- we call it a Lagrangian plot, from a simulation, where you follow the same fluid element as a function of time. And then you see that the red is the interface between the capsule shell and the gas on the inside. This is for a gas-filled implosion example rather than the [INAUDIBLE] ETIs. You drive it with the lasers. You get ablation of the surface material, which is why some curves are going off, and the other curves are compressing inwards until you get convergence. The shell interface in particular moves inwards. The rest of it is converging. You get a little bit of burn here when the shocks hit the center, and you get more burn here when the capsule is at peak convergence, when it's maximally heated. And then you can measure the emission history as a function of time. And you can see this shock burn and this compression burn. And this particular example is a gas-filled implosion, so then you often get both of these components. And then with an ice-layered implosion, you're going to have very little shock, and you can have a lot more compression, where we would be completely dominated by the [INAUDIBLE]. OK, so those are some examples of the parameters we're looking for. So with that, I plan to go into more about the technical detector details.
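Before the detector details, a quick numerical aside on the down-scatter ratio just defined. This is a sketch of the bookkeeping only: the toy spectrum is invented, and the rho R rule of thumb in the last comment is an assumed round number, not a NIF calibration:

```python
import numpy as np

def down_scatter_ratio(E_MeV, dY_dE):
    """DSR = (yield in 10-12 MeV) / (yield in 13-15 MeV), by direct integration of a spectrum."""
    E, y = np.asarray(E_MeV), np.asarray(dY_dE)
    def band(lo, hi):
        sel = (E >= lo) & (E <= hi)
        return np.trapz(y[sel], E[sel])
    return band(10.0, 12.0) / band(13.0, 15.0)

# Toy spectrum: Gaussian primary at 14.1 MeV plus a flat down-scattered floor
E = np.linspace(1.0, 16.0, 1500)
spec = 1e17 * np.exp(-0.5 * ((E - 14.1) / 0.15) ** 2) + 2e15
dsr = down_scatter_ratio(E, spec)
print(f"DSR = {dsr:.3f}")
# Rough rule of thumb (treat as an assumption, not a calibration): rhoR ~ 20 * DSR g/cm^2
print(f"rhoR ~ {20.0 * dsr:.2f} g/cm^2")
```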
So any questions before we move on? Actually, I have no idea how I'm doing on time. PROFESSOR: Oh, you've got half an hour-ish. MARIA GATU JOHNSON: That should be good. PROFESSOR: Yeah. MARIA GATU JOHNSON: OK, then let's jump into it. So the first one I thought we'd talk about is nuclear activation diagnostics. So they're typically based on indium 115, copper 63, or zirconium 90 isotopes for measurement of primary D-D or D-T neutron yields. If you look at D-D first, that's where we use indium. When a D-D neutron hits the [INAUDIBLE] indium, we get an isomer and a scattered neutron. The threshold for that reaction is about 1 1/2 MeV. And this isomer state will decay, emitting a gamma. It has a half-life of about 4.5 hours. And this is the gamma that we count to infer how many reactions happened. On OMEGA, we use copper to measure the D-T yield. And again, it's copper 63. And then the neutron hitting the copper gives an (n,2n) reaction. So we get copper 62 and two neutrons, and the threshold is about 11 MeV. We'll look at the shape of the cross-section. I think it's on the next slide. And then what happens is that copper 62 is radioactive and will decay to nickel 62. And the half-life for this is 9.8 minutes. And what we actually count are the gammas here as well. Zirconium-- [INAUDIBLE] zirconium 90-- we again get an (n,2n) reaction. The threshold is about 12 MeV in this case, which means we're really narrowing in on the primary neutrons at 14 MeV. This is what we use at the NIF. And again, zirconium 89 is not stable. The end product that we get is a gamma at 909 keV, which is what we're counting. And if you look at-- OK, so this first plot has the indium and zirconium reaction cross-sections. So you can clearly see why we use zirconium for D-T. The threshold is at about 12. It really covers our primary D-T [INAUDIBLE]. It's also really sharp though, which is actually a useful tool, because if the peak is shifted up or down, it's going to impact what you're counting, which means you get an impact of velocity [INAUDIBLE]. So you can see differences around the implosion. And then indium, on the other hand, is a really broad cross-section, which actually makes it a really blunt tool. If you want to use it to measure D-D, it's by far the easiest in a pure D2 implosion, where we don't have the down-scattered D-T neutrons [INAUDIBLE] In principle, you can use a cocktail of different nuclear activation detectors to piece together information about the full neutron spectrum. And here, we have some examples of parts of the spectrum that can be of interest. So in a D-T implosion, this is what the D-D part of the spectrum looks like. You actually have some of those secondary neutrons that we talked about before that are at 14 MeV. You have the primary D-Ds at 2 1/2 MeV, down-scattered D-Ts, and then just a little bit of scatter in between. And that we can get at with the indium in principle. We have the T-T neutrons, which have a peak at about 9 MeV and go from 9 1/2 all the way down to 0. Those you can also attack a little bit. And then, really, the primary thing we're looking at is the D-Ts, some of which are then upscattered-- and we looked at this before-- where the neutron has hit a fuel ion, giving it a lot of energy. That fast ion in turn reacts to produce another neutron. That's when we get these really high energy tertiary neutrons, 15 to 30 MeV. And that's actually, in many cases, also a really interesting measurement [INAUDIBLE]. Oh yeah, that completely fell off.
There's another reaction here that has this cross-section here. It's an isotope of carbon, but honestly, I don't remember which one. So then you can really focus in on just those highest energy neutrons. And this actually also shows you-- we have-- it's carbon-12. [INAUDIBLE] Copper, you kind of see, is this orange line here. And we have zirconium as the red line. So zirconium has a much sharper threshold at 12 MeV. Copper starts already at 11. So they have a little bit different sensitivity to [INAUDIBLE] neutrons. At OMEGA, copper activation is used for measurements of the primary DT neutron yield. We have, basically, a little retractor tube that allows a puck to be inserted and then dropped after a shot by pushing a button. But it's still very manual. So many times, I've been up there for a shot, and this old Russian guy, Vladimir Glebov, pushes the button, gets the black disk, and then runs it over to the counting detector in a different lab. And it's using sodium iodide detectors to detect the gammas in coincidence [INAUDIBLE]. And it's actually quite useful, because then you can go down to really low neutron yields. So cryogenic implosions at OMEGA produce upwards of 10 to the 14 neutrons, but you can measure yields down to 10 to the 7, which means you can look at experiments where you're not producing many neutrons at all and still know what you produced. On the NIF, zirconium is the primary activation element used. And it's used routinely for measurements of primary DT neutron yield. In fact, for the high-performing implosions, there are two measurements that provide the yield that's then reported out. One is the zirconium nuclear activation, the other one is MRS. So it's implemented in a number of different versions. We have the Well-NADs, which is kind of the go-to reference. It's inserted at 4 meters from target chamber center. There's three different pucks that sit very close to each other, so you can compare the numbers from the three and make sure you're not making any mistakes. And then you have the Snout-NAD, which you can insert much closer. And actually, it's more common to vary the elements in these packets and have the cocktails to look at different neutron interactions. And then finally, you have the Flange-NADs, which sit on the outside of the chamber. And there's a large number of those attached in different positions around the chamber to look at symmetries. The zirconium detectors are transported-- well, actually this has been modified for the Flange-NADs, which we'll talk more about later-- those are now counted in situ at the NIF, but they used to be transported. The zirconium detectors are transported to the Lawrence Livermore National Lab NAD Data Analysis Facility, which looks like this, with some really old hardware, but it still does its job. And then yeah, so the Flange-NADs have actually been converted, fairly recently, to 48 real-time zirconium nuclear activation diagnostics, or RT-NADs, that are permanently installed-- semi-permanently installed on the chamber, with a lanthanum bromide detector counting the activation from those continuously. And you can see the peaks when there is an implosion. It used to be 48 of them, which is great-- you can really look at the symmetry of the emission, which, as I'll get to on the next slide, tells us about the areal density symmetry. So we can look at the emission symmetry.
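As a rough numerical aside on how any of these activation measurements turn delayed gamma counts back into a neutron yield-- counting in a time window after the shot and correcting for the half-life-- here is a minimal sketch. The function, its arguments, and every number in the example call are placeholders, not real OMEGA or NIF calibrations:

```python
import math

def yield_from_activation(gamma_counts, t1_s, t2_s, half_life_s,
                          activations_per_neutron, det_efficiency, branching=1.0):
    """Invert gamma counts recorded between t1 and t2 after the shot for a neutron yield.
    'activations_per_neutron' bundles solid angle, areal number density, and cross-section
    (in practice it comes from a transport calculation); everything here is illustrative."""
    lam = math.log(2.0) / half_life_s
    frac_decayed_in_window = math.exp(-lam * t1_s) - math.exp(-lam * t2_s)
    n_activated = gamma_counts / (det_efficiency * branching * frac_decayed_in_window)
    return n_activated / activations_per_neutron

# Hypothetical Cu-63(n,2n)Cu-62 example: ~9.7 min half-life, counted 5-15 min after the shot
Y = yield_from_activation(gamma_counts=2.0e4, t1_s=300.0, t2_s=900.0,
                          half_life_s=9.7 * 60.0,
                          activations_per_neutron=1.0e-9, det_efficiency=0.2)
print(f"inferred D-T neutron yield ~ {Y:.1e}")
```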
But recently, high-yield implosions have started killing these detectors, so we're now down to a 21-detector version, and we actually have to remove them before every high-yield shot. So they're no longer permanently installed on the chamber. Yeah. So this is what I was trying to get to. The nuclear activation diagnostics often show large low-mode areal density asymmetries in NIF implosions. So this is when you have this network of 48 detectors that all provide a measurement of the yield above the threshold of 12 MeV. The birth distribution is uniform in 4 pi, so if you're starting to see variations in that above-12-MeV yield, that means something must have happened on the way out-- some of them, more than others, were scattered on the way out-- which means that the areal density is not symmetric around the implosion. And actually, it turns out the typical scattered neutron fraction is about 20%. You can look at the number density times the cross-section times the shell thickness, and you get just a rough measure of how many neutrons will scatter on the way out. It's about 20%. [INAUDIBLE] frequently see variations of plus/minus 8% in the unscattered neutron yield, which means there's a large areal density variation from one side of the capsule to the other. And this has also been a really useful diagnostic tool in figuring out what's going on with these implosions as we're trying to improve them further, make them perform better. And yeah, so I've mentioned before that you also have an impact of peak shifts here, because the cross-section is higher at higher energy. So if you have a flow where the [INAUDIBLE] runs off in one direction, you can have an upshift of the peak in that direction, a downshift of the peak in the other direction. It turns out it's a smaller effect than the rho R asymmetries, but it's significant enough that it has to be corrected. First, you have to measure that directional flow and correct the distribution for that effect as well. And we use low areal density gas-filled DT exploding pushers to set the baseline variations, basically, as-- losing the word-- baselining. There's another word. Maybe it'll come to me later. OK. Did that all make sense, nuclear activation detectors? PROFESSOR: So why is it that you use copper on OMEGA and zirconium on NIF? MARIA GATU JOHNSON: Just historical reasons. PROFESSOR: Oh, OK. Is one better than the other, or that can [INAUDIBLE]? MARIA GATU JOHNSON: So I actually-- the guy who runs the neutron diagnostics at OMEGA now would like to start using zirconium instead. And if I remember correctly-- yeah, the reason for that is the longer half-life. It's really hard to work with a 10-minute half-life. You have to really run to get to that detector fast enough, whereas zirconium, with the three days, is a little bit easier. And also, the threshold's actually better at 12 compared to 11. Yeah? AUDIENCE: For looking at the upshift from zirconium, do you just compare that to a baseline where you have a more uniform emission profile versus a non-uniform one, where that steepness of the cross-section actually matters? Or do you compare it to a baseline cross-section that's flatter? MARIA GATU JOHNSON: I'm not sure if I fully understand the question. But so what you're doing is you're looking in many different directions, and you compare the results in different directions. But you know-- AUDIENCE: So but you just-- you know your baseline just by assuming a 14.1 MeV uniform profile? MARIA GATU JOHNSON: OK, so there's a couple steps to this.
If you get a map kind of like this one, I mentioned you have to correct for the velocity, which is the peak shift. We actually don't get the velocity from this diagnostic. We get it from the neutron spectrometers. So if you have neutron spectrometers in six lines of sight, you can measure there. You don't just measure a number above the threshold, you actually measure the neutron spectrum, so you know what the peak shift is, which means you can infer the actual 4 pi velocity vector. And then you can correct this for that. Yeah. Neutron spectrometers. Any other activation question? OK. Then with that, let's go into neutron spectrometry. This is kind of touching on exactly that point. We do have a large suite of neutron spectrometers on the NIF. These five, the blue ones here, are all based on the same technology. They're neutron time-of-flight spectrometers, which we'll discuss in detail. And then this one in red is the magnetic recoil spectrometer, which I already mentioned a couple of times. There's actually one more that's not included on this cartoon, which is also a neutron time-of-flight spectrometer, based on a different detector technology, that's fielded together with the neutron imager on roughly this line of sight. It's been a few years since it was working, but we're trying to bring it back to resolve some [INAUDIBLE] everything. So a total of seven neutron spectrometers on the NIF that provide good implosion coverage together. And a similar setup exists in other ICF facilities. Like on OMEGA, for example, I think there are six now that run on DT in different lines of sight. So you can compare the results in those six lines of sight to, again, infer the flow vector in addition to measuring the ion temperature and the ion temperature variations. And fewer of them work on DT, but still enough to get good coverage. OK. So let's start with the magnetic recoil spectrometer. So again, this is what I've been working with since 2010, so happy to take any questions on this one. And Chris is working on a very similar concept. Sean's kind of working on a similar concept, too, for SPARC. So you guys are very familiar with this already. But for those who have not heard about it before, the way it works, you will have the neutrons emitted from target chamber center, that little blue dot. A fraction of those neutrons will reach the plastic conversion foil, 26 centimeters, in the case of the NIF, from target chamber center. And this is actually a deuterated foil. So the neutrons that interact with the foil, some of them will knock out deuterons. The forward-scattered deuterons will reach this magnet, which is outside of the target chamber wall. It's just a vacuum in between. So they all reach the magnet. And then they are momentum-separated in the magnet and [INAUDIBLE] a different physical location on the detector, right, depending on their energy. Then you use that to reconstruct a recoil deuteron energy spectrum. And then from that spectrum, you can infer what the incident neutron spectrum must have looked like. We use deuterated plastic in particular because the detector we use in this instrument is CR-39, and it turns out the deuteron tracks are much, much easier to distinguish above background compared to using protons. Yeah. And we also-- there's a number of detectors that we etch in the sodium hydroxide, scan them in microscopes after the shot, and then stitch the data together. Looking at-- zooming in on the foil here, so the neutron will hit a deuteron in the foil.
And then the recoil deuteron energy will depend on the incident neutron energy, and then also on the energy loss that the deuteron has from its place of birth until it hits the back of the foil. So a deuteron born at the start of the foil will come out with a lower energy than a deuteron born at the end of the foil. And that has to be considered in the analysis of the data. We can look at, also, a couple of other aspects of this. So when I have-- we used to want a high efficiency, to be able to count all the neutrons that came out. Today, we're actually running into saturation problems instead, so we don't really want this to be so high anymore. But the efficiency of the MRS can be back-of-the-envelope calculated as the foil solid angle, times the number density of deuterons in the foil, times the foil thickness, times the n-D differential cross-section for forward scatter, and the aperture solid angle, where the aperture is the opening in front of the magnet. And you can throw some numbers on that. This is an example from the NIF. The foil solid angle will be the area of the foil divided by the total sphere at that distance where the foil is sitting. And then the number density we calculate based on manufacturer's specifications. The foil thickness is measured. This differential cross-section in the forward scatter direction is roughly this number. For the aperture solid angle, you take the area of the aperture and just divide by the foil-aperture distance squared. And this is a correction for the fact that the aperture actually isn't sitting straight. It's tilted in front of the magnet. And then this is the [INAUDIBLE] of the [INAUDIBLE] foil, which you also have to add as a correction. This gives us a rough number. In reality, this is not what we use to get the yield number out. We use MCNP simulation. Actually, I thought on the way over here, I should have included a slide on MCNP, because we use MCNP a lot as a tool in understanding the response of the detectors that we're looking at. Even for the nuclear activation detectors that we looked at before, to know how many neutrons they see and how many might be scattered before they hit the nuclear activation detector, we also have to use Monte Carlo neutron transport tools such as MCNP. So it's not just building detectors. There's a lot of modeling that goes into this as well. OK. We can also look at what we expect for the resolution. What that is, basically, is if you have monoenergetic neutrons emitted from target chamber center going through that whole system, then we're going to end up with a wider spectrum than just monoenergetic on the MRS. So we look at, how wide would that spectrum be, assuming we had monoenergetic neutrons, to understand the broadening-- the instrumental broadening. So we can look at that as three components. We have a broadening effect due to the foil thickness. And this is, again, where deuterons are born on one end or the other, and they're going to lose energy as they go through, which gives you a broadening. We're going to have broadening based on the scattering geometry. We have neutrons that hit the foil head on, deuterons that go straight out. But then we also have neutrons that hit one edge of the foil and go at an angle. And that gives us a broadening effect. And then finally, we have some broadening depending on the ion-optical properties of the magnet. And that leads us to a total broadening. And it actually turns out that these-- the efficiency and the resolution-- work against each other.
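A minimal numerical version of that back-of-the-envelope efficiency product-- the same quantity that trades off against the resolution just discussed. All inputs in the example call are illustrative placeholders, not the real NIF MRS geometry or cross-section:

```python
import math

def mrs_efficiency(foil_area_cm2, foil_dist_cm, n_D_cm3, foil_thick_cm,
                   dsig_dOmega_cm2_sr, aperture_area_cm2, foil_to_aperture_cm):
    """Back-of-the-envelope MRS efficiency, following the factors listed above:
    (foil solid-angle fraction) x (n_D * t_foil) x (forward n-D cross-section) x (aperture solid angle)."""
    foil_fraction = foil_area_cm2 / (4.0 * math.pi * foil_dist_cm**2)
    omega_aperture = aperture_area_cm2 / foil_to_aperture_cm**2       # sr
    return foil_fraction * n_D_cm3 * foil_thick_cm * dsig_dOmega_cm2_sr * omega_aperture

eps = mrs_efficiency(foil_area_cm2=13.0, foil_dist_cm=26.0,
                     n_D_cm3=5.0e22, foil_thick_cm=250e-4,
                     dsig_dOmega_cm2_sr=170e-27,   # ~170 mb/sr forward scatter, a rough guess
                     aperture_area_cm2=20.0, foil_to_aperture_cm=570.0)
print(f"deuterons detected per source neutron ~ {eps:.1e}")
```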
So if you want higher efficiency, you have to add more foil material, which also enhances the broadening. You want narrow broadening, you want high efficiency, so it becomes an optimization problem. In the design of a magnetic recoil spectrometer, you have to balance efficiency and resolution against each other. This is an example for the thin-foil magnetic [INAUDIBLE] spectrometer at JET, which is very similar to MRS, and maybe even more similar to what Sean is working on for SPARC. So this is looking at proton collimator radius and foil thickness, and seeing how varying those two will impact the efficiency and resolution, trying to find some set points that would be ideal operation for getting the signal you need and still having a good enough resolution to make the measurements you want. And that, actually-- we can take a look at what that looks like at JET. This is the MPR, the magnetic proton recoil spectrometer. In this case, they do use protons, not deuterons, because they use scintillator detectors, which can count protons without any problems. This is what it looked like before the shielding was added. This is the magnet housing. And we add a lot of shielding to prevent contributions from scattered neutrons that you don't want to look at. And it's the same concept as for MRS. You have the neutrons emitted from the plasma over here. You have collimators, so they only look at a certain fraction of them. The foil is inside the magnet housing here. The second collimator [INAUDIBLE] the protons are born in the foil here. And then the protons are momentum analyzed in the magnet [INAUDIBLE] in a different physical location on the detector array. In this case, it's an electromagnet rather than a permanent-- maybe I probably forgot to mention that for MRS. But for MRS, it's a permanent magnet. An advantage with an electromagnet is that you can tune it so you can operate at different energies. The MPR, in particular, can be tuned to operate either for 14 MeV DT neutrons or 2 and 1/2 MeV neutrons. And it's used on the JET tokamak at an oblique angle like that. If you look from the top, you can see how it traverses all the way through the plasma. PROFESSOR: Why is the oblique angle used? MARIA GATU JOHNSON: I'm not sure. That's an interesting choice. I think it might have actually been to maximize efficiency, because you see more of the plasma that way. AUDIENCE: OK. MARIA GATU JOHNSON: It's a question for Johan, though. He was involved in the actual building of this system. OK. And then, just to look at what a recoil deuteron energy spectrum can look like, this is an example from the NIF. It's a really old example. But you have the primary DT peak, and then you have the down-scattered neutrons. And then from this, you infer the total neutron yield by scaling for the height, the areal density by comparing the number of deuterons here and the deuterons here, and the ion temperature from the width. And what you do when you analyze this kind of data is you take a model neutron spectrum and fold it with the instrument response function, simulated using Geant4 or MCNP or a combination thereof, and get it on the deuteron energy scale, and then adjust the ion temperature, the peak position, which is related to velocity, the amplitude, which is related to yield, and the rho R, which is related to the down-scattered fraction relative to the primary. So you get those numbers out of the analysis. Yeah? AUDIENCE: You're going past just fitting a Gaussian to this peak.
You're using a more sophisticated analytic model? MARIA GATU JOHNSON: So actually, in most cases, I fit a Gaussian to the peak and then just have a second component that accounts for the scattering. But yeah, there are some slightly more advanced models as well. And for magnetic confinement fusion, you'd typically have to have more advanced models because you have fast ions due to heating that contribute to broadening of the peak. And to resolve those, you have to have a model for the beam-thermal reactions or the beam-beam reactions that have slightly different shapes. And you can find the relative contributions of those by fitting those different shapes to the peak. Other questions? OK. So with that, let's look at the neutron time-of-flight technique. So for ICF, this is actually simple because all the particles are assumed to be emitted at the same time. The burn time is so short, order of 100 picoseconds, so you can make that assumption. So then you really only have to measure the neutrons as they arrive on a scintillator at a set distance, d, from the implosion. And you use that time to infer the neutron spectrum. Yeah. And I mean, this in particular, it's already been converted to temperature expression. But really, what you're looking at is the neutron energy just on a time scale. Yeah. And the same as for MRS, the ion temperature is determined from the width. And actually, so on OMEGA, the nTOFs are still used as the yield measurement, too. On the NIF, we gave up on that a while ago because it's so hard to know how the gain of the electronics [INAUDIBLE] drifts in time. So what we do is we calibrate it relatively routinely to the nuclear activation detectors and MRS. And since it's cross-calibrated, we no longer use it as the absolute yield number. nTOF detectors are also used to diagnose aereal density. And I think already mentioned at some point that we use that by comparing to what we call zero rho R implosion, where we really just put DT gas in the really thin shell, drive it really hard directly with a laser, so we know there's no aereal density or negligible aereal density, and then we can look at the difference between zero rho R implosion and one with significant rho R to [INAUDIBLE] the rho R's. And yeah, this is identifying on the time scale what the 10 to 12 MeV neutron energy range will be. The detectors at the NIF-- I think the closest one is 18 meters from the implosion, and the furthest one is at 27 meters from the implosion. This is what the original equatorial nTOF looked like. It's since been upgraded to look more like that. That actually looks like this, but it's kind of hard to tell. The way it works now is instead of having these PM tubes directly attached to the scintillator, which is in this volume here, like [INAUDIBLE] you have the photomultiplier tubes facing. So implosion-- I'm the implosion. Detector is over here. The photomultipliers are facing this way so that neutrons hit the scintillator, and the light is collected in this direction, so you have [INAUDIBLE] contributions from scattering in the detectors themselves. And there's four photomultiplier tubes at each scintillator, so you can have different settings on different photomultiplier tubes to optimize them to look in different parts of the neutron energy spectrum. There's collimators on the way. This is an example of how neutrons come through a wall collimator, with this detector. 
In this case, you can also see that both neutrons and gammas coming through the collimator hit this high-sensitivity and fast detectors, which we call a spec detector, which is used to measure rho R. Also recently installed these quartz Cherenkov detectors to just a really thin rod in the same line of sight, which you can use to look at both the neutron and gammas. And these are actually more optimal for the velocity measurement because you get a really precise measurement of the primary structure from those. Yeah. And it's similar at OMEGA. So this is at OMEGA. It's below the target chamber center in kind of a basement, which we call LaCave. It's this large detector which is a liquid scintillator material. It's quenched xylene, which allows you to have a really fast time response. So it falls off as quickly as possible after a primary peak, which makes it easier to measure the down-scattered neutrons. Actually, another detail-- on OMEGA, the nTOFs times do not measure down-scattered neutrons in this energy range, because the rho R is much lower at OMEGA than on the NIF's much lower-power laser. So instead, we're using the backscatter edge of n, D backscattering. So neutrons that hit the back of the implosion scatter off of tritium and reach the detector on this side, which gives us an edge at 3.4 MeV, which is much easier to distinguish on OMEGA. And so that's done with this detector. Again, for photomultiplier tubes, they're optimized for different ranges of spectrum. And look closely, you can see there's two detectors in front here-- one Cherenkov detector as well-- that thin rod-- and this pattern detector, which is one of the primary ion temperature detectors. So it has much better resolution than with large xylene detector. Any questions about that? Ben? AUDIENCE: Is the thickness of the detector a significant source of uncertainty? I guess I'm guessing that the thickness of these detectors versus the length of these beam lines is really small to be negligible. But is that a source of uncertainty? MARIA GATU JOHNSON: So it depends on what you mean with "uncertainty." I mean, so definitely, you get a broader spectrum from this large detector than from those thinner ones in front, which reduces your resolution, so it's harder to measure ion temperature. So that's why, like in this case, this detector is optimized for the rho R measurement, and this one is optimized for the ion temperature measurement. This one needs high efficiency to get that weak component of down-scattered neutrons. This one needs high resolution to measure the peak accurately. AUDIENCE: So similar to MRS, it's a trade-off between efficiency and resolution? MARIA GATU JOHNSON: Yeah, yeah. Mm-hmm? AUDIENCE: Someone asked at APS, and I didn't know why-- what are the benefits of MRS over the nTOFs if they all give temperature, rho R, [INAUDIBLE]? MARIA GATU JOHNSON: So OK. My perspective, the primary benefit is having more than one technique because you really need to know independently what you're measuring. You can compare the results from both. And many times over the years, as one technique started drifting, and then we figure out what's going wrong by comparing with the other technique. So I think that's the primary advantage. You can also say that one is that the MRS gives you the absolute yield. It's calibrated from first principles compared to cross-calibrated to other detectors. Yeah. But I think it's really important to have both. 
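As a numerical aside on the basic time-of-flight relation from a couple of paragraphs back-- a short burst of neutrons, a known flight distance, energy from arrival time, and Ti from the temporal width of the primary peak-- here is a minimal non-relativistic sketch. The distance, the 4.8 ns width, and the ~177*sqrt(Ti) keV thermal-width coefficient for D-T are all illustrative inputs, and the instrument response is ignored:

```python
import math

M_N = 1.675e-27          # kg, neutron mass
KEV = 1.602e-16          # J per keV

def tof_energy_keV(distance_m, t_s):
    """Non-relativistic neutron energy from a flight time over a known distance."""
    v = distance_m / t_s
    return 0.5 * M_N * v**2 / KEV

def Ti_from_tof_width(distance_m, t_peak_s, fwhm_t_s, coeff_keV=177.0):
    """Temporal FWHM of the primary D-T peak -> energy FWHM (dE/E = 2 dt/t) -> Ti,
    using a ~177*sqrt(Ti[keV]) keV thermal-width scaling for D-T neutrons."""
    E_keV = tof_energy_keV(distance_m, t_peak_s)
    fwhm_E = 2.0 * E_keV * fwhm_t_s / t_peak_s
    return (fwhm_E / coeff_keV) ** 2

d = 20.0                                              # m, an illustrative detector distance
t_dt = d / math.sqrt(2.0 * 14.1e3 * KEV / M_N)        # arrival time of 14.1 MeV neutrons
print(f"14.1 MeV flight time over {d:.0f} m: {t_dt*1e9:.0f} ns")
print(f"Ti for a 4.8 ns FWHM peak: {Ti_from_tof_width(d, t_dt, 4.8e-9):.1f} keV")
```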
PROFESSOR: But this data is available directly after the shot, whereas the MRS is a while. MARIA GATU JOHNSON: That's true, that's true, which is a huge, huge advantage. And yeah. And this could be scaled up to rep rate as well because you can make sure you can analyze it quickly after a shot, whereas MRS would CR-39 indefinitely. [LAUGHTER] Well, Chris is working on electronic detection, so we'll get there. Other questions? OK. So then the magnetic confinement fusion equivalent-- we do have a neutron time-of-flight system here, too. But here, it becomes more complicated because we can no longer assume that all the neutrons are emitted at the same time. So here, we have to have two sets of scintillators. We have a start scintillator, which we call S1 here, and a stop scintillator, which is S2. This is what it actually looks like in real life. There's a collimator through the floor here. The detector sitting in the roof lab above the JET tokamak. So the neutrons come through the collimator in the floor, hit the start detector first and then the stop detector. We have the start detectors layered to allow us to count at a higher rate. There's five layers in there. And then the stop detector is divided into 32 segments to actually-- that's more of a resolution [INAUDIBLE] thing because you want to know where the light is coming from in order to be able to measure the [INAUDIBLE]. And replacing them on the constant time-of-flight sphere so that you can compare the data from all the different detectors and stitch to make one spectrum. Yes, Kai? AUDIENCE: How do you know if a neutron hits the S2 is the same one that you just measured at S1? MARIA GATU JOHNSON: Great question. So you don't. And that's where this comes in. AUDIENCE: Oh, OK. MARIA GATU JOHNSON: So we look at data from the different scintillators in coincidence. So you take-- what this example is, is all its events that you can get in the S1 detector on the top. You get a lot more events in S1 because it's closer and it's directly in the beam of neutrons from there. The S2, the beam actually goes through in the hole in the center, so it doesn't hit the S2 directly. The S2 only sees scattered neutrons. But that means you get a lot fewer events in S2's. And what you do is you go through and look at coincidences between the two detectors. And actually, what you get is you get all the coincidences. You get the true coincidences, and you get background random coincidences. And you have to subtract that back out. But when you do that, the peak will appear because that's then the true correlation that it will be the same between all neutrons [INAUDIBLE]. You were saying? So in this case, the flight time for a 2 and 1/2 MeV neutron between S1 and S2 is about 65 nanoseconds. AUDIENCE: But the beam drift time or the amount of time that the neutron spends moving between S1 and S2 is very short, because looking like these detectors, they're very-- MARIA GATU JOHNSON: 65 nanoseconds. AUDIENCE: So does this function for DT neutrons, or is this [INAUDIBLE] MARIA GATU JOHNSON: So it's best for DB. So this, it will [INAUDIBLE] they show up at 27 nanoseconds. The time resolution isn't anywhere near as good simply because the flight path is shorter. If you wanted really good resolution, you'd have to make the flight path really long [INAUDIBLE]. But of course, another difference here is here, we didn't really make that point with the inertial confinement fusion nTOFs. But what we're looking at here is running the scintillators in current mode. 
We're just opening them up, looking at the signal current as a function of time. In this case, we're looking at individual pulses from single neutrons interacting with the scintillator. So we can divide it into-- we recorded over the entire duration of the pulse and we can reconstruct neutron spectra for any time interval we want where we get enough statistics. So we can actually look at the time evolution of the neutron spectra this way. And at ICF, we simply get too many neutrons at the same time so it becomes complicated. But Chris has spent quite a bit of time trying to figure out how to do that, too. OK. So that's all I plan to say about neutron spectrometer. Any more questions before we move on? OK. OK, I think I have just a very short section on neutron imaging. So what we do in neutron imaging is we use a pinhole or aperture close to the implosion and the detector really far away to obtain good magnification. This is actually very-- we have the source again here, which is a target chamber center, which is where the neutrons are emitted. You have a lined aperture, which can either have penumbra, which will encode the signal, or a simple pinhole, where you have a direct correlation, basically, between the neutron emission and opposite, kind of inverted to the detector. The magnification will depend on the pinhole standoff distance and the detector distance. OK, so I took this slide from somewhere else, and I don't know what numbers they actually threw in there. Oh, assuming a magnification of 200, which you would get depending on what L1 and L2 values you have. If you have 5 microns at a source, it's going to be 1 millimeter at the detector, which magnifies your radius. I talked about before, that implosion would be 30- to 50-micron radius. You magnify them so you can separate individual features much easier on your detector on the outside of the chamber. And actually, for the NIF in particular, I think typically the aperture is about 20 centimeters from target chamber center. The detector is 28 meters away. Yeah, this is what-- ha, actually, those are the exact numbers. So this is what it looks like. You have the NIF target chamber over here. That's target chamber center. The aperture is fielded right here, 20 centimeters from target chamber center. And then the neutrons that are selected by the aperture will travel through this collimator structure all the way to the detector back here 28 meters away. And so the neutrons-- this is just a [INAUDIBLE] it looks like. The neutrons come out here through the line-of-sight collimator. This is that other nTOF detector I talked about. You can kind of see that it's a different technology. It's flat plastic scintillators with photomultiplier tubes. But then the primary-- so that that's just another neutron spectrometer. The primary for the imaging system is the scintillating fiber array, which is fiber coupled to a camera and it's done this way in order for you to be able to gate the camera and get two snapshots. So you can get the primary neutrons and then the scattered neutrons at a later time. There's actually also-- this is the original NIF neutron imaging system. There's two more now, so we can look at symmetries around the chamber. One of them only has image plate detectors, which I planned to bring image plate, but I forgot. But it's-- PROFESSOR: We discussed it, actually. We talked about X-ray diagnostics, so we talked about it a little bit. Yeah. MARIA GATU JOHNSON: Yeah, so you know you can't time gate on image plates. 
So then you just get one image. OK. And I wish I had a better picture. But these pinhole apertures are actually extremely complicated. So they're made of gold. They're about that long. And you have to make pinholes that are precise all the way through. It's too hard to drill them circular, so they make them triangular instead. And then they're tapered to minimize scatter. So basically, you have an opening, and then the neutrons that go through that opening are all going to be captured at the back end. None of them are going to stop because of the taper inside. It's still a really hard machining problem to get those even triangular pinholes precise all the way through. We need gold because, as we talked about, neutrons have a pretty low likelihood for interacting in matter. And you want to only select the ones that go through the pinhole. You don't want all the ones around to also make their way all the way back to the detector. Yeah. So that's fun. And these are examples of the penumbral apertures where the information will be decoded in the penumbra. You'll also get straight through neutrons in [INAUDIBLE] that you can't use to infer anything about the shape of the implosion. So this kind of aperture array is used on all three lines of sight now, but there's development going on to make it coded aperture, which is supposedly going to be simpler and thinner and easier to manufacture. We'll see if that actually works out. See. Oh, yeah, so these are examples of primary and scattered neutron images. And again, they're routinely obtained. And all DT NIF implosions in these days, it's even in three lines of sight. And there's a lot of work going into tomographic reconstructions to make sure we understand the full emission region. And we have our equivalent in magnetic confinement fusion, which John is working on. And we call them neutron profile cameras. So it's, again-- the idea, again, is to probe the shape of the neutron source distribution. But it's much bigger here and more complicated. So instead of that pinhole array that's trying to reconstruct the 50-micron spot, we're looking at a much larger emission. And we do it by using a number of different lines of sight and counting particles along this line of sight, and then try to reconstruct the full emission [INAUDIBLE]. And John can answer a lot more questions here. OK. So with that, I'm going to jump right into charged-particle spectrometry. And I think I touched on this in the beginning, but it's routinely used to diagnose low to medium rho R implosions. And they can be filled with deuterium, DT, or D helium-3 fuel. And the reason we don't typically use it for high rho R implosions is, again, because the charged particles stop in the assembled fuel, so they don't become indicators on the outside. OK. This example is showing the proton from the interaction for 14.7 MeV downshifted to about 11 MeV You can look at this downshift to infer the aereal density, which in this case was about 84 milligrams. And you can-- actually, let's see. Yeah. I have an example, one with nice [INAUDIBLE] spectrometer. The cool thing is you can do this thing in a number of different locations around the target chamber to look at symmetry again to see if things are compressed uniformly or if there's non-uniform issues. So this is a super simple spectrometer. We call it wedge range filter spectrometer. It's just an aluminum filter that's shaped like a wedge. Pass this around. You can see it. So the way this is fielded, it's got a bunch of holes in front. 
And you know the holes are used to register where the detector is fielded behind that wedge. You know the wedge thickness as a function of position. And based on the track size on the CR-39 detector as a function of position relative to those holes, you infer the proton energy spectrum. And it's a little bit more complicated than it sounds, because you need to know the diameter-versus-energy response of CR-39. And that varies from piece of CR-39 to piece of CR-39. So you have to come up with a pretty intricate method for inferring that from the data. But that's done, and these are routinely used to measure rho R from D-helium-3 gas-filled implosions in many different locations around the target chamber. Pass that around. It's a small 5-centimeter round packet. So you really can get it in many locations. And this is also the detector material that's used for MRS, as we talked about before, CR-39 plastic. So the wedge range filter spectrometer is one example that you can see here, too. We also have charged-particle spectrometers which are very similar to MRS. It's a magnet outside of the target chamber wall. The difference is we don't have a conversion foil, so we're just looking at charged particles directly from the implosion. And you can actually-- this is another advantage of MRS. You can also run MRS in charged-particle mode for experiments where we're not interested in the neutron spectrum. And then you can look at the charged particles that come directly from the implosion. Yeah. And this is an example of looking at that symmetry and how it can vary around the implosion. This is an insertion module on the NIF, where we can field, actually, up to six of these wedge range filter spectrometers on a single insertion module. There's four insertion modules that have the capability of fielding these. So you understand we can field a lot from one implosion. On OMEGA, we can field up to seven on one implosion in different directions. And in each direction, you can add a few more if you want. So a lot. Yeah. So in this case, these are fielded at 50 centimeters from the implosion. These are fielded at 10 centimeters from the implosion [INAUDIBLE]. So we can look in different directions. And this is actually a really old example from the paper by Johan in 2004, where he's fielded protons with [INAUDIBLE] in different locations around the OMEGA target chamber [INAUDIBLE] look in different [INAUDIBLE] spectrum. And if you think back to this, for D-helium-3, you often get these two peaks in time, a shock peak and a compression peak. So you can also say something about the evolution of the experiment. Here, you have the time evolution. You see the small shock peak and the larger compression peak. And then you can also see that in the energy spectra, you have less ranged-down shock profiles and more ranged-down compression profiles. So you can tell the difference in areal density between those two phases of the implosion. It's kind of neat. OK. I feel like I'm running out of time, so I've got to speed up. We already talked about image plates, so I don't need to talk about that. CR-39, I kind of touched on. This is an example of what it can look like after we've etched it in sodium hydroxide at 80 degrees Celsius for on the order of hours. If we put it on one of these microscopes, which steps over and takes pictures of roughly 400-micron frames, the microscope automatically picks up tracks which are due to particles interacting in the CR-39, and records their roundness or eccentricity and the track size.
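As a minimal illustration of that kind of track-data reduction, here is a sketch that applies diameter and eccentricity cuts to a synthetic track list and histograms the surviving diameters; the synthetic data and the cut values are invented for the example, and a real analysis would use the measured diameter-versus-energy response of that particular piece of CR-39.

```python
import random

# Minimal sketch: filter CR-39 "tracks" by eccentricity and diameter, then
# histogram the diameters. Each track here is (diameter_um, eccentricity);
# real scans also record position, contrast, etc. All numbers are illustrative.
random.seed(0)
signal = [(random.gauss(12.0, 1.5), random.uniform(0.0, 0.2)) for _ in range(2000)]
noise  = [(random.uniform(2.0, 25.0), random.uniform(0.0, 1.0)) for _ in range(500)]
tracks = signal + noise

ECC_MAX = 0.25             # hypothetical roundness cut: keep nearly circular tracks
D_MIN, D_MAX = 5.0, 20.0   # hypothetical diameter window [um]

kept = [d for d, ecc in tracks if ecc < ECC_MAX and D_MIN < d < D_MAX]
print(f"kept {len(kept)} of {len(tracks)} tracks after cuts")

# Crude text histogram; diameter maps to particle energy via the calibrated
# diameter-versus-energy response of this piece of CR-39.
nbins = 15
counts = [0] * nbins
for d in kept:
    counts[min(int((d - D_MIN) / (D_MAX - D_MIN) * nbins), nbins - 1)] += 1
for i, n in enumerate(counts):
    lo = D_MIN + i * (D_MAX - D_MIN) / nbins
    print(f"{lo:5.1f} um | {'#' * (n // 25)}")
```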
And then we use that to reconstruct whatever information we wanted from that diagnostic. OK. So then, let's spend a few minutes on reaction-rate history. So there's a number of ways to do this. And again, we're looking at a really short burn, of order 100 picoseconds. We can field a plastic scintillator really close to the implosion and combine it with a streak camera to measure the reaction-rate history. This is done at OMEGA. So this picture is of the OMEGA target chamber. This is the laser. So this is not part of the diagnostic. This is how we drive the actual implosion. This is what the detector will look like. So it will be fairly close to the target chamber. There is the plastic scintillator. The light from the scintillator will be coupled through an optical light path [INAUDIBLE] camera to record a streak image. And this is where the burn history is encoded. And that scintillator will have a rise time of about 20 picoseconds but a fall time of 1 and 1/2 nanoseconds. So the information, really, is encoded in the rising edge of the signal. So we have to unfold it. But when you do that, you can get the burn history as a function of time for [INAUDIBLE] compression. And this is used a lot for neutrons on OMEGA in the neutron temporal diagnostic. There's also the particle temporal diagnostic or the particle and X-ray temporal diagnostic-- similar concept, where you can tweak that scintillator setup in the center to have a number of different channels, some of them optimized for X-rays, some optimized for protons, some optimized for neutrons, depending on how you filter them and what neutral density filter you put behind [INAUDIBLE] on the streak camera and reconstruct it after. So that's actually a really neat diagnostic that's useful for a lot of things. I told you, promised you, early on that we'd get back to what we use the gammas for. So one cool thing about the gammas is they don't have the same time dispersion that neutrons do. So the neutrons, when they're emitted from target chamber center, they're going to disperse in time, which is really why we can use the neutron time dispersion for neutron time-of-flight. So if you put the nTOF detector 20 meters away and look at the neutrons, we measure the energy spectrum, not the emission history. For the gammas, they don't disperse in time, so we can have a gamma detector relatively far away and we still retain that time history. So that's the cool part. We have a lower probability for getting gammas. I think we saw the branching ratio is about 10 to the minus 5 relative to the neutrons. But there are enough of them to count. And by counting the gammas-- in this example, we're using the gamma reaction history detector, which is based on a converter where gamma rays are converted to electrons. The electrons generate Cherenkov light in this gas cell, and then it's detected as a function of time. And that's how you infer the bang time or burn history. You get both from this measurement. This is what this detector looks like on OMEGA, and this is what it looks like on the NIF. And on the NIF, there are four different channels, where you can vary the gas pressure to set different gamma detection thresholds. You don't get spectral information, but you can set a threshold and then compare the results from the four. And actually, it even made it into a movie, this detector. This is the gamma reaction history detector. Hans Hartmann, who built this detector, was really proud to be able to take his kids to this movie at the movie theater. [LAUGHTER] OK.
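To see quantitatively why the gammas preserve the emission history while the neutrons do not, here is a minimal sketch of the arrival-time spread over a 20-meter flight path; the 14.1 MeV mean energy and the FWHM of roughly 177*sqrt(Ti[keV]) keV are the usual rules of thumb for thermally broadened DT primaries, and the numbers should be read as illustrative.

```python
import math

# Minimal sketch: time-of-flight dispersion of DT neutrons vs gammas over 20 m.
c = 2.998e8        # m/s
mn_c2 = 939.57e6   # neutron rest energy [eV]
L = 20.0           # flight path [m]

def v_neutron(E_eV):
    """Relativistic speed of a neutron of kinetic energy E_eV."""
    gamma = 1.0 + E_eV / mn_c2
    return c * math.sqrt(1.0 - 1.0 / gamma ** 2)

E0 = 14.1e6                          # mean DT neutron energy [eV]
Ti = 5e3                             # ion temperature [eV]
dE = 177e3 * math.sqrt(Ti / 1e3)     # FWHM of the primary peak [eV]

t0   = L / v_neutron(E0)
t_lo = L / v_neutron(E0 + dE / 2)    # faster neutrons arrive earlier
t_hi = L / v_neutron(E0 - dE / 2)

print(f"neutron mean arrival time : {t0 * 1e9:6.1f} ns")
print(f"neutron arrival-time FWHM : {(t_hi - t_lo) * 1e9:6.2f} ns  (>> ~0.1 ns burn)")
print(f"gamma arrival time        : {L / c * 1e9:6.1f} ns  (no energy dispersion)")
```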
So coming back to magnetic confinement fusion, again, here we typically use fission chambers to measure the nuclear reaction rate. And the way this works is you have the fissile material, and you get the fission products going into the fill gas. They ionize-- or, sorry-- yeah, they ionize the fill gas, and then you get an electric pulse that goes out. And you measure that as a function of time. And here again, you need an MCNP model in order to determine what your measured signal actually means in terms of neutron rate. These are often also in-situ calibrated, where you move a source around inside the chamber and see what the signal looks like on the fission chambers on the outside. OK. And I think I have five more minutes, Jack, right? PROFESSOR: Go for it. MARIA GATU JOHNSON: So we'll take a few minutes on the impact of nuclear measurements on the ICF program at the NIF. And I think we've really touched on this throughout the talk today, right? The nuclear data have been essential for guiding the initial experiments to ignition. This is the yield from the experiments plotted against time, from 2010 through now to 2024. And you can see that this is a logarithmic scale. It's obviously increased a lot over that time frame. And MRS has been part of it from the beginning. I started here at MIT in August 2010. I think the first data from the NIF came back from MRS a week-- two weeks after I started. They shipped it back in this huge moon-lander-looking container. It was, like, an octagonal box with lots of cool packs to keep this [INAUDIBLE] cold. And we had to work day and night to etch and scan it and turn it around. And that's when we were at this yield level, right? Not registering on the scale. And then we've been working our way through up to the region where we actually have target gain. And we've looked at many of these diagnostics today that have been essential for [INAUDIBLE] experiments to ignition. We looked at how we get ion temperature, hotspot velocity, fuel density or areal density, and yield from the neutron spectrometers. We looked at how we get the burn width and bang time from the gamma reaction history detector, which is related to the confinement time. We also get neutron yield from the activation detectors, as well as the map of the fuel uniformity from the real-time activation detectors. And we use neutron images to get the hotspot and fuel shell shape. And really in particular, these two have been essential for identifying those asymmetries. And seeds of asymmetry have been really hard to eliminate along the way to get there. OK. This is an example that Johan put together two years ago. On August 8, 2021, an implosion experiment at the NIF ignited and generated a then-record neutron yield of 4.5 times 10 to the 17, or 1.35 megajoules. This is the MRS spectrum from that particular experiment. We were so excited about that experiment, which really, internally in the community, was ignition. And I'll explain why in the next slide. But so this explains what I was talking about before. We have a model neutron energy spectrum, which is a Gaussian with a width governed by the ion temperature, the mean energy determined by the birth energy plus the peak shift, which is related to the velocity. And then we have this component, which is related to the areal density. We vary those parameters to fit our measured spectrum, and then we get a best-fit neutron energy spectrum which explains what we actually had.
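A minimal sketch of that kind of forward model is below: a Gaussian primary whose width scales as roughly 177*sqrt(Ti[keV]) keV, a mean shifted by the line-of-sight fluid velocity, plus a crude flat down-scattered component between 10 and 12 MeV scaled by the down-scatter ratio. All of these functional forms and numbers are simplified stand-ins; the real analysis also folds the spectrometer response into the model before fitting.

```python
import math

# Simplified forward model of a DT neutron spectrum (illustrative only).
E0 = 14.03e6   # nominal DT neutron birth energy [eV]
VN = 5.14e7    # speed of a ~14 MeV neutron [m/s]

def model_spectrum(E_eV, Ti_keV, v_los, dsr):
    """Spectral density dN/dE (arb. units) at energy E_eV.
    Ti_keV: ion temperature, v_los: fluid velocity toward the detector [m/s],
    dsr: down-scatter ratio."""
    sigma = 177e3 * math.sqrt(Ti_keV) / 2.3548               # thermal width [eV]
    mean  = E0 + 2.0 * E0 * v_los / VN                        # non-relativistic Doppler shift of the mean
    primary = math.exp(-0.5 * ((E_eV - mean) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
    scattered = dsr / 2e6 if 10e6 <= E_eV <= 12e6 else 0.0    # flat shelf, integrates to dsr
    return primary + scattered

# Example: Ti = 10 keV, 100 km/s toward the detector, 3% down-scatter ratio.
for E_MeV in (10.5, 11.5, 13.5, 13.9, 14.1, 14.3, 14.6):
    print(f"E = {E_MeV:5.2f} MeV : dN/dE = {model_spectrum(E_MeV * 1e6, 10.0, 1e5, 0.03):.3e} per eV")
```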
And so if we look, in particular, at this implosion from 210808, many of the key nuclear observables point to this implosion being in a fundamentally new regime. We saw how the ion temperature took off. Earlier implosion under the same campaign had 5 keV. Now we measured neutron average ion temperature 10 keV-- a dramatic step up. We saw how-- the burn history is a little bit hard to see, but what happened actually is the burn history peaked later and got narrower. And what this is saying is that the yield took off during compression. So where the yield would have previously tanked, you start having it climb more and more instead. And then it becomes a really narrow burn because it's burning on as the explosion is exploding rather than as you're compressing. And that, you can see in the neutron images as well. These are neutron images and as well as one X-ray image on the top row from two predecessor implosions. This one, you can see how it gets a lot bigger. And this is, again, because it's going on the expansion. Yeah. And then the other one-- well, you're actually measuring-- this is not the best spot to show it, but what we're finding is we're actually measuring a lower down-scatter ratio. And the reason for that is we're now probing-- the density ratio that we're measuring is probing the timing the implosion after peak compression. So even though the actual down-scatter ratio-- the actual compression is the same, we're seeing fewer scattered neutrons because the peak compression is before the probing. Does that make sense? OK. And yeah, this is just another illustration of the same point, where you really see the temperature climb or jump. This is the fusion yield on the y-axis. This is an inferred hotspot mass and hotspot energy, also jumped for this one implosion. Neutron radius increased and burn rate decreased, really showing that we're in a new regime. And that was this one couple years ago. So definitely done some better ones since then. And there's another one from October 29 that's not yet on this chart, too, which is the second best performing ever, so falls right between these two. And we're at 4 with gain over 1 at this point. OK, I think that's all I had for today. PROFESSOR: Thank you very much. Any other last questions? MARIA GATU JOHNSON: Yeah? AUDIENCE: What's next to try to go even higher on the gain? Because I think a lot of the changes were capsule quality, et cetera. And is there anything that you're seeing in your new neutron data from this new regime that's guiding further changes? MARIA GATU JOHNSON: Good question. I don't think there's anything super obvious right now. Part of it is pushing for bigger implosions, which is not directly related to the neutron data. Yeah. Yeah, no, I don't think there's any defect signatures right now that we're going after. AUDIENCE: Are there any open questions that you see in the neutron data [INAUDIBLE]? MARIA GATU JOHNSON: Yes, there's a big one. Great question. So actually, I talked a lot about peak upshifts and how we infer velocity from that. So there's two aspects to that. There is the peak shift that's different in each of the different lines of sight that's showing us in which direction the implosion's taking off in. But also turns out that there is a uniform [INAUDIBLE] that's the same in all lines of sight that's anomalous, that's not explained by the ion temperature and not explained by the direction of velocity. 
And right now, it looks like the only way to explain that upshift is by non-Maxwellian effects in the fuel ion velocity distributions. It's not clear why those would arise. So that's definitely a big outstanding question that we're looking at, which excites me a lot. I like puzzles.
Lecture_5_Refractive_Index_Diagnostics_I.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: Right, so today, we are going to start a series of several lectures on refractive index diagnostics. So these are diagnostics which use the fact that the refractive index of a plasma is not 1 in order to make measurements about the plasma. So first of all, what we're going to do is a very quick recap of electromagnetic waves in a plasma. And you have doubtless seen this before in some courses that you've done, like it's in Chen. There's also a very good explanation of it in Hutchinson. And I'm going to give a little taster of how this derivation goes, just to remind folks of how this result actually comes about. And I'm going to do it in a rather restrictive way here so that we can make some rapid progress. And we'll go back and add in more bits of theory later on as we need it. So for our electromagnetic waves in a plasma, we have two fields. We have E and B, like this. And we have Maxwell's equations. So we have that the curl of E is equal to minus B dot, and we have C squared times the curl of B equals E dot plus J over epsilon 0. You might be more used to seeing this equation with a mu 0. I've just moved the C squared over the other side because it makes the math a little bit easier on the next step here. And so these equations are just true. They're true in any medium. And so what we want to do is try and reduce them down a little bit and then put some plasma physics in. So the first thing we normally do when we're deriving electromagnetic waves is we say, what if all of our vectors had some sort of time and space variation that looked like exponential of i, some wave vector k dot, some position vector x minus omega the frequency times time, like that. And so this is a little bit like Fourier transforming our equations here. And we end up with k cross E equals i omega B, and we have C squared k cross B equals minus i omega E plus J upon epsilon 0. And the astute amongst you have noticed I made a sign error in the first equation here. So this is plus i omega B, not minus i omega B. Very good. And so we could call these equations 1 and 2. And we can note that if we do k cross with equation 1, that is equal to i omega upon C squared of equation 2. So what we're doing here is we're just looking at this term and this term and being like, hey, they both got B in them. We can probably make them look the same. So then we can equate. We can do this calculation and we can equate the two sides of the equation. And with a little bit of vector magic, calculus magic, we would end up with something that looks like k k dot E plus k squared E is equal to i omega J over epsilon 0 C squared plus omega squared upon C squared E, like that. Now, we are searching for transverse waves here, electromagnetic waves that tend to be transverse, and that means that k dot E is 0. So we're just going to look for transverse wave solutions here. We can drop that. And that means that our equation can be rewritten as omega squared minus C squared k squared times E equals minus i omega J upon epsilon 0, like that. And I just want to point out, there's absolutely no plasma physics in this at the moment. All we've done is manipulate Maxwell's equations. We haven't said anything about the plasma. 
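For reference, the chain of manipulations just described can be written compactly; this is just a restatement of the steps above, using the same plane-wave convention E, B proportional to exp[i(k.x - omega t)].

```latex
% Fourier-transformed Maxwell equations (no plasma physics yet)
i\mathbf{k}\times\mathbf{E} = i\omega\mathbf{B}, \qquad
i c^{2}\,\mathbf{k}\times\mathbf{B} = -i\omega\mathbf{E} + \mathbf{J}/\epsilon_{0}

% take k x (first equation), substitute the second, and use
% k x (k x E) = k(k.E) - k^2 E
\mathbf{k}\left(\mathbf{k}\cdot\mathbf{E}\right) - k^{2}\mathbf{E}
   = -\frac{\omega^{2}}{c^{2}}\,\mathbf{E} - \frac{i\omega}{\epsilon_{0}c^{2}}\,\mathbf{J}

% transverse waves, k . E = 0:
\left(\omega^{2} - c^{2}k^{2}\right)\mathbf{E} = -\,\frac{i\omega}{\epsilon_{0}}\,\mathbf{J}
```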
So if I take a standard limit here, say I let the current equal 0, which is what it would be in the vacuum of space where there's no particles to carry any current, then we would simply end up with an equation for light in free space and you would have a dispersion relationship omega squared equals C squared k squared, like that. So those are light waves. So life is good. Any questions on that before we put some plasma physics in? Hopefully, you're dredging up your memories of Griffiths and Jackson and all sorts of wonderful things like this, and this is all making sense. So let's keep going. The next thing we want to do then is add in some plasma physics. And we're going to add in some plasma physics with some serious assumptions which let us make significant progress quickly. And these assumptions can all be justified for most plasmas. And if you can't justify it for your plasma, you may want to revisit these a little bit. So the first assumption we're going to make is that we're using high-frequency waves here. And high frequency, this is kind of a wishy-washy term. We want to make that more precise by having a dimensionless parameter. And so we're going to say that omega is much, much larger than omega pi here. So the frequency of our waves is much higher than the ion plasma frequency. What does that condition physically respond to? What are we saying about the ions and their interaction with this wave by putting this condition in? STUDENT: You're saying they essentially don't interact at that point. JACK HARE: Yeah. So the ions are frozen in place. And they are not going to participate in any of the physics that we're interested in here. And again, you can work out the ion plasma frequency for some density that you're interested in, and you'll find out it's pretty low. So this is pretty reasonable, but if you start using very low-frequency waves, it won't be reasonable. So another thing that we're going to do is make the cold plasma approximation. And again, we can't just use the word cold. We have to say what we mean by cold. And we're going to say the thermal of the electrons is much, much less than the speed of light here. And this condition is equivalent to us not worrying about the Maxwellian distribution of the electrons. So all the electrons are just going to be moving with no-- they will be moving, unlike the ions, which are frozen, but they will be moving all at the same speeds. There's no spread of velocities here. So we have a delta function of velocities, and we do not have our Maxwellian distribution that we might normally think about. And the final condition we're going to write down is unmagnetized. Now, this is the one which I think is most complicated. Because, in fact, there are a dozen different ways you can write down a dimensionless parameter for unmagnetized, and they all mean slightly different things. We could think about collisionality. We could think about pressure balance, all sorts of things like that. For the purposes of this derivation, unmagnetized means omega is much, much larger than capital omega. I know it's confusing. And I'll put a little subscript e here because we've frozen the ions. But this is the gyro motion of the electrons. So the electrons may be gyrating around field lines. There may be some magnetic field. It doesn't have to be 0. But on the time scale that the wave goes by and does its stuff, the electrons do not move appreciably around their gyro orbit. So we don't have to care about their gyro motion. 
This is one of the places we will definitely have to relax later on when we want to do Faraday rotation imaging, which relies on that gyro motion to give us the effect. So this is the final condition. Any questions on those three assumptions? OK, so then, we can write down that the current inside our plasma is simply going to be equal to the charge on the electrons, the number of electrons, and how fast they're moving. And so we've simply transferred our lack of knowledge about J into our lack of knowledge about V. So we better do something about that. And we're going to do that using the electron equation of motion. And that electron equation of motion looks like m dVe dt is equal to minus e, the charge, times capital E, the vector electric field, like that. So we could rearrange that so that we have Ve is equal to e vector E over i m omega. We've done the same Fourier transform trick that we did with all the other quantities. We assume it's going to be oscillating in some way. Substitute that back in to our equation for E, and we'll get omega squared minus C squared k squared E, as we had before, is equal to ne e squared over epsilon 0 m, times capital E. Sorry, was there a question? I heard someone speak. STUDENT: No. That was an accident. Sorry. JACK HARE: No worries. All good. And just to be clear here, this equation of motion doesn't have the V cross B term in it, which would be like the magnetic field term, because we've dropped it because we're making this unmagnetized approximation. But that's where you put it back in. You put in a little term that was like, cross V cross B in here and put little brackets around. But for now, we're just setting that equal to 0. OK. Good. And the solution to this equation that we've got now is omega squared equals omega p squared plus C squared k squared. So this looks an awful lot like what we had before, which was just these two terms. But now, we've got this additional term, which includes some plasma physics. And I could write that this is omega pe squared and make it clear that it's the electrons, but remember, we've dropped the ion motion already. So there's only one plasma frequency that's very interesting, the electron plasma frequency, for this derivation here. And then, we can go ahead and do all the standard things we do with one of these functions, which is to try and write down the phase velocity. And the phase velocity squared is just equal to omega squared upon k squared. And so that means that our phase velocity is C upon 1 minus omega p squared upon omega squared, to the one-half power. And then, we can write down the refractive index, because our refractive index, capital N, is just equal to C over the phase velocity. And that is going to be equal to 1 minus omega p squared over omega squared, to the one-half. And we often write this not in terms of frequencies, but in terms of densities, because the plasma frequency has inside it an electron density. We write the square root of 1 minus ne over n critical, like this, where n critical is some critical density. And we'll get on to why it's so critical in a moment. But if you want to approximate it, then n critical in centimeters to the minus 3, so particles per cubic centimeter, is roughly 10 to the 21 over lambda when lambda is in microns here. So if you're using a laser beam at 1 micron, that would be a very standard laser wavelength from a neodymium YAG or a neodymium glass laser, then you have a critical density of about 10 to the 21.
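If you want to check these assumptions and the critical density for your own parameters, here is a minimal sketch using the standard SI expressions; the example plasma (density, field, temperature) and the 1-micron probe are invented for illustration, and the thermal-speed convention is just one common choice.

```python
import math

# Standard SI expressions for the characteristic frequencies and the critical density.
e, me, mi = 1.602e-19, 9.109e-31, 1.673e-27   # C, kg, kg (protons)
eps0, c   = 8.854e-12, 2.998e8                # F/m, m/s

def omega_pe(ne):        return math.sqrt(ne * e**2 / (eps0 * me))
def omega_pi(ni, m=mi):  return math.sqrt(ni * e**2 / (eps0 * m))
def Omega_ce(B):         return e * B / me
def v_th(Te_eV):         return math.sqrt(Te_eV * e / me)     # one common convention
def n_crit(lam):         return eps0 * me * (2 * math.pi * c / lam)**2 / e**2   # m^-3

# Invented example plasma, probed with a 1 um laser
ne, B, Te, lam = 1e24, 5.0, 100.0, 1.0e-6     # m^-3, T, eV, m
w = 2 * math.pi * c / lam

print(f"probe omega  = {w:.2e} rad/s")
print(f"omega_pe     = {omega_pe(ne):.2e} rad/s")
print(f"omega_pi     = {omega_pi(ne):.2e} rad/s  (want << omega)")
print(f"Omega_ce     = {Omega_ce(B):.2e} rad/s  (want << omega)")
print(f"v_th / c     = {v_th(Te) / c:.2e}        (want << 1)")
print(f"n_critical   = {n_crit(lam):.2e} m^-3 = {n_crit(lam) / 1e6:.2e} cm^-3")
print(f"N            = {math.sqrt(1 - ne / n_crit(lam)):.4f}")
```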
Which depending on which field you're working in is either hilariously high and unreachable or crazy low and happens all the time. So again, this is one of the exciting things about doing plasma diagnostics course where we span 16 orders of magnitude in density. Any questions on that? STUDENT: Yeah, what's the critical density point again? JACK HARE: As in, what is its physical significance? STUDENT: Yeah. Like, why did you choose that number? JACK HARE: We're going to look into that in a moment. It's a good question. It's a solid question. But just to be clear, the reason I've got there is I've taken this omega p squared and this omega squared and I've noticed that omega p squared has inside it the electron density, and I've rewritten all the other terms. So this n critical now has inside it things like omega squared. I made a critical mistake in my equation up here. This is lambda squared here for approximating this in terms of simple quantities. So the n critical now depends on the laser wavelength or the frequency of your probing electromagnetic radiation. So it's different for every frequency. But that's the only thing it depends on. The rest of it is all like, fundamental constants, like E and the electron mass that don't change. So this is a parameter there's a critical density for every single laser wavelength or electromagnetic wave frequency. I'm going to talk about lasers a lot. Of course, this also applies to microwaves and things like that as well. But some sort of source of radiation. So I'm going to get on to what n critical is in just a moment. But any other questions on this before we keep going? OK, so we just said that n is equal to the square root of 1 minus ne over n critical. If we work in a regime where ne is much, much less than n critical, so we work far from the critical density, we can do a Taylor expansion, which is what we often end up doing, and we write 1 minus ne over 2 n critical, like this. So my questions for you now-- and we will try and work out what n critical is doing together-- for n, for a density which is greater than the critical density, what happens to n, the refractive index? STUDENT: The wave becomes evanescent, right? JACK HARE: Yeah. Well, that's the result. So what happens to n itself? STUDENT: Big N? It's imaginary. JACK HARE: Right. Exactly. Yeah. Yeah. You've got the right answer. I just wanted to take in a few more steps. So n is imaginary because it's going to be the square root of a negative number, which we can see here. And so that means our wave becomes evanescent. So it's going to have properties which decay in time. So it's going to look like e to the minus alpha x e to the minus gamma t, like this. So it dies off. So that means that we can't propagate a wave at densities greater than the critical density. What happens to all the energy, then? Because this wave is carrying energy. STUDENT: It's absorbed by the plasma. JACK HARE: No absorption mechanism in our equations, actually. STUDENT: Yeah, so that-- I mean, from the math we have, the only option's reflection, right? JACK HARE: Yeah. So reflection is indeed the answer. So we will get reflection of this wave. It will bounce off the critical surface and go somewhere else. We'd only be able to have absorption if we put, for example, collisions into the equation of motion, all the way back there. 
And then, we could have what's called inverse Bremsstrahlung, which is, effectively, the electrons get oscillated by the wave, and then they collide with some ions and transfer the energy to the ions. So that's a damping mechanism. There could be other damping mechanisms, like Landau damping, but in the equations we've got so far, we don't actually have those. So reflection is the only thing that can happen. Now, we also had this equation for the phase velocity, which was that Vp is equal to c over 1 minus omega p squared upon omega squared to the 1/2. Does anyone want to comment on what happens to the phase velocity as we go above the critical density? STUDENT: Does it go to infinity? JACK HARE: I mean, we'll do that eventually, yes. Yes. Right there, at that point-- STUDENT: It diverges? JACK HARE: Extremely large. Yes. Yeah. So this is going to start doing very silly things here. And those things don't seem very physical, of course, because we don't want things traveling faster than the speed of light. Fortunately, this isn't a big problem, because the phase velocity doesn't carry any information, and the limitations on things going faster than the speed of light have to do with information. And the information is encoded in a quantity called the group velocity, and the group velocity just looks like this, c times the square root of 1 minus omega p squared upon omega squared. And so as we get close to the critical density here, all that happens is the group velocity goes to 0, and effectively, we transmit no information through that evanescent region. So we don't have to worry about the fact that the phase velocity is superluminal, because our group velocity is still subluminal, as we'd like. Sean? STUDENT: So if the wave's being reflected here, and the group velocity is going to 0, to me, that seems like the wave information is sort of stagnating at the reflection point. How do we see from these equations that the wave information is actually being reflected back out? JACK HARE: Very good question. STUDENT: I think-- excuse me. I think you need to impose boundary conditions to do that. Because there would be some sort of discontinuity, right? JACK HARE: Yeah. I suspect the model I'm presenting is a little bit too simplistic to handle this stuff. STUDENT: So we would need to bake in some more information. JACK HARE: I think so. There's certainly-- as you get very, very slow group velocities, you're going to start-- we've been making some assumptions about the homogeneity here, and so, effectively, there's going to be some length scale in here, which is going to be like, k, the size of the wave vector of the light at a given point, and we're going to be comparing that to the length scale associated with how quickly the electron density changes, this gradient length scale here. And as the group velocity gets very, very low, the local wavelength is going to get very, very long, and we're going to start violating our assumption that-- let's see. Maybe I can write this as lambda. That lambda is going to be much, much less than this gradient length scale. So for any realistic system, our density has to ramp up. It can't just immediately get up to the critical density. This could be the critical density here. And we're going to find out that there's some region where some of the approximations we've implicitly been making break down. And then, you need to start doing WKB, and all that sort of stuff, and doing everything properly. So I think, effectively, this simple model breaks down.
But if you do it properly-- and I think Hutchinson does this in the reflectometry section. So we may end up doing it. You can get the answer about the reflection there. It's a good question. This is very hand-wavy at this point. I agree. STUDENT: Thanks. JACK HARE: Cool. Any other questions on this? Because this equation here, we are now going to use an awful lot. So I'd like you to agree that it's valid within the assumptions that we've made. And if you don't agree, we should have a chat about it. OK, good. So let's keep going. So there is a series of different measurements that we can make. And these are the refractive index diagnostics. So I'm going to just call these n measurements or n diagnostics. Because they rely on the change in refractive index here. So one type is when the refractive index is not equal to 1. The refractive index inside our plasma is not equal to 1, which is true anytime there's any density inside the plasma. This sort of diagnostic causes a phase shift. So the plasma ends up-- the laser beam going through, or the electromagnetic radiation going through the plasma, ends up with a different phase than it would have done in the absence of the plasma. And we can measure that phase shift using a technique called interferometry. And with interferometry, we can therefore say something about the density inside the plasma. Another technique is when the gradient of the refractive index is not equal to 0. So this is when there is any change in refractive index. And in a plasma, that corresponds very clearly to just changes in the electron density, so gradients in the electron density. But of course, in general, this technique can be used for any medium where the refractive index changes. So air, if you heat it up, the refractive index changes, and so you could use these techniques. These are not specific to plasma physics. And these diagnostics tend to be called refraction diagnostics, because the light refracts and it bends. And we end up doing techniques such as schlieren and shadowgraphy. And then, the final type that I'm going to talk about are ones where, actually, the polarization-- the medium is birefringent. It treats different polarizations differently. And so we can have polarizations of light which are circular. We can have the left-handed and the right-handed polarizations, which we sometimes refer to as plus and minus. And here, we would say that the refractive index for the plus wave is not equal to the refractive index for the minus wave here. And so here, we measure the polarization. And this is using a technique called Faraday rotation. Which we briefly discussed in the context of magnetic field measurements using Verdet glass. And in fact, it turns out that you need to have magnetic fields that are non-zero, and we also need to relax our assumption that the plasma is unmagnetized, in the sense that the light frequency is much larger than the electron cyclotron frequency. So those are three different types of refractive index diagnostic, and we're going to start with what I think is conceptually the simplest, but still often causes us lots of problems, which are the refraction diagnostics here. I see some people writing, so I'm just going to pause on this slide for a moment. Okey doke. Now, I just want to have a little aside. And this is on conceptual models for electromagnetic propagation. Because I'm going to be switching quite a lot between different ways of thinking about electromagnetic radiation and how it moves through a plasma.
Because sometimes, some models are easier to work with than others. Sometimes, models are simplifications and they throw away physics, but they make the intuition much simpler. So I just want to show you two different models that we're going to be using in these next few lectures so that you have an idea of what's going on. One model would be a model of wavefronts. So this is based on the idea that, as we said, our electric and magnetic fields can be written as just a single Fourier component. So there's some strength and polarization of the electric field here, and this is multiplied by the exponential of what we call the phase factor. So i k dot x minus omega t, like this, which we could write as E0 exponential of i times some scalar quantity, which is the phase here. And so if we think of our electromagnetic wave as having a phase, and the electromagnetic wave still exists in all places, all points in time, blah, blah, blah, blah, blah, but that's a very difficult thing for me to sketch on my iPad here or on the board in front of you. So what I'll probably end up sketching are what we call isophase contours. So these are controls along which the phase is constant. And so, for example, it could be at some integer multiple of pi, right? Yes. And belonging to z. So this might look like some waves like this. This would be an electromagnetic wave which is diverging here. Actually, it could be an electromagnetic wave which is converging to the left. But at the moment, it looks like it's diverging to the right. So that's one way I could draw a wave here. Another way I could do it is with a ray model. And this gets into a topic which is called geometric optics. And it turns out what you can do, if you have some isophase contours like the ones I just drew, say, these contours here, they're doing something slightly strange, but perhaps there's a plasma there, which is like moving the phase contours around. If there's a change of refractive index, will affect the phase. The rays that we draw here, I can just take these phase contours and I can draw rays such that they are everywhere normal to the isophase contours. So this ray would look like this. This one would look like this. And this one would look like that. So they are perpendicular to the wavefronts. Conveniently, they are also parallel to the Poynting vector. At least, I'm pretty convinced they are. If someone knows more about geometric optics and thinks I'm wrong, please, shout out, because this was a very hard fact to check in like, 10 minutes before the lecture. But I'm pretty certain they represent the direction of the energy flux in electromagnetic waves. So they're quite conceptually useful as well. They tell us where the power is flowing. Now, when we think about these rays here, we can start thinking a little bit like it's a particle trajectory. And I put particle here in speech marks. I don't think you really need to think about these as photons, but you can think about them as little point particles that move around inside a plasma. And we'll find out some rules for how they move inside the plasma in a moment. And if you track their trajectory, that's where the ray's gone on. And then, you also know some places where you have lines which are normal to the wavefronts. So maybe you could reconstruct the wavefronts later on. But it's important that when we're doing this, we ignore the wave effects. So we no longer track the phase of each particle. It's now just a little billiard ball. And billiard balls don't have phase. 
And so we're going to get rid of effects like interference and diffraction, and we're going to keep effects only like refraction here. So no interference. No diffraction. Just refraction. So this is our ray model. So does anyone have any questions on these models before we start trying to use them? STUDENT: I was just wondering, so if the Poynting vector right is E cross B, Does that mean that if we have any parallel electric field to k-- I'm just wondering, your point about the Poynting vector, would that break if there was like a parallel e to the main background magnetic field, or is that just the oscillating B there? If that question makes sense. JACK HARE: It does, actually. And I know the answer to it. That's good. So if you had some background magnetic field, like in a tokamak. And then E was parallel to it. Well, let's put it this way. The Poynting flux oscillates. And so, when you're averaging it, time averaging it, that's what gives you the actual power that's moving. If you've got a static magnetic field, your average power will go to 0. STUDENT: OK. Yeah. Thank you. JACK HARE: So you'll only get power flow from oscillating components here because that's what's transporting the electromagnetic energy. But it's a really good question. STUDENT: Thank you. JACK HARE: Like I said, I'm not completely 100% sure that rays follow the trajectory of the Poynting vector, but I'm pretty certain, after thinking about it for about 10 minutes, that they do. So if someone finds out that's wrong, please let me know and I'll take it out. OK, so everyone is going to be pretty happy if I start drawing ray diagrams. And they'll understand that these ray diagrams represent the trajectory of little beamlets of light, and you can also reconstruct the wavefronts from them, and therefore, you could reconstruct visually what the entire electromagnetic field looks like. And we're implicitly assuming everywhere here that our magnetic field is perpendicular to our electric field, which is a pretty good approximation to the assumptions we've made so far. OK, so now, let's try putting an electromagnetic wave through a plasma. And it's not going to be any old plasma here. I'm going to choose a slab of plasma like this. And this slab is going to be much denser at the top than it is at the bottom, which I've tried to really clumsily do with some shading here. So it's going to have a gradient of electron density going up, like that. And of course, you remember our formula, 1 minus ne over 2n critical for our refractive index. We're going to work in this regime where the density is much less than the critical density, so we don't have to worry about what happens if we get close to the critical density. And so you can see, then, that if the gradient in the electron density is in this direction, then the gradient in the refractive index is in the opposite direction, like that. And we're going to start by putting through some phase fronts. And we're going to start with a plane wave. So this is a wave in which the phase fronts are flat and parallel and uniformly spaced. So those are my wavefronts. I'll put a little coordinate system in here. I'm going to tend to put the z-coordinate in the direction waves are going. And so there'll be two transverse coordinates, y and x. And I'll probably just write y on most of these. I'll try and do things in a one-dimensional sense. But everything I say you can imagine could be applied to a three-dimensional picture here. 
I'm going to say that this plasma slab has some length L, and it's homogeneous within that length apart from the gradient in the density here. Does anyone know what happens to the phase fronts as they emerge out from this plasma? The wavefronts or the rays. I don't mind. STUDENT: They bend in the up or down, right? JACK HARE: Sorry? Yeah, Daniel? STUDENT: Oh. Yeah. They're bent downward, right? Because you've got a lower refractive index in the upper half. JACK HARE: Yeah. You're absolutely right. So these rays will emerge or these wavefronts will emerge bent, like this. And so the rays-- which I didn't draw on before, but I meant to. So here are some rays for you here. You see how they're all normal to the wavefronts. Here are some rays for you here. And they're going to be bent by some angle, which we'll call theta here. And it turns out, if you go and look at how to do this, theta is going to be equal to d phase dy times lambda over 2 pi. And so we can actually put that all together and we can say it's going to be equal to d dy times the integral of capital N dz, like that. Which, for our plasma, is minus 1 over 2 n critical times the integral of the gradient of the electron density dz. And this dz here is going to be running from 0 to L. Now, I don't know how clear this is to everyone that the rays should bend or that they bend downwards or why they bend. There's lots of different ways of thinking about it. You can go and just solve a load of equations, if you want to. I like to think of it-- and you may laugh at me for this-- as a bunch of soldiers marching arm in arm through some mixed terrain here. So here's my soldiers. I'm looking at them from above. You can see how I'm lining them up nicely with the wavefronts very suggestively. And maybe some of the soldiers over here have got some sort of marsh that they've got to walk through, and these soldiers are going to fall behind. And because they've all linked arms, they've still got to stay in a straight line with each other. And so, as they go, they turn more and more round like this, and this is what leads to our bending here. And you can make this a bit more rigorous if you start thinking about the rays as particles, and you think about their velocity, and you think about the speed that they're going at inside the plasma, and you realize that they're actually going faster in the denser regions, and that's going to start giving you a twist. They're going faster in the denser regions, and slower in the regions with high refractive index. And that's what gives you the bending here. So this is just like a little mental model to think about when you're trying to work out why it is that the rays of light are turning. But there's many, many different ways to get this. STUDENT: Is there a gradient in density along the z-axis? The way it's drawn, it looks like it's only within the y-axis. JACK HARE: It is only in the y-axis. Yes. STUDENT: So that integral of the gradient of density along z is just a constant, then-- is there no-- is there a density change in z, is what I'm asking. JACK HARE: Not in this really simple model I'm proposing. Of course, in general, there can be a density change. Really, this should read gradient of density dot dl, where L is an infinitesimal. No. Sorry. Ignore that. Yeah, there is no gradient in density in z. And we don't need one to get any bending. And in fact, if there was a gradient of density in z, it wouldn't have any effect on the light.
It would just go forwards at a different speed, but it wouldn't get bent. STUDENT: All right. So then that integral is just gradient of ne times L. JACK HARE: Yes. For this very simple model, you're absolutely right. I'm just introducing the generality because we may have something different. But you're quite right. We could write this as minus 1 over 2 n critical times L. And maybe I'll put a subscript z so that I know that it's my length scale in z, and times the gradient in ne. And if this is some simple density ramp, so I would have ne0 times 1 minus y upon Ly, or something like that, I can simply put this in and say that the entire beam is now deflected by a nice constant angle, which has an Lz inside it, an ne0, and minus 1 upon Ly, like this. So, if I give you some analytical result, you can then go and work out what the angle would be. And that's a super useful thing to be able to do. As I segue perfectly into my next remark, which is to do with the first problem that this causes. So this is issues with deflection. The first issue is if you've got some plasma and you're trying to put some electromagnetic radiation in it, you want to collect that radiation. You want to put it onto a detector. Maybe that detector is a camera, or it might be a waveguide that you're collecting microwaves with. And so that camera has some physical size. And so maybe the camera is represented by this lens here. It's got some physical size D. And if your rays get deflected-- that was a terrible straight line. OK. If your rays get deflected by an angle greater than theta max, where tangent of theta max is equal to D over 2L-- I forgot to put in this L here. There we go. Then your ray is going to be lost. So for theta greater than theta max, you lose your rays. So that means you can't collect them. You can't detect them anymore. So this causes big problems because it means that we're going to start losing light here. And for most situations, we can use the small angle paraxial approximation and just replace the tan theta with theta here. So that means you want to keep your deflection angle theta, which is equal to, as we said, d dy of the integral of N dl. That wants to be less than theta max. So there's a few things that you can do to try and do this. You can have a nice big lens. You can have a close lens. You can put it nice and close in. Or you can use a shorter wavelength. Because if you go to a shorter wavelength, you get a smaller deflection angle, which you can see if you go back to maybe this formula here. A shorter wavelength corresponds to a larger n critical, and so you'll get a smaller angle. Now, not all of these things are possible in a standard experiment. If you've got a tokamak, you may have a limit on how big your detector can be, because it's got to fit in a gap between some magnets. You'll certainly have a limit on how close you can put it to the plasma because you don't want to stick it right inside. And you may not be able to choose whatever wavelength you want. Perhaps you're looking at electron cyclotron emission and you've got no choice but to use the wavelength that's emitted at. So this can cause big problems. And so if you're doing some electromagnetic probing of your plasma, one of the first things you should probably do is check whether the density gradients are going to make it hard to actually measure anything. Any questions on this? All right, so we're now going to plunge into our first diagnostic.
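Before moving on, here is a minimal numerical version of that check for the linear ramp just discussed, comparing the deflection angle against the collection half-angle of the optics; all of the parameters are invented for the example.

```python
import math

# Will the deflected probe rays still be collected?
# Linear ramp ne(y) = ne0*(1 - y/Ly), so |theta| ~ (1/(2 n_c)) * Lz * ne0 / Ly,
# and the collection limit is theta_max ~ D / (2 * L_detector).
eps0, me, e, c = 8.854e-12, 9.109e-31, 1.602e-19, 2.998e8

def n_crit(lam):
    return eps0 * me * (2 * math.pi * c / lam) ** 2 / e ** 2    # m^-3

lam  = 532e-9   # probe wavelength [m]
ne0  = 1e25     # peak electron density [m^-3]
Ly   = 1e-3     # transverse gradient scale length [m]
Lz   = 5e-3     # path length through the plasma [m]
D    = 50e-3    # collection lens diameter [m]
Ldet = 0.5      # plasma-to-lens distance [m]

theta     = Lz * ne0 / Ly / (2 * n_crit(lam))
theta_max = D / (2 * Ldet)

print(f"deflection angle      theta     = {theta * 1e3:.2f} mrad")
print(f"collection half-angle theta_max = {theta_max * 1e3:.2f} mrad")
print("rays collected" if theta < theta_max
      else "rays lost: use a bigger/closer lens or a shorter wavelength")
```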
And the point I want to make here is although deflection can be frustrating, it can also be useful. Because we can use it to measure something about the plasma. The first thing we're going to talk about is schlieren imaging. This word, schlieren, people often assume refers to a person. It does not. So it doesn't have a capital, despite what Overleaf will tell you. And it's actually after a German word, schlierer, which is like streaks, because this was first used for looking at small imperfections in optics. And so looking at these little streaks here. So it's a way of imaging things which would otherwise be impossible to see because they cause small gradients in refractive index. So let's have a little example, building up towards schlieren imaging. This first thing I'm going to show you is not schlieren imaging. This is just imaging. But my impression is that some folks need a refresher with some optics. So we're going to start with a solid object. It's going to be this nice little chalice here. And we're going to put in some rays of light. Like this. Now, this object is solid, and so it blocks any rays of light which hit it, these two centers ones, and allows through rays of light going past it. Allows through rays of light going past it. Very good. And what we would probably do here if we're doing a standard imaging system is we would have a lens. So this is how you form an image. So we'll put our lens here. It's going to have a focal length F. And we're going to place it at a distance, which is 2F away from the object we're trying to image. I'll just put that F up there. Now, behind this lens, if we're doing a standard 2F imaging system, we're going to have a focal point, and that's going to be at F away. And then, we're going to have an object plane, which is also at F away. Sorry, an image plane. This is the object plane. And this is the lens with focal length F. So hopefully, some of you have seen this sort of thing before. You know that the rays will pass through the focal point here. He says, drawing them carefully. And what we'll end up with-- can't do this on a chalkboard-- is a copy of our image, of our object here. But it's going to be inverted. And you can tell that because you can see the rays have changed place. So this is a nice 1 to 1 image. It's at magnification 1, and it is inverted. So this is the simplest-- I think, the simplest possible imaging system you could possibly develop. It simply takes whatever is at the object plane and puts it at the image plane some distance away. This could be a microscope. This could be a camera. All sorts of things like that. Yes, Vincent? STUDENT: I think I missed it. What was F again? JACK HARE: I beg your pardon. STUDENT: What was F, like, in the diagram? JACK HARE: The focal length of the lens. STUDENT: Oh. Thank you. JACK HARE: Cool. Any other questions? OK, let's make this more interesting. Let's put a plasma here instead. And we're still going to have our lens. There still can be a focal point. And there's still going to be an image plane here. But the plasma doesn't block the rays of light, as long as we've got, for example, ne much, much less than the critical density here so that the rays can pass through easily. Instead, what we're going to have is rays that come in. And then, they're going to be deflected slightly inside this plasma. So I haven't drawn the density gradient. We can imagine, we've just got a whole range of exciting density gradients that cause some deflections. So they deflect this ray slightly downwards. 
They deflect this ray slightly upwards. They deflect this bottom ray-- what have I done to this one? Let's have this one go straight. For some reason, there's no density gradient exactly there, so the ray just goes through. And this bottom one gets deflected downwards as well. And let's say it just about makes it onto the lens. And I'll move the lens downwards to make that true. Can't do that on a chalkboard either. OK, good. So what will the lens do to these rays now? Well, it's still going to refract the rays, and it's going to refract them-- let's start with this one that actually didn't get deflected at all. So its angle hasn't changed. It's going to go straight through the focal point, as you'd expect. This one that was deflected upwards is going to be deflected down, but it's going to slightly miss the focal point. It's going to be slightly above it, like that. This one that was deflected downwards is going to be the opposite way. It's going to be slightly below the focal point. And this one was deflected downwards. It's also going to be slightly below the focal point. And that was a mistake. There we go. There we go. It's not a very good image compared to the one I was hoping to draw, but there we go. Nothing quite works out. So we should have an image of our plasma here. This image of the plasma should still be 1 to 1, mag 1, and inverted. The fact that I haven't quite managed to get it to work is probably just a flaw with how I managed to draw the rays this time round. Not quite sure what went wrong there. Looks good on my notes, anyway. This stuff gets a little bit tricky. The point is, although the rays here look like they've all gone upwards slightly, they should actually still end up in the same places that they did before. And the fact that I can't get it to work right now just means that I've made a mistake while drawing it live. OK, good. STUDENT: Jack, this is a very basic question, but what's the point of having all the rays go through the trouble of going through the lens when we could just have them go straight through and hit our image plane? Guess I missed that. You know what I mean? JACK HARE: Yeah, absolutely. So in the top case, the solid object, if the rays went straight through and hit the image plane, they will be deflected slightly at the edges. And so you'll end up with something fuzzy, so it'll be out of focus. So you need a lens to bring it to focus, which effectively is mapping the rays from where they came from back to the corresponding place on the image plane. In the case of the plasma, if you don't have a lens and you just let the rays propagate to a screen, that's a technique called shadowgraphy, which we'll talk about next lecture, which I actually think is more difficult even though it's simpler to draw. And so I want to talk about it after this one. So we haven't done anything here at the moment. And in fact, if you do this with a plasma, you won't see anything at all. Because all of the rays are mapped back to where they started from. And that means that you are going to end up with just the same laser beam that you originally started with or the same microwaves you originally started with. So this will be invisible. So the only way we can make this visible is to notice that the rays do not all pass through the focal point. Now, you saw in the case where we did the imaging that all the rays did, indeed, still pass through a focal point.
But here, some of them have gone above and some of them have gone below. And in fact, the distance they've gone above and the distance they've gone below is directly proportional to the angle with which they exited the plasma here. And so we can learn something about the angle at which they exited the plasma by placing a filter at this focal plane. And this filter maybe looks a little bit like this. This filter, for example, here is like a little aperture. And it lets through these two rays. Let me color code them. This one and this one. And it blocks off these two rays. And so light coming from that bit of the plasma, where the density gradients were large, will be blocked, and it will no longer appear on our final image. And this is what schlieren imaging is. So we place a stop at the focal plane and we filter by angle. I'm going to do some exhaustive examples of this to try and build some intuition for what's going on if you're a little bit confused right now. Any questions on this before we keep going? STUDENT: I have sort of an overall conceptual question. I feel like the highest gradients, a lot of the time, or-- well, maybe this isn't quite true. But I can imagine, in a lot of cases, this is going to be affected most by the edges where it's entering and leaving the plasma because you're-- yeah, depending on how uniform things are. So just curious what you actually get an image of. I mean, if you-- especially if you don't have a great sense of where the gradients are or if you have gradients inside you don't know about or something. JACK HARE: Yeah, absolutely. These images are difficult to interpret. So this is not a generic diagnostic technique that will immediately tell you what's going on. You need to know something about your plasma. Maybe you have a simulation and you do a synthetic diagnostic on it. Or maybe you've set up your experiment such that it's particularly simple. We'll talk a little bit about some simple distributions and what patterns they make, and that will give us an idea for what sorts of things we might be able to measure with this. Turbulence in the edge of a tokamak or something like that, this is maybe not the ideal diagnostic for it. STUDENT: Fair enough. JACK HARE: Yeah. I want to make very clear, because I don't know if it came across when we were talking about it before. But we only get deflections from density gradients which are perpendicular to the probing direction here. So if our ray is going in this direction, in the z direction, we sense d dy of ne and d dx of ne, but we do not sense d dz of ne. So if there's density gradients in the direction the ray is propagating, the ray will slow down or speed up, but it won't actually deflect from that. And so that helps you a little bit. You're only sensitive to gradients perpendicular to the probing direction. That might also tell you, if you've got a plasma which you think has some geometry, there may be a good direction to send the probing beam through, and there may be a bad direction. So you want to think about that a little bit. So let's have a talk about some of these stops, right? So I've said that we can place these stops here. Let's have a chat about what sort of different stops are available to us. So types of stop. The first type is to decide whether our stop is going to be dark field or light field. So I'm putting dark in brackets because it will save me writing in a moment. The difference between dark field and light field is that the dark field blocks undeflected rays.
So ones that do pass through the focal point. And the light field blocks deflected rays, ones which do not pass through the focal point. So you can either look for regions where there are density gradients or where there aren't density gradients. We can also choose the shape of our stop because our stop is a two dimensional plane at the focal plane here. So we can have a circular stop. And that doesn't care what direction the ray is deflected in, it only cares on the size of the angle. So the size of theta. So that is basically, are there any large density gradients? Or we can have what's called a knife edge, which is linear like this. And that is sensitive to density gradients in only one direction and it still cares about the size of the density gradient. That is like x hat here. So we could, for example, have a stop at the focal plane. No, it's not going to do it. OK, fine. I can't get a nice, round circle. We can have a stop which is an opening inside an opaque sheet of material here. And this opening could be positioned such that the focal spot in the absence of any plasma sits inside it. What sort of stop would this be? STUDENT: Light field? JACK HARE: So this is a light field stop. And what shape is it? [INTERPOSING VOICES] Yes, OK, circle. Thank you. We could also have a stop that looks like this. And we can position it such that the focal spot is actually within the opaque region. And what sort of stop would this be? STUDENT: Dark field. STUDENT: Dark field. JACK HARE: OK, dark field. And what shape is it? STUDENT: Linear. JACK HARE: So the knife edge here. Yeah. We call it a knife edge because actually using a razor blade is a pretty good thing to have because you get a very nice, sharp, uniform edge to it. OK, and so depending on these stops you can think of as filters in angle space, right? So they allow through certain angles. You can think of arbitrarily complicated versions of this. There's a technique called angular-- not fringe. Angular filter refractometry, which has a set of nested annuli, which let through light which has been deflected by certain specific angles. So the world is your oyster. You can come up with all sorts of exciting different stops if you want to. One thing I will note is that the dynamic range of your diagnostic, which we'll talk about more later, depends a great deal on your focal spot size. So I've shown these focal spots to be relatively small here. But that focal spot size, at least in a diffraction limited sense, it covers an angle, which is equal to the wavelength of your light over the size of your lens, the diameter of your lens here. And so you might end up having focal spots which are not small, but actually could be rather large. And then, part of the focal spot could be obscured and part of the focal spot could be clear for some given deflection angle. And in general, when we've got a plasma here, we have, say, our small focal spot before we put any plasma in the way. When the plasma is gone in the way, different rays of light have been deflected by different amounts. So this thing may take on some complicated shape here. And this is the shape that you're filtering. You might be filtering it with your knife edge like this, or you might be filtering it with your circular stop like this. So we're basically filtering the rays based on how far they've been deflected. At the focal plane, there's no information about where the rays came from inside their plasma. So their spatial information has been lost. 
The only thing we know about them is their angular position. So rays, which are deflected by an angle theta 1 from at the top of the plasma, are rays which are deflected by the same angle from the bottom of plasma. It ends up at the same place in the focal plane, even though they came from different parts of the plasma. This is the magic of geometric optics. So any questions on this or should we do a little example? OK, let's do our example. So let us consider a very simple plasma. This plasma-- we'll have the coordinate system y vertically and we'll have z in the direction of propagation as we discussed before. We'll have rays. Well, I'll draw the plasma first. So the plasma is going to have a density distribution that sort of looks Gaussian ish, some sort of nice peaked function. So this is density, ne. So you can think about this, for example, as like a cylinder. So you've got a cylinder of plasma, like a z pinch, and you're probing down the axis of this z pinch and it's got a Gaussian distribution of density to it. Anything like that. And then, we'll have the rays of light coming through. So we'll have rays of light, which are sampling very small density gradients at the edges here. And these rays e will just go straight through. There's also be a ray that goes through the center, which also sees a very small density gradient here, right? At the center of this distribution, the density gradient is zero. But the edges here where the density gradient is large will have some deflection. And if we place our lens, as we did before, some distance away, it's going to focus those rays onto our focal plane. And in our focal plane, we're going to put some sort of stop here. So the undeflected rays are just going to go straight through the focal point. But the deflected rays are not going to go through the focal point. The deflected rays are going to go above and below. So now we can put a series of stops inside here. So we can have a stop on that blue dashed line that looks like a light field knife edge. We can have a stop that looks like dark field knife edge. We can have a stop that looks like a light field circle. And we could have a stop that looks like a dark field circle. And what we're going to do is sketch out what we expect the intensity to look like from each of these different knife edges here. So if I plot this one-- I'll just draw the density distribution again like that. So that's our density. Now we want to know what our intensity distribution looks like. So this is now intensity. We have our initial intensity 1 and we have 0 intensity corresponding to all the rays being blocked. So where for the light field knife edge do we see no intensity? STUDENT: In the upper side where there is the largest density gradient. JACK HARE: I'm sorry, I didn't hear that properly. Can you say it again? STUDENT: I think the upper portion where there is the largest density gradient. JACK HARE: So I think what you said was in the upper side where there's the largest density gradient, right? So this is where the density gradient is very large. So we expect outside of that region our intensity would be pretty much constant. But inside that region, we'd expect the intensity would drop to 0. Because we're blocking the rays which have a large deflection angle in one direction upwards and the rays which have a large deflection angle in one direction upwards corresponds to that specific density gradient there. OK, anyone want to have a go at telling me what happens with this one? 
What's happening at the edges? What's happening out in these regions here where the density gradients are small? STUDENT: They're cropped out because they mostly go through the focal spot. JACK HARE: Right. Yeah, so these are going to be 0, right? So 0, 0, is there any region where it's not zero? STUDENT: I think it's not 0 for the high gradient region that's lower in y. Because it's deflected. JACK HARE: So you think it's not 0 for this region here? STUDENT: Yes. JACK HARE: Does anyone agree with Sara or does anyone disagree? STUDENT: I think that looks right. JACK HARE: OK, so we're talking about this ray here. Yeah. So this ray is, indeed, passing low down compared to the knife edge. So we'd expect this to show some intensity here. But not for the upper one, which passes higher up. OK, good. We're getting there. What about for the circular light field? Anyone want to tell me what the intensity looks like here? STUDENT: You'd probably get three peaks. So on the edges where the density isn't really-- where the density is just low and then right in the middle where the density is high, but not really changing too much. JACK HARE: Yeah. So you say three peaks, I'm going to think about them as two notches, but I agree with what you're saying. So these notches here correspond to the points where the density gradient is largest here. And so those rays that got a big deflection angle there are being blocked out. What about for the final one here, dark field circular aperture? STUDENT: Well, there you're basically notching out all the minimally deflected rays, so you lose the ones on the edges and center. JACK HARE: Yeah, so we'd have something that looks like this, right? So we would just see light where there was a significant deflection angle. OK, this may still seem a little bit abstract, so we're going to try one more thing. And I hope that you don't hate me for spending so much time on Schlieren, but I absolutely love it, so that's your loss. Which is going to be a two dimensional example here. I'll just draw on this density function-- there we go, the density function now on that last one, just so you've got it. So this actually goes back to a sketch that I did during the b dot lecture where we stuck a little b dot inside the plasma. And we have plasma flow coming from left to right. And because it collides with this, we get some sort of bow shock like this. And as we all know from our shock physics, at the bow shock, we have strong density gradients like this where the density jumps across the shock here. So if we look at this system and we have our probing laser coming towards us, so our laser is looking towards us like this, we are the camera. What would we see about this bow shock? What could we tell about it? So maybe the first thing we could do is ask for a light field circular aperture, what do we think we would see here? And if you're not quite sure, at each of these places where I've drawn this little arrow, you could say that the density maybe as a simple model looks like a sort of hyperbolic tangent type thing like that. It's got some region where the density is ne0, some region where the density is ne1, and then some region where the density changes rapidly. That's not true for a shock, but it's a model just to get us thinking about what this looks like. So what would I see on my image? I've got this expanded laser beam going through this shock and I've decided to use a circular light field stop. STUDENT: I might be doing this backwards, but from-- it looks like what we drew above.
That would mean that you're not getting your steep gradient sections, so it should be dark where the bow shock is. JACK HARE: So you'd actually have an image. Yeah, you'd have an image, which if I was on a chalkboard, I could do with an eraser, but it's actually quite hard here. If you imagine this is filled with a beautiful, nice green laser beam image, then you would have a region where it-- no, the eraser is not very good. If I can make the eraser smaller, that would work much better. You would have a region where there was no light whatsoever and there was just darkness, right? So you just have this dark region here corresponding to the bow shock. And if you did this with a dark field circular aperture, then you'd have the opposite. You'd have complete darkness and you would just have a region where the bow shock shows up very nicely like this. And so this, a dark field circular aperture, is actually typically what we use for shock measurements. If you do happen to do something like dark field with a knife edge and you put your knife edge like this, so you were measuring density gradients that were in this direction, you would end up seeing something a little bit like just one half of the bow shock like that. So if your probe was sitting here, you would just see a little bit of the bow shock. You wouldn't see this section because the density gradients wouldn't be in the correct direction. So you wouldn't be able to observe those. But the knife edge might be a better choice for some shock geometries if you're only interested in gradients in one direction. Aidan, I see your hand. STUDENT: Yeah, I'm curious how distinct the actual image gradients would be given that this is like a cylindrical phenomenon, right? The shock. So I assume there's some density gradients that slowly become parallel to your propagation direction as you rotate to the-- JACK HARE: Yeah, so you won't see those density gradients, but you will see the ones on the edges of the shock. Yeah, it depends on the exact shock morphology, whether it's a bow shock that's extended like this or whether it's a bow shock that's sort of rotated like the tip of a nose cone on an aircraft or something like that. So it will make a difference. But this is just to try and get a feel for what this looks like in terms of imaging here. I can send around some papers later, which have some very nice images of Schlieren of shock structures that show what sort of quality of data you can get out from this. So any other questions on Schlieren? I will then just summarize exactly how to use it and when not to use it and things like that. But if anyone has any questions on these sort of worked examples we've done, please go ahead and shout out. So Schlieren is good for visualizing density gradients, right? And in particular, it's good for visualizing strong density gradients. So in particular, it's good at visualizing shocks. So if you're looking at shocks in plasmas, this is a nice diagnostic. And it's particularly nice because it's very simple to set up. As I showed you, it's just got at minimum a single optic. You need to form a focal point, so you need to have an optic. You need a focal point and you need a stop. So this is a very simple thing to set up. You can do it very, very quickly. The trouble is, although it's simple, it's very difficult to be quantitative. We can say there is some sort of density gradient there. We think the density gradient has to be larger than some certain limiting value.
But other than that, we can't really say much more than that. So it's useful for seeing where shocks are and their morphology, but it is not useful for measuring the gradient of ne. So we can't measure the value of grad ne. We can measure the shape and location of the density gradients. Now, there are some ways where you can make this more quantitative. So imagine you've got a little beam of light coming in like this. This is going in this direction. And you have a knife edge that looks like this. That knife edge is going to be entirely blocking what's coming through-- we'll have 0 light coming through. But if we have a beam that comes through and it's lined up like this, you can see that now about half the light is going to get through. And if we have a beam coming through that's lined up above the knife edge, we'll have all of the light coming through. And it turns out that if you have a large enough spot that you can actually see this partial obscuration of the focal point, you end up in a regime where the intensity that you see on your detector, I, is, in fact, directly proportional to the gradient of the electron density. I'm putting it in brackets because you need a large uniform focal spot. And I'll talk in a moment about why that's actually extremely difficult using a laser, which is what we normally use here. OK, if you can't guarantee that you have a nice large uniform focal spot, you could also use a graded neutral density filter. So a neutral density filter is sort of smoky glass that blocks light. And you can change the amount of impurities in it to block more light. So we could carefully fabricate for ourselves a neutral density filter that has a changing absorption. So we can have a gradient in, say, we'll call the absorption alpha here. And then, the idea is that different rays of light at different points of your stop will either come through not at all or very attenuated. Or they will come through partially attenuated, or they will come through barely attenuated at all. And so if you have a really good graded neutral density filter, you can, again, end up in this regime where you actually get some sensitivity to the position of the beam. And that can get you back into this nice regime up here where the intensity of your signal is actually proportional to the gradient of the electron density. So that'd be a very nice place to be, but this is hard to make. And you still need a uniform beam to start with, which is also hard to make. So these techniques, a lot of Schlieren was actually developed using light sources, which are very different from the light sources that we have to end up using in our experiments with plasmas. And so you end up, yeah, so just looking at the beautiful pictures that Matthew put in the chat here. These pictures here are absolutely glorious. And you can see there's a huge amount of detail on them. And this detail is due to the fact that actually I think for these ones they're using the sun as the backlighter. I may have forgotten this exactly, but the light source is the sun. And the sun is actually quite large. It's an extended object, you may have noticed, in the sky. And this gives you really nice Schlieren imaging. But we don't tend to use nice large objects for our experiments with plasmas. We tend to use things like lasers. And lasers are not a good Schlieren source. And if you want to read in more detail why lasers are actually a really bad idea, you should go read the book that's listed in the bibliography by Settles, which is an absolutely cracking book.
Has lots of lovely pictures, like the ones which were just posted in the chat. And also, a very detailed description of this stuff. But can anyone tell me why-- if lasers aren't very good, why do we end up using lasers as the light source for Schlieren imaging in plasma physics? What property of a laser is it that we particularly want? STUDENT: Monochromatic light. STUDENT: Monochromatic. JACK HARE: So monochromatic is somewhat useful. Actually, it turns out you can do really cool Schlieren techniques with a broadband light source as well and filters because the different wavelengths will be refracted by different amounts. And so if you have multiple cameras with different filters, you can really carefully reconstruct the deflection angles. So monochromatic is a good guess. But actually, we'll be OK with broadband. What else-- when you think laser, what do you think? STUDENT: Coherence. JACK HARE: Coherence. Actually, coherence isn't good for this. We've been using a simple ray picture, and coherence screws up that nice simple picture. It's going to cause problems. So we really want coherence for interferometry and we absolutely hate it for Schlieren. And that's another thing that Settles says in his book, so coherence is a bad thing. So I'm going to put coherence. No, we actually don't want it. What else do we think about lasers? STUDENT: Well, they tend to produce nice round spots. So it's also easy to deflect them without changing them much. JACK HARE: Nice beam. Yeah, you can do that with other light sources. But fine, it is a nice beam. Anyone ever shone a laser pointer in their eye? If not, why not? STUDENT: High energy density. JACK HARE: They're very bright. Should we put it that way? OK, so lasers are extremely bright. Why do we want a bright light source when we're dealing with plasmas? STUDENT: The plasmas glow. JACK HARE: Yes, so we need to overcome the plasma glow, should we call it, or should I say emission here. So when you're working with jet planes flying around and you're taking Schlieren images of them using the sun as your background, you don't actually have to worry about the plasma or the shocked air around the plane glowing and ruining your measurements. Whereas, a plasma, you really do. It makes an awful lot of light. So you need an incredibly bright light source. And so despite the fact that lasers are terrible for Schlieren, it's the brightest light source that we have available, so we end up using that. But the trouble with lasers is that they have a very small focal spot. They are, in fact, incredibly easy to focus down to a point. And we don't want that. We want a nice large focal spot. And it's actually very difficult to get a laser to decohere enough to get a nice big focal spot. But maybe there are some techniques we can do. And because we've got a small focal spot, laser Schlieren is effectively binary. And I'll explain what I mean by that when I remember how to spell Schlieren. What I mean by that is the spot is so small in laser Schlieren that it is either completely blocked by the knife edge or it is completely visible, unblocked, by the knife edge. And so when you do a laser Schlieren image, you get an image which is either dark or light, 0 or 1, no intensity or full intensity. And so you don't get those nice images that we were just looking at that were linked. And we also don't have the ability to use the change in intensity to measure the gradients in the electron density because the intensity is either 0 or 1. So we have very, very limited dynamic range.
You can think about it that way. We have no dynamic range. We only know the density gradient is either larger than some number or smaller than it, and that's it. So it makes very good shock pictures, but you can't do all the beautiful techniques that people use Schlieren for in standard fluid dynamics where they have access to different sources. So if someone came up with a nice, bright, incoherent, large-area laser that we could use, that would be absolutely great. I have a few ideas along this direction, but haven't had a chance to try them out yet. So yeah, they have some serious limitations. So Schlieren is a lovely technique for plasmas. It's very limited. It's obviously useful when you have shocks, so you need fast-moving plasmas. And so this is not really a technique that we would normally use inside, say, a tokamak or a magnetically confined device, but it is very good when you have larger densities. All right, so any questions on this? Nigel, yeah. STUDENT: So did we ever say how exactly the critical density was determined? Is that for a later lecture? JACK HARE: Do you mean like, what is a numeric-- what is an analytical result for the critical density? STUDENT: Yeah, when we had like 1 minus n over n critical. Was that ever determined? Or is it just kind of guess and check. JACK HARE: No, absolutely not. You can write it down. It's a function. And I neglected to write it down here. And I am not going to try and remember what it is. But it is a function which has inside it the frequency of the laser light, epsilon naught, e and me and the-- not the density. Definitely doesn't have the density inside it. So this effectively comes from rearranging the plasma frequency here. And you can find this in Hutchinson's book, and I really should have written it in my notes, but I just didn't, OK? And I'm not going to try and bullshit you and look it up now. So yeah, the critical density is a number that we can find. It is uniquely defined for every electromagnetic wavelength and that is the density which you cannot go above. And if you're sensible, you'll work well away from the critical density. Because if you get anywhere close to it, it really screws up most of these calculations. So most of the time, we're relying on this approximation that the density we're using is much less than the critical density. And we'll come across that approximation very strongly when we deal with interferometry in a little bit. But we did also use it already when we were deriving the Schlieren angle. When we got to this point here, we used this linear approximation where the density is much less than the critical density. Any other questions? Yeah? No? STUDENT: So can you hear me? JACK HARE: I can hear you. Yeah. STUDENT: OK, cool. So would it not be possible to use the quote unquote glow of the plasma itself to use in some other part of the plasma that isn't glowing itself or that it's glowing at some other frequency? Say, once we're putting-- JACK HARE: I thought about this. I think it's a really cool idea. You definitely have to arrange your plasma so there's a bit that's glowing and a bit that isn't. If the bit you want to measure is also glowing, that's going to make things very, very difficult. So yeah, you could potentially have something like that, like an ICF hotspot backlighting the whole plasma. That sort of thing could work. Yeah. But I don't know of anyone who's done it. STUDENT: OK, Thank you. JACK HARE: All right, we're past the hour now. So I think we'll leave it here.
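(For reference, before the sign-off: the critical density mentioned just above comes from setting the probing frequency equal to the electron plasma frequency, which rearranges to n_crit = epsilon_0 m_e omega^2 / e^2. A minimal sketch in Python; the 532 nm wavelength is just an illustrative choice, not a value from the lecture.)

import numpy as np

# Physical constants (SI)
eps0 = 8.854e-12   # vacuum permittivity, F/m
m_e  = 9.109e-31   # electron mass, kg
e    = 1.602e-19   # elementary charge, C
c    = 2.998e8     # speed of light, m/s

def critical_density(wavelength):
    """Density at which the plasma frequency equals the probing frequency.

    From omega_pe^2 = n e^2 / (eps0 m_e) set equal to omega^2,
    so n_crit = eps0 m_e omega^2 / e^2. No density appears on the right.
    """
    omega = 2 * np.pi * c / wavelength
    return eps0 * m_e * omega**2 / e**2

# Example: a frequency-doubled Nd:YAG probing beam (532 nm, illustrative)
print(f"n_crit(532 nm) = {critical_density(532e-9):.2e} m^-3")  # ~4e27 m^-3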
I will be back in the classroom next Tuesday. I look forward to seeing all of you there and all of the Columbia folks online. And yeah, enjoy your weekend and bye for now.
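(A minimal numerical sketch of the one-dimensional worked example from this lecture: a Gaussian density profile probed by parallel rays, with the four stops applied at the focal plane. The deflection angle uses the small-angle Schlieren result referred to above, theta of order the path length times the perpendicular density gradient over twice the critical density; all profile parameters and the threshold angle are made-up illustrative values, not numbers from the lecture.)

import numpy as np

# --- Illustrative (assumed) parameters ---
n_crit = 4e27        # critical density for the probe wavelength, m^-3
n_peak = 1e25        # peak electron density, m^-3
sigma  = 1e-3        # Gaussian width of the profile, m
L      = 5e-3        # path length through the plasma, m
theta_stop = 1e-3    # stop edge in angle space: deflections larger than
                     # this are blocked (light field) or passed (dark field)

y = np.linspace(-5e-3, 5e-3, 1000)            # height across the beam, m
ne = n_peak * np.exp(-y**2 / (2 * sigma**2))  # Gaussian density profile

# Deflection angle for a ray entering at height y (small angle, ne << n_crit):
# theta ~ -(1 / 2 n_crit) * integral(dne/dy dz) ~ -L * (dne/dy) / (2 n_crit).
# The sign just encodes that rays bend away from high density; the exact
# sign convention depends on the geometry and doesn't matter here.
dne_dy = np.gradient(ne, y)
theta = -L * dne_dy / (2 * n_crit)

# Apply the four stops: 1 = ray reaches the image, 0 = blocked.
# A knife edge cares about the sign of the deflection; a circular stop
# only cares about its magnitude, as discussed in the lecture.
light_knife  = (theta <  theta_stop).astype(float)   # blocks large +theta
dark_knife   = (theta >= theta_stop).astype(float)   # passes only large +theta
light_circle = (np.abs(theta) <  theta_stop).astype(float)
dark_circle  = (np.abs(theta) >= theta_stop).astype(float)

# Because a laser focal spot is tiny, each ray is either fully passed or
# fully blocked -- the "binary" laser Schlieren limit described above.
for name, img in [("light knife", light_knife), ("dark knife", dark_knife),
                  ("light circle", light_circle), ("dark circle", dark_circle)]:
    print(f"{name:13s}: blocked fraction = {1 - img.mean():.2f}")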
[The following transcript is from Lecture 3: Magnetics II.]
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So I'm going to begin by giving a recap of the last lecture, just very briefly. And then we'll keep going with where we left off before. So you remember we talked about B-dot probes. These were little loops of wire, like this. They've got some area, A. They've got some magnetic field pointing through them. We're only sensitive to the component of the magnetic field that is parallel to the normal of this loop. And what we want to do is measure some voltage across this loop. And we showed that the flux through this loop, phi, was going to be equal to the integral of B dot ds over this area. And the voltage that we were going to get out was going to be equal to the time rate of change of the flux, which is equal to B dot A, like that. So by measuring the voltage on this loop and then integrating it up, we can work out what the magnetic field is doing as a function of time here. We also talked about-- these are B-dots. We also talked about Rogowski coils, which is effectively a set of B-dots arranged around the circle. So we have a circle like this. But now, we have a load of B-dots arranged all the way around. And we join them all together. And we measure the voltage again here. And this tells us something about the current which is flowing through this loop here. And we found that the voltage that we get out here is equal to the number of turns per unit length. So for example, this might be three turns per millimeter. The area of each of these little loops here times by I dot-- and there was an optional factor of mu r here. This is different from 1. Or if it's different from 1 because you've chosen a material that saturates, like steel or something like that. So these Rogowski coils are used for measuring current in a very similar way to B-dots. We get out of voltage, which we can then integrate up and find the current. So then we talked about how we could use these simple devices to measure the plasma conductivity, which we represented by this symbol, sigma. And the bar here is just some sort of average conductivity here. We made lots of assumptions, such as the fact that we're in steady state, which allows us to ditch a load of terms. And we then ended up with a power balance where we balanced the ohmic dissipation within the plasma, which is the integral of the current density squared over this plasma conductivity with the Poynting flux, which is the energy we're pushing into the plasma in the form of a loop voltage driven by our transformer. In the case of something like a tokamak, this could be the voltage inside a Z-pinch as well, and times by the current flowing in that same direction here. And so this was a balance between the ohmic heating and the external applied power here. And we found that, for something like a tokamak-- I'm just giving you half a donut here-- if we put a Rogowski around the cross-section here, we could measure I phi. And if we put a voltage loop around the tokamak like this, which will effectively sense the same voltage that's being driven inside the tokamak by our transformer, we measure V phi. So we can measure those two things. And with a little bit of rearrangement here, we ended up with a formula for our conductivity, which was equal to I phi over V phi with some geometric terms 2R over A squared, where R is the major radius and A is the minor radius of our tokamak. And this was particularly interesting because we have good models for plasma resistivity. 
And we know that this conductivity is proportional to a load of constants times Te to the 3/2, like that. And so by measuring the plasma conductivity, we could effectively measure the temperature inside our plasma just using two loops of wire, which is pretty cool. Finally, we started talking about pressure balance. So we looked at our MHD equation-- J cross B minus gradient P equals 0. This is steady state MHD in completely general three dimensions. And the idea is we want to be able to measure B and use that to infer the pressure. And so then the pressure is an important quantity because the density and the temperature of the tokamak, or whatever device you're trying to measure, gives you the fusion power output here. And we decided that trying to solve this for like a three dimensional tokamak or a stellarator was very hard. And so we were just going to focus, just to start with, on a so-called straight tokamak, which is just where we take the torus and unfold it into a cylinder, like this. So we went for cylindrical geometry. And we also said we would focus on just the first two modes, the m equals 0 and the m equals 1 modes. And this was the idea that if we put some probe out here measuring B theta, the magnetic field going around our little cylinder here, we could decompose that B theta, as we can always do, using a Fourier decomposition. So we'd have some constant component, C0. And then we'd have a sum over these series of m modes from 1 to infinity with coefficients like Cm cos m theta-- I'm running out of space-- plus Sm sine m theta, just about made it. OK, cool. And the idea is really, most of the time, we're just going to be interested in these lower order modes. But of course, if you have lots and lots of B-dots and digitizers, you might want to go after some of these high order modes. But what we're going to show in today's class is that the size of the m equals 0 mode tells you something about energy confinement. And the size of the m equals 1 mode tells you something about the displacement of your plasma, both of which are useful things to be able to measure. So I'm going to pause there and let you ask any questions on the material that we covered last lecture. Mm-hmm? STUDENT: [INAUDIBLE] JACK HARE: Yes. STUDENT: [INAUDIBLE] JACK HARE: I haven't drawn the return loop there. If you want to put the return loop on, which is a good idea for canceling out some of the stray fields, when you get to this end, you simply wind it back here and put it out this side. So one way you can do this simply is you take coaxial cable, like a BNC cable, you strip off the outer conductor. And so you just have the dielectric and the inner conductor. You take some magnet wire. And you back-wind it along the section that you've stripped, and then solder it back onto the outer sheath. And so that makes this configuration very easily. Other questions? Anyone from Columbia? Yeah? STUDENT: [INAUDIBLE] JACK HARE: Yeah, so the question is, does the Rogowski have to form a complete loop? Or can you go halfway? And the answer is, no, you definitely have to form a complete loop. When you're deriving this, you're using Ampere's law, where you have a loop integral. And that requires you to do a full circuit around the surface that you're integrating around, but yeah. Matthew? STUDENT: Yeah, I think you mentioned this last class, but I just want to make sure-- JACK HARE: I can't hear you. And that's probably my fault. So just give me a second. Can you try to say something now? STUDENT: Test, test.
JACK HARE: Yeah, OK. I can hear you. Can you say your question again, please? STUDENT: Yeah, so I think you mentioned this last class, but I just want to make sure. So another assumption in your computation of the conductivity is that all the current in the plasma is ohmically driven, right? You can't have ECCD or neutral beam-driven current or anything like that. Otherwise, that V phi measurement wouldn't be accurate, right? JACK HARE: Yeah, absolutely true. So in case anyone didn't hear properly, here, we're assuming that all of the current being driven here is to do with the transformer that's driving the current. STUDENT: Right. JACK HARE: It's not to do with waves or something else like that. If you have a tokamak where you're driving waves, this very simple treatment won't work anymore. But many tokamaks, for a long time, they were just inductively driven. STUDENT: Right, right, right. All right, thank you. JACK HARE: OK. Right. So yeah, we started with this derivation with the Poynting flux. But I quickly simplified it to this circuit model. But you're right, if I kept the Poynting flux in there, then obviously I'm capturing all of the power I'm putting into the plasma. But we wouldn't be able to do it in this very simple way with two loops anymore because these two loops are only measuring the contributions to the Poynting flux from the transformer. These loops won't capture your lower hybrid current. STUDENT: [INAUDIBLE] JACK HARE: Yeah. So if you know that power, if you know how much power you're coupling-- STUDENT: [INAUDIBLE] JACK HARE: Right, exactly. Yeah, so you might know you're putting in 10 megawatts. But you may not be coupling 10 megawatts. So yeah, any other questions? All right, let's keep going. So I'm just going to write that. Actually, I don't think I need it. I'm not going to write the decomposition again. Hopefully, you've remembered this. I'll write it again if I need it sometime later. So we're going to focus, first of all, on the m equals 0 term from that decomposition. This is often called the diamagnetic term. We'll see, in fact, that the plasma does not have to be diamagnetic. It just turns out, for many plasmas we study, like tokamaks and stellarators, they are diamagnetic. And so this is a reasonable thing to call them. But we'll talk about paramagnetism briefly as well. And the point of studying this, just so you don't lose all hope as we plow through all this mathematics-- the thing we're going for is it allows us to measure the energy confinement time inside our plasma, tau e. And you probably recall that the Lawson criterion depends on the density and on this tau e quantity. And if you want to put a T on there and call it the triple product, you can do that as well. And pretty clearly, right now, we're able to measure T very roughly using two loops. And we're going to see it only takes three loops of wire to measure tau e. It takes significantly more than that to measure n, unfortunately. But we can get a pretty decent way to being able to measure the Lawson parameter for our plasma with just some little loops of wire, which is pretty cool. So let's have a look at this. So the m equals 0 term, as we discussed last week, is a term that says something about the size of the plasma. This is something that tells us-- has no information whatsoever about the azimuthal profile of our plasma. We've thrown all of that away. So we're only interested in things that move in the radial coordinate.
So I can, for example, take my MHD equation and dot it with the radial unit vector and say, this is equal to 0. Previously, the whole equation was equal to 0. But now, I don't care about all the other terms. They'll be 0 by symmetry. I'm just going to focus on this one. And that allows us to write down a simplified form of this MHD equation, which is minus dp/dr, minus B phi upon mu 0 times dB phi/dr, minus B theta upon mu 0 r times d of r B theta by dr, all equal to 0. And you'll recall that all of these mu 0's are popping out because we've replaced J with the curl of B over mu 0. And you'll also remember that, although we don't really have a phi in our analysis because we're dealing with our straight cylinder, we agreed that we would still continue to call this direction along the cylinder phi, like that, because, if we do bend this back around into a torus, that's what it'll be. And of course, the other direction that's important here is the theta direction, which wraps around-- that's your poloidal angle. OK. If you take this equation 1 and you spend a little bit of time fiddling with it-- and this is where I'm not going to do the whole derivation in class-- you can take this equation 1 and you can multiply it by R squared. And then you can integrate it up-- equation 1 times R squared dR. And you can integrate it between 0 and A, where A is the edge of your plasma, here. And if you do all of that plus some rearrangement-- and I suggest you go away in your own time and give this a go, especially if you're studying for quals at MSE, because this is the sort of question we love to ask-- you will end up being able to rearrange it into a dimensionless parameter called beta theta. And beta theta is defined as 2 times mu 0 over B theta A squared. So this is the magnetic field at the edge of the plasma. And this is multiplied by the average pressure. So this is not necessarily the standard definition of beta that you've seen. It's a definition of beta that uses an average of the magnetic pressure-- sorry, an average of the thermal pressure over the whole plasma cross-section. And we're comparing that to the magnetic pressure at the edge. So it's a little bit subtle because it's not actually a local dimensionless parameter. This is an average dimensionless parameter here. You've taken this. You've integrated it up. You've rearranged it so that you have-- on your left-hand side of your equation, you have this, which is what you're going after. On the right-hand side of your equation, you find out you have 1 plus B phi at the edge squared-- so B phi A squared-- minus the average of B phi squared across the plasma cross-section, all over B theta A squared. So again, these two are at the edge. And so you could measure them just using B-dot probes, which are sitting at the edge, not inside your plasma. This one is going to give you more trouble measuring it. Now, if I asked you to measure B phi, you'd say, that's not a big problem. I'm just going to stick a B-dot all the way around my plasma. And you can see how this is going to give you the average B phi over the area. And you say, job done. But I didn't ask you for the average of B phi. I asked you for the average of B phi squared. And that's not the same as the square of the average, which you could get just by squaring that. So clearly, this is going to cause us some problems. And we don't have a diagnostic that just measures the average of B phi squared. So we can't just get this out easily. And so to make some progress on this, we're going to have to make some assumptions.
And this is where we're making even more assumptions in order to be able to make progress. Before we keep going, does anyone have any questions about this? Yes? STUDENT: [INAUDIBLE] JACK HARE: All I need is for-- the question was, can I put a B-dot probe at A and have it survive? In fact, all I need is for the plasma pressure to be 0 where I place my probe. And it could have been 0 at A over 2, much further in. As long as it's still 0 where I put my probe, this will all work. So it needs to be a vacuum measurement. There cannot be any plasma at a larger radius than my probe. But yeah, OK, other questions? Yes? STUDENT: [INAUDIBLE] JACK HARE: It's free. It's a free agent. This is a fraction. The 1 is by itself. Yes, cool. I can't remember what equation this is in Hutchinson, but this is definitely one of his equations. Yeah, other questions? Yeah? STUDENT: [INAUDIBLE] JACK HARE: Yeah, so we're bothering with this definition of beta. Actually, the beta, it doesn't matter. What we've done is-- just ignore that if you don't like it. I've shown that I can write the average pressure in terms of some quantities. And we can certainly measure these quantities. I could multiply this up by the way, right? I could put this over on the right-hand side. Then I'd have a B theta A squared minus this blah, blah, blah. So far, it's looking good apart from we can't measure this one. And we're going to explain how to measure that. So if you don't like thinking about B theta, just think, hey, you've measured the average thermal pressure inside the plasma. That's already pretty good. So questions from Columbia? All right. Yeah, another question here? STUDENT: [INAUDIBLE] JACK HARE: Apart from the last one, the squared term, yes, yeah. OK, good. So to make progress, we are going to make some assumptions. This happens quite a lot. Some people are like, is that assumption good? Well, it doesn't matter. It lets me keep going with the mathematics. So it's good enough for now. If you want to do it properly, you'll often have to resort to numerics as opposed to analytical results. So we want to make analytical progress here. So we're going to assume three things-- first of all, that our toroidal magnetic field, B phi, as a function of R is roughly constant, as in constant spatially, it doesn't change much in R. We're going to assume that B phi is much, much larger than B theta. And we're also going to assume that beta phi-- and we haven't actually defined that yet. But if you squint over here, you can probably work out what it's going to look like. Oh no, I'll define it here. OK, 2 mu 0 B phi at the edge, B phi A squared times by the average total pressure is much, much less than 1. Can anyone tell me a system for which this is a reasonable set of assumptions? Yeah? STUDENT: [INAUDIBLE] JACK HARE: Yeah, high aspect ratio tokamak, high aspect ratio stellarator as well. So some sort of tokamak or stellarator-- these are machines-- stellarator, OK. These are machines where we are deliberately operating at low beta to avoid MHD instabilities, where we have a very strong toroidal magnetic field to prevent MHD instabilities, like the kink. And if you're at high aspect ratio, then your magnetic field as a function of R is going to be dropping as roughly 1 over R. So I just put my circular cross-section far enough out that this is roughly constant across the circular cross-section. So these are actually not terrible assumptions to make for certain plasmas. But they are violated for other ones. 
This won't work for a reversed field pinch where this is not true. But it does work quite well for these. OK. So using these assumptions, what can we do? We can say, again, that B phi is roughly constant, which means it's also roughly equal to the average value of B phi. But to take into account the fact that B phi probably does vary just a little bit, I'm going to write this as B phi at the edge, B phi A, plus some small quantity delta B phi, which will vary as a function of R. If I take the square of this then, B phi squared, I'm going to get something that's roughly equal to B phi A squared plus 2 delta B phi times B phi A plus a term which is quadratic in this small quantity, delta B phi. And we will say that we are going to drop the quadratic terms. And we're just going to keep the linear terms here. So this is a standard perturbation theory trick. And if you squint at this for a little bit, you'll realize you can then write B phi A squared minus the average of B phi squared is equal to 2 B phi A times, in brackets, B phi A minus the average of B phi. Conveniently, this term here is the same as this term here. But we've now rewritten it in terms of things which we can measure. So we can now finally make some progress. I will say, I find this derivation slightly hand-wavy. And it's not in Hutchinson's book. He skips straight to the result. So you might want to sit down and work it out yourself and convince yourself that it's really true. We're definitely playing fast and loose with our averages as we do this. So this suddenly randomly including these angle bracket signs is maybe a little bit dodgy. So give it a go. You might find it interesting. OK. So that means we can now write our poloidal beta as roughly equal to 1 plus 2 B phi A times, in brackets, B phi A minus the average of B phi, over B theta A squared. And if I go back and I have a look at my little straight tokamak, I can put in a B-dot here to measure B theta A. I can put in a loop that's aligned with phi. And I can measure B phi A. And as I've already said, I can put a big loop that goes around the entire plasma and measure the average value across the plasma of B phi. If I number those 1, 2, and 3, you can see that we have now measured all the things that we wanted. Where's 2? This one, this one, this one. Great, so it's pretty clear that we can actually measure the average pressure inside our plasma using just three loops and a lot of assumptions. But these assumptions are reasonable for the sorts of plasmas that many of us work on. Any questions on that before we keep going and I show you how to get the [INAUDIBLE]? Let's briefly return to our definition for beta phi because this tells us something about the diamagnetism that I hinted at earlier. So beta phi we can write very simply as beta theta. But we need to swap out the magnetic field theta in beta theta for a B phi field. And so we just multiply this by beta theta A squared-- sorry, not beta, B theta A squared over B phi A squared. And if you remember what that looked like, you're just swapping out the B thetas for the B phis. And so then you'll have your definition of beta phi. And if we pop in this beta theta definition, we'd end up with B theta A squared over B phi A squared at the edge plus 2 times 1 minus the average value of B phi over B phi at the edge. I can't go down low enough. But if you look at this, you multiply it by that ratio of B theta over B phi. The 1 picks up that term. The B theta A squared conveniently cancels with the denominator on this. We end up with something like this.
We can go further than this for the assumptions that we've made because we already assumed that the poloidal magnetic field is weak compared to the toroidal magnetic field. So we'll drop this. Now, can anyone tell me any bounds on beta? STUDENT: [INAUDIBLE] JACK HARE: It's positive. Why is it positive? STUDENT: [INAUDIBLE] JACK HARE: So the definition is 2 mu 0 over B phi A squared average pressure. Every single one of those terms is positive. So beta phi is positive. It has to be. There might be other limits to MHD instabilities. But this is the most basic of them. That also implies that this term is positive because that's just the same as beta phi. So from that, we can see that B phi at the edge here is always going to be larger than the average value of B phi. If that were not true, then this could be a negative number, which would be bad. And what this is effectively telling us is something that you already know, but is nice to show just using some MHD, which is-- if you think about your tokamak as a function of major radius, you've got your magnetic field, which is falling off as 1 upon R, like that. The cross-section of your tokamak is something like this. We have decreed that, in fact, B phi at the edge is the same on either side. So maybe I need to go a little bit further out until this line flattens off. But what this is saying is that the average magnetic field has to be less than the magnet field of the edges, which means that this magnetic field has a lovely little dip inside it. So the magnetic field is less than you would predict from the 1 over R. And it is, indeed, that dip that the thermal pressure sits inside, there. So the plasma reduces the magnetic field. And that is the definition of a diamagnetic substance. Any questions on that? Yes? STUDENT: [INAUDIBLE] magnetic [INAUDIBLE]? JACK HARE: If there was no pressure gradient, there'd be no plasma. And so it wouldn't happen. There would just be a vacuum. You can't have a plasma with pressure gradient because you need to have a plasma which has 0 pressure A because otherwise you don't have the edge of a plasma. And if you have no gradients, then the only other pressure available to the plasma throughout the entire volume is 0, yeah. STUDENT: [INAUDIBLE] JACK HARE: It would, under these strong assumptions that we've made. And we're going to talk about relaxing these assumptions and finding a plasma which is paramagnetic, which actually enhances the magnetic field, in a moment. But for something like a tokamak, then this is always paramagnetic-- diamagnetic, sorry, yeah. Other questions? Hm? STUDENT: [INAUDIBLE] JACK HARE: I mean, this is a pretty good model. You can go to Hutchinson's textbook and see measurements he made in 1973 when I assume his mustache was absolutely brilliant. And he does a very good job there. The error bars are quite tight on measuring the energy confinement time. So this was possible in the '70s. So I imagine it is still possible now. We're going to go on a little bit about how difficult it is because effectively you're trying to measure-- it's a very small dip. I've made it look very big here. But of course, the actual difference between the toroidal magnetic field at the edge and the average toroidal magnetic field is very small because the beta is so low. So this is a very difficult measurement to make. But as Hutchinson said in his book, if you do it very carefully, you can get reasonable results from it. So yeah, I think it is a real technique that one can use. Yeah? 
STUDENT: [INAUDIBLE] so this A [INAUDIBLE] first where [INAUDIBLE]? JACK HARE: Yes, the A refers to a point where pressure goes to 0. I'm saying at the edge of the vacuum chamber because it definitely should go to 0 there. But if you want to put A further out, you can also do that, yeah. STUDENT: Does it matter [INAUDIBLE]? JACK HARE: Oh, it absolutely matters that it's outside the last closed flux surface. You cannot have any plasma further out than this probe because that plasma could carry current. And that would violate all of the assumptions we've made so far. STUDENT: So if there's [INAUDIBLE]? JACK HARE: I don't understand. I would never put a B-dot probe inside [? the ?] [? last-- ?] even anywhere close to the last closed vector. You'd have them very far away because of reasons mentioned earlier, that they would melt. STUDENT: Right, OK. JACK HARE: So let me give you an example. If this is your tokamak with a divertor, we've got our core plasma and we've got all of this stuff here. You can just put your B-dots out here, somewhere where the pressure is equal to 0. The pressure is not equal to 0 in the last closed flux service. And it's not equal to 0 in the core. And that effectively reflects that there is some plasma there. That plasma could carry some toroidal current density. And that would mess with of the magnetic field calculations we've been doing where we've carefully assumed that there is no further plasma carrying current outside. So we certainly couldn't suddenly introduce a blob of plasma out here. That would ruin our day, yeah. STUDENT: [INAUDIBLE] JACK HARE: I mean, the pressure of the plasma has to go to 0 somewhere. Otherwise, it has no edge, right? And you can define where that goes, like you say, by sticking a limiter in, like that. And so you can put it anywhere that is behind the limiter, yeah. STUDENT: OK, cool. Anywhere? JACK HARE: Anywhere where the plasma pressure is 0, yes. Yeah, I see a question from online. STUDENT: Hi there. So my question is about the B phi at the edge on both sides. You said at one point that we are like confining them to be the same on either edge. And so it's the dip? JACK HARE: Yeah, so we made an assumption here that B phi was roughly constant. So all I'm saying-- probably I shouldn't have drawn the diagram exactly like this. In the model that we're doing at the moment, what we have is a plasma like this. And we're saying that the magnetic field up to the edge of our plasma is some value B phi A. And inside that, it has some dip. We don't know the exact shape of the dip. That's going to be due to the current profile. So the dip could look like this, for example. But I'm just-- some sort of dip. Does that make sense? STUDENT: Yeah, that makes sense. And then the thermal energy is-- the thermal-- what did you say about thermal? I missed that part. You said there was a thermal energy-- JACK HARE: Well, we'll go through it in a little bit more detail in a moment. But the thermal pressure here is just the thermal energy density. So you can measure the thermal energy inside the plasma now that you know the average thermal energy density. STUDENT: OK. JACK HARE: Yeah, we'll get to that in a second, yeah. OK, more questions here? Yeah? STUDENT: [INAUDIBLE] JACK HARE: We haven't got there yet. If you'll wait five minutes, I will get there. The big reveal at the end. OK, other questions? 
STUDENT: [INAUDIBLE] JACK HARE: Well, the way we've done this is we have taken such a large aspect ratio that our 1 over R is effectively constant where we've chosen this. This is our unwrapping of our torus into a cylinder. It's not true. And we will talk a little bit about the not truthfulness of it in a moment as well. Any other questions? Cool, I'm just going to erase these answers to questions. And then I'll keep going. So I want to briefly touch on times when we have paramagnetism instead. So in the case where our poloidal magnetic field is not necessarily much smaller than our toroidal magnetic field, that doesn't necessarily mean it's bigger than the toroidal magnetic field. It might just be that it's not negligible. If you go and you work through this equation and stare at it for a little bit, it's actually pretty obvious that you can end up with a situation in which the average of B phi is greater than B phi A, the value at the edge. So that would be a situation which looks like that. Anyone got any idea what's going on here, why that could happen? Yeah? STUDENT: [INAUDIBLE] JACK HARE: That'd be cool. Dynamo is forbidden in axisymmetric systems by Cowling's theorem. So I think we can't have that. But OK, it'd be a good idea. Yeah? STUDENT: [INAUDIBLE] JACK HARE: Good. OK, so let's draw that out. So let's start with a big plasma that has got a uniform magnetic field across it. But this plasma also has some current going in this direction. And that current is generating a poloidal magnetic field, B theta. And this poloidal magnetic field is allowed inside the plasma as well, of course. This configuration gives rise to J cross B forces, which will always pinch inwards. These are pinching forces. This is like two wires attracting that you know about from electromagnetism. So this is your J cross B force. Now, the magnetic field inside a sufficiently conducting plasma is frozen to it. So as this plasma compresses, the magnetic field inside will also be compressed. So we'll end up with a situation where the plasma is now smaller, the magnetic field on the outside is still pinned, but the magnetic field on the inside can have some pattern which, again, is going to depend on some details of how the current is distributed such that this condition is fulfilled. And it turns out, if you go through it mathematically, whether you get paramagnetism or diamagnetism depends not on beta phi, but it depends on beta theta. And if you have beta theta greater than 1, it's diamagnetic. And if you have beta theta less than 1, it's paramagnetic. Don't ask me what happens if it's equal to 1, probably nothing. So cool, and so that situation, can anyone tell me some sort of device where this could happen? So this was relevant for stellarators and tokamaks. Any sorts of devices where this could occur? STUDENT: [INAUDIBLE] JACK HARE: Yeah, Z-pinches don't actually have a toroidal magnetic field. So it wouldn't work, but a screw pinch would, a generic screw pinch, yeah. Something like a reversed field pinch is like the classic example. So reversed field pinches, which look topologically like tokamaks, fulfill this condition here. In fact, they have a region where the toroidal magnetic field goes to 0 and reverses, hence reversed field pinch. And they definitely see this paramagnetic effect. Yeah? STUDENT: [INAUDIBLE] JACK HARE: Yeah, I think you probably would, yes. Sorry, for Columbia people, I think you would see this in a spheromak, but I don't know.
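(A minimal sketch of the three-loop diamagnetic measurement derived above, using beta theta roughly equal to 1 plus 2 B phi A times (B phi A minus the average of B phi), over B theta A squared. All numbers are made-up illustrative values, not data from any experiment; the sign of the small difference between B phi at the edge and its average then tells you diamagnetic versus paramagnetic.)

import numpy as np

mu0 = 4e-7 * np.pi  # vacuum permeability, H/m

def beta_theta_diamagnetic(B_theta_a, B_phi_a, B_phi_avg):
    """Poloidal beta from the three edge/average field measurements.

    Valid under the assumptions in the lecture: B_phi roughly uniform,
    B_theta << B_phi, and beta_phi << 1 (large aspect ratio tokamak).
    """
    return 1.0 + 2.0 * B_phi_a * (B_phi_a - B_phi_avg) / B_theta_a**2

def average_pressure(beta_theta, B_theta_a):
    """Volume-averaged thermal pressure (Pa) from beta_theta."""
    return beta_theta * B_theta_a**2 / (2 * mu0)

# Illustrative (made-up) numbers: a 2 T toroidal field with a tiny
# diamagnetic dip, and a 0.2 T poloidal field at the edge.
B_theta_a = 0.2          # T, poloidal B-dot at the wall
B_phi_a   = 2.0          # T, toroidal B-dot at the wall
B_phi_avg = 2.0 - 2e-3   # T, big flux loop around the cross-section

bt = beta_theta_diamagnetic(B_theta_a, B_phi_a, B_phi_avg)
print(f"beta_theta = {bt:.3f}")
print(f"<p> = {average_pressure(bt, B_theta_a):.3e} Pa")
print("diamagnetic" if bt > 1 else "paramagnetic")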
Any questions from Colombia or anyone else in the room? Yeah? STUDENT: [INAUDIBLE] JACK HARE: Ah, that's a good question. I guess the way we're coming at it here is what we can measure and so what we can learn about our plasma rather than what we can do to our plasma. So I'm not sure Hutchinson goes into much detail on this. I'm not sure the answer to your question there. Yeah, I don't have a good answer. STUDENT: I have a question. JACK HARE: Yes, please. STUDENT: So how does this relate to the diamagnetic drift? Often in a tokamak, if it's diamagnetic, there will be some poloidal drift associated with it. But does that drift [? switch ?] direction in the paramagnetic case? JACK HARE: Yeah, that's a great question. So if people didn't hear, how does this relate to diamagnetic drifts that we talk about from single particle motion? The answer is this is equivalent. So you can get these same results by following single particles as they orbit. And you can also get it from MHD, which is quite nice because you can't always get everything single particle from MHD. But in this case, MHD retains that information. And so think, in the paramagnetic case, you're right. The drifts in the opposite direction would cancel the magnetic field, yeah. STUDENT: [INAUDIBLE] theta theta that [INAUDIBLE] definition? JACK HARE: No, it's still this one. So it's at that beta theta, yeah. All right, let's measure the energy confinement time. So all of this has led up to us being able to measure the volume averaged pressure. And we found that we could do that using B theta at the edge, B phi at the edge, and the average of B phi. The reason we might want to do that is that this average thermal pressure is just equal to the density times the temperature. Using temperature in energy units, we swallow the Boltzmann constant. And the stored thermal energy is just equal to this average energy density times the volume of our plasma times a factor of 3/2, which comes from equipartition in three dimensions And you've probably seen from ideal gases. And I'm not going to go through it. But just, it's there. Cool. So the trouble is measuring this is still difficult because-- where's my board with it on-- the thing that we're measuring in order to get this pressure is the ratio of these two things. And it turns out that this dip is going to be very, very small. And so it's going to be very, very hard to measure this. So I'm going to put that back now. The thing we're trying to measure, all of the errors are going to come from the average value of B phi over B phi at the edge. And this is going to be-- it's going to vary within about 10%. So it's not really the right use of approximately equal sign, so 10% variation. So this is difficult to measure accurately. But you can still do it. It is hard, but possible. And as I said, you should have a look in the textbook. There's a very nice figure from a tokamak, I think in Australia. And they still had tokamaks where they did this. And the reason you might want to do that is because you might want to calculate the energy confinement time. So can anyone tell me a useful definition for the energy confinement time? And you can do it in words, if you don't want to do it in formula. Mm-hmm? STUDENT: [INAUDIBLE] JACK HARE: Perfect. Thank you very much. So it is defined as the total stored energy W over the input power. And I'm writing that as a big P because it's not the same as our little p. It's the power going in. So as I said, this is simply-- well, I'm not going to write it out again. 
It's there. And this, for an ohmically driven, inductively driven tokamak-- it doesn't include all the other heating terms you might want to add-- is going to be I phi squared R P, where this R P is the plasma resistance. So we're treating the entire plasma, the entire donut here, as some sort of resistor with resistance R P, like that. OK. And if you crank the handle on all of this, you remember the volume of your torus is pi A squared times 2 pi R, at least for a large aspect ratio torus, so that we have a simple formula. And you remember that your toroidal current from Ampere's law is going to be 2 pi A B theta at the edge over mu 0-- and that is the thing that gets squared in the power. You can write all of this triumphantly as 3/8 mu 0 beta theta R over R P. I personally dislike the way that we use R here because these are completely different R's. It makes it look like it's a ratio of things with the same quantities. But this is the R for your torus, the major radius. And this, again as we said, is the plasma resistance, which is going to be-- let's see-- da, da, da-- it should go up with length. So 2 pi R times the resistivity over pi A squared-- and that resistivity we can just replace with 1 over the conductivity. Huzzah. So we've measured that already. Remember, we used two loops to measure that. We used a Rogowski and we measured the loop voltage. And then we also need to know this. And again, we can measure that using three loops, which are B theta at the edge, B phi at the edge, and the average of B phi. Three loops-- so the conclusion of all this is 5 loops gets you tau E with a fair few assumptions along the way. But that's not bad. Questions? Yeah? STUDENT: [INAUDIBLE] JACK HARE: Yeah, I guess you can also extract, rather more noisily, the density at this point. So your argument is that from the conductivity we've also got temperature. And because we've got the average pressure, which is presumably going to be the average of n times T, we're going to be able to get out an estimate for n with lots of exciting averaging going on. But again, for five loops, that's not bad. So you're going to argue that we can also get n. And so we can get the Lawson criterion as well. I'll talk about that. That sounds like fun. Someone should try that. OK. Yeah, I see there's a question online. STUDENT: Oh, hi. Would you just quickly mind showing where you would place all of these-- JACK HARE: Thank you, that's a great question. I'm just going to erase some of this because I think you guys have seen it. And I want to have this space, so I can draw nice and big what's going on here. So I'm going to draw my tokamak or my toroidal device in cross-section, like this. So I need to have a Rogowski. I'm going to put that around some poloidal cross-section. I'm going to back-wind it to keep everyone happy. But that's I. I am going to stick in a probe which measures the toroidal magnetic field at the edge. I'm going to stick in another probe. I hope you can appreciate the beautiful perspective I've put into this circle here. And this is going to be measuring B theta at the edge. So remember, that probe should be lined up with the long axis of the torus here. So if you want me to, I could just draw it like this, I guess. There we go. Cool. I also want to have something that measures the average magnetic field B phi. Note how this looks a little bit like the Rogowski, but it measures something very, very different. And then finally, I need to have my flux loop, which is measuring the loop voltage, V phi, like this.
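To make that five-loop bookkeeping concrete, here is a minimal Python sketch of how the signals could be combined into tau E. This is an editor's illustration rather than anything from the lecture: the function name, the approximate diamagnetic relation used for beta theta, the assumption of purely ohmic heating, and all of the numerical values are assumptions.

import numpy as np

MU0 = 4e-7 * np.pi

def tau_E_from_loops(B_theta_a, B_phi_a, B_phi_avg, V_loop, a, R):
    """Energy confinement time of an ohmic, circular, large-aspect-ratio
    tokamak from edge loop measurements (hypothetical input values).
    B_theta_a : poloidal field at the edge [T]
    B_phi_a   : toroidal field at the edge [T]
    B_phi_avg : cross-section-averaged toroidal field [T] (diamagnetic loop)
    V_loop    : toroidal loop voltage [V] (flux loop)
    a, R      : minor and major radius [m]
    """
    # Approximate diamagnetic relation for a small change in B_phi (assumption):
    # beta_theta ~ 1 + 2 B_phi(a) (B_phi(a) - <B_phi>) / B_theta(a)^2
    beta_theta = 1.0 + 2.0 * B_phi_a * (B_phi_a - B_phi_avg) / B_theta_a**2
    p_avg = beta_theta * B_theta_a**2 / (2.0 * MU0)   # volume-averaged pressure [Pa]
    volume = 2.0 * np.pi**2 * R * a**2                # large-aspect-ratio torus volume
    W = 1.5 * p_avg * volume                          # stored thermal energy [J]
    I_phi = 2.0 * np.pi * a * B_theta_a / MU0         # plasma current; equivalently the Rogowski reading
    P_ohmic = I_phi * V_loop                          # = I_phi^2 R_P for a purely ohmic plasma
    return W / P_ohmic

# Illustrative, made-up numbers only:
print(tau_E_from_loops(B_theta_a=0.2, B_phi_a=2.0, B_phi_avg=1.998,
                       V_loop=1.5, a=0.3, R=1.0))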
Now, remember what we were saying: we're talking about here an ohmically driven, inductively driven device. And so that transformer, which will fit around the cross-section here, is driving the voltage in the plasma. But exactly the same voltage is being driven across this loop. And so we can measure the voltage that's in the plasma using this loop here. OK. Hopefully, that is clear. Any more questions? STUDENT: Yeah, thank you. JACK HARE: You're welcome. OK, let's do m equals 1 [? after ?] m equals 0. So as we discussed last week, m equals 1 is a displacement mode. And so what we're doing is we're talking about a cylindrical vacuum vessel here that has inside it some sort of cylindrical plasma. And the center of that plasma is offset from the geometric center of our vacuum vessel by a distance that we're going to call delta, capital delta. And we'll say that the vacuum vessel has a radius A here. And around the edge of this vacuum vessel, we're going to place a series of B-dot probes. And these B-dot probes are going to be lined up to measure B theta. So if we look at what B theta is going to be as a function of theta-- and remember, each of these probes is at a different angle-- theta 1, theta 2, and so on. So effectively, we are measuring this function sparsely at each B-dot probe here. We're going to see that this is equal to mu 0 over 2 pi A times the current inside this plasma. For Ampere's law, we don't care about how the current is distributed. It's just the total current here. And this is going to be modified by a term that looks like 1 over the square root of sine squared theta plus, in brackets, cos theta minus delta upon A, squared. This is just geometry. The thing inside the square root, multiplied by A, is just r, where r is the distance from the center of the plasma to some probe, which is now different depending on the angle you're at, which is why I'm just going to clarify it. This now has a theta dependence to it. And we're going to make an assumption again that delta upon A is small. Is this a good assumption? What would happen if it wasn't true? Yeah? STUDENT: [INAUDIBLE] JACK HARE: Plasma's gone. It's hit the wall already, exactly. So we're going to make it. And it's a very good assumption because otherwise you have already failed the [INAUDIBLE]. So having made that assumption, we can approximate this as mu 0 I upon 2 pi A times 1 plus delta upon A cos theta, the Taylor expansion here. And you can see that there's going to be a second order term in delta upon A that disappears. And there's going to be a sine squared and a cos squared, and they're going to come together to make a 1. So this all looks like it's going to work. Remember here, in this case, we have specified that delta is in the x direction. So this is delta x, like that. If it was in the y direction, this would be 1 plus delta y upon A sine theta. And if you remember the Fourier decomposition that we said we were going to do at the start, for this magnetic field pattern there are only two terms. Well, this is the C0 coefficient. And these are the C1 and S1 coefficients. And so we can say that we can directly measure the displacement in x as 2A C1 over C0, or the displacement in y as 2A S1 over C0. So this is C1. This is S1. So by doing a Fourier decomposition of our signal and therefore finding the coefficients C1 and S1, we can immediately find the displacement of our plasma. And that's pretty fast to do. And so we can see if our plasma is starting to move.
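As a sketch of how that Fourier decomposition turns a handful of probe signals into a displacement, here is a minimal Python example (editor's illustration only; the probe count, vessel radius, and the synthetic signal are assumptions):

import numpy as np

def plasma_displacement(theta, B_theta, a):
    """Estimate (delta_x, delta_y) of the current channel from B_theta sampled
    by probes at poloidal angles theta (radians). Assumes equally spaced
    probes and delta much less than a, as in the lecture."""
    N = len(theta)
    C0 = 2.0 / N * np.sum(B_theta)                   # C0/2 is the mean of the signal
    C1 = 2.0 / N * np.sum(B_theta * np.cos(theta))
    S1 = 2.0 / N * np.sum(B_theta * np.sin(theta))
    return 2 * a * C1 / C0, 2 * a * S1 / C0

# Synthetic test: a 5 mm shift in x inside a 0.2 m vessel, eight probes
a, delta_x = 0.2, 0.005
theta = np.linspace(0, 2 * np.pi, 8, endpoint=False)
B = 1.0 * (1 + delta_x / a * np.cos(theta))          # mu0 I / (2 pi a) set to 1 here
print(plasma_displacement(theta, B, a))              # ~ (0.005, 0.0)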
And we can do something about it, like active feedback, to stop it hitting the wall. So that's pretty useful. Any questions on this? Yeah? STUDENT: [INAUDIBLE] as you're varying theta [INAUDIBLE]? JACK HARE: Oh, yes. STUDENT: [INAUDIBLE] JACK HARE: Yeah, so if your plasma is in the center, you would have-- B theta would be like this. And you might reconstruct a flat profile. If your plasma moves towards theta 1, you might have a profile-- let me use a different color. Yeah, that goes back to there. And you'll notice that, in this case, I've got four sensors. And I can just about reconstruct the cosinusoidal variation there. And you'll find that very interesting when you come to the P set because this sort of question is fundamental to how many B-dot probes do you need to measure some displacement. So thinking in terms of Nyquist-Shannon sampling theorem, things like that might get you a long way. Cool. Other questions? Yeah? STUDENT: [INAUDIBLE] for measuring [INAUDIBLE] 0 [INAUDIBLE]? JACK HARE: How many did we need to measure? We just needed 1. STUDENT: But now we're [INAUDIBLE]? JACK HARE: No, it's still here. You see the C0 term when you do your Fourier decomposition. No matter how many probes you have, you can always Fourier decompose it to C0 upon 2, and then the sum to infinity of Cm cos m theta Sm sine m theta. And what we're saying is if you want to measure, for example, Te and all that beta stuff we did before, you just take this one. This is the m equals 0 term. If you want to measure displacements of the plasma, you take these first two terms, C1. If you want to measure more exciting higher order modes where your plasma gets pancaked or it comes into increasingly unlikely shapes, then you want the higher order ones of these. But John made a very good point, which is that you'll have to think about how many-- what m you need to get to will set a requirement on how many probes that you have in order to properly do this Fourier decomposition. And this, again, is related to Nyquist theorem. So there's a bit to think about. Any other questions on this? Yeah? STUDENT: What [INAUDIBLE]? JACK HARE: Yeah, so the question was, at what point do you need to start worrying about eddy currents in the vacuum vessel messing up this measurement. These would definitely be affected by eddy currents. So that's a higher order correction that you need to do to understand what the eddy current does, yeah. So in this assumption, we have assumed there are no eddy currents. We have a nonconducting vacuum vessel or something else. Yeah, exactly. Any other questions? So just very briefly, Hutchinson spends a long time on this in the book. And I think it's very interesting. But it's very hard to go through in class because it's very mathematical. So I just want to give you a little taster of what would happen for a toroidal system. So remember, we've really been assuming all along here that we've got a cylinder, a very high aspect ratio tokamak, so that we don't have to worry about toroidal curvature. But what happens in a toroidal system is that we have a very complicated force balance. Remember, our MHD equations, which we simplified massively, we're going to have some more terms. But there's a complex force balance. And that shifts. If this is our axis of symmetry of our tokamak and this is the vacuum vessel, that shifts the magnetic flux surfaces so that they're no longer concentric. 
So we may have a flux surface like this, and a flux surface like this, a flux surface like this, and a flux surface like this. And again, I'm not explaining where you get to from this. This is like three or four pages of algebra. But you start ending up with systems that are much more complicated. We have kind of been assuming a lot of the time here that we have concentric flux surfaces. It's actually not important for most of the things we're driving. But we certainly would have no reason to believe the flux surfaces are not concentric because our system is azimuthally symmetric. There'd be nothing pushing them from one side to the other. So if you start doing this analysis properly, you get to equation 2.2.25 in Hutchinson's book, just to let you know where I've pulled this away from. And you find out that your magnetic field, the m equals 1 term, is going to be equal to the m equals 0 term times a correction for where you are inside the plasma. So this coordinate R measures radially outwards from the center of our poloidal cross-section. And this R0 is the distance from the axis the center of our poloidal cross-section. And we find-- sorry, this isn't very clear. And all of this is going to be multiplied by beta theta plus a term, Li upon 2, which I'll explain in a moment, minus 1. Again, none of this is derived. I'm just throwing this equation at you to show you some of the things that you can do in the toroidal geometry. Now, beta theta is something that we can measure from our m equals 0 diagnostics. They're still going to work just fine. What's interesting is that this shift here results in a term that is to do with the inductance-- not quite the inductance, but it's similar to the inductance. And we talked before about how the inductance is a measure of the geometry of your current, so how the current is flowing in the system. And so by measuring this very precisely, you may be able to determine whether your system has a current profile that is nicely peaked or whether it has a profile which has a hollow in the center, something like that. And so you actually can end up getting more information out of your system when you have a toroidal geometry. But the price you pay for that is significantly more complexity with all the equations. So again, that's all I'm going to go into in terms of this. If you work with magnetic diagnostics for tokamaks, or stellarators, or other devices like that, you will definitely come across some of this very complicated stuff. So any questions? OK. Now, we're going to stick the probe in the plasma, see what happens. So now, we're discussing internal probes. So again, we'll have some vacuum chamber with some sort of clever re-entrance port. And we can stick into it our B-dot probe. We'll probably put a bit of shielding insulation on the B-dot probe, like that. And we'll talk a little bit more about that in the context of Langmuir probes next lecture. And there is some plasma here. And there will be some magnetic fields, which may be three dimensional in nature. Now, despite what it looks like, this is actually not very perturbative. Despite the fact you would stick a probe right in your plasma, this is not as bad as it might seem. What do we mean by perturbative? We mean that it changes the measurement that we're trying to make. This turns out not to be very perturbative. The main reason is that the magnetic fields at a given point here are not just set by the currents locally. They're set by currents over here, and over here, and over here. 
So B is set by global currents. And so disturbing just a little bit of the plasma doesn't actually change the magnetic field at a point that much. Again, we're going to put an insulator around it. So we would set I into the probe equal to 0. And we're going to make the probe small. We want it to be small to reduce the perturbation on the plasma, but also to make delta B over the probe small, as we discussed before, which makes our analysis easier because it means the magnetic field is constant over the probe here. So you wouldn't want to stick one of these inside a very hot and dense plasma. Well, you might. If went on to stick one of these inside a tokamak because it would just melt and it would fill your plasma with impurities. And your plasma would crash. That would be bad. If you're doing, for example, a pulse power-driven experiment, you might want to stick these inside a plasma. But you might not expect it to last more than one experiment because of the heat flux on it. Now, what people do on some machines is they have sticks of probes that they can position. The stick has a load of probes wound like this to measure, for example, Bx. It also has a load of probes wound like this in the same location to measure By. And it'll have a load of probes wound like this to measure Bz. So you'll be able to get all three of the axes here. And the nice thing about that-- you put these all together. You can find out what the current is locally inside your plasma because the current is simply 1 over mu 0 curl of B. Now, you're taking the gradient of the signal. And the signal might be noisy. So you've got to be a little bit careful here because this is going to be a noisy measurement itself. But the fact is you can still have some go at measuring the current internally to a plasma, which is a very, very hard thing to do. I'll just make a note here that this is noisy because we have delta B over delta x. Delta x is your spacing of your probes here. So the smaller you can make delta x, the better you're going to be measuring this gradient. And this looks very Nyquisty again all of a sudden. Sorry, we can't escape from them. The nice thing about measuring J is that then we would have J and B. And we can use that to measure the pressure. So we can actually measure the pressure inside our plasma using these closely spaced probes. Or at least we can measure the pressure along the line of the probe. This turns out to be hard for low beta plasmas because grad P is small. So you won't see much change in J cross B. But when beta is reasonable, you might be able to do this. A very nice trick is if you have a steady plasma or a plasma which is highly reproducible, so you can do this experiment over and over again. You can actually take this probe and say you've got a plasma that looks like this. And you can step the probe across the plasma. And either the plasma is steady or you're doing the same experiment again and again and again. And you can build up a complete map of B as a function of space and time and therefore J as a function of space and time as well. And people do this. One great experiment is the large area plasma device at UCLA, LAPD. And on this device where they do astrophysics relevance experiments, they can map out all of this. They will do a shot every second. There'll be a servo stepper motor that moves this across. And they just press go after programming it. And it will run for 24 hours a day for a week. 
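As an aside, here is a minimal sketch of the curl-B estimate described a moment ago. It is an editor's illustration and makes an extra assumption beyond anything in the lecture: that the field varies mainly along the probe stick, so the transverse derivatives that would otherwise appear in curl(B) can be neglected.

import numpy as np

MU0 = 4e-7 * np.pi

def current_from_probe_stick(x, By, Bz):
    """Estimate the current density from a 1D stick of three-axis B-dot probes
    at positions x along the stick. Only derivatives along x are available, so
    this assumes the field varies mainly along x (for example, a roughly planar
    current layer). Returns (Jy, Jz) in A/m^2 at the probe positions."""
    dBy_dx = np.gradient(By, x)   # central differences; smooth or fit noisy
    dBz_dx = np.gradient(Bz, x)   # signals before differentiating
    Jy = -dBz_dx / MU0
    Jz = dBy_dx / MU0
    return Jy, Jz

# Made-up example: a current sheet centered at x = 0, about a millimetre wide
x = np.linspace(-5e-3, 5e-3, 21)
Bz = 0.5 * np.tanh(x / 1e-3)       # field reversal across the sheet [T]
By = np.zeros_like(x)
Jy, Jz = current_from_probe_stick(x, By, Bz)
print(np.abs(Jy).max())            # peak current density [A/m^2]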
And you'll have a huge amount of data with a full three-dimensional map of the magnetic fields. So this is the sort of thing you can do with high rep rate devices. It's very, very cool. OK, any questions so far? Yes? STUDENT: [INAUDIBLE]. But when you're entering [INAUDIBLE]? JACK HARE: It's the local current. So this is a local equation. This is B of x. So you get J of x wherever your measurement is being taken. So you would get J at the locations of each of these coils or, depending on your differencing scheme, halfway between the locations of each coil, yeah. STUDENT: [INAUDIBLE] JACK HARE: Yeah, so the thing is that this equation here does not imply causality. You don't have magnetic fields telling currents where to go or currents going where-- telling magnetic fields where to go. It's just true simultaneously that a global magnetic field also constrains the global current. And so you don't end up having too much perturbation here. There will be some perturbation. You'll be cooling down the plasma and things like that. But if you're on small length scales compared to the plasma dynamics, the current will still be roughly the same. Yeah? STUDENT: Can you repeat why [INAUDIBLE] theta [INAUDIBLE]? JACK HARE: It's just that, in that case, you have P much less than B squared over 2 mu 0, which means that this J cross term is just going to look like this. And so this is x and this is B squared over 2 mu 0. So this is only going to vary by, say, 1% or so. But you're trying to measure the pressure, which is going to be the opposite of this, down at about a 1% level, which means you need to have 1% accuracy on this reconstruction. So you need to have very low signal to noise because if your signal was noisy, you would just reconstruct a completely noisy pressure signal. And it wouldn't make any sense. Okey doke. Some other cool tricks you can do with probes-- if you can't move the probe inside the plasma because the plasma is moving too quickly or it's not steady enough, you can just set your probe up and let the plasma flow over it. So for example, we could have a B-dot probe like this. And we can have a plasma that is moving towards the probe with some velocity, v. And we're assuming here that this plasma is taking with it some magnetic field. And maybe you know which direction the magnetic field is pointing in prior to the experiment. And you set up your B-dot probe so that it's aligned with this field. And if you have a set of these B-dot probes-- for example, if you have one here and then you have one up here and they've got some separation delta x, like this, the signal that you'll measure on the two probes will be delayed. So if one probe goes like this and gives you this magnetic field as a function of time, well, that one is 2, 1. And the next one goes like this. You can do some rather neat tricks and identify similar features in these two signals-- so the start of the magnetic field, the start of this foot, the peak of the magnetic field here, and the fall of this foot here. And you can calculate the time differences here. So you'll have a set of time differences like delta T1, delta T2, delta T3, and so on like that. And from this, you can put it all together. And you can get out an estimate for the flow velocity of your plasma as a function of time at a specific point, so at the location of your probe, just from doing very simple time of flight differencing like this. It turns out this formula is a bit simplified. 
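Here is a minimal sketch of that simple time-of-flight estimate (editor's illustration; the cross-correlation approach, the synthetic Gaussian pulses, and the numbers are all assumptions, and, as the next remark explains, this simple Eulerian picture needs correcting for real flows):

import numpy as np

def time_of_flight_velocity(t, B1, B2, dx):
    """Estimate an advection velocity from two B-dot probes separated by dx
    along the flow, by cross-correlating their signals and converting the best
    time lag into a speed."""
    dt = t[1] - t[0]
    b1 = (B1 - B1.mean()) / B1.std()
    b2 = (B2 - B2.mean()) / B2.std()
    corr = np.correlate(b2, b1, mode="full")
    lag = (np.argmax(corr) - (len(b1) - 1)) * dt   # delay of probe 2 relative to probe 1
    return dx / lag

# Synthetic check: probe 2 sees the same pulse 10 ns later, dx = 2 mm -> 2e5 m/s
t = np.linspace(0, 200e-9, 2000)
pulse = lambda t0: np.exp(-((t - t0) / 20e-9) ** 2)
print(time_of_flight_velocity(t, pulse(80e-9), pulse(90e-9), 2e-3))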
You actually have to think in a Lagrangian sense rather than in this Eulerian sense. And one of my grad students, [? Rachaad ?] [? Datta, ?] wrote a nice paper on how to do this properly last year, which is in RSI. So if you want to know any more details, [? Datta ?] et al. RSI 2022. Another thing that [? Rachaad ?] then did, which is the final thing I want to talk about today, is he asked what happens if the plasma, which is flowing here, is flowing supersonically, has a Mach number greater than 1. Anyone know what will happen then? Yeah? STUDENT: [INAUDIBLE] JACK HARE: Yeah, so you'll get a bow shock forming around your probe. Anyone taking a course in shock physics and know anything interesting about the shape of the bow shock? STUDENT: [INAUDIBLE] JACK HARE: Indeed. So for the opening angle of this shock, which we call mu, from simple shock theory, we get that the sine of this opening angle is equal to 1 over the Mach number. So if we have a probe and a camera taking a picture of this bow shock and we can measure the opening angle of it, we can get out the Mach number. What's the Mach number equal to? The dimensionless parameter? STUDENT: [INAUDIBLE] JACK HARE: Speed over the sound speed. Great. Well, we've already measured the speed using this technique. We've just measured the Mach number using this technique. So now, we have an estimate of the sound speed. And in a plasma, the sound speed is going to be something like gamma times the average ionization times the electron temperature over the ion mass, all square rooted, like this. Now, some of these coefficients we don't know very well. We should know what our plasma is made out of. We should know the ion mass. It turns out that this gamma for a high density plasma where you're going to see shocks is about 1.1. It would be 5/3 in an ideal gas. But it's reduced by the effect of ionization. And so all of this together gives you a measurement of the ionization times the electron temperature, which is kind of cool because it means that, just by using a single B-dot and a camera, you can get out the electron temperature of your plasma, which is the sort of thing that you normally need to do optical Thomson scattering for. We're going to spend four lectures on optical Thomson scattering. It's not an easy thing to understand, whereas this is a relatively simple diagnostic and relatively cheap as well. So this is kind of a nice way of being able to measure the temperature of your plasma just by taking pictures of bow shocks and measuring the magnetic field flowing over them. So I see there were some questions. STUDENT: [INAUDIBLE] JACK HARE: So the question is, does the heating of the probe cause problems? Definitely, these probes get blown up. And normally, you see the voltage signal go haywire. And then you stop trusting it after that point. So you integrate up until time of death. And then you leave it, yeah. STUDENT: [INAUDIBLE] JACK HARE: The thing is that heat transport is actually very slow on these timescales. These are like nanosecond timescale experiments here. So you're not too worried about heat transport. So yeah, ablation is actually like photoionization on the surface and heating through as an ablation [INAUDIBLE] problem, yeah. STUDENT: [INAUDIBLE] JACK HARE: So the question is, are you looking at optical wavelengths, are you looking at X-ray wavelengths? Yes, all of those wavelengths will work. In general, the brightness will be to do with the density of the plasma. It has a stronger dependence on density than anything else.
So if the plasma is hot enough to be emitting, then you will see the density jump. And you'll see that as a dark region, and then a bright region where the shock is. So measuring this is uncertain. You want to measure mu as far back as possible, where you've got to a weak shock. But where you've got the weak shock, you have a small density jump, so you can't see it. So what you actually do is you measure up here, and then eyeball it. And you go uh, maybe like that. And then you measure that. So that contributes a lot to the errors. And then, of course, because you're only measuring the sound speed with temperature inside a square root, when you square the sound speed to get the temperature, that amplifies those errors. So we measured like 22 plus or minus 9 electron volts, which is not a very precise measurement. But it's still better than you can do with [INAUDIBLE]. Yeah, any other questions? Yes? STUDENT: [INAUDIBLE] JACK HARE: Z is the average ionization of the plasma. So for hydrogen, that would just be 1. For fully stripped carbon, that would be 6. But in the plasmas I work with, your atom can have some electrons attached to it. So it's still an ion. But it doesn't have all of the electrons removed. And we'll talk a lot more about ionization states later on. But just throwing that in there, if you're a tokamak person you don't like thinking about z, you can just make that 1. But then you probably won't get any radiation because it's fully stripped. So you probably won't be able to see this very easily. So you need to have something going on, yeah. Any other questions? Anything from Colombia? All right, well, we're at time or past time. Thank you very much. See you next week.
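Pulling together the bow-shock estimate from this last part of the lecture, here is a minimal Python sketch (editor's illustration; the ion species, the gamma of 1.1, and all of the input numbers are assumptions):

import numpy as np

def ZTe_from_bow_shock(v_flow, mu_deg, A_ion=12, gamma=1.1):
    """Estimate Z * T_e [eV] from the flow speed (from time of flight, m/s) and
    the measured Mach-cone half-angle mu (degrees). Uses sin(mu) = 1/M and
    c_s = sqrt(gamma Z T_e / m_i), with the lecture's gamma ~ 1.1 for a
    radiating, ionizing plasma. A_ion = 12 (carbon) is just an example."""
    m_i = A_ion * 1.67e-27                     # ion mass [kg]
    c_s = v_flow * np.sin(np.radians(mu_deg))  # sin(mu) = 1/M gives c_s = v sin(mu)
    return m_i * c_s**2 / gamma / 1.602e-19    # Z * T_e in eV

# Made-up example values:
print(ZTe_from_bow_shock(v_flow=5e4, mu_deg=30))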
Lecture_11_Radiation.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So today, we are starting a new topic, and we're going to be looking at radiation or self-emission from the plasmas. In a way, this is the natural language of the plasma. Previously, we have surrounded it with magnetic probes. We've plotted it with Langmuir probes. We've fired various beams of lasers and microwaves through it, but now we're looking at what the plasma produces itself as a way of trying to diagnose what's going on inside. And it's important to note, of course, plasma is not special in this sense. So all objects will omit electromagnetic radiation. Humans, of course, mostly in the IR. Pokers when they get red hot, famously, blow different colors. And of course, the universe, in the form of things like stars and black holes. But even in the completely empty parts of the universe, we still have the cosmic microwave background left over from the Big Bang rattling around. And so even something that is as cold as outer space still has some radiation associated with it. In general, if we have an object which is hotter, we get higher energy photons out. So we could say higher frequency or energy, which is just, of course, h bar times the frequency. Photon's out of it. And so we might expect in the case of plasmas, which are usually pretty hot, that we would be able to get some pretty high energy photons. And indeed, plasmas tend to span the gamut all the way from the radiofrequency up to X-rays and even gamma rays. So we're going to be trying to talk about radiation that spans different energy bands varying by many, many orders of magnitude. And just like when we talked about the different plasmas we worked on at the start of this class, we saw there was a huge diversity. Here, we're also dealing with a huge diversity, so we're going to try and use a framework that treats all of these in a similar way. But of course, there will be some subtle details. And at the end of the day, some things are going to look more like radio waves where maybe you need to think more about the wave equation because the wavelength of the radiation is on the order of the size of your plasma. And some things like X-rays and gamma rays are going to be up where the wavelength is so small compared to the size of our plasma that we can use a ray treatment instead. So just keep in mind that we have different conceptual frameworks for dealing with radiation and we may need to modify it a little bit, depending on what we're dealing with. Now, the fact that plasmas are hot and emit all of these different types of radiation has two consequences. First of all, this can be a significant ruling term. So for example, the radiation may cool down our plasma. You saw this particularly if you took the fusion energy class. And we remember that our Bremsstrahlung cooling there was a significant loss mechanism in our 0d power [INAUDIBLE] balance. But as well as being maybe pesky in the sense that they cool our nice, hot plasmas down, of course they also give us information. So if we can study the spectrum and the location and the temporal variation of this electromagnetic radiation, we can get information on our plasma. And this, of course, is our focus in this course because this is a diagnostic course. So in this course, we like radiation. Maybe in the previous course, you decided you didn't like radiation because you wanted to do fusion. But here, we like the radiation. 
Now in general, the way that radiation moves around inside the plasma, or indeed, inside any fluid is not trivial. And we want to think about how that radiation is transported. And this is a whole topic in itself that we'll only cover briefly here-- the topic of radiation transport. The basic idea in radiation transport is that you have some sort of plasma. There'll be radiation emitted in one place, and that will locally cool the plasma. And that radiation may be absorbed in another place, which will locally heat the plasma. But then that bit of plasma will also re-emit, and so the cycle continues. And you can see straight away that in three dimensions in a highly inhomogeneous system, this is going to be very, very complicated. So this is, effectively, the transport of energy. And this transport can be highly non-local. So you may be used to thinking about transport in a system where we've got some sort of heat, thermal conductivity. And we look at the diffusion of the heat through the material. That's a very local process. The amount of heat that's traveling just depends on the local temperature gradients. But here, we could have a region which is emitting, and it gets absorbed a very, very long way away. And because that radiation is traveling close to the speed of light, this can be a very, very fast process. So this non-locality makes solving the full radiation transport problem extremely difficult. In reality, if we want to solve radiation transport, we often assume that this is a diffusive process. And there are reasons why that assumption might be valid, but there are also good reasons why it may not be valid in general. So just to give you an overview that radiation transfer is complicated. And we're just doing a simplified version of it here. But radiation transport is extremely important for understanding how plasmas work-- for example, the significant cooling. But it's also very important for understanding how we get information from the plasmas because if you just have a camera sitting out here looking at the plasma and you just see some light coming out, you really want to have some model that tells you where that light came from inside your plasma. If you just think it's all coming from the surface, you'll get one result. If you think it's coming from modestly within the plasma, you'll get a different result. So it's very important to understand this radiation transport conception. So we're going to start by having a look at radiation transport before we even start to talk about what the radiation is or how it's produced. So even without knowing any of the details of what's making all these radio waves and X-rays and gamma rays and things like that, we're going to look at radiation transport framework, which we'll then apply for all of the different wavelengths that we're working with. OK. That's very high level. Any questions on that so far? AUDIENCE: Professor? JACK HARE: Yes. AUDIENCE: When you say assume diffuse, what does that exactly mean? JACK HARE: Sorry, I was talking at the same time. So the idea was that most of the time, when we're trying to solve radiation transport, we can't use this non-local model because it's very, very complicated. And so we often make assumptions that our transport is diffusive. I'm not really going into this in a great deal of detail here, but there are-- I can give some references later if you want. AUDIENCE: Yeah, that'd be great. Thank you. 
JACK HARE: The other thing I'll say is that a lot of this radiation transport stuff, as we'll find out, is important when you have some sort of opacity in your system for sufficiently high energy X-rays in a sufficiently sparse plasma like a tokamak. Then radiation transport is not necessarily very important. And so you might think, oh, I'm a tokamak person. I don't need to know this. But we'll find out very, very quickly that radiation transport is still incredibly important for the lower frequency waves, like the electron cyclotron emission in a tokamak. So people still need to pay attention even if they think they're too good for radiation transport. So let's have a look at what's going on here. Again, we're going to have some sort of plasma. And we're going to have some radiation coming into the plasma. Maybe this is radiation from another part of the plasma. Maybe we have generated a beam of X-rays or lasers or microwaves that we're using to shine through the plasma. It doesn't really matter here. This radiation is going to have an intensity, I, and we're going to parameterize the path of the radiation with this parameter, s. So we're going to call the point where the radiation enters the plasma s1. And then there is some path, s, through the plasma. And we're interested in the properties of the radiation at point s2. That's I at s2 when the radiation has exited the plasma here. So what we want to know is, how does I change along this path, s? Now, in general, this path, s, could be curved because we've talked extensively about the fact that when you have changes in the refractive index, you're going to have refraction of your red. Now, although I have drawn it curved here, a lot of the time I'm going to assume it's a straight line. So just watch out for that. But if you want to solve the full thing, you need to take this curvature into account. OK. And this quantity, I, that we're dealing with here, this is a quantity which we can formally define as the spectral radiance. I've used the symbol, I, which is often used for intensity because this is a symbol that's commonly used. Although people often refer to the spectral radiance as the intensity, they also use the word intensity to mean lots of different things. And so if you go on the radiosity Wikipedia page, there's an incredible table that has very, very niche words for all sorts of different quantities. And this is the table I always go to when I want to work out exactly what I'm talking about. And the reason is that this word, spectral radiance, here corresponds to one exact set of units, which is the watts, the joules per unit time being emitted through an area, meters squared here-- through a solid angle per some spectral unit here. So the spectral unit could be something like hertz. It could be something like joules. It could be electron volts. It could be meters. The difference in this being depends how you're resolving your radiation spectrum. So in terms of hertz, that's where we're using angular frequency. Joules and EV, that's where we're using energy. And meters, that's where we're measuring it in wavelength. So if we're talking about the spectral radiance in terms of, I don't know, gigahertz or per EV or per meters here. Of course, if you put meters in this one, then it starts to get complicated because this becomes a cubed and stuff like that. And so people will actually write this as watts per meter squared per steradian per meter, just to remind themselves that they haven't accidentally folded this in. 
So what this means is that there is some amount of radiation that's going through some unit area-- that's the meter squared. We've got watts going in this direction, and that watts is subtended by some solid angle that's measured in steradians. And we're also resolving it in terms of electronvolts or joules. This is a complicated quantity. But it will correspond to your intuition of what intensity of light is like if you think about it a little bit. OK. So let's just have an example here. Maybe our spectral radiance-- and I may start from this intensity every now and again-- initially has a spectrum. And I shall do this in hertz here. This is what Hutchinson uses in his book. Maybe our spectrum initially has some sort of complicated set of features-- spectral lines in the background, stuff like that-- like this. And by the time that it has traveled through the plasma, there will be some emission inside the plasma. The plasma will make more light which adds to our beam of light, and there'll be some absorption inside the plasma that gets rid of these. So for example, we might have absorption at these low frequencies. They've been completely wiped out. And then we will have some additional emission from inside your plasma. So here, we would have a region where there was absorption. And here we have a region where there was emission. So in general, we want to come up with some formulas that will tell us how, for a given amount of absorption and a given amount of emission in different parts of the plasma-- how the spectra will change from one point to the other. And this is the radiation transport equation, which is the very fundamental equation in this field. And the radiation transport equation says that the change in spectral radiance along our path that we've parametrized by s is simply equal to this quantity, j, which is the emissivity minus the intensity of the spectral radiance times by another quantity, alpha, which is the opacity. The important thing about the equation as we've written it here is that each of these quantities are functions of frequency. And effectively, this equation is linear in frequency, in the sense that we can solve the equations separately for each frequency. So whatever is happening down in the gigahertz range has nothing to do with what's happening up in the X-ray range. This is not necessarily true. In order to make this assumption, we have to neglect interesting processes like fluorescence, where we might excite part of the plasma at one frequency and get back lines at different frequencies. But in order to be able to solve this equation easily, we're going to neglect that. So this is an assumption that we've made. And I'll just tell you what the units of all these things are. You can work them out yourself, but the units of opacity-- very clearly, it has units of inverse length scale. And the units of emissivity are still watts per steradian per hertz. But now it's not per meter squared, but per meter cubed. And this reflects the fact that the emissivity is a measure of the light coming out from a small volume in every direction like this. The first region means that if you integrate over 4 pi steradians, you get the total radiated power per cubic meter per hertz. AUDIENCE: Emissivity and opacity are also functions of s, right? JACK HARE: Yes. I've suppressed the s in here, but you're quite right. As we go through the plasma, these will in general change. Yeah. These are properties in the material. It could be a plasma or it could be a lump of cloudy glass. 
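Since, at each frequency, the transport equation is just an ordinary differential equation along the ray, a minimal numerical sketch is easy to write (editor's illustration; j and alpha are taken as made-up constants, and a simple explicit Euler step is used):

import numpy as np

def integrate_transport(s, j, alpha, I_in):
    """March dI/ds = j(s) - alpha(s) * I along the ray for a single frequency.
    Explicit Euler step; fine as long as the grid spacing resolves 1/alpha."""
    I = I_in
    for k in range(len(s) - 1):
        ds = s[k + 1] - s[k]
        I += (j[k] - alpha[k] * I) * ds
    return I

# Example: a uniform emitting and absorbing slab with made-up numbers
s = np.linspace(0.0, 1.0, 5000)
print(integrate_transport(s, np.full_like(s, 2.0), np.full_like(s, 3.0), I_in=1.0))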
We haven't actually decided or said anything about them. And we won't learn how to calculate these for some time. You just have to take my word for it that you can calculate the emissivity and the opacity. You've come across some of this stuff before if you've done blackbody radiation. But we're just going to leave it like that at the moment. OK. Any questions? Yes, Sean? AUDIENCE: Does this equation-- if you include temperature dependence of the emissivity and the opacity in this equation-- can it be used to understand heating of the plasma, if you're thinking about maybe you're injecting waves or something? JACK HARE: So the question was, can this equation be used to understand heating of the plasma? Yeah, effectively this term is the amount of energy lost from the beam per unit length. And so you could add that energy into your plasma. So you could couple this to an energy equation in your plasma. But this itself is not an energy equation for the plasma. This is an energy equation effectively for the energy in the radiation. AUDIENCE: I see. JACK HARE: Yeah. But if you wanted to do full radiation transport, you would couple this to your energy equation, yeah. Other questions? OK. It's very convenient to introduce a slightly mysterious but important quantity called the optical depth. And the optical depth is still a function of frequency-- I'm just going to keep writing these as long as possible to remind you that they are all functions of frequency. This is a quantity which is defined as the integral of the opacity along some section of path. And just written like this, this is an indefinite integral. We will come up with some limits to it later on. So at the moment, it doesn't have a huge meaning. But the important thing about it is that it depends on the actual path, s, and on frequency. So the optical depth can be very different for X-rays than it is for microwaves-- so again, very important to realize that. The reason we introduced this is that if you stare at this definition and this equation, you might be able to convince yourself that we can then rewrite the equation as dI d tau is equal to j upon alpha minus I. And this is rather a nice equation because we're used to equations of the form dI d tau equals minus I. These are exponential growth, exponential decay equations. And we're also used to adding an extra term in here, which is like a source term. So what this equation says is that the intensity is going to drop off along the path parametrized by s, which is now sitting inside tau. But the intensity is going to increase with a parameter that looks like J upon alpha. And I'll show you how we're going to use that in a second. Let's label it explicitly. This, then, looks like a source function. So now we can go back to this problem and we can solve this equation. So we can solve from s1 to s2, like that. And we're going to solve it by integrating up with respect to tau here. And we're going to find that the intensity at s2 on the other side of the plasma is going to be the intensity that we started out with at s1 times a factor of e to the tau 1 minus tau 2. So this is an attenuation that takes into account any absorption along the path. And then along with the initial radiation we started with, we're going to have a second term which corresponds to the radiation we've picked up through the plasma due to the emissivity. And that's going to be the integral from s1 to s2 of our emissivity, now explicitly parameterized as a function of s.
As someone asked already, all of the J's and alphas are functions of s. But now I'm making explicit here the e to the tau minus tau 2, ds. What this exponential here is doing is saying, yeah, OK. You're adding radiation, but that radiation may also be absorbed. If that radiation is emitted very early on, near s1, it's going to be absorbed a lot as it goes through the plasma. If the radiation is emitted late along the path, close to s2, it's not going to be absorbed very much. So it matters where along the path the radiation is actually emitted. So let me just label these terms here. So this is the original-- I'm going to call it spectral radiance so I don't accidentally say intensity-- the original spectral radiance. This is the optical depth from s1 to s2. This is the emission along the path, s. And this is the depth from s to s2, where s is wherever along the path this radiation has been emitted. This is a complicated equation, and I'm now going to give you two simple examples to try and build up your intuition for what this equation is saying. If you don't think it's complicated-- well, never mind. I think it's quite a complicated equation, especially what these taus are doing here. Any questions on it, though, before we keep going? Yeah? AUDIENCE: So for the emission along s term, if we're wondering about what's appearing at the point s2, I'm just confused why we don't care about the angle that it is emitted from, right? Because if it's emitted back along the pathway that you completed, it's not heading towards s2. I guess where is that dependence in there? JACK HARE: Right. So the question was, where is the direction of the emission featuring into this equation? And the question is because when we're talking about the emissivity, we said that it is in some solid angle. So we could imagine that this emissivity has an angle with respect to the magnetic field. And that would be a perfectly reasonable thing to think about. And indeed, this is where I've started slipping into a straight line kind of picture and also an isotropic emission kind of picture. If you want to do this properly, all of these things that are scalars start to become vectors, or even worse, some sort of horrific tensor and things like that. And you have to account for all of that properly. So just to try and simplify this, I'm going to do that. But you're right. If you're dealing with an emission that is anisotropic, like electron cyclotron emission, that might be very important. Yeah. Another question from Sean. AUDIENCE: Can you define tau, the integral over the path? JACK HARE: Yeah. AUDIENCE: I'm a little confused how-- is tau inside the exponential inside the integral? Is this a sub-integral to be evaluated before you evaluate the true integral? JACK HARE: I think there needs to be some primes and things like that in order to do this properly. You're certainly going to evaluate this tau, and it's going to be evaluated based on whereabouts in this integral, s, you are. So as you're incrementing s here, this tau is going to be evaluated from, for example, s 1.5-- whatever is halfway along-- to s 2, where the endpoint of this tau is fixed already. I don't think I gave a very good explanation of that. I think probably if I went back and did this more rigorously, I'd put some more primes on this, and we'd have integrals within integrals. OK. Other questions? Yeah, I see Nicola. AUDIENCE: The path that the radiation takes through the plasma is going to depend on the radiation itself, right?
So it's going to depend on what kind of radiation it is. JACK HARE: Yeah, could be. So if you're at a different frequency, you might have a different refractive index, so you'd have a different bending. Yes, exactly. AUDIENCE: But in this equation, the path is predetermined. So it doesn't-- we're saying that if we know that the path is this, this is how we would calculate the-- JACK HARE: Right. But the path of the radiation, although it does depend on the frequency, doesn't depend on the intensity, necessarily. And so for a certain frequency, you can trace out a ray through your plasma. You'll know what trajectory it is. And then you can go back and solve the radiation transport equation along that path. AUDIENCE: And all the emissivity and opacity figures for that particular frequency that we are looking at. JACK HARE: Yes. At the moment, we're splitting it up so you could solve this for any different frequencies you want. And they don't interact in this simple model here. Yeah. OK. AUDIENCE: When you were answering Sean's question-- I'm a little confused about how the s 1.5 works into the bounds of the tau integral. Are you providing it with some estimate of the starting s, or are you providing it with your ending s in that tau integral? JACK HARE: So this integral is an indefinite integral. So we're not providing any limits whatsoever at all on this. The limits only come when you start specifying the endpoints. So for example here, we completely specified both the start and the end, tau 1 and tau 2. And that integral is going to look like-- this is going to just look like e to the minus the integral of alpha ds from s1 to s2. This is the equivalent. These two are equivalent here. In this case of tau 2, at the moment what we've got here-- we don't really have the lower bound. So we have e to the minus the integral from s to s2 of alpha ds, like that, where the actual s that we put in the bottom here depends on whereabouts in evaluating this integral we actually are. Maybe that also might answer Sean's question. I don't know. OK, good. Thank you for teasing that out. OK. Just any questions online? OK, Nicola. AUDIENCE: Is it possible that radiation emitted from outside of that line would end up joining that same ray? JACK HARE: Yes, it could do. AUDIENCE: And they're not accounted for? JACK HARE: In this case, no. What we're going to do in an awful lot of this is assume that the radiation is moving in straight lines, in which case it won't cross. The reason is because this is already complicated enough to solve without that. If you need to solve it with that, then you do. If you're dealing with something like an X-ray going through a tokamak plasma, we are so far away from the critical density with an X-ray that the refractive index is unity, and it doesn't change. And so therefore, they do go in straight lines. But you're absolutely right. You could imagine all sorts of radiation coming through, crossing through this point, going back to an earlier question. That radiation could heat up a bit of the plasma, and then that plasma could then emit in a way that you didn't expect before, because it's changing, obviously. So radiation transport is extremely complicated, even without magnetic fields. And then when you put the magnetic fields in, the whole thing becomes much worse. OK. Any other questions? I will give some examples of this equation, so hopefully it will begin to make more sense. OK. Let's keep going onto that. Good.
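Before the worked example, here is a minimal numerical sketch of the formal solution with the inner optical depth written out (editor's illustration; the uniform slab and its j and alpha values are made up):

import numpy as np

def formal_solution(s, j, alpha, I_in):
    """Evaluate I(s2) = I(s1) e^(tau1 - tau2) + integral of j(s) e^(tau(s) - tau2) ds
    for one frequency, with j(s) and alpha(s) sampled along a straight ray."""
    # optical depth remaining between each sample point s and the exit point s2
    tau_to_exit = np.array([np.trapz(alpha[i:], s[i:]) for i in range(len(s))])
    attenuated_input = I_in * np.exp(-tau_to_exit[0])
    added_emission = np.trapz(j * np.exp(-tau_to_exit), s)
    return attenuated_input + added_emission, tau_to_exit

# Uniform slab with made-up j and alpha: emission from near the entrance is
# damped much more than emission from near the exit, which is what the inner tau does.
s = np.linspace(0.0, 1.0, 2000)
j, alpha = np.full_like(s, 2.0), np.full_like(s, 3.0)
I_out, tau = formal_solution(s, j, alpha, I_in=1.0)
print(I_out)                                                  # ~0.68 for these numbers
print(np.exp(-tau[0]), np.exp(-tau[1000]), np.exp(-tau[-1]))  # early, middle, late emission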
So let's consider a really simple system where we have some radiation coming in here. We have a plasma, which I'm just going to split up into two regions here. These regions are actually going to be identical, but I want to show you how the intensity changes in each of these regions. And then we've got some radiation coming back out here. So in this plasma, we're going to have an opacity, which is just simply some amount of opacity as a single frequency, nu 1. And we're going to have some emissivity, which is simply some amount of emission at a different frequency, nu 2, like this. And my initial radiation that I'm going to put through here is just going to have some amount of emission at nu 1 here. So I've really simplified this, where effectively we are solving the radiation transport for all frequencies. It's just that I've come up with this problem where there are two frequencies. So this is much easier. You can come up with something arbitrarily more complicated than this if you want to. I just want to point out, you're going to find out soon that these two, emissivity and opacities, are not compatible. In fact, there's a really strong thermodynamic link between the two. So don't shout at me there. This is just to make life easier for you right now. But it turns out that you couldn't physically have a pattern like this. We'll talk about that in a little bit. OK. So what does this look like as we go through the plasma? So at this first step here, we're going to absorb some amounts of the initial radiation, which was at nu 1. So our radiation is going to go down. But we're going to pick up some amount of radiation at nu 2 because the plasma is emitting. And this next step here, this process continues. Nu 1 goes down, nu 2 goes up. And then finally, as we exit the plasma, we might end up with a system where nu 1 is very, very small and nu 2 is very, very large. So effectively, as we've gone through the plasma, we have been absorbing that nu 1, the size of that line has gone down, and we've been emitting that nu 2. The size of that line has gone up here. So that's a cartoon example. It turns out that for this approximation, that the plasma is homogeneous, then we can solve this equation analytically as well. AUDIENCE: Professor? JACK HARE: Yeah. AUDIENCE: So just to clarify, the absorption is throughout both sections of the plasma. JACK HARE: Yeah. I split it into two sections so I could just draw the intensity in two places. But the plasma has uniform properties throughout. This applies to both sections, and this applies to both sections like that. Yeah, good question. AUDIENCE: Thank you. JACK HARE: OK. Any other questions? OK. At these times, I wish I'd more boards. I really want that equation. Yeah? AUDIENCE: Can we assume anything about energy conservation in this equation? Can your additions have more energy out than what caused the emission? Or I guess-- JACK HARE: So the question was, are we conserving energy in this equation? So explicitly, we are not conserving energy in this equation. The plasma is the source and the sink of the energy. Any radiation we lose is heating the plasma, any radiation we gain goes to cooling the plasma. That's why you would need to couple an energy equation into this to do it properly. AUDIENCE: OK. JACK HARE: Yeah. I'll put that equation there. So if we have a system, which is now homogeneous so that along our path, s, this quantity, J upon alpha doesn't change. So again, this is just the homogeneous condition. 
We can solve this analytically, and we get that the intensity at point s2 is simply equal to the intensity at point s1, attenuated by a factor e to the minus tau 2, 1, where, as we said before, tau 2, 1 is equal to the integral from s1 to s2 of alpha ds. So this is just an exponential damping factor on the intensity. This is what you would expect for some sort of constant opacity through your system. The further you go, the more that initial signal is going to be damped. And then we're also going to have a term, J upon alpha times 1 minus e to the minus tau 2, 1-- same tau again here. OK. And when we look at this equation, we can see that there's a strong dependence on tau. And so we want to identify two limits which have names, one of which is where tau 2, 1, the optical depth, is much, much less than 1. And the other one is where tau 2, 1 is much, much more than 1. And these are called optically thin and optically thick. In these limits, this equation reduces to either I at s2 is equal to I at s1, or I at s2 is equal to J on alpha here. Optically thin corresponds to the radiation streaming through without being absorbed. Optically thick corresponds to the radiation being very, very strongly absorbed. So in the optically thick case, we have no information about the initial intensity. In the optically thin case, we only have information about the initial intensity. We don't have any added emission here. So people may also call these transparent and opaque. Now, remember, this is a strong function of frequency. And so you can be optically thin or transparent to X-rays and optically thick or opaque to the first and second harmonics of the electron cyclotron emission in a tokamak. And these are not contradictory. We have solved these equations separately for every different frequency that we're interested in in our system. Now, the interesting thing about this optically thick case is that this corresponds to a black body, which is a thermodynamic system that I'm sure you've studied, in which, again, the radiation is in equilibrium with the temperature inside the plasma-- perfect thermodynamic equilibrium. And for a black body, we know a second expression for the intensity here. For a black body, I as a function of frequency is equal to a function that's often called B-- I'm guessing for black body. And this is equal to nu squared upon c squared times h nu over the exponential of h nu upon T, minus 1. Or in many cases, it suffices to use the approximation nu squared T upon c squared. And this is valid for h nu much, much less than T. And this is the classical limit. So this is the limit we had in classical physics for some time. And it was the violation of this, the ultraviolet catastrophe, implied by the fact that, as nu keeps getting larger, you keep emitting more radiation-- that catastrophe famously led, in some part, to the development of quantum mechanics, and to this quantum mechanically correct expression. It just turns out that this, the Rayleigh-Jeans law, is a pretty good approximation for low frequencies, where the frequency is low compared to the temperature. What is interesting, then, is we now have two expressions for the optically thick case. We have an expression for the black body radiation, and we have an expression in terms of J upon alpha. And these expressions must be equal to each other. And so this means that the emissivity over the opacity is equal to-- I'm just going to use the classical limit-- nu squared upon c squared times the temperature.
And this is called Kirchhoff's law, along with all the other laws that bear Kirchhoff's name. Kirchhoff's law is very profound. Although we derived it in the limit of an optically thick body, it still has to apply to an optically thin body. And effectively, what this says is if you know the emissivity, then you automatically know the opacity. These two quantities are intimately related. This is why I said that this wasn't actually valid because this does not obey Kirchhoff's law. So the nice thing about that is you only have to calculate the emissivity for a system, and then you automatically get the opacity. So if you calculate J, you get alpha for free. That's rather convenient. That's nice, but maybe the more profound thing about Kirchhoff's law is the fact that where you have high emissivity, you also have high opacity. What this means is that regions which strongly emit also strongly absorb. And that also means that regions which weakly emit do not-- sorry. Regions which weakly absorb do not strongly emit. They also weakly emit. It's confusing. This is where this comes in. Some of you have been staring at this being like, wait, is this obvious? The point is if you have very, very low absorption, you are also going to have very, very low emission. And so you're not going to see significant self-emission from the plasma being added to your initial beam of radiation. And so you might think, oh, perhaps I would get the initial beam plus some extra term. Well, you would get an extra term, but that term will be proportional to tau 2, 1, and we've already said that that is much, much less than 1. So that extra radiation wouldn't be very significant compared to the initial beam that you put through. OK. Questions? AUDIENCE: Does the built-in thermodynamic equilibrium assumption or something-- like you always have the same temperature everywhere along your path or something like that? Or is it just-- JACK HARE: No. I think the power of Kirchhoff's law is although you derive it using all these black body assumptions, it then works everywhere else as well. Because these two have to obey this principle for a black body, but it means it also pins it even when you're not in local thermodynamic equilibrium. Yeah. So I think that's why it's a powerful result. It doesn't just apply to black bodies. Yeah, another question. AUDIENCE: You said that you can calculate the emissivity and get absorption. That also requires you knowing the temperature, though, right? JACK HARE: Well, if you're going to calculate the emissivity, you need to know the temperature of your plasma. We haven't got there yet, but the emissivity is always going to be a very strong function of the temperature. AUDIENCE: OK. JACK HARE: Yeah. So to calculate the emissivity, you're going to need to know the density and the temperature of your plasma. And if you want to do it anisotropically, you'll need to know the magnetic field direction, all sorts of fun things like that. But at the very basic, we're going to do Bremsstrahlung in a little bit-- Bremsstrahlung, density squared, temperature to the half. So you need to know those two things in order to be able to get the Bremsstrahlung out. AUDIENCE: So we assumed that d by ds of J over alpha is zero [INAUDIBLE]. Are we-- do we also have to assume alpha is constant? JACK HARE: Maybe this is more powerful than just simply a homogeneous plasma. Yeah, it looks like they both go up and down together. Yeah.
I don't think we need to assume that they're actually homogeneous in order to get this result. So maybe you're right. Yeah, maybe. As long as J and alpha increase and decrease together. I'm trying to work out, is that effectively a statement that the temperature is constant? I think it is. So the temperature is constant because of this result. But I think that means the density could change. So it's homogeneous in temperature only. OK. Any questions online? Yeah? AUDIENCE: Maybe I just missed this in the second [INAUDIBLE] question. But this differential equation, d by ds of J over alpha equals zero-- why do we need Kirchhoff's law to conclude that J and alpha are the same, that they increase and decrease in lockstep? Isn't that what that differential equation says? JACK HARE: This is an assumption. This equation here is an assumption that we've made in order to derive this equation, which is particularly simple, to understand the optically thin and optically thick limits, because tau just linearly increases with distance rather than changing rapidly over different bits of plasma. Once we've done that, we then find out for the optically thick case that this is true. And then, I don't know. I don't think it's obvious from the start that we were going to end up with this result. And this result is still true for inhomogeneous plasmas as well. It just happens to be true for-- we've proved it in the case of a homogeneous plasma. Yeah. I think I see your question. But I don't think we've baked in our result in our assumptions, if that's what you're asking. AUDIENCE: Yeah. JACK HARE: I don't think so, but yeah, OK. OK, other questions? Anything online, anything in the room? All right. So now what we probably want to do is calculate J for a variety of different cases. So we could calculate J, the emissivity from free electrons. What sort of radiation do we get from free electrons? Brems, OK. Anything else? AUDIENCE: Synchrotron? JACK HARE: Synchrotron? You're not going to do synchrotron. AUDIENCE: Thomson scattering? JACK HARE: Thomson scattering. AUDIENCE: Oh, no, Compton. JACK HARE: Compton scattering? Compton scattering is just relativistic Thomson scattering. So we're not going to consider that because that is scattering of external radiation by the plasma. So we're talking about radiation being produced by the plasma here. But this is a good point. We will talk about scattering extensively later on, but not in this case. Anything else? AUDIENCE: Recombination. JACK HARE: Recombination, yeah. It starts with free electrons, at least, so it counts. Anything else? AUDIENCE: Larmor radiation? Or is that just-- JACK HARE: Yeah, that's just cyclotron radiation. To be honest, if that's the same as synchrotron, that explains a lot. So I've always been confused about that. But we're definitely doing cyclotron. Is that the same as synchrotron? AUDIENCE: I think they're slightly different. JACK HARE: Isn't one the relativistic version of the other? AUDIENCE: They're definitely related. JACK HARE: Anyone? AUDIENCE: What about any free positrons? Do we get anything from that? JACK HARE: What would we get if we had free positrons? AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, OK. So we're not going to do that, but that'll be great. No, no, no. That would be neat. We're not going to derive that. It's a little bit tangential to the [INAUDIBLE], isn't it? Yeah. That's all I've got here. And then we'll also do this for bound electrons. And really, for bound electrons, this is the whole zoo of light emission.
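Since the plan is to calculate J for these processes, here is a small sketch of what Kirchhoff's law buys you in practice: given some calculated emissivity (the value below is just an assumed placeholder, not a real Bremsstrahlung calculation), the opacity follows immediately from alpha = J / B_nu(T), with B_nu the black body function from above and the temperature in energy units.

```python
import numpy as np

c = 3.0e8          # speed of light [m/s]
h = 6.63e-34       # Planck constant [J s]

def planck_B(nu, T_joules):
    """Black body intensity B_nu(T), with T in energy units (joules)."""
    return (nu**2 / c**2) * h * nu / np.expm1(h * nu / T_joules)

def alpha_from_j(j_nu, nu, T_joules):
    """Kirchhoff's law: j_nu / alpha_nu = B_nu(T), so alpha_nu = j_nu / B_nu(T)."""
    return j_nu / planck_B(nu, T_joules)

# Illustrative numbers only -- this is not a real emissivity calculation:
T = 100 * 1.6e-19          # 100 eV in joules
nu = 1.0e11                # 100 GHz, so h*nu << T and Rayleigh-Jeans would also do
j_assumed = 1.0e-9         # assumed emissivity at this frequency [SI, placeholder]
print(alpha_from_j(j_assumed, nu, T))   # opacity implied by Kirchhoff's law [1/m]
```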
I think I'd be very inconsistent whether I've put two m's or one m in emission throughout this lecture. So you can tell how good my spelling is. OK. Now, in a previous version of this course, we went through all of these without talking at any point about what you'd actually do with it. And I found this very difficult to teach. So what we're going to do is I'm going to cover each of these topics in turn, and then interject with here's how a bolometer works or here's how a pinhole camera works or here's how a spectrometer works. But the point is that all of the techniques I'll discuss are applicable to all of these different types of radiation. I'm just trying to break it up so it's not learn every type of radiation and then learn every type of diagnosing radiation. But we will start, as Hutchinson does, with cyclotron. But before that, I have a little question for you about something that I thought about in the shower the other day, and thought, I wonder if this is obvious or not. And you can tell me whether or not it's obvious. And my question for you is, we've encountered two places in which radiation ceases to go through a plasma. We have found a limit where the optical depth is much, much greater than 1. So that's optically thick. And we've also talked about a cut-off. That's where the refractive index squared is less than 0. In the context of a plasma, remember, we might have N squared is 1 minus n e over n critical. And so when we get close to n critical, the wave is cut off. And that also appeared to stop the radiation going. So what I want to ask you is, are these two things the same? And if not, why are they not? AUDIENCE: But they're not the same, the reason being that the optically thick case includes absorption, but not reflection, whereas the N squared less than 0 case, [INAUDIBLE] reflection. JACK HARE: OK. So let's start writing down some differences. I agree with you. I don't think they are the same. So here, we have reflection. There's no energy absorbed, at least in our WKB picture of this. There might be in reality. Here, we just have absorption. What happens to the wave in these two cases? What happens to the electric field in these two cases? OK. So in both cases, what happens to the electric field? I guess we have a critical surface here. Our wave is coming in. What happens to the electric field past a critical surface in the case of a cut-off? AUDIENCE: It can't propagate? JACK HARE: It can't propagate. So what does it do instead? AUDIENCE: Reflects? JACK HARE: Is there no electric field? AUDIENCE: Evanescence. JACK HARE: Evanescence. So what does it do? AUDIENCE: Decays exponentially? JACK HARE: Decays exponentially. AUDIENCE: It's like an Airy function somewhere in there. JACK HARE: OK. So this is because our refractive index squared is less than 0. Our refractive index squared is defined as c squared k squared over omega squared, like this. Omega and c are positive, which means that k squared is less than 0, which means we can define some quantity-- I'm trying to write this very exaggerated kappa to try to make it look different from my k's-- so that k is equal to i kappa, like this. And then our electric field now goes as exponential to the minus kappa x, like this. The electric field itself decays. It does not oscillate. What about in the case where we're optically thick? Does the wave oscillate or not? AUDIENCE: Can you just attenuate the amplitude? JACK HARE: Yeah. So in the case where we're optically thick, the amplitude just goes down. So we still have a wave which is oscillating.
So e is going as exponential of Ikx minus omega t. It's just that this envelope, I, which is proportional to e times its complex conjugate-- it's not proportional. This is e times its complex conjugate. That is dropping like exponential of minus tau. But the wave is still oscillating. Any other differences about this? AUDIENCE: Here's a clarifying question. These cases are actually the same, right? It's just that in one case, you've entirely gone to an imaginary k, and in the other case, you put, in principle, a complex k or something. JACK HARE: That actually feeds into my final point. So yeah, keep going with that thought. AUDIENCE: I guess just to say one case could become the other, assuming you lost all of your real part. JACK HARE: Yeah. So people online, the question was these may actually be the same, in some sense. This is a slow decay. This is a very rapid decay. The difference I see here is that you can have this slow decay for many different values of the optical depth. This happens suddenly and only when we get to the critical density. Before that, the wave knows nothing about it. In reality, if the density is ramping up, the wavelength will get longer. But the amplitude of the wave packet in our [? wkp ?] approximation that we use in the derivable equations-- that amplitude doesn't change. So I guess the way I summarize it in my notes is that this is a gradual process. And this is a sudden process. In a sense, this is an overdamped oscillator. The amplitude just drops without oscillating. And this is an underdamped or maybe critically damped oscillator, where the wave keeps oscillating, but it goes down slowly. And this will happen. There will always be some absorption in some plasma. So you will always have very gradual decrease, whereas you don't actually have to have this happen in any plasma at all. It doesn't have to be a critical density. But there will always be some finite opacity, even if it's very, very small. Anyway, I don't know if that's profound or not. I just thought it was interesting that these two phenomena look quite similar, but they're actually very different. OK. More questions, yes. AUDIENCE: Weird follow-up. We talked a little bit about not being able to see [INAUDIBLE] for some things. JACK HARE: Usually reflectometry, yes. AUDIENCE: Yeah. So if you have a case where your decay is longer than the length scale over which your critical density is maintained, do you get weird things where it becomes a wave on the other side? JACK HARE: So the question was, if you have a region where the density drops below the critical density-- for example here. So any less than critical density, what happens to the wave? Has anyone done this experiment? I did it in undergrad. It was incredible, life changing. Yeah? AUDIENCE: Keep oscillating? JACK HARE: It will start oscillating again. So you will couple an evanescent wave. You can couple an oscillating wave through an evanescent gap where the wave itself does not propagate, and energy will start coming out the other side. We did it with wax blocks and microwaves in an underground lab-- these big blocks of wax and a microwave generator. And as you move the wax blocks apart, you can generate a wave with increasingly small amplitude. But the wave is still there, and it bridged the gap. And then the remaining energy-- because now your wave is just oscillating on the other side, much smaller. That remaining energy is reflected instead. But this is a really cool example of electromagnetism. 
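As a quick aside summarizing the distinction above, here is a tiny sketch contrasting the two behaviours (all numbers made up): past a cut-off the field is evanescent and simply decays, while in an absorbing plasma the wave keeps oscillating and only its intensity envelope drops as e to the minus tau.

```python
import numpy as np

x = np.linspace(0.0, 0.1, 2000)     # distance into the plasma [m]

# Cut-off: N^2 < 0, k is imaginary, the field is evanescent (no oscillation).
kappa = 200.0                        # made-up decay constant [1/m]
E_evanescent = np.exp(-kappa * x)

# Optically thick but propagating: real k, intensity envelope ~ exp(-tau),
# so the field amplitude drops as exp(-tau/2) while still oscillating.
k_real = 2.0e3                       # made-up wavenumber [1/m]
alpha = 50.0                         # made-up absorption coefficient [1/m]
E_absorbed = np.exp(-0.5 * alpha * x) * np.cos(k_real * x)
I_absorbed = E_absorbed**2           # oscillating field, decaying intensity envelope

print(E_evanescent[-1], I_absorbed[-100:].max())
```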
I assume the same thing happens in a tokamak or in any plasma. Does it, people who do waves? AUDIENCE: [INAUDIBLE] lower hybrid. JACK HARE: Right. OK. So you can get it evanescently crossing a bit of the plasma and then coming back to life on the other side. AUDIENCE: Yeah. That's why you want to put your launcher as close to the plasma [INAUDIBLE] as you can. Obviously, that gives you plasma surface problems. JACK HARE: Just for the people online, Grant is saying that you want to put your launcher very, very close to the last closed flux surface because otherwise it's evanescently decaying in free space. So you want it very close to the plasma, where it's actually in oscillating mode instead. OK. Any other questions on this? And then we will go on to deriving radiation from free charges. No screams. None of you've done Jackson before? The good news is we're not going to do the full Jackson treatment on this. It's extremely boring, and you've hopefully got it before. And if you haven't, there's no way I'm going to teach you it in a couple of lectures. We're going to quote some of the main results. If they look completely perplexing, then it might be worth going to have a look at something that deals with radiation from free charges. But we're not going to do the whole thing. If you want to see it in very rigorous detail, you should look at Hutchinson's book. It really does go into this with a lot of rigor. But what I want to start with is I want to start with a very simple picture of why charges radiate. And I have not been able to find this in any textbook. But it was taught to me in undergrad, and I thought it was a rather nice physical picture, so I will teach it to you. And it may or may not be helpful to you. Radiation is measured by the Poynting flux. The Poynting flux is equal to the electric field crossed with the magnetic field over a factor of mu 0, or probably C, if you're using CGS. I don't know. The point is that radiation moves in a direction which is perpendicular to the electric and the magnetic fields in the system. And we're going to use this simple formula and sketch a few different moving charges. And we're going to use that to see whether those charges radiate or not. So let us start with a very simple system-- an electron at rest. What are the electric and magnetic fields in this? Yes? AUDIENCE: A radial electric field? It's a known [INAUDIBLE]. JACK HARE: So it's a radial electric field like this. Electric field drops off as 1 upon R squared, Gauss's law, in the r hat direction. The magnetic field is 0 because there are no moving charges, so there's no currents. And so therefore, the Poynting vector is 0. Our stationary charge does not radiate. OK, next one. Now we've got a particle, and it's traveling at a constant velocity. For example, it's traveling in this direction. At a snapshot in time, I'm looking at this particle from my lab frame. What are the electric and the magnetic fields here? Not you again. We'll have someone else. But I'm glad you know. AUDIENCE: The magnetic field, you can do by the right-hand rule, since it's a moving charge. JACK HARE: Yeah. So there'll be some sort of-- there's current in this direction, so there'll be a magnetic field surrounding it. I've drawn this tilted, out of the page. Otherwise, I'd just have to draw it straight up and down, which would be hard to do. OK. This is the theta direction. What about the electric fields? Not a trick question. AUDIENCE: Radially. JACK HARE: Yes. There are still the same electric fields, radially in.
AUDIENCE: Can I ask the annoying question? We're observing from close enough that we're able to see what's moved and all that good stuff? JACK HARE: Yes. This is definitely not rigorous, but it does get the right answer. So please bear with me, even if you're like, oh, no. OK. So then we've got electric fields, which go as 1 upon r squared, r hat. Magnetic fields, which are going to go as 1 upon r, theta hat. This is assuming that we've got this electron as a current carrying wire. Clearly, it's not a current carrying wire. That's a result for a current carrying wire. But it's dropping off in this fashion. And so that means what is s? How does it scale and what direction is it pointed? AUDIENCE: It's pointing in the z direction or whatever, depending on your coordinates. JACK HARE: Yeah. Well, the system's a bit screwy here. Let's say I'm using cylindrical coordinates, where I've got z direction here, a radial direction here, and I've got some theta angle like this. I know that doesn't make sense with the definition of r that we had earlier, but again. Yeah. AUDIENCE: That'd be 1 over r cubed mu 0. JACK HARE: Yeah. I'm going to drop the mu 0. Treat myself. And what direction is it pointed? AUDIENCE: Z. JACK HARE: In the z direction, yes. What is this Poynting flux doing? The Poynting flux is the transport of electromagnetic energy. What's it doing? Why is it pointing in the z direction? Is it radiation? AUDIENCE: It's following the particle. JACK HARE: It's just following the particle. This Poynting flux is simply moving the electric and magnetic fields that the particle has with the particle. And it drops off as 1 over r cubed. It drops off very, very quickly, away from the particle. If I draw a surface with surface area r squared around it, and I ask, how much total power do I have going through that surface? That total amount of power will drop off as I make my surface bigger. This is not a propagating wave. You can't observe this. Again, you need to be close enough to see the electric and magnetic fields. But this is not a radiation you can see from far away. OK. So this moves the EM energy with the electron like that. That's the point of it. I can add-- sorry, now that we've decided it's in the z direction. OK. Now finally, we're going to look at a system where the velocity is not constant. And I'm going to look at a very, very specific velocity profile here. And you'll see why I've chosen this in a moment. This is a profile in which the velocity of our particle in this sort of z direction is initially at some value, and then suddenly, instantaneously drops to 0 as a function of time, which means that the x-coordinate is going to go up and then flatten off. The particle will then be at rest, and we'll define this to be t equals 0 here. What am I going to get? I've changed something just before the lecture to make it clearer. Now I'm not sure it's consistent. Give it to me. I'm doing it the other way around. Let's see. The particle is initially at rest, and then it suddenly accelerates to some velocity. I think this will make it work. So the particle is initially at rest. Let's say it's at rest here. And then all of a sudden, it's off, moving in this direction like this. Now, information about the electric and magnetic fields of this particle can only propagate at the speed of light. And so that means if I'm an observer some distance away-- and let's say that distance is this circle, like that.
The information I have about the electric and magnetic fields is the same information I had-- from my point of view, the particle is still at rest, which means that outside of this circle, all I can say is that the electric fields are pointing inwards here. Inside the circle, I now know that the electric fields are doing something quite different. They're now looking like this. And we have some sort of magnetic field like this. Now, the electric field lines cannot be broken. These electric field lines therefore, in this very fictitious scenario, have to be joined up in this way. So I'm also going to draw some electric field lines like that, electric field lines like that. And these ones are outside like this. And now again on the inside, I've got some magnetic field lines, which is still in this poloidal direction. So what's happening now? Poynting flux, or electric fields, or magnetic fields-- let's go through the magnetic field and say that at this interface here-- so at this interface, we still have a magnetic field, which goes as 1 upon r, theta hat. We've got an electric field. OK, I think this is the bit where I just have to tell you to believe me. If I'm right, I can prove this straight away. This electric field turns out to have a scaling which goes as 1 upon r. I can't remember where that comes from. Not 1 upon r squared, but it's very important for the conclusion. And also here, if we think about this direction, we're going to have something which is tangential to this theta direction. So I'm going to call it some sort of [INAUDIBLE] where I'm subtly switching into a spherical coordinate system. And all of this together means that our Poynting vector now goes as 1 upon r squared. And it has a direction, r hat. And this means that finally, we've got a system which is radiating. So this is moving radiation radially outwards in every direction. And this is the result that you probably remember, that accelerated charged particles radiate. This Poynting flux is not simply shuffling the energy along with the particle. It's actually taking energy from the particle and putting it into electromagnetic waves. And like I said, it'd be much more convincing if I could remember the argument for this fact. And I decided not to write it down in my notes, so I can't remember it. But it's clearly a very different circumstance than what we had here. The reason we've derived this in this very hand-wavy way is I'm about to write down the actual correct equation. And you'll see that it has almost no intuition involved in it whatsoever, whereas I like to think that this gives you a little bit of intuition about what's going on and why moving charges radiate here. But questions? Yeah. AUDIENCE: So in this situation, you've drawn, in your third situation with the non-constant velocity, the lightcone of the particle as it begins to move. That's where the energy is located? JACK HARE: Yeah. So this distance here, this is where an observer outside of this would still believe the particle is located here. Inside this lightcone, the observer can see that the particle has moved, so of course, the particle is actually going to be moving in several discrete-- I've drawn these electric field lines as straight. They should actually be curved inside here. I've just drawn them as straight from the current position of the particle at this point.
And so there must be a tangential discontinuity of the electric field at this surface to go from one radial vector pointing at one point to a new radial vector pointing at a new point. And it's that tangential discontinuity which generates this electric field perpendicular to the magnetic field that gives us our Poynting vector. And that goes radially outwards. AUDIENCE: And so for the situation where we have basically a step function in the velocity, the energy is infinitely localized to this shell? JACK HARE: Yeah. In this case, the particle is only accelerating at this time. The acceleration just looks like a delta function. And so therefore, there's only radiation here, propagating outwards as a spherical wave. And from then onwards, the particle is just in this situation again where it's shuffling its fields along. And so this is acceleration. Yeah. AUDIENCE: So does that mean like if you have somehow the spatial distribution of energy as a function of your acceleration, [INAUDIBLE]. JACK HARE: Absolutely. And we will see that in the long and complicated formula in a moment, yes. You can do this stuff rigorously as well. Any other questions? Yeah. AUDIENCE: Just my vague memory of how the 1 over r thing is because you're-- I think it's like to have your voltage not be some big nonsense, you have to integrate along that path length that it's getting extended across. JACK HARE: OK. So there's a suggestion this 1 over r comes from trying to keep the voltage to a reasonable value and your integration along the path. AUDIENCE: Yeah. There's some weird-- JACK HARE: Have you seen this before, then? AUDIENCE: Yeah. JACK HARE: OK, good. I'm glad I'm not just crazy. I didn't come up with this. It was in undergrad, and I'll try to reconstruct it. OK. Any other questions before we move on? Any questions online? I see Matt's hand. Yeah. AUDIENCE: Yeah. Maybe you said this, but I just want to make sure. The Poynting flux direction in the second picture. Yeah. That's only valid in the plane perpendicular to the motion of the particle, right? Because the electric field direction would be rho, not-- it would be in cylindrical coordinates, right? JACK HARE: I was saying that the fact that I decided to use cylindrical coordinates here is not actually very useful because the electric field isn't just in that direction or in this direction. It is in every direction. It's just the only place where e cross B is significant is really in this plane here. AUDIENCE: OK. Sure, sure. JACK HARE: [INAUDIBLE]. And it will drop off. If you go back to this point here, e cross B-- because they're not very well anti-aligned, it will drop off. And so you have a ring moving the electric fields and magnetic fields with the particle. AUDIENCE: Yeah, that makes sense. OK, thanks. JACK HARE: OK. Any other questions? OK. Let's see the full version. So if you do this properly, you get out the equation for the radiation from a single moving charge. Like I said, if you haven't seen this or can't remember how to do this, I recommend you go take a look at Jackson or some other sufficiently advanced text. And this equation is the electric field seen by the observer who is some distance away from the charge. And this electric field has got the standard q over 4 pi epsilon 0 that we know and love from Gauss's law. And then it has two terms, which we call the near field and the far field. So the first term, which is part of the near field, is a 1 over k squared.
This k is not the normal k that we've been using up to this moment. And I've just realized this is a cube-- k cubed R squared. And then you have a term that looks like r hat minus the particle velocity upon c, times 1 minus the particle velocity squared upon c squared. We're going to see lots of velocities showing up normalized to c because obviously the speed of light is pretty important for this sort of stuff. Here, I've defined two new terms. This k takes account of the fact that we're looking at the particle at some time. We're looking at the particle and we're seeing where that particle is, but the particle has already moved. So this is helping us with our retarded time, which is looking at the earlier time at which the radiation was emitted. So this k is defined as 1 minus R dot v upon R c. And I haven't defined R either, so I should do that. This R is a vector that is effectively the vector that joins the observer to the particle. So this is defined as the position of the observer minus the position of the particle at time, t. So if I draw a little diagram, we've got some origin to our coordinate system. There's some vector, r of t, which is our electron. And then there's some vector, x, which defines our observer. And so this is the vector R here. We could get rid of this by just assuming either the observer or the particle is at the origin of the coordinate system. But as the particle is moving, that's not necessarily particularly useful. And so we're doing it in this more generalized way here. So we'll talk about what happens to this term in a moment. The second term inside here is 1 upon c squared k cubed R, times r hat. And we're just going to define r hat is equal to the R vector over the size of the R vector-- so just a normalized vector. And that r hat is crossed with r hat minus v upon c, which is, in turn, crossed with v dot upon c. And v dot is going to be extremely important in a moment. And I'm going to close my brackets. So we have two terms here. We have a term which we call the near field and a term that we call the far field. We're not going to explicitly derive the magnetic field here because fortunately, these are properties of electromagnetic waves in a vacuum. And we're still talking about vacuum waves here. B is simply equal to 1 upon c, e crossed with r hat, like that. So the magnetic field is perpendicular to the electric field, as we expect. And this means that for the near-field case, we have a Poynting vector, s, which again is equal to e cross B with some constants. And that s is going to go as 1 over R to the 4th. And in the far-field case, the electric field is going to go like 1 upon R, and so s is going to go as 1 upon R squared-- here we had an extra R squared on top of that. So this straight away tells us that if we consider the power through a sphere of some radius, R, the total power, s, integrated over the surface of the sphere will drop off as 1 upon R squared for the near field. But it will stay constant for the far field. So only the far field is actual propagating radiation. That's the only thing we're actually going to observe. So we don't actually need to have this near-field term in the equations we're looking at. It just drops out when you solve this equation. We figured [INAUDIBLE]. They're propagating E and M fields [INAUDIBLE]. The only reason to show you this is, first of all, to show-- as many of you know-- that the radiation even from a single moving charge is very complicated. And then of course, we have to ask, what is it that we're trying to achieve in a plasma?
But the first thing we want to do in a plasma is we notice that our near-field and far-field terms have lots of things that we might know, like where we're sitting and how long it's been. But there are several things inside here that we don't know, such as v and v dot. So that tells us that we're going to need to solve the equation of motion for a particle, something that you've done several times in plasma physics already, without a magnetic field and with a magnetic field. You know this is the thing that tells you that the particles are spiraling around field lines. And because they're spiraling, they're continuously being accelerated. And so we know straight away that these accelerated particles are going to be emitting propagating radiation. And that's the thing that we want to detect. So first of all, we'll have to solve for the equation of motion. But the second thing we'll have to do is integrate over the distribution function, f of v, d3v, because we will have then-- once we've solved the equation of motion, we'll have it for a single particle. But the particles have different velocities depending on where they fall in this distribution function. And they may have different velocities in different directions. They may have one velocity along the magnetic field and another velocity perpendicular to the magnetic field. And so these two steps are very non-trivial to do properly. And these are the steps that we're going to skip in this class. We're just going to give the results from this. But if you want to do this properly, if you want to work with electron cyclotron emission, you could probably go back and check that you can actually do all these intermediate steps. OK. We are well over time. I'm happy to take questions. But otherwise, I will see you on Thursday.
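As a sketch of the first of those two steps-- solving the equation of motion for an electron gyrating in a static field, and pulling out the v and v dot that the far-field term needs-- here is a standard nonrelativistic Boris push with made-up parameters, followed by an evaluation of the far-field term as written above. Treat it as illustrative only, not as the full relativistic cyclotron emission calculation.

```python
import numpy as np

q_e = -1.602e-19                      # electron charge [C]
m_e = 9.109e-31                       # electron mass [kg]
c = 3.0e8                             # speed of light [m/s]
eps0 = 8.854e-12
B0 = np.array([0.0, 0.0, 1.0])        # made-up static field along z [T]

def boris_push(v, dt, E_field, B_field):
    """One standard (nonrelativistic) Boris step for the velocity."""
    qmdt2 = q_e * dt / (2.0 * m_e)
    v_minus = v + qmdt2 * E_field
    t = qmdt2 * B_field
    s = 2.0 * t / (1.0 + np.dot(t, t))
    v_prime = v_minus + np.cross(v_minus, t)
    v_plus = v_minus + np.cross(v_prime, s)
    return v_plus + qmdt2 * E_field

def far_field_E(q, R_vec, v, v_dot):
    """Far-field term: q/(4 pi eps0) * 1/(c^2 kappa^3 R) * r_hat x ((r_hat - v/c) x v_dot/c),
    with kappa = 1 - r_hat . v / c."""
    R = np.linalg.norm(R_vec)
    r_hat = R_vec / R
    beta = v / c
    kappa = 1.0 - np.dot(r_hat, beta)
    pref = q / (4 * np.pi * eps0)
    return pref * np.cross(r_hat, np.cross(r_hat - beta, v_dot / c)) / (c**2 * kappa**3 * R)

# Gyrate an electron and evaluate the far-field E at a distant, made-up observer:
omega_ce = abs(q_e) * np.linalg.norm(B0) / m_e
dt = 0.01 / omega_ce
E_zero = np.zeros(3)
v = np.array([1.0e6, 0.0, 2.0e5])     # made-up perpendicular and parallel velocities [m/s]
x = np.zeros(3)
observer = np.array([10.0, 0.0, 0.0]) # observer 10 m away
for _ in range(1000):
    v_new = boris_push(v, dt, E_zero, B0)
    x = x + v_new * dt
    v_dot = (v_new - v) / dt          # finite-difference acceleration, the v dot we need
    v = v_new
print(far_field_E(q_e, observer - x, v, v_dot))   # radiated field at the observer [V/m]
```

The second step discussed above, the integral over the distribution function, would then weight this single-particle result over all velocities, which is exactly the part being skipped in the lecture.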
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So we're going to go on to a slightly new topic today. We're going to be studying Abel inversion and then talking about Faraday rotation imaging. Does anyone have any questions on all the interferometry stuff we've covered so far before we leave that part alone? Everyone seems very happy with interferometry. All right. So interferometry-- this technique that we're going to be talking about today, Abel inversion, is actually quite general. And I'm introducing it in the context of interferometry because we often use it with this technique, but it could also be used with emission from the plasma, for instance. We'll talk a little bit about what Abel inversion is and what symmetry requirements we have, and you'll see quite quickly it's actually quite general. And it can be used in lots of different cases here. So what we have from interferometry-- I'll just write as IF like that-- is we've got some integral of n e dl. So we have some line integrated quantity. And we might have that line integrated quantity along a very specific chord. So we might be integrating along the z direction, and we might be at x equals x 0, y equals y 0. And we might be resolving it as a function of time. That would be our temporally resolved interferometry. Or we might have an image of x and y, and we would have it at some specific time. This would be our spatially resolved interferometry. So this is what we have. And of course, what we want-- that's what we can't have, which is the electron density as a function of position everywhere in space, preferably as a function of position and time. This is what you'd like if you want to compare a simulation or a theory or something like that. And instead, we have these line integrated measurements. OK. So if you want to calculate this density from some of these limited, reduced data sets, you could do a technique like tomography. So if you've ever gone for an MRI or something similar-- tomography-- you'll know that what they do is they'll take lots and lots of different images, slowly scanning around your head or the injured part of your body, and then they'll do some very fancy computer techniques to reconstruct the three-dimensional structure of whatever it is they're scanning. And this works extremely well because people have lifetimes of years. But plasmas only have lifetimes of seconds or milliseconds or nanoseconds, and so it's very hard to just slowly rotate your plasma in place while you take lots of pictures of it. The alternative, if you want to do single-shot tomography, will be to surround your plasma with lots and lots of cameras and look at it from lots of different angles. So for example, if we've got a circular cross-section plasma like this, maybe this is some sort of tokamak type thing. We could just have lots of different lines of sight. And we can do our tomographic reconstruction like this. But of course, lines of sight, LOS, are expensive. So we don't tend to be able to just have a very, very large number of them. But of course, for some applications, this might be justifiable-- on ITER, they've got 550 bolometer lines of sight for reconstructing the emission from it. So they made a choice to have a lot of imaging lines of sight. In general, for interferometry, this is too expensive, so we don't do it. But what we can do is a version of this where we make strong arguments about the symmetry of our plasma.
Because if our plasma has some underlying symmetry, it helps us-- we need fewer lines of sight, and we can still start getting out an approximation of the full density profile. And the symmetry we're going to talk about today with Abel inversion is an assumption of cylindrical symmetry. So we're going to assume cylindrical symmetry. And so that could be, again, something like this plasma here with a circular cross-section. And we're going to assume that maybe there's variation in this direction out of plane. That doesn't matter. We're only trying to measure it in one plane. But we're going to assume that our electron density, n e of x, y, and z, is just a function, n e of R and z, where R squared is equal to x squared plus y squared. So that's the same as saying that the density is constant on nested circular surfaces. And of course, if this is something like a tokamak, that's quite nice because we know that our flux surface quantities tend to be constant. And this could also be true for a cylindrical z-pinch plasma and the sorts of experiments I do. And you can think of other situations where you think things are approximately symmetric. And if you have a system like this, you could have a set of interferometers looking along parallel chords like this. Or if you are working with a tokamak and you can do imaging, you could have a camera and expanded laser beam, as we discussed before. And that camera is now measuring the integral of n e dl as a function of-- let's see, x and y, at t equals t 0. Or these could be a series of chords which are measuring the integral of n e dl at x equals x 0 as a function of time, like that. So this is just two different ways to look at this problem. I can either build up my data from multiple time resolved interferometers, looking along parallel chords, or I can have an imaging system looking at a single time. In both these cases, we can do this thing called the Abel inversion. I'll just write down-- oh I wrote it down. OK, good. Abel is one of these guys who's depressing when you read his biography. He was a Norwegian mathematician. He invented all sorts of wonderful things in mathematics, as well as the Abel transformation. And he died of consumption at the age of 26. So I don't know how many of you are still younger than 26, but you've still got maybe a couple of years to make such groundbreaking discoveries. I'm already past it. I don't have a chance. So you look at his biography and you're like, damn, Abel got a lot of work done. OK. Cool. So what we have, say, from either of these two systems is a map of the line-integrated electron density as a function of y. And I'm just going to draw this very suggestively as this very blocky setup here. So each of these densities could be the density that we've measured at a single pixel on our image or it could be the density measured by our n time-resolved interferometers here. So we've got some density value at each of these points, like this here. And what we want is, of course, our plasma density as a function of R, not as a function of y, which is a coordinate. For example, it could be one of these two. I'm not really distinguishing between y and x here. As long as it's perpendicular to the probing direction, it doesn't matter. But what we want is this as a function of R. And we'd like to have some nice, smooth function. And it doesn't necessarily have to have the same shape, of course, because it's clear if you look at this and you think about it for a little while that your profile in the integral of n e dl doesn't have to be the same as your profile in n e like this.
So what we want is some mathematical formalism to allow us to take this data and produce this. It's clear how to go the other way. You can certainly do it numerically very easily. You can just make this up. You can make up some profile like this, and then you can just calculate the line-integrated density along each line of sight. What's less clear is how to go back the other way from the data we have to the data we want. So this is the setup of the problem. So does anyone have any questions about this so far? Any questions online? This is what we measure. And I'm going to call this some function, F of y. And I'm going to call this some function f of R. And the reason is because it doesn't actually matter whether this is density or brightness or whatever else. The mathematics are all the same. So I'm just going to refer to them as these two different functions. And we're trying to convert one to the other. So mathematically, we have our line-integrated function, F of y, which is equal to the integral of n e dx like this. And this is equal to the integral from minus the square root of a squared minus y squared to plus the square root of a squared minus y squared of f of r, dx, like this. Exactly. [INAUDIBLE] very quickly. OK. I had it right in my notes, but I decided to change it on the fly. I hate it. We're going back to the notes. OK. And I'll draw you a diagram of the geometry here. It might be slightly different from the geometry I had just previously here. So we have some plasma which has an approximately circular shape. It's bounded at a, so we can say that the pressure at a is equal to 0. This just stops us having to integrate out to infinity, which is very inconvenient when you try and do it in reality. So we're going to stop our plasma at some distance. So we only need to make measurements up to the boundary, a, in order to solve this problem. We've got some chords going through the plasma. We've got our coordinate system, where y is transverse to the direction that we're probing and x now is the direction that we're probing. I was using z previously, but the way I've got it written is x. I got myself confused with how to do that. And so for any point inside the plasma, you can say that there is some distance, y, that it sits at from the origin here and some distance x along that it sits at. And so there's some distance, r. And this is the radial coordinate. We're assuming that we have symmetry in the azimuthal direction, in the angular direction. So we're only interested in the size of this radial coordinate here. And then if you look at this and stare at it for long enough, you can see that this is indeed the procedure I was talking about before, where you can easily go from your-- if you come up with some calculated profile, some guess at what you think the distribution is, like a Gaussian-- how you go from that to your prediction of what you're actually going to get from your detector. So this is the easy direction. This is actually the Abel transformation. And what we're really interested in is the inverse Abel transform. Now, we're technically requiring this condition here. I've written it as pressure, but now that we're talking in terms of these functions, let's say that we want f of a to be equal to 0. This isn't actually quite true. The rigorous requirement is on f of r as r tends to infinity: it needs to drop off or fall faster than 1 over r. So you can get away with a Gaussian type function or something like that, as long as it falls sufficiently fast.
You can't get away with something that's uniform across all space. But as long as your function falls off nice and quickly, you can use this technique here. But this isn't actually quite the Abel transformation yet. What we do at this point here is we realize that this is a horrific mess of r's and y's and x's and things like that. And we decide that we want to rewrite everything. And we already said that r squared is equal to y squared plus x squared. And so we want to have a go at substituting this x out for something that's in terms of r instead. And this gives us the real transformation, which is F of y is equal to 2 times the integral from y to a of f of r, r dr, over the square root of r squared minus y squared. And if you stare at this for long enough, you can convince yourself that doing this substitution in the integration will work out, and that we've correctly dealt with the limits here as well. And so this is the thing which is called the Abel transform. OK. But we, as I mentioned, don't want the Abel transformation. That's relatively easy to do. What we have is F of y when we want f of r. So what we want to be able to do is the inverse. And I'm not going to derive this. I'm not even sure I know how to. But if you stare at what I'm about to write down and this for long enough, you can convince yourself there's enough shared features to it that it's probably correct. And you can go look it up if you want to [INAUDIBLE] So this gives us that f of r is equal to minus 1 upon pi times the integral from r to a of dF dy-- that's our capital F here-- dy over the square root of y squared minus r squared. I'll just [INAUDIBLE]. Remember, this capital F is something we have as a function of y. So this is our line-integrated measurement, and this is the azimuthally symmetric radial dependence of the measurement. This is the thing that we're actually trying to get at. OK. So this looks like a complete solution to the problem. If I have some measurement from my detector which is line integrated and I have enough samples in y, I should be able to work out what that f of r is. Can anyone spot any limitations for this procedure? There are two obvious ones. AUDIENCE: It's not clear where the plasma ends all the time. JACK HARE: Can you just say that again, please? AUDIENCE: Yeah. It's not always clear what a you should choose. JACK HARE: OK. So that's a reasonable one, actually. So where is a? That is actually a problem. I would definitely agree it's a problem. It's less of a problem as long as the function drops off rapidly enough, which is related to this. But you're right, the edge of our plasma is fuzzy. You know where the density definitely goes to 0 if you have a vacuum chamber-- the hard metal walls of your vacuum chamber. So maybe that would be good enough. But yeah, you certainly want to do this experiment to know where a is. Other limitations? AUDIENCE: If you do the derivative. So that's going to be limited by detector resolution and then experimental noise. JACK HARE: Yeah. So dF dy is noisy. If we go back to this very suggestive picture that I put in here deliberately like this, for any realistic system, you have discrete measurements at discrete locations. And we all know that doing derivatives of discrete data is a nightmare because you're taking something that's noisy, and you're dividing it by a small number, so any noise here gets really amplified up. So this straight away looks problematic. Third thing. Yes.
AUDIENCE: I don't know if we want to count this, but once you start getting close to the edge and do that correctly, it'll be able to have a much smaller number in the bottom. JACK HARE: Close to the edge? AUDIENCE: [INAUDIBLE] we're closer to. [INAUDIBLE] I was thinking-- JACK HARE: You're close, but you're not quite right. Anyone else know? What are we talking about here? We're talking about the fact that there's something interesting going on here. And your physicist eyes have seen this and gone, a-ha. Whenever we start having numbers minus other numbers in the denominator here, there's some chance that this thing will go to 0. And it will actually go to 0 for y close to r equals 0. So near the center here, this thing will have a singularity. Now in reality, we won't have a singularity because we'll never have a detector bin that's exactly at that position. But what we will do is for the bins which are close to the center here, we'll have a very small number on the bottom. And so we will amplify the value of this big number. So if this is noisy, if there's some noise near the center here, that noise will be massively amplified and will appear everywhere else in our solution. And it will cause problems for the rest of our solution. So I'll just write here that we've got a singularity near y equals 0, and that this amplifies the noise of these points. OK. Any questions on any of this? Yes. AUDIENCE: Why are we neglecting the data where y is negative? y can only be positive in this picture. But numerically, you can have y be negative. JACK HARE: That's an excellent question. And does anyone know why we are neglecting the data that we have for y less than 0? We have assumed symmetry. In order to do this calculation, we have assumed azimuthal symmetry. And so the data must be identical for y less than 0 and y greater than 0. Now, in reality it won't be. We never have a system which is perfectly symmetric. So the good way to present your data is to do the Abel inversion on one half of your data and the Abel inversion on the other half of your data separately, and then see whether those two match. And if they match closely enough within experimental error, great. You've got a good inversion. If they don't match at all, then you shouldn't have used an Abel inversion in the first place. Your prior that you have this cylindrical symmetry is incorrect. So you can't use this method. So it's a good check, actually, on your data. The other reason is because of this symmetry, if you're trying to save money, you might only put detectors in one half. If you're really sure that you've got symmetry, then you don't need to check. Maybe you do the experiment a few times with all your detectors spread out, and then you're like, hey, this is great. Now I can get higher resolution by moving half of my detectors to the first half. So there may be some reasons why you only need half the data here. But yeah, that's a really good point. Any other questions on this? Anything online? OK. Like I said, this is very generic. This could be interferometry on a tokamak. This could be interferometry on the sorts of plasmas I work with. This could be used for unfolding emission from a hotspot in an X-ray image or something like that. So we're just introducing it here because it's a useful technique to know. OK. How do we actually do this in practice? Can anyone think of some way to overcome some of these limitations, particularly this one here? So we've got, again, our data, which is discrete and potentially noisy. Yeah.
AUDIENCE: We mentioned before, just take data exactly as you [INAUDIBLE] always going very close to [INAUDIBLE]. JACK HARE: Sure. That could be a problem, yes. So you could deliberately shift your data so that y equals 0 is on one side of one of your bins or something like that so it's close. Hard to get in practice because the plasma might move around, so you're probably not going to be able to do it. Yeah, other ideas. AUDIENCE: I guess that also means that this idea might have issues. But if you have prior information about where high grades are in your plasma, you concentrate your measurements in those regions so that you're better resolving hydrating ingredients can go for it. JACK HARE: Yeah. So certainly these high gradients are going to be dominating this integral. But of course, also any gradients near the center is going to be dominated. So you might want to put more measurements near the middle so you'd have higher fidelity there. Other techniques? AUDIENCE: Is it possible to integrate it by parts? Therefore, you don't differentiate by inside your different [INAUDIBLE] the other part. JACK HARE: I don't think you can do this by parts, but I haven't seen the analytical version of this, which looks like it does that. But I can't immediately check this and tell you that it doesn't. I suspect it doesn't work. Yeah. Any other ideas? AUDIENCE: You might have some reasonable idea of the distribution ahead of time and form an expected interpolation. JACK HARE: OK, yeah. So maybe we've got some priors about the distribution, and we fit to this. That would be good. A similar version is we could fit this noisy data with a set of basis functions that we think has the sort of information in it that describes this. And those basis functions won't have any noise, and they'll be nice and differentiable because we'll use some nice analytical basis functions. So we could have a sum of m Gaussian functions, where the Gaussians have some position and sigma and width or something like that. And then we know what the derivatives of all of those are, so yeah. AUDIENCE: So that's putting a set of basis functions to the brightness of emissivity? Big F or small? JACK HARE: We'd still be fitting into this. This is the only thing we know. AUDIENCE: OK. JACK HARE: Yeah. So you could say F of y is equal to some weighted set of basis functions. And there are some basis functions that work really well because they've got analytical Abel transformations. Gaussians are one of them, unsurprisingly. But some functions have a nice analytical Abel version and some don't. So you'd want to use a set of functions that have nice analytical algorithm versions. So that works pretty well. And that's what most people do. So if you go online, Python has a nice Abel inversion package, and they've got different basis functions. And depending on your exact problem, you might want basis functions that have got more spiky features at the edge or smooth features in the middle. And so just like all of these sorts of problems, there's no one size fits all. You have to tailor it to what you're doing. Did I see another question? AUDIENCE: Can we fold it for a low pass or something to get rid of 5 [INAUDIBLE]? JACK HARE: Yeah. So as always with our data, we can smooth it out, but then we lose spatial resolution. So that could help us, but at some cost. And so you have to balance those things. AUDIENCE: My first thought-- I actually think how to do this is the ideal would be get that derivative in analog directly. JACK HARE: Right. 
So the idea was how to get this derivative in analog directly. I don't know how to do it either, but I think this is just a fundamental limitation when you're making one of these measurements that you can't overcome. There are some really interesting links between the Abel inversion; the Radon transform, which is also used in tomography; and the Fourier transform. They form some sort of weird, little cycle. If you do all of them in a row, you get back to where you started. So there's really fun mathematics going on inside this as well, which I'm not intelligent enough to know about. But if you like that sort of thing, you should go look on the Wikipedia page. There's a lot of good stuff. So any other questions or thoughts on Abel inversion? This is just a little aside. We're going to go on and do Faraday rotation after this, so we'll completely change topics. So if you've got any more questions, speak now. AUDIENCE: So this condition of cylindrical symmetry is very strict. So for instance, if we had a highly shaped [INAUDIBLE] plasma with divertors, et cetera, it technically doesn't-- it possesses a symmetry on flux surfaces, but those flux surfaces aren't cylindrical. So there would be no way to incorporate that information. JACK HARE: Yeah. So the question was how do we deal with non-circular flux surfaces, like in most modern tokamaks? And yes, you can no longer use an Abel inversion in that case. But there is still sufficient symmetry, and there's a lot of symmetry. So what you would do for a tokamak is I believe you'd do a Grad-Shafranov reconstruction of the flux surfaces from your magnetic diagnostics. You would then know that the density is constant along the flux surface because they are surfaces of constant pressure, and there's enough motion in the toroidal direction to smooth out any density perturbations very quickly. This is roughly-- obviously there's fluctuations and stuff like that. But in general, this is constant. And then you would use that as a prior. And you wouldn't do this Abel inversion analytically or even semi-analytically, but you would have to feed it into your tomographic reconstruction algorithm. And for any tomographic reconstruction algorithm, the more data you have, the better it is. So it's obvious. And it's obvious here as well-- if I only have four chords of interferometry, my data is so sparse that I'm going to have a bad Abel inversion. And so you can have four chords of-- you can have your chords of interferometry crisscrossing the plasma like this, or you could still have them crossing the plasma like this. And even if your plasma was strongly shaped-- so if it was some classic single-null, X-point type thing like this, you could have your chords fanning out in this fashion. And that would be good enough to be able to do some sort of inversion in the middle here. And there's actually-- I think one of Anne White's students who's about to graduate who has been working on this for X-rays. And he came up with some cool ideas about what if you have some sparse chords of data and then in some section here, you have really, really fine chords? Can you combine those and measure very small turbulent fluctuations with this? And apparently, the answer is yes. So there's lots of cool things you can do with [INAUDIBLE]. Yeah.
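To make the basis-function idea from the discussion above concrete, here is a small sketch (synthetic data, made-up noise level and chord count) comparing the direct inverse Abel formula, with its noisy dF/dy, against fitting F(y) with a Gaussian and inverting it analytically, using the fact that the Abel transform of exp(-r^2/s^2) is sqrt(pi) s exp(-y^2/s^2).

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic "interferometer" data: a Gaussian radial profile f(r) = A exp(-(r/s)^2),
# whose Abel transform is F(y) = A sqrt(pi) s exp(-(y/s)^2). Noise level is made up.
A_true, s_true, a = 1.0, 0.3, 1.0
y = np.linspace(0.0, a, 25)                             # chord positions
F_data = (A_true * np.sqrt(np.pi) * s_true * np.exp(-(y / s_true)**2)
          + 0.02 * np.random.randn(y.size))

# (1) Direct inversion: f(r) = -(1/pi) int_r^a (dF/dy) dy / sqrt(y^2 - r^2),
#     which needs a numerical derivative of noisy, discrete data.
def abel_inverse_direct(F, y, r0):
    dFdy = np.gradient(F, y)
    m = y > r0
    return -np.trapz(dFdy[m] / np.sqrt(y[m]**2 - r0**2), y[m]) / np.pi

# (2) Basis-function inversion: fit F(y) with a Gaussian, invert it analytically.
def F_model(y, A, s):
    return A * np.sqrt(np.pi) * s * np.exp(-(y / s)**2)

(A_fit, s_fit), _ = curve_fit(F_model, y, F_data, p0=[0.5, 0.5])

r0 = 0.2
print("true      f(r0):", A_true * np.exp(-(r0 / s_true)**2))
print("direct    f(r0):", abel_inverse_direct(F_data, y, r0))
print("fit-based f(r0):", A_fit * np.exp(-(r0 / s_fit)**2))
```

The fit-based result is smooth because the noisy derivative never appears; the trade-off, as noted above, is that the chosen basis has to be able to represent the real profile.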
AUDIENCE: Do people commonly [INAUDIBLE] integral formulation of the Abel inversion, or is it more common to do a matrix formulation where you know the width of your sight line over all of your bins in space, and then you can take the inverse of that matrix with some math massaging? JACK HARE: I don't know what technique is more popular. When I've done this, I've tended to use fitting with basis functions and then do the inversion analytically from the fits of the basis functions. But I imagine there might be good reasons for doing that matrix formulation, especially if you're trying to use this for real-time control. So the bolometers are going to be used for showing where the glowy bit of the plasma is. And we want that to stay in the middle. If it starts going somewhere else, then you want to feed back on that. And so then you need to do it quickly. And so having some sort of matrix technique will be beneficial compared to, oh, I'll carefully tune my fitting function. You don't have time to do that. OK. Any questions online? All right. Let's do some Faraday-- AUDIENCE: Are there ways of handling systems-- are there ways of handling systems that don't have-- I guess you already talked about this. But are there ways of handling systems that don't have any symmetry priors? JACK HARE: Yeah, actually. I have a grad student of mine who's been working on this recently for tomographic reconstruction. So you can always do a reconstruction, but it's always poorly posed. You don't have enough information to fully reconstruct the [INAUDIBLE] points in space. But if you have some priors, you can make some guesses. Even with just a single line of sight, you can make some guesses. And the more lines of sight you have, the more you can constrain it. And he was working on a technique-- the acronym is ART, and I can't remember what it stands for. But with two orthogonal lines of sight and just a flat uniform prior-- so no information about what it looks like to start with-- he was able to reconstruct some relatively complicated shapes out of this. So there are some clever things that you can do. And I think there's a lot of cool stuff that we can take in plasma physics from other fields. So a lot of stuff in computer vision, from tomography, from medical imaging, can be used for understanding this stuff. And people do already, so there's lots of nice things out there that you can do. At the end of the day, the more data you have, the easier this is. So if you have very little data, you have to provide that information from somewhere else, which is your intuition, your guesses about the plasma. So you can't win. You can't just get this information for free. OK. I'm going to go on now so we have time to cover all of this. So we're going to be looking at Faraday rotation. But we're actually just going to take a little side track for maybe most of the rest of this lecture to look again at waves in magnetized plasmas, of which Faraday rotation is one. The reason is although you hopefully have seen some of this in earlier plasma classes, I think there are lots of different ways of looking at it. And I think the way that Hutchinson has it, and that I've adapted, is quite a nice way of looking at it. And we're also going to need lots of these results, not only for Faraday rotation, but also for reflectometry and electron cyclotron emission. So we need to know how waves propagate in a magnetized plasma. And so we may as well just review this quickly.
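For comparison, the matrix formulation asked about in the question above might look like the following sketch (the grids, test profile, and noise level are assumptions for illustration): discretize the forward Abel integral into a geometry matrix A, so the chord measurements are F = A f, and invert with a least-squares solve. This is fast enough for real-time use, but the noise amplification discussed earlier shows up directly in the inverse.

```python
import numpy as np

# Discretize F(y_i) = 2 * integral_{y_i}^{R} f(r) r dr / sqrt(r^2 - y_i^2)
# into a matrix equation F = A f, then invert with least squares.
# Grids, profile, and noise level below are illustrative assumptions.

R, n = 1.0, 50
dr = R / n
r = (np.arange(n) + 0.5) * dr          # emissivity bins (cell centres)
y = np.arange(n) * dr                  # chord impact parameters (cell edges)

# Geometry matrix: each chord sums contributions from bins with r_j > y_i.
A = np.zeros((n, n))
for i in range(n):
    mask = r > y[i]
    A[i, mask] = 2.0 * dr * r[mask] / np.sqrt(r[mask]**2 - y[i]**2)

f_true = np.exp(-(r / 0.3)**2)         # test emissivity profile
F = A @ f_true + 0.01 * np.random.default_rng(1).standard_normal(n)

# Direct inversion -- works, but amplifies noise, especially near the axis,
# so in practice you would smooth F or add regularization first.
f_recon, *_ = np.linalg.lstsq(A, F, rcond=None)
print("rms error:", np.sqrt(np.mean((f_recon - f_true)**2)))
```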
So if you've seen this before and you're very confident, feel free to relax. And if you haven't seen it, maybe pay attention. So remember, we had before some assumptions that our plasma was cold, and we quantified that by saying that the thermal velocity of the electrons is much less than the speed of light. We said that our frequency was high, and we quantified that by saying that our ion plasma frequency was much less than the frequency of the waves we're sending through the plasma. And that meant that the ions are effectively stationary. So we can just neglect them and just deal with the electrons. That made life much easier. And we also made this following restriction, that k dot e was equal to 0. And this was the restriction that the waves were transverse. Now, this restriction-- the final one, the transverse waves-- made our life very simple, algebraically. But it turns out that if you try and find the waves in a magnetized plasma while keeping this restriction, you don't get all of the waves. You, in fact, have explicitly ruled out one of the most important waves-- the extraordinary mode, which is extraordinarily useful. And so we have to drop this restriction here and then deal with all the horrible consequences of that. So now we have k dot e not equal to 0. And if you go back and you look through the derivation, and you start rederiving bits, you end up with an equation now that looks like omega squared minus c squared k squared. We had something like this before, but our equation previously was a scalar equation. Our equation now is going to be a matrix equation with these 3 by 3 matrices. And this is just the identity matrix. And this is an even more odd looking object, which is kk, which is a dyad, which is also a 3 by 3 matrix here. You can look up more of the details of this in Hutchinson's book if you haven't seen this in a while. All of this is now dotted with the electric field. And that is equal to minus I omega J over epsilon naught. Now, once again, we realize that J is equal to minus e n e V e. And what we want to try and do is write this entire equation in terms of e. We want to get rid of J completely. And then, of course, this is an equation that looks like a matrix times a vector equals 0. And we know how to do this. This is what we've trained our whole lives for. We take the determinant of this. We find the modes to search by [INAUDIBLE]. That's great. So we really love these sorts of equations. And so we're trying to make this equation look like one of those. So now we need an equation of motion for Ve. Previously, remember, we just looked at the response of the electrons to the electric field. But now we want to have the magnetic field as well. So we have m e d Ve d t is equal to minus e times the electric field plus V cross B. So this is just the Lorentz force, but we've now got the magnetic field in here. And we're going to assume that our magnetic field, to lowest order, is just some static field. And we're going to point it in the z direction. Of course, it can point in any direction we want. I've chosen z in this case here. And then from this, you get out a series of equations for the velocity. And I think you've all seen this, so I'm actually not going to go through this line by line. Well, I might go through. OK. I'm not going to write this equation out in terms of vector components, like I had in the notes. I will make the point that, as before, we're going to say that we're going to assume that V equals Ve0 exponential of I k dot x minus omega t.
And that allows us to replace d dt with minus I omega. So we turn this differential equation into an algebraic equation. And then we find that the velocity of our single electron here is going to have a structure with some nice symmetry due to the magnetic field. So vx and vy are going to look very similar. They're going to look like minus I e over omega me times 1 over 1 minus omega squared over omega squared. It doesn't work so well when I say it like that. Capital omega squared over lowercase omega squared. And this is going to be equal to the electric field in the x direction minus I capital omega over lowercase omega electric field in the y direction, where we've defined here capital omega as the cyclotron frequency, e B0 over me. So vy, we have all of these terms again. But in the brackets, we have something that looks similar, but slightly different. I capital omega over lowercase omega ex plus ey. And when you look at this and you squint and you calculate vx squared plus vy squared, you find out that this is just-- when you just look at how these work, this is, of course, how particles are circulating around the magnetic field line. Remember, the field line is going in the z direction. And so these two components are just the spiraling vx and vy here. And finally, we have the z component. And that's very simple. That's minus I e over omega me [INAUDIBLE] like that. OK. You've seen all this before. You can then take this, make it into a nice vector, substitute it back into this equation for J, substitute J back into here. And you see very quickly that all you're going to have left are things like omegas and c's and k's and capital omegas and this electric field. So we're going to have some matrix equation, something dot e is equal to 0. And that matrix-- well, we can write J is equal to minus e n e times the velocity. And we can write that in terms of some sort of conductivity tensor times the electric field, where that conductivity is this monster: I n e e squared over me omega, times 1 over 1 minus capital omega squared over lowercase omega squared, times a nice, big 3 by 3 matrix. 1, 1 down the diagonal, [INAUDIBLE] here, and plus and minus I capital omega over lowercase omega in here, in the off-diagonal elements, then 0s elsewhere. If you're surprised at seeing something interesting involving the magnetic field show up in the zz component of this tensor, it's literally only there to cancel it out here. So in fact, it doesn't exist at all. It's just that it's more convenient than writing this factor underneath all four of these terms. So this is just a symbol. So this is the conductivity for our particle. Which means we can then write out in short form that omega squared minus c squared k squared times the identity matrix, plus this dyad, c squared kk, plus I omega over epsilon null sigma, dot e equals [INAUDIBLE]. And I like writing things in terms of this conductivity term. But if you like things in terms of e-- I'm sorry, in terms of epsilon, the dielectric tensor, you could rewrite this. And you will get Hutchinson's equation, 4.1.2. OK. So the magnetic field in the system breaks the symmetry. So we have to treat that direction differently from x and y. But x and y are still equivalent to each other. We can pick our orientation of our x and y axes to make our life simple in what's going to follow. And we're going to do that now. I've got just about enough space here. So we're going to have a coordinate system like this, with z pointing upwards. And that's the direction of the magnetic field.
We're going to have this coordinate, y, and a coordinate, x, like this. And I'm going to choose our k vector to always be in the yz plane, with some angle of theta to the z-axis here. So we're effectively choosing our coordinate system such that we can write k is equal to the size of the vector, k, times 0, sine theta, cosine of theta. This makes our life much, much easier. But even if you do that, when you go to solve this equation in full generality, you have all these sines and cosines. It's an ungodly mess. So this is absolutely horrible. So you still get something terrible, which is equation 4.1.24, which I think is called the Appleton-Hartree dispersion relationship. And it's a mess, so it's very, very hard to work with. And the trick is that no one actually works with it most of the time. We just work in the case where we have theta equal to 0 or theta equal to pi upon 2. So those are the two cases that I'm going to tell you about-- when we've got waves propagating along the magnetic field or perpendicular to the magnetic field. If you're ever in the unfortunate case of having to do something in between, you'll have to go back to this equation and work it out yourself. But at least in the limiting cases, the behavior is slightly easier to understand. So those are the cases that I'm going to go through now. So that was a bit of a whistlestop tour. Any questions? Good. Everyone loves waves [INAUDIBLE]. So we're going to start with-- the theta equals 0 case? No. It probably makes sense to start with the theta equals pi over 2 case. So this case is maybe particularly relevant if you're trying to diagnose something like a tokamak, which many of you are. So if we draw our tokamak looking from above, we have magnetic field lines going around like this. And remember, the toroidal magnetic field of the tokamak is very strong. And so really, although the magnetic field lines are slightly not like this, they really are very much just circles around the torus. And where do we put our diagnostics? Well, we can't put them on the inside, usually. We don't want to put them at some weird line of sight like this because that would mean integrating through things where we're not really sure about the symmetry. We're very likely to put our diagnostics like this on a line of sight, which is indeed perpendicular, at 90 degrees, to the local magnetic field. That might be because you've got to fit through some gaps between the magnets or simply because it's very easy. And almost every tokamak I've seen has diagnostics designed to look along this line of sight. If you have something that looks at a weird angle, that's very unusual. So it's very relevant to ask how the waves propagate in a magnetized plasma like a tokamak perpendicular to the magnetic field. And what you get are two different modes here. We have one mode where the dispersion relationship is very familiar. n squared is equal to 1 minus omega p squared upon c squared-- oh, omega squared. So this is just the wave that we found in unmagnetized plasmas. What's interesting is that although we've done all of that mathematics, there is one wave which propagates perpendicular to the magnetic field which looks like the magnetic field doesn't matter at all. And so we call this wave the O mode, where O stands for Ordinary. All right. So in the ordinary mode, remember, we can then go back. This is our eigenvalue. We can work out what the eigenmode is in terms of ex and ey and ez. And we find that ex is equal to ey is equal to 0 here.
And so all of our electric field is in the z direction. Just remember we have this diagram where we have z like this, y like this, and x like this. I've restricted my k vector to be in the yz plane. I've now set theta to be pi upon 2, and so therefore, k is pointing in this direction, which means that our electric field is pointing purely in the z direction. And this actually gives us a hint why the O mode dispersion relationship doesn't seem to know anything about the magnetic field. And that's because the electrons are traveling along in the z direction. And the electric field, as it oscillates up and down, is simply accelerating or decelerating them along the magnetic field. And so it doesn't have any effect from the gyrating particle orbits. You just have particles which are going like this. And maybe they're being slightly accelerated or slightly decelerated, but it's all in the z direction, so it doesn't have an interaction with the magnetic field whatsoever. So these are nice and easy. And the nice thing about these modes, actually, is that they are transverse. So k dot e is equal to 0. So although we relaxed that condition, we obviously didn't need it in order to get this mode, as you can see, because we already got this mode before when we did have that transverse condition. So this is our nice, easy wave. The next one is not ordinary. So the next one has the dispersion relationship that looks like this. Now, in Hutchinson's book, he rewrites some of these terms as x and y and things like that to make it more compact, which I think is great if you're going to write it a lot. But just in this case, I want to write it out in its full generality, in terms of things like the plasma frequency and stuff like that so that you can see where all of those terms come from. So it's going to look a little bit more complicated than what you get in Hutchinson's book, but I think it's more useful. So it's 1 minus omega p squared over omega squared. Looks good, but then it's actually times 1 minus omega p squared over omega squared. And all of those-- not the 1. All the rest of these are over 1 minus omega p squared over omega squared minus capital omega squared over lowercase omega squared. So this is a little bit Escher-esque. There are bits of the same thing repeated in a fractal pattern. And the more you stare at it, the more you think, I wonder what the plasma is playing at with that. It seems very complicated. And remember, this is the simplified version, where we've already taken theta equals pi upon 2. If you want to put in all the cosines and sines, it becomes much more complicated. So straight away, we can see that this is more than ordinary. And so this is, of course, the x mode, and it is the extraordinary mode. Now, you can probably guess just by looking at this that when we substitute this back into our equation and we try and get out the eigenmodes of the system, they're not going to be quite as simple as this. And you are quite right. So what we find out is that ex over ey-- so the x and the y components of the electric field are related to each other. And the relationship between them is minus I times 1 minus omega p squared over omega squared minus capital omega squared over omega squared, all of that over omega p squared over omega squared times capital omega over lowercase omega. Although that looks complicated, fortunately for us, the z component of the electric field is, in fact, 0. So that saves us a little bit of hassle.
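To put some numbers on these two dispersion relations, here is a small sketch evaluating n squared for the O mode and the X mode at the same plasma conditions (the density, field, and probing frequency are made-up values for illustration, not from the lecture); the difference between the two refractive indices is what matters when you care which polarization your interferometer is actually probing.

```python
import numpy as np
from scipy.constants import e, m_e, epsilon_0

# Evaluate the O-mode and X-mode refractive indices for perpendicular
# propagation.  Plasma and probe parameters below are illustrative guesses.
n_e = 1e20          # electron density [m^-3]
B0  = 5.0           # magnetic field [T]
f   = 250e9         # probing frequency [Hz]

omega    = 2 * np.pi * f
omega_pe = np.sqrt(n_e * e**2 / (epsilon_0 * m_e))   # plasma frequency
Omega_ce = e * B0 / m_e                              # electron cyclotron frequency

X = omega_pe**2 / omega**2
Y = Omega_ce / omega

n2_O = 1 - X                              # ordinary mode: B plays no role
n2_X = 1 - X * (1 - X) / (1 - X - Y**2)   # extraordinary mode

print(f"n_O^2 = {n2_O:.4f},  n_X^2 = {n2_X:.4f}")
print(f"refractive index difference n_O - n_X = {np.sqrt(n2_O) - np.sqrt(n2_X):.2e}")
```

If either n squared comes out negative, that mode is evanescent at the chosen frequency, so in practice the probing frequency is kept well above the relevant cutoffs.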
So if I plot out now y and z and x like this, I still get my magnetic field in this direction. And I've still got my k vector in this direction. It's got exactly the same k. Can anyone tell me what the electric field looks like here? Previously, we said the electric field was just pointing in the z direction because it was. But now it looks a little bit more complicated. Does anyone [INAUDIBLE]? Yes. AUDIENCE: An ellipse? JACK HARE: So the answer was an ellipse. Yes. Do you want to be more specific? AUDIENCE: In the xy plane. They'll be out of phase by [INAUDIBLE]. JACK HARE: OK. So we're talking about an ellipse in the xy plane. When we have an ellipse, we have a minor axis and a major axis. What is the orientation of the major axis, with respect to this xy coordinate system? That's effectively asking you, is ex bigger than ey or is ey bigger than ex? And we made some very strong assumptions in deriving this that will help you work that out. In particular, we assumed that this was a high frequency wave. Any answers online? You've got a 50/50 chance of being right. I hope everyone realizes that it's not going to be wildly at some random angle. AUDIENCE: If we make the frequency really large in there, then it would be basically 1 divided by-- so ex is way bigger. JACK HARE: Yeah, ex is way bigger. So if we have a high frequency, this term is going to be very, very small, which means ex is much bigger than ey. And so we have an ellipse that is extended like this. So ex, much bigger than ey. And because there's this I inside here, they're actually related to each other in a complex fashion. So it means the electric field vector is going to be sweeping out this ellipse as the wave propagates. OK. Now, the main thing to note here is that this wave is not transverse. So k dot e is not equal to 0 because, of course, k is in the y direction, parallel to our small but existent ey. So this is the wave we would not have got if we insisted on only looking for transverse waves, which is why you have to go back and rederive it all with this. Of course, we also put in the equation of motion for particles in the magnetic field. But you can see why it's much easier to go and derive the ordinary mode in an unmagnetized plasma than it is to get these two modes in a magnetized plasma straight away. So this is a very fun wave indeed. There is a question in Hutchinson's book which caused me a lot of thought as a grad student because I didn't understand what the hell he was getting at. And so I'll put it to you now, and we shall see whether you can spot it straight away or not. The question is in the book. You have set up an interferometer looking across the plasma like this. If you have accidentally set up your interferometer to measure the x mode rather than the o mode, what would the error be in your measurements? And he gives some plasma parameters so you can calculate it. So remember, in our interferometer, what we measure is not density, but we measure changes in refractive index. And so you're saying, if you set up your interferometer such that it measures this refractive index, how different would your result be than if you were measuring this refractive index? And my question I was always asking-- how the hell do you even set it up to do one or the other? So perhaps we can work it out together. How can I get to choose whether I'm using the o mode or the x mode? AUDIENCE: Well, for o mode, you have a z chord, so you could polarize your light coming in, right? JACK HARE: Yeah.
So if I'm injecting light into my interferometer-- and maybe it's collected here and bounces back to that. So if I have this polarization, that's the o mode because the polarization, e, is parallel to b. And if I have this polarization out of the page like that, that's the x mode. So you get to choose which refractive index you probe by your choice of the polarization that you're injecting. And that was the bit that I missed as a grad student. It took me until I was teaching this class for the first time to finally work out what's going on-- is that you actually do have a choice. It's not like the plasma decides for you, which is why I was like, what will the plasma want to do? But that doesn't matter in this case. In this case, you have a choice. The plasma is not in control of you. And in general, you could launch a wave through the plasma at some arbitrary polarization at 45 degrees. And then because you can always decompose your wave into various sets of modes-- and this is a good set of modes which is valid for theta equals pi upon 2-- then you would see that one of your polarizations would travel faster than the other polarization because it would have a different refractive index. And so you'd see all sorts of funky effects going on that may be very hard to interpret. So it's definitely worth thinking about the polarization. And when I went looking recently for some papers on how people actually do this in tokamaks, they spend an awful lot of time thinking about the polarization. And often, they have the ability to switch it between x mode and o mode so they can do different measurements. Yes. AUDIENCE: How should the x mode be polarized? JACK HARE: How should the x mode be polarized? So yeah, out of the page. Because you're primarily going to be exciting this large ex. And the plasma will do the work of giving you the ey, and it will do that because of the electrons gyrating around the field lines. So you don't have to worry about it. ey is truly fantastically small here. So it's OK. You don't have to give it that yourself, which would be impossible because in free space, waves are only transverse. You can't launch a wave in free space that has a polarization along the direction of propagation. But it's OK. The plasma has got your back there. And these are the only two waves-- the only two electromagnetic, high frequency, cold, magnetized waves which can propagate in a plasma. So if you hit the plasma with some wave, it will instantly convert into some mixture of these two, because they're the only modes which are supported by the plasma medium. Yeah. AUDIENCE: When that converting step is happening, are you at all worried about reflected power then being different-- because if you're injecting a wave into the plasma, that's a vacuum wave. And so [INAUDIBLE] x. I'd imagine there's some chance of that vacuum wave reflecting back on you. Would that hurt your signal to noise ratio in some way? Or is it a very small amount that reflects back so it doesn't really [INAUDIBLE]? JACK HARE: So the question is when you have this conversion from the vacuum wave into the plasma waves, is there any power which is reflected? And the answer is I don't know. I would imagine that there could be some power reflected at that interface there. And yeah, also an important thing to notice is that on a tokamak, the magnetic field stays in the same direction. But if you're in something like an RFP-- a Reversed Field Pinch, where the magnetic field rotates-- your wave will convert.
You may launch in o mode and then may convert yourself to x mode. And in between, you'll have all those intermediate polarizations and k vectors with respect to the magnetic field. And you'll have to go back and do the Appleton-Hartree formalism. And that's probably why people don't work on RFPs anymore, because they're extremely difficult to work with. Now, one thing I will note is you might say, you know, Jack, you work on z-pinches and they've got strong magnetic fields. And yet, you don't seem to be worried at all about the magnetic field or polarizing your beam or anything like that. And the reason for that is when you start looking at the frequency orderings in the sorts of plasmas I work with, this term here is very, very, very, very small. And so this cancels out that. And huzzah, you just have two o modes, so you're basically back to the unmagnetized plasma. So depending on your plasma regime, you may not be sensitive to the field anyway. And basically, that's a requirement that the probing frequency of your wave is much, much higher than the gyro frequency, which is the unmagnetized condition we wrote down all the way back when we derived the unmagnetized waves. So you may not be sensitive to this. It just turns out in the tokamak, you tend to be in a regime where this is important. So we'll talk a little bit about that when we talk about electron cyclotron emission. OK. Any questions on this? AUDIENCE: You noted just a minute ago there's a number of assumptions that have gone into this derivation, chief among them being low temperature. Most of today's modern tokamaks are operating at 1 to 10 keV ion temperature. It does not seem to me like something that meets a low temperature requirement. So when this theory is applied to a diagnostic system, is it still valid? Do we still meet the thermal condition? JACK HARE: Yeah, so when we say cold, we're just talking about compared to the speed of light. And that's pretty fast. So even when you've got-- so you can't neglect relativistic effects completely for electrons at 10 keV. They're going at about 20% of the speed of light-- 10 keV is some fraction of their rest mass of 500 keV. It's not negligible. So in some cases, relativistic effects will be important. And we'll come across those when we do cyclotron radiation. I believe that for interferometry, we don't have to worry about relativistic effects here, and this holds well enough. And the corrections that you get will be on the order of the ratio between the thermal velocity and the speed of light, and that is a small number. So I think that's a really good point. But this actually-- this holds very, very well. We're not dealing with plasma waves, the waves inside a plasma which are being generated, where thermal effects are important, like Langmuir waves. We're dealing with these high frequency electromagnetic waves, which are traveling so fast that their phase velocity is close to the speed of light. Really, sorry-- I shouldn't say c here. This is the phase velocity. It's just that the phase velocity is very close to the speed of light for these electromagnetic waves, so c is close enough. So this is the actual condition. For the waves inside your plasma, where you deal with hot plasma effects and Landau damping and all sorts of fun things like that, that phase velocity is much lower. And so the phase velocity is close to the thermal velocity. And then you have the interaction between the wave and the distribution function of the plasma that gives you Landau damping, all that fun stuff.
We're nowhere near that. But that's a really good question. OK. Any other questions? OK. Now we'll do the other case. This has nothing to do with the diagram, but it's interesting, and we'll use it twice again. So now we're going to do the case where the wave is propagating along the magnetic field here. OK. So this was maybe a harder case to get in a tokamak, but it's a very easy case to get in, for example, a z-pinch or many other systems like that. So for example, if you have a z-pinch, some wobbly plasma like this, it's got current like this. And then it's got magnetic field like this. If I fire a laser beam through this, there's going to be at least parts of the laser beam which are parallel or anti-parallel-- you get the same result-- to the magnetic field. And in fact, we'll show that this result that we derived for theta equals 0 applies to almost every angle between 0 and pi over 2. And the theory only breaks down very, very close to pi over 2. So in fact, this still works even when, for example, here, my k is in the same direction but my magnetic field is bent. And there's quite a large theta between them here. So this is very useful. OK. We go and we plug theta equals 0 into the Appleton-Hartree relationship, and then we solve the determinant and we get out our eigenvalues and our eigenmodes. And we get, first of all-- well, we've got two modes. And these modes I tend to write with a plus and a minus and write the two of them together because they're very, very similar. And so this n squared, plus and minus-- those are the names of the two modes-- is equal to 1 minus omega p squared upon omega squared over 1 plus or minus-- this is the difference between the two-- capital omega over lowercase omega. So the first thing that we notice, of course, is that for capital omega over lowercase omega much less than 1, which was our unmagnetized condition, we just reduce back to our unmagnetized dispersion relation. So that's good. We haven't introduced anything funky in the math. OK. And then when we solve to get the eigenmodes, we find out that we have ex over ey is equal to plus or minus I here. Let me just draw again the geometry of the system. That's z upwards, y like this, x like this. Our magnetic field is in the z direction. And now our k vector, theta equals 0, is also in the z direction. And now we have ex and ey lying in the plane perpendicular to k. And ez is equal to 0. So this is, again, a transverse wave. k dot e equals 0, like that. So does anyone know what this wave is? The two components of the electric field appear to be out of phase with each other by a factor of pi upon 2, plus or minus a sign. AUDIENCE: Circularly polarized? JACK HARE: Circularly polarized, yes. Everyone in the room is doing this. I guess they wanted to say circularly polarized. OK. So what we can do is we can say ex is going to go as exponential of I k dot x-- in reality, this is just [INAUDIBLE] because we know which way our [INAUDIBLE] here-- minus omega t, and ey is exactly the same, plus an extra factor of pi upon 2 inside the brackets. OK. So if we plot in either space or time-- it doesn't matter-- we can either fix ourselves in one place and watch a wave go by or we can attach ourselves to the wave and see how it changes in space. We're going to get two oscillating fields. The electric field in x will look like this. The electric field in y will look like-- I'm trying very hard to do this properly-- that.
And if you then look at what the electric field is doing in the xy plane, and you stop at this point here and then at this point here and then at this point here and then at this point here, and ask, which direction is the electric field pointing? Well, first of all, it's going to be pointing entirely in the y direction. And then it's going to be pointing entirely in the x direction. And so we can see that the electric field vector traces out a circle. And this is what we call circular polarization. And often, these two waves, which I've been calling plus or minus here-- you might call them the right hand and the left hand circularly polarized wave. And I can never remember the convention, but I think it's the right hand rule with the k vector. And it's like, is it going this way or is it going the other way like that? I never really mind too much about which way around it is. But there is a convention about whether it's right or left, and that's what this plus and negative sign really mean here. We've got two modes, one of which is going clockwise and one of which is going counterclockwise. OK. Any questions on that? We're going to use that in a moment. There's not really anything particularly profound at this point, but we will get on to something profound in a moment. Any questions on that before? OK. I'm going to keep that up on the board, because we're going to get into this stuff. Forget about that until we get on to reflectometry. So now, finally, we are fulfilling the promise we started the lecture with when we were talking about Faraday rotation. And the main point about Faraday rotation is that your magnetic fields cause a phenomenon called birefringence. Has anyone come across birefringence studying optics before and can give us a concise definition? Yes. Either one of you. You can say it in unison. AUDIENCE: 2, 3-- [INAUDIBLE] refraction [INAUDIBLE], which [INAUDIBLE]. JACK HARE: The object's index of refraction is different, depending on the direction of propagation. AUDIENCE: Yeah. JACK HARE: No. AUDIENCE: Darn. OK. JACK HARE: Was that your guess, too? AUDIENCE: No. JACK HARE: That is a really interesting phenomenon, and it definitely happens, but it's not this. AUDIENCE: I'm trying to make sure these words don't mean the same thing. That the angle of deflection depends on your frequency, which [INAUDIBLE]. JACK HARE: OK. So that was the angle of deflection depends on the frequency. No, that's also true, but that's not a good definition of birefringence. Anyone got any thoughts about what birefringence could be related to? Anything on this board here that makes you think? Anyone online? AUDIENCE: I'll give it a try. JACK HARE: Sorry? AUDIENCE: I'll give it a try. JACK HARE: Yeah, please. AUDIENCE: Birefringence is that there are different refractive indexes for different directions-- not different directions, different components of the material. So it's like a tensor, and then the different components have different refractive indexes. JACK HARE: Yeah, this is very, very close. So it's different refractive indexes for different polarizations, for different directions of the electric field. So everyone had some thought about direction in there, and that was all good. It's not to do with frequency, though of course, this is frequency dependent. But of course, the refractive index for plasma is always frequency dependent. So that's not a unique thing for this solution here. The unique thing about it is that these waves have got different speeds.
So the other thing-- the x mode and the o mode, those are also birefringent as well. They're just not birefringent in a particularly useful way. This is birefringence in a way that we can exploit. And if you ever want to go down a rabbit hole of interesting stuff, there were several Viking burials where they found these Viking warlords buried with lumps of calcite. And people were like, why have they got calcite? It's not a particularly pretty crystal. It's transparent. And you can't really make jewelry out of it. It's quartzy. And it wasn't carved in any case. It was just this lump of calcite. And there is an unproven belief amongst archeologists that this calcite, which is a birefringent material, could be used to navigate on a cloudy day. So when you're sailing across the Atlantic or the North Sea to raid some poor monastery and it's all cloudy-- like, damn. I don't know how to get to this monastery. And they didn't have compasses in those days, at least the Vikings didn't. The Chinese did. So without compasses, this birefringent crystal is very interesting, because you might know that the light from the sun is polarized by scattering. And so although that light isn't directly hitting your ship in the fog, some of that light is getting through. And although the fog is messing with that polarization, there's still going to be some overall polarization there. And by rotating this birefringent crystal in the light that you're getting that's slightly polarized from the sun and looking at markings on a bit of stone, when you rotate the crystal just so, the light is going to come through and the two markings will line up. And then you will know roughly where north is, and then you can go and sail and raid your monastery. So I'm not saying it's true. You can go read some really cool papers on people trying to do this. People have tried to go out in a boat on a cloudy day and navigate like this, and it did not go well. But they didn't have as much practice as the Vikings did because we have GPS these days. And so we don't worry about such things. But it's really cool. So if you like optics, go look up Viking sunstones, they call them. OK. Back to plasmas. So we said we have these two modes inside the plasma. These are the right-hand and left-hand circularly polarized modes, which I'm going to refer to as plus and minus like this. So plus is the clockwise going mode, and minus is the anti-clockwise going mode. If you're confused about why I'm confused about this convention, it's because is it clockwise as you look down the ray of light? Or is it clockwise as you look towards the ray of light? Those are different. And I can never remember which one the convention is for. I think it probably should be as you look down the ray of light, but who knows? And so maybe I've got this the wrong way around. If I've got it the wrong way around, you just swap whether I'm looking down the ray of light or at the ray of light. And then I'll be writing it like so. Note, by the way, that this does depend on the direction of the magnetic field. This is not capital omega squared. It's just capital omega. So the direction of the magnetic field changes which of these modes propagates faster. If you're propagating along the magnetic field, one of your modes is faster than the other. If you're propagating against the magnetic field, the other mode is faster. That's very important.
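A quick numerical check of this direction dependence, with made-up parameters (illustrative only, and with the plus/minus sign convention taken straight from the labelling above): evaluate n plus and n minus with B pointing along, and then against, the propagation direction, and look at the phase difference the two circular modes accumulate over a path. That phase difference, which is twice the rotation angle of a linearly polarized probe, is the quantity the Faraday rotation diagnostic is built on.

```python
import numpy as np
from scipy.constants import e, m_e, epsilon_0, c

# Refractive indices of the two circularly polarized modes for propagation
# along B, following the lecture's labelling:
#   n_plus^2  = 1 - X / (1 + Y),   n_minus^2 = 1 - X / (1 - Y),
# with X = wpe^2/w^2 and Y = Omega_ce/w (signed, so it flips with B direction).
# All plasma parameters here are illustrative assumptions.

def n_plus_minus(n_e, B_parallel, omega):
    X = n_e * e**2 / (epsilon_0 * m_e * omega**2)
    Y = e * B_parallel / (m_e * omega)
    return np.sqrt(1 - X / (1 + Y)), np.sqrt(1 - X / (1 - Y))

n_e, B, L = 1e24, 10.0, 1e-2      # density [m^-3], field [T], path length [m]
omega = 2 * np.pi * c / 532e-9    # a green probing laser, as an example

for B_par in (+B, -B):
    n_p, n_m = n_plus_minus(n_e, B_par, omega)
    dphi = (n_p - n_m) * omega * L / c       # phase lag between the two modes
    print(f"B_par = {B_par:+.1f} T :  n+ - n- = {n_p - n_m:+.3e},  "
          f"phase difference over {L*100:.0f} cm = {dphi:+.3f} rad")
```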
That's going to be what we use to measure both the magnetic field's amplitude and its direction. And we can use this neat technique called Stokes vectors, where a Stokes vector is just a vector of ex and ey, normalized by the square root of ex squared plus ey squared. These Stokes vectors make our life very easy when we want to do the calculations-- same guy as Stokes' theorem. And we can say, looking at this relation between these two, that e plus is going to have something I'm going to call the vector r. And that's going to be 1, I. Oh, I lied. I'm not going to normalize all of them. I can't be bothered to put a square root of 2 in front of this. So just remember there should be a square root of 2. It doesn't really matter. And then the left vector is going to be 1, minus I. And you can see that these have the same relationships between ex and ey. And so these are the Stokes vectors for the right and left circularly polarized light. We also can write some other polarizations in this Stokes vector notation. So if we were polarized entirely in the x direction here, so ey equals 0, this would be the vector x. And that would be equal to 1, 0. And if we have ex equal to 0, this would be the vector capital Y. This would be equal to 0, 1, like that. Now, there are two modes inside the plasma. We can write any arbitrary polarization of our wave as a sum of these two basis vectors, effectively. And so we can switch basis vectors if we want to. So generally, when we're launching light, we don't launch it as a circular polarization. That's quite hard to come by. We launch it with some linear polarization. But this linear polarization is made up of these two circular polarizations. So for example, this x linear polarization is r plus l over 2. The y polarization is r minus l over 2. I don't know why I wrote this again. Oh, well. OK. Maybe it's a good point. So just to draw our geometry another time like this, we've got some k vector, which is at an angle, theta, to the magnetic field, b, like that. And again, for theta equals 0, this is our dispersion relationship. I just want to point out one slightly funny-- I think it's funny-- thing about this. So if we have k in this direction, that means that this wave is a parallel propagation. But of course, it's still a transverse wave. I think people often get this very confused because the words parallel and perpendicular and transverse and longitudinal have similar meanings. So this is a parallel and transverse wave. And there's one more word as well which means something similar. I can't remember which one it is. But yeah, there's lots of different ways. Yeah. AUDIENCE: For the r minus l over 2, wouldn't that be the I in the bottom? JACK HARE: Oh, yeah. I had meant to. There's meant to be an I in here. And it's going to be like that. Or maybe minus that, but you get the idea. No, it's that. Cool. OK. It turns out I derived this for theta equals pi over 2. I didn't derive it. I just showed you it for pi over 2 and theta equals 0. But it actually turns out that the theta equals 0 case applies over a huge range of different conditions here. So in fact, this theta equals 0 case, which we call the quasi-parallel case, isn't good simply for theta equals 0. It's good as long as capital omega over omega times the secant of theta is much, much less than 1. And it turns out that for some reasonable values of omega here and here, this can apply for theta almost up to pi upon 2. So you can use this dispersion relationship, as I said up here, even for relatively large angles.
The waves will propagate as if they had this dispersion relationship, or at least extremely close to it. And that's very, very convenient. They will propagate with that dispersion relationship as long as you take your b to be the component parallel to the propagation. So you replace b with b0 cosine of theta. So now the dispersion relationship here, which has capital omega in it, is to do with the projection of the magnetic field along your wave. So effectively, the wave feels the component of the magnetic field in the direction it's traveling and ignores the other component until it gets very, very close to pi upon 2, when the component along the direction of travel is almost 0. And then we suddenly see this pi over 2 condition here. So this is very, very useful because it means that we only need to have-- for some cylindrical object like this, we can use this dispersion relationship for almost the entire region that we're probing for Faraday rotation. OK. I'm well aware that I've gone over time, so I'm going to leave it here. And we will get on to exactly how we exploit this interesting mathematics and these Stokes vectors in order to measure magnetic fields in the next lecture. So thank you very much. I'll see you-- oh, not on Tuesday, because you all have a student holiday. So for the people at Columbia who are unfamiliar with this idea, we have Monday off. And then to recover from the three-day weekend, the students have a second day off from classes. So I will see you on Thursday.
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_1_Introduction_to_Principles_of_Plasma_Diagnostics.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: Let's talk a little bit about plasma diagnostics. So this very word, diagnostic, is pretty interesting. Gnostic here is coming from knowledge, and dia is a word sort of meaning through. So these are objects that we gain knowledge through. And we're trying to gain knowledge about our plasma. This is the whole purpose of this class. We're trying to gain knowledge about the plasma indirectly. We can't directly look at a plasma and measure its density, or its temperature, or its velocity, or any of the other quantities we're interested in. Instead, we're inferring it from some sort of physical effect the plasma has on something else. And we're used to this kind of measurement with more basic diagnostics. If we think about something like a thermometer, and it's filled with mercury, what's the physical principle that allows us to measure temperature using a thermometer, a liquid metal thermometer? Yeah. Yeah. So the thermal expansion of this metal, which moves it up and down here, and we can calibrate this, and then we can measure temperature. So this is an example of something that we are relatively familiar with. Thermal expansion of metal, it's something we can see with our eyes. We can pick up a thermometer. We can touch it. This is rarely the case with something like a plasma. Our intuition usually fails us. And so we have to fall back on mathematics, or we have to build our intuition for plasma in the same way that we built our intuition for things like thermometers as we were growing up. For example, here we were talking about the expansion of metal. But in a plasma, we may be looking at something as obscure as the polarization of a beam of light telling us something about magnetic fields, something that immediately it isn't obvious that these things are linked at all. We're going to be assuming very good plasma physics knowledge in this class. So the prerequisite for this class is for students who have taken 22.611. So that's a class that uses Francis Chen's Introduction to Plasma Physics textbook. And we'll be drawing extensively from that without redoing any derivations. This is a very hard class to take at the same time as 22.611. I recommend you don't try, but I know that some of you are going to. And that's fine. But just as a warning, it's going to be very, very difficult. If you are a little bit shaky in your plasma physics, I suggest revising your electromagnetism, your waves in plasmas, and things like Landau damping, especially later on when we get on to Thomson scattering. Any questions on any of that? So why do we bother with diagnostics in the first place? Well, there's no point in making a plasma that you can't actually measure. So we could have a huge tokamak, or a giant laser, or a big pulse power machine, and we could make the world's greatest plasma. But if we don't know anything about it, it's completely pointless. And so plasma experiments really goes hand in hand with plasma diagnostics. And these diagnostics are extremely important in the history of fusion and in the history of plasma physics. So one example that I told the students about in fusion energy last semester that they'll probably remember is the zeta pinch. So this was a toroidal Z-pinch type device that was built in the UK in the '50s. And the Brits got very excited, because when they cranked up the current on their toroidal pinch, they started measuring bursts of neutrons. And they said, aha! We see neutrons. Therefore, we're getting fusion. 
Now, there was a problem with this, because when they looked at it in more detail, they realized that the neutron burst they were getting was not isotropic, so it couldn't be thermonuclear in origin, and also, the temperature of their plasma was far too low. So they didn't have sufficient diagnostics to realize that these neutrons were not coming from a thermonuclear fusion process, but were, in fact, coming from MHD instabilities within the plasma. This was extremely embarrassing, but the upshot of it was that this team developed a technique called Thomson scattering, which uses a focused laser beam to measure the temperature of a plasma. And they developed it just in time for the Russians to announce their new device, which they called a tokamak, which they claimed had temperatures over a kiloelectron volt, which was about 10 times higher than any other device that anyone had been trying to do fusion with at the time. This, again, caused a huge furor because lots of groups from around the world didn't believe it. They said, how have the Russians managed to get such an incredible result from their machine? And so the team from the UK who had worked on ZETA and then worked on Thomson scattering flew over to Moscow with their ruby laser and made the first Thomson scattering measurements on a tokamak, confirming that it had temperatures of a kiloelectron volt. And the rest is kind of history. Immediately everywhere around the world, people started converting their other machines into tokamaks. The Princeton Model C stellarator, they cut off all the long loopy bits and just shoved two halves together, and it became the Princeton Model C tokamak instead. And this happened overnight. And we're still living in a post-Thomson scattering, post-tokamak world, because all of our major devices now are tokamaks. Another good example is the National Ignition Facility. The National Ignition Facility was carefully designed using the best simulations available-- these incredible radiation hydrodynamics multigroup, everything you can think of all the best physics in the world. And they were very confident that when they turned on the National Ignition Facility they were going to get ignition. They were so confident they put it in the name of the experiment. And so back in 2009 or so, they started running the first experiments. And they said, we don't really need any diagnostics, because it's going to work. It's obvious. We know exactly what's happening. We'll just have, basically, some neutron counters so that we can go ding, we got to ignition, and tick the box, and move on. This didn't happen. When they turned on NIF, the signal was barely measurable. There was almost no neutrons at all coming out of it, and they had no idea why. And the reason they had no idea why is because they didn't have any diagnostics to tell them what was actually happening. They only had diagnostics to tell them whether they were successful or not. And the answer was they weren't. So no one knew what was going on, but a decade later, they now have very good diagnostics at NIF. And it's no coincidence that, of course, we now have ignition as well, because they understand a great deal about the physics going on inside this experiment. So you really need to have good diagnostics to do good plasma physics and to build good fusion reactors as well. Who needs to know about diagnostics? Well, obviously, experimenters, we kind of-- I imagine many of the people in this room are doing experimental plasma physics. 
And that's why they're here. They want to understand the plasmas that their machines are producing. They want to probe the mysteries of the universe and get unlimited clean renewable energy and all that sort of stuff. But they also want to understand the uncertainty involved in their measurements. When someone says to you, the temperature on this tokamak is 1 kiloelectron volt, what does that mean? Where is it 1 kiloelectron volt? What are the error bars on that? Is that true for the entire discharge? There's huge limitations in the diagnostics we have. And we need to understand those limitations. If you're doing computational work, if you're doing large simulations, you might be interested in diagnostics, because they serve as a primary way of validating your code. Your code is only really worthwhile if it makes predictions that agree with reality. And so you may want to understand how diagnosticians are making these measurements so that they can understand whether your code is right or not. And you may also be motivated to make useful outputs from a computer simulation. You may get a very large amount of data that's very hard to distill down. One way to distill that down is to think about the diagnostics that experimentalists use and reduce your data set so it looks a bit more like those diagnostics. That gives you an easier way of comparing your data with reality. If you're a theorist, I think you should still know about diagnostics. I think you should be very skeptical about your experimental colleagues. I don't think you should believe them when they say the temperature is 1 kiloelectron volt. You should ask where, and when, and how sure are you? Because these people are trying to tell you something, and you're going to go out and tell all your colleagues that your theory has been proved. And if they were wrong, you're going to be very, very embarrassed. You might also be motivated to design theories which have testable predictions. So a lot of theoretical work is phrased in a language which is very different from the way that experimentalists speak. And that means that when experimentalists read your papers, they don't understand what's going on. And they also don't know how exactly they can test your predictions. If you know how diagnosticians think, you can write papers which are clearer for them. They can go test your theory, prove it right, and then you can go collect your Nobel Prize. So it's pretty important to know how diagnostics work there. All right, I've been talking for a bit. So I'll pause and see if there's any questions so far. So what sort of things-- there we are. What can we measure? Before we get onto what can we measure, what would we like to measure? What would a theorist like to ask us for? So what can we measure? What do we want? Well, if it's not too much to ask, we'd like to know the position and velocity of every single particle in the plasma at every point in space and time. So this would be the Klimontovich type distribution here. We'd have sort of f of x as a vector, v as a vector, and time. And this would simply look like the sum of a series of delta functions, which are the position of particle i, the velocity of particle i, and both of those should be functions of time as well. And our full distribution function is just a sum over all of these particles. So this is three spatial dimensions, three velocity dimensions, plus time. Now, of course, the theorists might tell you they want that, but they don't really. No one wants that.
It's absolutely impossible to work with. This is a mess. So what theorists really want is they want some sort of moments of this. They would like to have a simplified version of this. At best, they'd like to have a distribution function which is smooth. This one is very, very spiky and hard to work with. But they may also be willing to get away with letting you give them some thermodynamic variables, some variables where we've taken moments over this. So if we look at some of the moments that we can take here, we would have things like the density. And this is an integral over the velocity part of your distribution function, d3v. That's the density. Or you might want to know things like the average velocity of your particles. And you can keep going with this by multiplying your distribution function by larger and larger powers of v. And you start getting tensors and all sorts of exciting things like that. So I'm just going to put et cetera before we get carried away and forget ourselves. You might also want to know things like magnetic fields. These are pretty important in plasmas because plasmas move in response to magnetic fields. These magnetic fields are caused by currents, fundamentally, electric currents, and those electric currents can be external. So they could be provided by a large set of magnets that surround your plasma. Or they could be internal. They could be provided by the plasma motion itself-- probably getting a little bit low there. Let's go up. They're clearly very, very important. Now, in reality, even if we can measure density and velocity and magnetic fields, we're not really going to be able to get them at every point in space and time. We're quite limited. So diagnostic limitations: we're going to end up with things which are sparse in space-- I'm just going to use vector x for space-- and time. So we don't have it all the time. This may be because our diagnostic only measures at a single point as a function of time, or it may be a picture which measures spatial variation at a single time, or it may be some combination of those here. Our diagnostic is often going to be what we call line integrated. This is especially true for diagnostics which use light. Light, famously, mostly travels in straight lines, which means that you don't get to choose where the light comes from that you're measuring. It's going to be along some chord. So we could say what we're measuring is not the density, but the integral of the density along a chord, dl here, which is the path of our ray of light through the plasma. So that means as opposed to knowing n at a specific point, you know the integral of n over some points. If you happen to know some symmetry about your plasma, then maybe you can work out what n is locally. But otherwise, you're kind of stuck with this measurement instead. Almost certainly your measurement is going to be filtered in some way. And I'm using the word filtered here very broadly. This means that your instrument is in some way an imperfect measurement of the world. And so there is some sort of response function that you have to convolve or deconvolve from your instrument or convolve your synthetic data with in order to understand what it does. This could be things like a frequency response, if you're measuring voltages. This could be things like a resolution if you're taking images with a camera, that sort of thing. So almost everything has some sort of response function.
And it's also probably going to depend on physics that we just don't understand very well, so depend on poorly understood physics. A great example of this is spectroscopy, so atomic data. Anyone who's ever done spectroscopy will know that there's a lot of electrons. There's lots of places they can be. They can interact in lots of ways. And so it's very hard to actually make predictions for the spectrum of even something as simple as helium. And so if we're trying to understand spectroscopy, we need to use this atomic data, but we should be cautious of the atomic data, which is mostly provided by theorists as well. So we should be cautious anyway. But we should be cautious of the atomic data because it forms part of our interpretation. So we may not just have errors due to some of these other things. We may have errors due to how we interpret this. So we should always be on the lookout for like, hmm, that looks weird. I wonder why that's going on and see if we can check whether this is a model dependent effect or whether it's to do with our diagnostic instead. So in this lecture course, we are focusing on the principles. Professor Hutchinson put this in the title of his book after teaching this course for many years. And he explains what he means by principles in the introduction. And I think it's very, very wise. He says that we're not going to focus on the practical implementations of diagnostics. Those things change over time. There's always a new camera that works in a new way that's better. But what doesn't change is what a plasma is, the equations that govern it, and some of the techniques that we can use. Now, there's always new techniques that people are coming up with. This is a very, very lively field. There's a conference every year, High Temperature Plasma Diagnostics, which is well worth going to where people present new diagnostics, new interpretations, new ways of measurement. And it's very, very dynamic. There are lots of different ways to try and measure a plasma. There are some techniques which are very successful, but have only been implemented in one or two institutions, just simply because of the cost. There's some techniques that are very successful and have only been implemented because of institutional inertia, because of the specialized knowledge to be used. So it's always worth reading up some papers you'll never find-- you'll never know. You'll probably find that the Russians did it back in the '70s. This is true with almost every diagnostic you can think of. And they published it in some obscure journal. So it's worth looking out to see whether someone's already tried to do what you're trying to do before. So any questions on this sort of introduction? Then we can have an interactive part of the class. Yes. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. Imagine you've got a plasma, which is glowing. So it's emitting light. And I have a camera. And I'm looking at this plasma. And the plasma is not uniform. Maybe the core of it is very hot, like a tokamak. And the edge is very cold. But that means along the line of sight between the plasma and your camera over there, you are adding up the emission from each little blob of plasma along your line of sight. So you're adding up emission from some hot bits of plasma, some cold bits of plasma. So if you look at the spectrum of that, you're going to see lines that correspond to a cold plasma and a hot plasma simultaneously. That doesn't make any sense. So you need to know that that's happening. It gets worse than that, of course. 
Different bits of plasma can be absorbing as well. And so you may have attenuation of some of those lines as it goes through. This becomes radiation transport, and it's like a whole mess. So that's one simple example there that just as you look through a plasma, you're going to get very different properties, all contributing to your final measurement. This isn't the case for something that's very local, like a Langmuir probe, or a B-dot probe, or Thomson scattering. But it's very much the case for, I would say, about half the diagnostics we're going to talk about in this course. They're going to suffer from line integration. And the reason is plasma is electromagnetic. Light is electromagnetic. So we use light to probe plasmas, and light has this problem of line integration. So it almost always comes up, no matter what you try and do. Other questions? How's it going, Columbia? Good. Thumbs up. So what I like to do-- I found when I taught this class before there's a huge array of people who take it, which is super exciting. And they all work on very different things. And this is my opportunity to find out some of the stuff you work on as we play pin your plasma to the board. So as you can all see, we have a temperature scale on the side here that goes from 0.1 eV-- and yeah, you can still get plasma there-- up to 10 keV and above, whatever. And then we've got densities here. If you thought that was logarithmic, they go from 10 to 16 particles per cubic meter to 10 to the 32 particles per cubic meter here. So 16 orders of magnitude. And yet we're all sitting in the same room trying to learn the same stuff. That's one of the reasons why this topic is very fascinating, and also quite hard to teach at times. I've also tried to get a third dimension in for you guys. We're going to be using color. And we're looking at the magnetic field here. So I've got a magnetic field going from 0.01, which I think most of us agree is basically 0, all the way up to 100 tesla or so, and I know some places where you get 100 tesla in plasma physics. I don't know whether anyone working here works on those. But the idea is you're either going to volunteer, or I'm going to start pointing. And you're going to come up, if you're here in the room, or you're going to tell me if you're over there in Columbia, where your plasma falls on this board. So we'll just draw little circles. So say you've got a plasma that is 100 tesla and it lives up in this corner here. You would just draw a little blob, being like, that's my plasma. You'd put a number in, and by the side of it, you'd maybe write something about your plasma. Maybe it's a magnetized ICF implosion on NIF. And you'd maybe jot down a couple of diagnostics that you work on. If you don't work on any diagnostics at the moment, that's fine. Maybe just mention a diagnostic you know is used. And you don't have to be exhaustive. If you mention the tokamak, you don't have to list every diagnostic on the tokamak, just the one that you work on. And let someone else have a go at mentioning some of the other ones. So I'm going to erase that now, because I think that's slightly optimistic, even for a magnetized ICF shot. And I'm now going to turn around and see someone with their hand up as the first willing volunteer. Good. As you're walking up, what's your name and your department? AUDIENCE: I've worked with Nathan Howard and Pablo Rodriguez Fernandez. JACK HARE: Can the folks at Columbia hear him OK if I stand nearby or is it very quiet? Oh, dear. You should have this then.
AUDIENCE: OK. Let's pull you up. AUDIENCE: Hello, everybody. I'm Vince. I work with Nathan Howard and Pablo Rodriguez Fernandez on modeling turbulent transport in the Spark reactor. I don't remember off the top of my head the particle density. I think it's in this range. Just a circle you said? Or like-- JACK HARE: Yeah, where's my little [INAUDIBLE]?? I have 10 to the 20 [INAUDIBLE]. AUDIENCE: 10 to the 20. OK. JACK HARE: 10 to the 20. Yeah, cool. Cool. But put a little number on it like one, and then just write on the side what's your diagnostic [INAUDIBLE]. Yeah, I should not have made it so high. AUDIENCE: Let's see. I don't work with any diagnostics at the moment. I work with heat flux data that's generated via simulation code, like TGLF and gyrokinetic [INAUDIBLE]. JACK HARE: Anyone else know [INAUDIBLE]?? AUDIENCE: I don't-- I don't know off the top of my head. JACK HARE: Well, let's go for that, like a Langmuir probe. AUDIENCE: Is that how you spell it? JACK HARE: All right, next volunteer. Let's have one from Columbia, please. Hey, Nigel. AUDIENCE: Hello. I work with plasmas that are 10 to the 19 density, 100 eV with a magnetic field of 0.1 on that order of magnitude. JACK HARE: So what was that? 10 to 19, and 100 eV. AUDIENCE: Yes. JACK HARE: So kind of about here? AUDIENCE: Yeah. JACK HARE: And so what's this plasma? AUDIENCE: That's the HBT plasma. That's the tokamak that we have at Columbia. And I work with Langmuir probe arrays, Mirnov coils, and tangential extreme electromag-- extreme ultraviolet light. So TUV. JACK HARE: And Mirnov, which I'm just going to write as B-dot. AUDIENCE: Yeah, B-dots. JACK HARE: They're sort of B-dots. Great! A volunteer from in the room. Come on up. I don't know if I can have both of these working at the same time. I think there are spares. There you go. AUDIENCE: Hi. My name is Amelia. I do some work with DIII-D on a Lyman-alpha diagnostic, and that's roughly here-ish in operating space. JACK HARE: Is it that low? AUDIENCE: I think it's like a little higher. It's like four to five often. It's good enough for me. JACK HARE: OK. AUDIENCE: And then Lyman-alpha, the spectral line of hydrogen, as we mentioned. JACK HARE: Great. Thank you. We've got a volunteer from Columbia. AUDIENCE: Do I need raise my hand? Or can I just-- JACK HARE: Oh, yeah. Please, start. Please go for it. AUDIENCE: Hi. I'm Jacob. I'm at Columbia. I just got done working with some DIII-D plasmas as well. JACK HARE: Nice. AUDIENCE: Roughly the same area, I'd say. And I was working with bolometers. JACK HARE: Bolometers-- cool! What do bolometers measure? AUDIENCE: Bolometers were measuring radiation fluxes, mostly heat loads, and it's fast bolometers. So they're AXUVs. JACK HARE: Awesome. Thank you. Anyone in the room? Come on. One of the NIF people. Come on. We need to push up into this corner. Ok. AUDIENCE: [INAUDIBLE] Hi, I'm Chris. So I'm NSC HGDP [INAUDIBLE] confinement fusion. And our temperatures are over here, 3 to 10 keV, I'm going to say. And then I work in units of rho r, so 300 to 600. JACK HARE: [INAUDIBLE] AUDIENCE: And that's where I don't know if they operate in magnetic fields. But I'm going to say Skyler does. 10, 100 tesla. JACK HARE: Yeah, no. I know it is. You don't start with that. Have you been doing the [INAUDIBLE]?? Go for it. Yeah, yeah, yeah. AUDIENCE: I'm going to go with-- JACK HARE: [INAUDIBLE] the yellow one. There we go. AUDIENCE: NIF extreme, and then what are we up to? Four? So for mine, I'll put neutron spectrometry-- neutron. 
What about you guys? AUDIENCE: Proton radiography. JACK HARE: Thomson scattering. Good, good. Go on, then. We've got four there. Where's the mic? Cool. All right, do we have anyone else from Columbia willing to volunteer? AUDIENCE: Yeah. AUDIENCE: I also do-- I hope you can hear me. I also do DIII-D experiments, particularly runaway electron experiments. And we use a gamma ray imager and also some hard X-ray detectors. So they're scintillators, which register hard X-rays, HXR. JACK HARE: Cool. I'm going to put that on as gamma rays and hard X-rays. That's good stuff. Anyone in the room? Yeah, go for it. AUDIENCE: I'm Stan. I'm in NSCI. I also work on Spark. And I do soft X-ray. JACK HARE: What are you working on? AUDIENCE: So the ones I'm working on are diamond based, because specifically for Spark, a traditional silicon diode would die from the neutrons. So we're using a diamond-based detector instead. And they're in very early stage development. Although, our collaborators in Rome have had some promising results so far. JACK HARE: Awesome. Yeah. AUDIENCE: Hi. I'm Nikola. I'm working on arc stuff right now, but I wanted to mention what I was working on for my master's back in Serbia. I was working with pseudospark discharges, which were like ionized gases more than actual plasmas. They were somewhere in like this region-ish here. What number is it? JACK HARE: 5. AUDIENCE: 5. And what I was doing with those was vacuum ultraviolet spectroscopy. And those were like not magnetized. JACK HARE: All right, thank you. Anyone from Columbia? AUDIENCE: I can say one. So CNT is no longer operational. But I'm working on upgrading it. And the operational density was of the order of 10 to the 16th. And magnetic field 0.06 tesla. I think temperature, order of an eV. I'm not too sure. JACK HARE: OK. So kind of like here? AUDIENCE: Yeah, probably low, yeah. JACK HARE: And you said 0.06. I'm going to call that 0.1. We'll round up. AUDIENCE: Yeah, sounds good. Sounds good. JACK HARE: Cool. So this is CNT plus plus, or whatever the new name will be. AUDIENCE: Exactly, exactly. JACK HARE: Cool. And what are you going to be diagnosing it with? AUDIENCE: So they had an electron beam that also was paired with a fluorescent rod to map the magnetic field surfaces. JACK HARE: Nice. Yeah, I saw them doing that technique on W7-X as well. It's really cool. AUDIENCE: Nice. JACK HARE: All right. AUDIENCE: Hi. I'm Lainey. So I actually work in the aerospace department. And I work with non-equilibrium plasmas. So I actually am in-- we go from-- we work from low pressure to atmospheric pressure in this general area. But in a nonthermal plasma, our electrons are closer to the 10 eV, anywhere from 1 to 10 eV. And our gas temperature is much lower, so kind of in that region right there. And then we typically use OES, Optical Emission Spectroscopy, or FTIR, Fourier Transform Infrared Spectroscopy. JACK HARE: Thank you. Yeah, so I remember doing this class 2 years ago, and there were some folks from [INAUDIBLE] as well, who came and talked about those plasmas, which is why this scale now goes down to what I consider to be quite a low density. But there we go. And that's why it's kind of fun that we can go all the way up, much higher densities as well. Any more volunteers in the room or from Columbia? Lansing, come on up. AUDIENCE: Hello, everybody. I'm Lansing. I work with Jack. My project is magnetic reconnection on Z. It's in collaboration with Sandia National Labs.
So on the Z machine, the largest pulsed power machine, I think we're achieving densities around 10 to the 24 to 10 to the 25 electrons per cubic meter in the reconnection layer. And I believe our temperature is-- for the bulk of the plasma, I think are 10's of eV, but for plasmoids can get up to a little bit above 100 eV. So maybe somewhere like in this region-ish. And then I believe the magnetic fields we're achieving are 10's of tesla, typically. So the work I've been focusing on-- I'll give this a number, number eight. Lately, the diagnostics that I've been working on, it's all been synthetic, so just modeling. But it's preparation for an upcoming shot. It's on shadowgraphy, so using a probing laser beam to try to indirectly constrain the electron density. Thank you. JACK HARE: All right. Anyone else? Yeah, come on up. AUDIENCE: So I work with inertial electrostatic confinement fusion, which is a bit different from this. I would assume way up here somewhere, maybe in here. And the range is 40 to 100 keV, but the pressure is incredibly low, very low. JACK HARE: And try and keep the microphone up. AUDIENCE: Oh, sorry. So I guess very similar to NIF, we're only using neutron detectors, kind of like Hc3. We're looking to upgrade that soon. JACK HARE: Right, thank you. Anyone from Columbia? AUDIENCE: Hey, I'm Daniel. I'm also working on Spark, specifically on equilibrium dynamics. And for that, most of what I'm looking at are flux loops, so finding where the plasma boundary actually is inside the vacuum vessel. JACK HARE: Nice. Good stuff. Thank you. Anyone in the room? Yeah. Come on up. AUDIENCE: Hi. I'm Jacob. Similar to Amelia, I'm doing ASDEX Upgrade in pedestal. So I'll put that in her same bubble. And I work on correlation ECE to look at temperature fluctuations. JACK HARE: What's ECE? AUDIENCE: Electron Cyclotron Emission. JACK HARE: We like our acronyms, but we should spell them out at least once. Cool. Anyone else? Yeah. Come on up. AUDIENCE: Hi. I'm Leo. I'm working on Spark. I'm looking at the SOL in the virtual areas. So depending on how detached it is, somewhere right here. And Langmuir-- JACK HARE: All Langmuir? Cool. AUDIENCE: Well, I mean synthetics. Probably doesn't exist yet. [LAUGHTER] JACK HARE: Good. At least they're planning to have some diagnostics. Nice. All right, anyone else? Yeah. Come on up. AUDIENCE: So I work with plasma, but it was actually simulation for accelerators. So the magnetic field for some simulation for plasma wakefield accelerator. It went up for some cases, because we were trying to scale up to 15 TeV scaled up to a few thousand tesla. So it's a simulation, so I would put it-- JACK HARE: Another color? There you go. That's yours. [LAUGHTER] AUDIENCE: But the diagnostic is all from simulation. So it's like a particle histogram and photon energy. JACK HARE: [INAUDIBLE] AUDIENCE: Well, it depends on what collision you want to do. You could make them really ultra tight, and that would put them somewhere like here. For the temperature, I'm not so sure. But the ultimate goal is to scale it up to 15 TeV. JACK HARE: [INAUDIBLE] AUDIENCE: It would be good if we had a log scale. JACK HARE: All right, thank you. Good stuff. Anyone else from Columbia? I know there's-- I think there's like nine of you, and there's 30 of us. So I've been trying to balance it a little bit. So I may have got through you all now. Hey, I see your hand. AUDIENCE: Hey, yeah. So I've done work on C2W, which sits kind of in between numbers 2 and 3 blobs, so 10 to the 19. 
And yeah, just under 1 keV. And then the magnetic field is just under 1 tesla. JACK HARE: 1 Tesla, so I can put it in like this. So what are you using on C2W? AUDIENCE: Yeah. So a lot of them have been mentioned, but I don't think interferometry has been mentioned, or a bias bolometer, so like energy analyzers. JACK HARE: Cool. I'll put both of those in, bias bolometer. Nice. All right, anyone else? Anyone from Columbia? All right, it looks like we're all good. So I feel like the purpose of this exercise is just to get a feel for some of the different things that we might end up talking about. I've got a few other things on here to add. So we've already had Spark and NIF. There's a very cool technique for doing fusion using pulse power. It is dear to my heart, which is magnetized liner inertial fusion. This is an interesting one because it works using magnetic field compression. So we start out with fields of about 10 tesla or so. And we start out at densities of about 10 to the 26, which is about here, and temperatures of about 300 eV. But then as we compress, we obviously get to much higher densities, much higher temperatures, and we're also compressing the magnetic field. So in fact, the number I've got written on here means I'm allowed to use the green chalk as well. And we're going to go up to 10 to 29 in density and 8 keV here. So this is magLIF. And there's lots of cool diagnostics that they use on magLIF. But one of the ones which I think is rather neat is that they have some neutron spectrometers that they can actually use to measure the magnetic fields because they fill this with deuterium. And you get deuterium reactions that make tritium. And then when the tritium reacts, it's already moved some distance along the magnetic field. And so you actually get an anisotropic neutron spectrum from these secondary ET reactions. So you're able to use neutron spectroscopy to get the B-field out, which is not something that you would necessarily expect to be able to do. Another one I've got on here is a Hall thruster. Does anyone work on Hall thrusters here? This is a type of plasma thruster for satellites. These are working with relatively low magnetic fields. I got 0.01 tesla written here. 10 to the 18 kind of densities. And they're also working at about 30 eV, so somewhere here, so Hall thruster. I don't a huge amount about the diagnostics used on these. But the sorts of diagnostics people would probably use are things like Langmuir probes, again, because these are relatively low temperature, low density plasmas. So you can get away with sticking stuff inside. Just down the road from here, in fact, not very far from here at all-- I'm not sure I can point directly at it right now-- is a machine called [INAUDIBLE],, which is a helicon plasma. That's used for plasma material interactions. That's got magnetic fields of about 0.1 tesla, densities around about 10 to 17, and relatively low temperatures, about 5 eV. It also sits around about here. So this is a helicon, which is a sort of RF-driven plasma. And again, the diagnostics that tend to be used in that are things like Langmuir probes. And then the work that I do, both before I came here and now the machine we're building is using pulsed power. So this is similar to the stuff that Lansing was talking about on Z. And what we were talking about with magLIF, but on a more modest scale, because we're only a university. And so we tend to be in a regime where we're dealing with fields of up to about a 10 tesla or so. 
We've got densities of around-- I haven't done this in CC. So it's 10 to the 23 or so, and temperatures up to about 100 eV. So quite similar to magLIF and the Mars shot. And we use diagnostics, some of which have already been mentioned, so interferometry, Faraday rotation imaging, which is this technique for measuring magnetic fields using the polarization of light, and my favorite diagnostic, which is why we spend so long talking about it, Thomson scattering. But that's kind of where I'm coming from, just so you know what my background is. That obviously feeds into my biases about what exactly I spend more or less time teaching. But I will try and cover as much as I can this parameter space and some of the diagnostics that you've mentioned. But the thing about teaching a class like this that I always find is whenever I teach on some specific diagnostic, there'll be someone in the class who's doing their PhD on it, and they know much more about it than me. And I know much, much less about it. And that's great. This is what makes this class enjoyable. So please, if I say something that's wrong, or if you think you've got something to add, or "These days, we do it like that. That was 20 years ago," just shout out, and please add to the conversation, because I know I'm not the expert on this stuff. Most of the time, I'm using a textbook to try and learn about it myself. There are often not very many good review papers either. So if you a good review paper for a diagnostic, I've never found a good one for an electron cyclotron emission diagnostics, for example, then let me know, and I can read that, and then I can regurgitate it to the rest of the class and look very intelligent. So that would help me a great deal. But I'll pause here. If anyone has any questions, and then we'll probably just leave it for today. I know it's a little bit early, but I think we can all do with a little bit of a break in our first week of the semester. So any questions? Yes. Yes. I mean, I rushed through this a little bit. So the subtlety is that they were using-- they weren't looking-- they're not looking at the DD neutrons. Those are isotropic. So they fill it with deuterium gas only. There's no tritium in there. And they look at the DD neutrons. That's isotropic on magLIF. It wasn't on ZETA. So on magLIF, it is thermal. What's cool is that DD reaction can produce a tritium ion. That tritium ion is energetic, but these magnetic fields are huge. So it is magnetized. So in fact, the tritium moves preferentially in one direction or in two directions along the field lines. So when it collides with the deuterium and produces a distinctive neutron at 14 MeV, those neutrons are now coming out anisotropically. And the strength of that anisotropy can be used to measure the magnetic field, because it tells you how confined the tritiums are in a certain direction, which is pretty neat. So it's not a problem for magLIF, because the neutrons are isotropic. And in a real magLIF, if you wanted to do magLIF as a power plant, then you just do [INAUDIBLE] fusion, and you wouldn't see this anisotropy at all because all the fusion would happen. Any other questions? All right, thank you very much. And I'll see you on Tuesday.
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_17_Proton_Imaging.txt
[SQUEAKING] [CLICKING] [SQUEAKING] [RUSTLING] [CLICKING] PROFESSOR: So today we are going to be talking about proton radiography and proton imaging. So we're starting a series of three lectures in which we're looking at how to use particles to probe the plasma, as opposed to what we did previously. You remember we used electromagnetic radiation and we looked at the emission from the plasma, but now we're actively pushing particles through the plasma and seeing what happens. So imagine-- and we'll talk about exactly how you do this later on-- but you have some source of protons, a beam of protons, with some spread. Maybe it's emitting into 4 pi steradians, maybe it's a tightly focused beam. It doesn't really matter. And you have some plasma here. And maybe within that plasma, you have some magnetic fields. So you have magnetic field lines that maybe look toroidal from the point of view of the protons here. So they're looping like this. Then as our protons go through here, they're going to feel a force, which is equal to the charge-- I'll just write Ze-- times v cross B. But they've got velocity in this direction. For example here, the magnetic field is pointing into the page, and so the particles are going to feel the force upwards. And these particles here are going to feel a force downwards like this. And if we put some sort of detector some distance away, we should see that our source, which maybe was initially uniform, so we had a uniform number of protons per centimeter squared, millimeter squared, now that uniform intensity is going to be changed. So for example, if we draw a little line drawing of intensity over the initial intensity in the absence of this plasma, our intensity profile would just be 1. There's nothing to change the intensity. But in this case here we can see that this magnetic field configuration has created a void. It's pushed the protons outwards, effectively defocusing them. So we see that we're going to have relatively few protons in the middle. We're going to have a pile up of protons at the edge. And then far away from the plasma, we're just going to have our standard fluence of 1 because there'll be some particles that just go around the edge here. And here in this little drawing, it's going to be nice and symmetric. So what this would look like if we looked front on potentially is a bright ring where we have lots of protons and a void in the center. That's the magnetic fields. We could also have a plasma that has electric fields as well. And although we think electric fields are quite small due to Debye shielding, on short timescales in HED plasmas where these are often used, the electric fields can be relatively large because the plasma does not have to be quasi-neutral, so we could have some sort of electric field like this. And then our electric field would also put a force on these protons. In this case, I've drawn it in the same way so the electric field is pushing the protons out from here. So if I draw some more protons coming from a point source and I draw some trajectories going into the plasma, I'll have protons which are being bent upwards and downwards like this. And so on my detector I could, for example, see a very similar pattern, where I have a void in the middle, I have some regions of increased fluence on the limbs, and then I have a region where there's undisturbed fluence because I've got particles which are just going around. So this is a technique called proton imaging. It's a way of imaging electric and magnetic fields.
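Writing out the force that does the deflecting here (Z = 1 for protons; both the electric and magnetic terms were just described):

    \mathbf{F} = Ze\left(\mathbf{E} + \mathbf{v}\times\mathbf{B}\right)

For the field drawn on the board, v cross B pushes the protons outward, which is what hollows out the fluence in the middle and piles it up in a ring.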
But of course, it feels both the electric and the magnetic field. So that's one fundamental limitation with it, is that you're never quite sure whether your deflection is due to electric or magnetic fields. There is a third form of proton imaging, which is technically called proton radiography. Proton radiography, the radiography here is to make you think about X-rays and X-ray radiography, which is where you have your beam of particles, and you have some very, very dense plasma here. And because it's so dense, these particles don't just stream through it and feel the electric or magnetic fields, the particles start scattering, and colliding, and maybe they never escape out the other side. And then on your detector, you would have, similar to an X-ray, an image which, for example, has a region which is a void where there are no protons, and that's because all the protons have been scattered. So again, this only happens at high densities. At lower densities, the particles should stream through. Unfortunately a lot of people call this proton imaging technique proton radiography because that's what it was initially called, but it's not a radiographic technique. Radiography refers to your beam being attenuated by density in the way. So it's not really the right name, but you'll see a lot of people call this proton radiography. Some people have been trying to call it proton imaging instead. Even in the case where you don't have such a high density, you may still end up with enough density in this plasma that it will knock a particle off course, and that will lead to some sort of blurring. So these weak collisions can lead to a blurring of the image. So you may still want to take into account the [INAUDIBLE] a little bit. OK. And the references for this lecture are two papers. The first one is by Kugland. You can find it in Review of Scientific Instruments, RSI, from 2012. A lot of the mathematics we'll be using is based on this paper. And very recently, there was a review article by Schaeffer. It hasn't been published yet. It's on the arXiv-- it was posted last year. And this review article is really, really good at the nitty gritty of how proton radiography is actually done, and we'll talk an awful lot, or a little bit at least, about how these sources of protons are made later on. For now, for the first bit, we're just going to assume there is some source of protons. So before we get on to a more mathematical treatment of this, does anyone have any questions? Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, absolutely right. So at the edges here, where they have less distance to travel through, you could have something that gets knocked around, but still gets through. But maybe another particle traveling a very similar path doesn't get through, and so you'd have grayscale here. So just like with X-rays, yeah. Other questions? Any questions online? OK, so mathematically this does look very similar to shadowgraphy, but there are some differences. So I'm going to go through the mathematics again. So you may notice some similarities, some differences in notation. That's all OK. So we're going to start by making some assumptions. Our first assumption is that we have a point source. In reality this means that if we have some source size here, which has a size, say, d source, that has to be much, much less than a, the length scale of our plasma.
Because if that's true-- we have some finite source size here and some very big plasma-- from the perspective of the plasma, this will roughly be a point. So we can make this more rigorous by introducing a dimensionless parameter and insisting that parameter is small. This is a pretty good approximation. The next thing we need to assume is a uniform beam, which is to say that we have the same number of protons coming through every little infinitesimal solid angle unit here. So that in the absence of any plasma, when I put my detector here, I do, in fact, get a nice uniform signal on it. So I over I0 is just 1 here. If my signal is not uniform like this, it complicates the interpretation. It turns out you can do something with a non-uniform beam of protons, but mathematically it's harder. So we're going to insist on a uniform beam. So we'll say the gradient of the beam intensity is equal to 0, at least over the target, over the plasma, I guess. OK. Third approximation we'll make is the paraxial approximation from geometric optics. The paraxial approximation is basically saying that although these particles are following some path that we can call dL, our detector is relatively small. And so most of the particles are going almost parallel to z hat here. So this means we can approximate any path integrals as just integrals in the z direction, and this also means we can approximate angles like tan alpha to be approximately sine alpha to be approximately alpha. This makes the mathematics very easy as well. And the final assumption we'll make is that all protons are test particles. So you may have seen a test particle formalism in some bits of plasma theory. The idea is that your test particle merely experiences the fields inside your plasma. It doesn't contribute to them. So the fact that we have this beam of protons streaming through here, we assume it has negligible perturbation on the plasma itself. So we have no deflection, for example, between protons within the beam. And this is a statement that the average potential that the protons are creating divided by their kinetic energy is much, much less than 1. That is to say they're going so fast that even if they do feel a small electric field that deflects them slightly, it's negligible because they're moving so fast. It doesn't change their overall proton kinetic energy here. OK, any questions on assumptions? Yes. AUDIENCE: [INAUDIBLE] PROFESSOR: Yeah, the paraxial regime takes care of that. Yeah, exactly. But you're right, but we're really talking about less than a degree subtended. So it's very, very small angle. AUDIENCE: [INAUDIBLE] PROFESSOR: The only place that appeared so far is we've assumed that it's very, very hot. So these protons tend to be, and we'll get on to the generation mechanisms, but these tend to be mega electron volt energy protons here. So they're much, much faster than any of the other particles in our system generally. And because the collision frequency falls off as-- I'm going to get this right-- velocity cubed, these very fast particles here have a very long mean free path. Their collisionality is very low, so they're unlikely to collide with any particles in the path. We'll talk about that. Yeah, no, no, no. It's great. For the purposes of this, we're going to do it in such a way that we will just solve for the-- there are some protons with, say, energy 1, and some protons of energy 2, and our solution is separable.
And I guess that comes back to here-- the fact that beam doesn't interact with itself, but we'll talk a lot about that later. Yeah, Alan? AUDIENCE: Why not use [INAUDIBLE]? PROFESSOR: They're much lighter and so they'll scatter more easily. But people have started doing electron imaging as well. But then you need GeV beams and stuff like that, so the protons are better for this. AUDIENCE: Would there be advantages to that path? PROFESSOR: I don't know enough about the electron beam techniques at the moment. So a lot of what I'm going to show you is through very specific detectors that have been created for protons, and I don't know whether that detector technology exists for electrons in the same way. There's another question. AUDIENCE: [INAUDIBLE]? PROFESSOR: Yeah, it's very hard to make a collimated beam of protons. And when we talk about the generation mechanisms, they tend to make point sources or things with very small spot sizes. So it's a technological limitation. In the very early days people used particle accelerators to do this. But although particle accelerators can accelerate a lot of particles to high energy, it's not enough particles on the timescale of these experiments. These experiments are over in a nanosecond, and so you need a very bright burst of protons. And it turns out the techniques we have for making those bursts of protons make a nice point source. Yeah, was there a question? AUDIENCE: When you say point source, you mean like just from a specific one location, not the point source hitting the plasma, correct? PROFESSOR: Sorry, could you say that a different way? AUDIENCE: Yeah, so the point source is the beams. And when you were talking about the particle acceleration, you said it was coming from a single area. Are they still doing that with however else they're generating the protons these days? PROFESSOR: Yeah, we'll skip over that, and we'll get to how the protons are being generated later, and it might make a bit more sense. So now you can imagine this as a point source that, for example, is emitting protons in every direction. It just turns out that our plasma is only in one direction. And so from the point of view of the plasma, there's just a little point that's sending protons its way. AUDIENCE: But in reality, that's not exactly happening. I see, OK. PROFESSOR: OK, any other questions? OK, so let's set up the geometry of our problem. So we've got our point source over here. We've got our plasma here. The distance between the two of these is lowercase L. Our plasma has some characteristic size, a, here. We have a detector some distance away, and we'll call that distance capital L. Within the plasma, we have a coordinate system and we label points within the plasma using the position vector, x0. Yeah if I have a position with a 0 subscript, that's somewhere within the plasma. And then on the detector, I have a position vector, x. So what we're going to be doing is looking at how the protons are deflected at position x0 and they will end up somewhere on the detector at x. And what our job is, therefore, is to link what we see on our detector x back to the properties of the plasma at x0. And we can think of a proton coming out like this and within the plasma undergoing some deflection. And it's deflected from its original trajectory by an angle, alpha. And this alpha, in general, can be a vector. So we can have an angle in this direction and we can have an angle in this direction here. 
I'll be drawing it mostly in 1D, but remember, the particles can be deflected in two dimensions as well. So from this simple geometry, we can already write down what we're going to see on the detector x. So x is a location of a proton which streams through the point x0. And then there's a magnification factor, capital L over lowercase L. This is just similar triangles. This just says in the absence of any deflection, I would have ended up at this point here, and that point is magnified because capital L and lowercase L are different. So far this is just what you'd get without any plasma. But when you put the plasma in, of course you get some deflection, alpha, and that alpha is times by the distance between the plasma and the detector here. So again, this is just geometry. It has nothing to do with plasmas, or protons, or anything in there. So just to clarify what all these terms are, this is at the detector. This is the object. This term here is due to the divergence of the beam. We wouldn't have this term if we had a collimated beam like we had with lasers. Remember with the laser, we started out with a load of rays which parallel, and so this wouldn't have happened with a collimated beam. But we don't have that. We got a point source. We got a diverging beam. And then finally, this is to do with the deflection. So this is like shadowgraphy with a diverging beam. OK, any questions on that? Typically we set up our system so uppercase L upon lowercase L is much, much larger than 1. So that is to say we have our source, we have our plasma, and then quite a long distance away, we have our detector here. Remember, this is lowercase L, this is uppercase L. This is for two reasons. Firstly, the plasmas tend to be very small. And so by setting it up like this, we get a massive magnification. So because we've got these diverging protons here mapping out over there, we're going to have a nice big image to look at. So if this is only 100 microns across, maybe on your image over here it's 10 millimeters across, which is still pretty small, but maybe you can actually see something on your detector. The other reason is just practical. You cannot put your detector very close to your plasma. This is something like a laser-driven ICF implosion. You really don't want to have your detector close because it will get trashed, so you've got to put it a long way away. So this is typically the regime that we work in. And then, as you can see up here, that means that we have a term that looks like x0 1 plus capital L over lowercase L, and we can just drop that term there as being small compared to the second term. And we can then convert that equation so that it reads x equals L x0 upon lowercase L plus alpha. So I've just taken the capital L outside the brackets here and now we have these two terms inside. So this first term, as I said, is simply magnification. Very poorly written. And this second term here is distortion. Because the rays are being deflected. If we didn't have the distortion, this would just be an imaging system with magnification. Because the rays are being deflected, we have distortion. And because it's distortion, proton imaging is not imaging. So perhaps they should call it proton deflectometry. You can argue about the name for ages. But this is not an imaging technique, like with shadowgraphy. If you see something on your detector, it doesn't have a one to one correspondence with what's going on in the plasma. Features are enhanced and enlarged. 
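Written out, with lowercase l the source-to-plasma distance and capital L the plasma-to-detector distance as defined above, the two forms of the mapping just described are:

    \mathbf{x} = \mathbf{x}_0\left(1 + \frac{L}{l}\right) + L\,\boldsymbol{\alpha}(\mathbf{x}_0)
    \;\;\longrightarrow\;\;
    \mathbf{x} \simeq L\left(\frac{\mathbf{x}_0}{l} + \boldsymbol{\alpha}\right)
    \quad \text{for } L \gg l.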
The void that we were producing here, that void, for example, is going to be larger than the real size of the plasma. The protons are being pushed outwards. So mathematically what we're doing is we're taking some area inside the plasma, the dS0, which has a certain number of protons going through it, and we're mapping that out onto some area on the detector, which does not have to be anywhere close to the same shape because of all the deflections going on inside here. And that area is just dS. So we're keeping our subscript notation here. This is the object. This is the image that's forming here. And the way to link these is to say that dS is equal to the magnitude of this quantity, D at x0, times dS0. And this is a quantity that some of you may know. It's the Jacobian. And it has the form partial x and y over partial x0 y0. Of course, we haven't told you how to calculate that yet, but it's going to be related to this here. So this is just a general statement. When you're mapping some area to another area, you can do it using this Jacobian transform, and the exact way the Jacobian transform works will be related to this, and we'll solve it explicitly. Sorry, the left hand side here. Thank you. Yes, didn't make much sense otherwise. And I guess this is a vector again, but I've written it out explicitly in terms of the x and y components. Here I just put it in this more compact vector notation. OK, questions? OK, we're going to assume that we can conserve particles. Maybe this was another assumption up here, but none of our particles are scattered and lost. But if we can conserve the number of protons, then the intensity on our detector is going to be equal to the intensity going through the object divided by this Jacobian. It's the equivalent statement to the area one for this Jacobian transformation. There we were transforming area; here we're transforming intensity. And this Jacobian is going to be equal to capital L squared upon lowercase L squared times 1 plus-- then we start doing a Taylor expansion of this equation here-- L gradient of alpha, plus L squared partial alpha x component over partial x0 times partial alpha y component over partial y0. And then another term, which follows symmetrically from this, which is minus partial alpha x component over partial y0 times partial alpha y component over partial x0. Plus some other terms that we're going to drop as being small here, but there's probably a more compact vector notation for this, but I just want to write it out more explicitly. This one is clearly just partial alpha x over partial x plus partial alpha y over partial y. And these are all 0 subscripts because we're doing the derivative inside the plasma. So this is looking at derivatives of how the angle changes inside the plasma. So if you can calculate your angle alpha, and you might think, I can do that if I know the electric and magnetic fields in my system, and you'd be right, then you can now calculate how alpha changes within the plasma, how the protons get distorted, and now you can work out what your intensity is compared to your initial intensity. And it turns out that the sorts of images you get out of here, there's a very useful dimensionless parameter that you can write down to characterize the different images, and this parameter is called the contrast parameter. And it's written as mu, and it is defined as lowercase L over a-- lowercase L was the distance from the point source to the plasma and a is the size of our plasma-- times the modulus of this deflection angle vector, so the size of this deflection angle.
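Keeping the geometric prefactors explicit (the blackboard shorthand compresses them a little), the quantities being defined here can be written as:

    D_{ij} = \frac{\partial x_i}{\partial x_{0j}}
           = \left(1 + \frac{L}{l}\right)\delta_{ij} + L\,\frac{\partial \alpha_i}{\partial x_{0j}},
    \qquad
    I(\mathbf{x})\,\mathrm{d}S = I_0(\mathbf{x}_0)\,\mathrm{d}S_0
    \;\Rightarrow\;
    I = \frac{I_0}{\left|\det D\right|},
    \qquad
    \mu \equiv \frac{l}{a}\,|\boldsymbol{\alpha}|.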
And it turns out if you're in a regime where this parameter is much, much less than 1, then you find out that your D, your Jacobian here, is approximately constant and so your intensity is approximately constant. So we talked about this with shadowgraphy. This is a regime where we only have small intensity variations. And when you have small intensity variations, you have a chance of being able to undo this transformation, and work out what D was, and therefore, work out something about your plasma. But if you have a contrast parameter where mu is about 1, this corresponds to having your Jacobian with singularities or with zeros in it. And you can see if you have zeros in D, then all of a sudden, you're going to have your initial intensity over 0. Your intensity will go to some very, very large number. And these are the caustic patterns we talked about in shadowgraphy. I wish I had some more space, but I don't. Well, actually, maybe I have just about enough space. So in this case here, you can show that your intensity on your detector is equal to your source intensity times 1 minus L divergence of alpha like this. And this is equivalent to the equation we came up with for shadowgraphy, where we've implicitly assumed that all of these terms are very small. If you look here, our contrast parameter has an L and alpha has an a. We've got an L, an alpha, and this gradient is going to be on the order of the size of the plasma like that. So we've implicitly assumed that this contrast parameter is small in order to get the solution. And so what we see is we have small variations in intensity because it's 1 minus a very small number. If we have zeros here, then we actually have to solve fully for D everywhere inside the plasma. And this is where we start getting these singularities. So I've got some space. So we have D equals 0. We get singularities in I. I is equal to I0 over the size of D. And this corresponds to proton trajectories crossing. So we have deflection within the plasma and we have two protons, which take different paths, but end up at the same place. And if you can imagine if there's multiple of these happening, you end up with a huge amount of fluence in one spot, and so you get very, very high intensity. And this is usually called a caustic. So these caustic features are very, very strong. You can see them very, very easily, but they mean that there is no longer a unique reconstruction. So we can no longer uniquely take our modulation intensity, I, on our detector and work out the properties of the plasma through D. So we want to avoid being in this regime. Avoid being in the regime where mu is approximately 1. And to avoid being in that regime, we just have to look at the definition of mu and try and make it as small as possible. So we want L over a much, much less than 1. So again, that's the distance between the source-- I've got it here already. The source distance to the plasma is L and the size of the plasma is a. So obviously in this case, I haven't succeeded. L over a is roughly 1. But I could put the source much further away, and then L over a would be much smaller, and that would help me make mu smaller. And the other thing I can do is have the size of alpha be much, much less than 1. So that's effectively insisting that my deflections are small. Now for a given plasma, there are some given electric and magnetic fields. I can't just ask the deflections to be small. What I can do, however, as we'll see in a moment, is use very fast particles or fast protons. 
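As a side note, here is a minimal synthetic-fluence sketch of these two regimes in Python; the Gaussian deflection profile and all the numbers are invented for illustration, not taken from the lecture:

    import numpy as np

    # Minimal sketch of a synthetic proton-fluence image using the point-projection
    # mapping x = x0 * (1 + L/l) + L * alpha(x0).  The deflection profile alpha(x0)
    # below is an invented Gaussian "blob" -- purely illustrative.
    l = 0.01       # source-to-plasma distance [m] (assumed)
    L = 0.10       # plasma-to-detector distance [m] (assumed)
    a = 1e-3       # plasma size [m] (assumed)
    alpha0 = 5e-3  # peak deflection [rad]; mu ~ (l/a)*alpha0 = 0.05 here (small)

    rng = np.random.default_rng(0)
    N = 200_000
    x0 = rng.uniform(-3*a, 3*a, size=(N, 2))           # proton positions in the object plane

    r = np.linalg.norm(x0, axis=1)
    # Radially-outward deflection, peaked near r ~ a (mimics an azimuthal B field):
    alpha = alpha0 * (x0 / a) * np.exp(-(r / a)**2)[:, None]

    x = x0 * (1 + L/l) + L * alpha                     # detector-plane positions

    fluence, xe, ye = np.histogram2d(x[:, 0], x[:, 1], bins=200)
    print("contrast parameter mu ~", (l/a) * alpha0)
    print("min/max fluence per bin:", fluence.min(), fluence.max())
    # Raising alpha0 until mu approaches 1 makes the bright rim collapse into a caustic.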
Because if we make the protons go very fast, their deflections become smaller and smaller as we're about to show. Now none of this really has talked about protons at any point. All of this applies just the same to shadowgraphy. We're just thinking about particles or rays that get deflected going through a plasma, and we've put some geometric formalism on top of it to tell us what we expect to see on our detector. So bearing that in mind, any questions before we go on and actually talk about protons, and plasmas, and electromagnetic fields? I saw some folks still writing, but yeah, taking questions. AUDIENCE: The L being [INAUDIBLE]. PROFESSOR: Oh, it looks like I got it the wrong way around, didn't I? Yes, L needs to be much less than a. AUDIENCE: [INAUDIBLE]. PROFESSOR: No, I can still do that if the plasma is spherical, right? Because if I have, for example, a source here and a spherical plasma over here, this is a and this is L, and it's pretty clear that L over a is much greater than 1. Well, no, no, no, no. This is the condition that you want. What I said earlier is incorrect. You don't want this condition. You want to have your plasma very close to your source. So my words were wrong, but the algebra is right here. Yeah. AUDIENCE: [INAUDIBLE]. PROFESSOR: Oh, I see. Yes, I take your point. Yeah, if your plasma is spherical. A lot of the time people are imaging plasmas which are quite thin. But yeah, you're right. OK, other questions? Yes. AUDIENCE: [INAUDIBLE]? PROFESSOR: Oh, radians. Everything's in radians. AUDIENCE: Less than 1? PROFESSOR: Yes, much less than 1 radian. It's in radians because we're using this approximation. So if we're using this approximation, alpha has to be in radians for that to be true. AUDIENCE: [INAUDIBLE] there's nothing special about 1 radian? PROFESSOR: No, no, no, no. No, there's nothing special about 1 radian. We want it to be as small as possible. 1 is a number that it can be small there, but guess could have raised it 10 to 6. Yeah, sure, sure, sure. In particular, we said that we can get a less than 1. So if we want to get mu much less than 1, that means that alpha has to be much less than 1 from that point of view. I think this is a reasonable [INAUDIBLE], but yeah. OK, any questions online? OK, let's have a look at what the actual deflection, what this alpha, is here. Because that's what you need to in order to work out this quantity, which is D, and work out all the rest of the stuff. So which board should I use? OK, so alpha for electric and magnetic fields together is simply the integral of that force, which is E plus v cross B along the path, which remember, we're approximating as dz like that. And the deflection angle has to do as the charge, of course, because that's what goes into the force. But there's also a factor of 2 and w out the front here. This w is, again, the kinetic energy of our proton m proton v proton squared. You can work this out if you want to. I haven't proven this exactly. So what we notice is interestingly we got a w that depends on v and here we've got v inside this. And so we can separately write out that there's going to be a deflection due to the electric field, which is e over 2w integral electric field dz. That's pretty obvious. I've just taken the first-- ah. And we have a deflection due to the magnetic field Alpha B equals E over square root 2 mw. We're taking this velocity and we're combining it with the velocity squared over there. Integral of dL crossed with B. 
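Written out, with W = (1/2) m_p v_p^2 the proton kinetic energy and z the probing direction, the two contributions are:

    \boldsymbol{\alpha}_E = \frac{e}{2W}\int \mathbf{E}_\perp\,\mathrm{d}z,
    \qquad
    \boldsymbol{\alpha}_B = \frac{e}{\sqrt{2 m_p W}}\int \mathrm{d}\mathbf{l}\times\mathbf{B}
    \simeq \frac{e}{\sqrt{2 m_p W}}\int \hat{\mathbf{z}}\times\mathbf{B}\,\mathrm{d}z.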
Where of course this dL is really dz because of our paraxial approximation. So we're going to have a deflection angle that depends on the cross product of the path and the magnetic field. What's interesting about these two is that you have different scalings. One has a scaling with the particle energy. One has a scaling with the square root of the particle energy. So this suggests to us straight away how to distinguish between E and B. Or should I call them alpha E and alpha B. So the deflection is due to electric and magnetic fields, and we distinguish between them by using two or more proton energies. If we have a source that produces 1 MeV protons and 10 MeV protons, they will go through the same plasma, but the 1 MeV protons will be much more sensitive to-- I've got to get it right now-- they're lower energy, so this is a bigger number-- to the electric field than the 10 MeV protons. And so we can use a differential measurement effectively to work out the contribution of alpha E and alpha B. There'll be two simultaneous equations and we'll have two things we want to solve for. So we'll be able to get it out. So this looks an awful lot like when we talked about two color interferometry to determine the difference between vibrations or neutrals in our plasma. And people do this a lot because, if you're doing an experiment, you don't a priori know whether the electric fields are small compared to the magnetic field. Most people want to use this technique to measure magnetic fields. And usually it's the case that magnetic fields do dominate, but you still need to check, and you can do it with these two different energies. So we've been talking a lot about alpha here, but there's a rather neat formulation, which says, well, instead of talking about this vector alpha, and that deflection angle alpha depends on where you are in the plasma, so it depends on your position x0 within the plasma. If this is a vector, then we can just write it as the gradient of some scalar-- that turns out to be true for the deflection fields we care about here. So we can say what if there is some scalar, phi, of x0 and alpha is just its gradient. And the reason you want to do this is you can now write down an expression for phi, which works for shadowgraphy as well as proton imaging. And then all of the mathematics that you've done previously, you just do it in terms of phi, and it's agnostic to the exact technique that we use. So for shadowgraphy you find that your deflection potential is equal to minus the integral of the logarithm of the refractive index n. And I'll explicitly write this out in terms of the position within the plasma. The z0. So the integral through the plasma in the z direction. And this logarithm here, or this n here is, of course, the one where we have n squared equals 1 minus nE over nc that we saw in all our refractive index things. So of course it could be whatever-- O modes, X modes-- but people don't do proton radiography [INAUDIBLE]. So the plasmas we're working with are mostly unmagnetized. This is fine. But for E fields, this potential here is just equal to q over 2W. I use q over there? No, I switched to E, so I'll put it back to E. Integral of the real electrostatic potential, lowercase phi of x0, y0, z0, dz0. So they are related. They don't have to be, but they are in this case related. The deflection potential is related to the electrostatic potential. And then the final one is the magnetic fields. Capital phi of x0 is equal to e upon square root of 2mW, integral of A dot dz0. And this A is the magnetic vector potential.
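Collecting the three deflection potentials just listed (overall signs depend on convention; this follows the structure on the board):

    \boldsymbol{\alpha} = \nabla_\perp \Phi(\mathbf{x}_0), \quad
    \Phi_{\mathrm{shadow}} = -\int \ln n \,\mathrm{d}z_0, \quad
    \Phi_E = \frac{e}{2W}\int \phi \,\mathrm{d}z_0, \quad
    \Phi_B = \frac{e}{\sqrt{2 m_p W}}\int A_z \,\mathrm{d}z_0,

with n the refractive index, phi the electrostatic potential, and A_z the component of the vector potential along the probing direction.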
So B is equal to the curl of A. So the idea is that we will seek techniques which will take our intensity variation, I, on our detector, and we will use them to infer some sort of deflection potential. And then the neat thing is, it doesn't matter whether we're looking at electric fields, or magnetic fields, or shadowgraphy. That's a detail that the end user has to use, but the algorithms are all the same. So this is very powerful. We can share techniques between very disparate looking diagnostics. AUDIENCE: Professor? PROFESSOR: Yes. AUDIENCE: Does the B is A, where B equals the curl of A, is that from the Maxwell's law B-- like the divergence is 0? Is where that comes from? PROFESSOR: Gosh, yeah? Can divergence free fields be written as the curl of another vector? I think so, yeah. AUDIENCE: OK, then where does the potential come from? Is it just a regular electric potential? PROFESSOR: Yeah, this is the electric potential. This is the potential that's defined by Poisson's equation. AUDIENCE: I see. OK. I just wanted to check. Thank you. PROFESSOR: Yeah, no worries. OK. Any questions on that? The next thing I'm going to cover is how do we make these beams of protons, so we're going to switch gears a little bit. So if you have any questions on what's happened so far, this is a good time to ask. AUDIENCE: [INAUDIBLE]. PROFESSOR: These are contradictory, yes. So the paraxial approximation also makes it very hard to have your object very close. But if you were able to do that, you would reduce the contrast parameter, but you would also break all of the mathematics we've done, which is use the paraxial approximation so far. So in practice, your best bet is to reduce alpha instead. And you reduce alpha by making your protons as fast as possible. Because the faster your protons are, the higher their energy, the lower their deflections. So this is what motivates us using MeV or higher beams of protons. Yeah. AUDIENCE: [INAUDIBLE]. PROFESSOR: So the particle is being deflected a little bit, but I could approximate mostly in the same direction. It's certainly not being decelerated. Its velocity is constant. So the size of its velocity, its speed, doesn't change as it goes through the plasma. That's also consistent with our test particle picture, where the test particle is experiencing the field, but it's not being slowed down. Yeah. AUDIENCE: Is that like when [INAUDIBLE]? PROFESSOR: The particles are really, really fast. They don't scatter at all. They're way faster than everything else. So this is a completely reasonable approximation. You can treat the plasma as this static, unmoving thing on the time scale of these protons going through. And the protons barely see it apart from they get deflected a tiny amount. And it's only because we put our detector so far away that those tiny deflections show up as significant changes to the fluence. OK, any other questions? Anything online? Yeah. Yeah, the deflection potentials add linearly, which is neat as well. And then when you use two different energies, you can solve separately for the two deflection potentials and then see how big they are. OK, let's talk about how to make a beam of protons. This is actually quite frustrating. When I taught this in 2021, there was no good reference on this and I had to spend ages looking it up. And then Derek Schaeffer put his paper on archives and explains it all really straightforwardly. So if you want to understand it, you can just read his paper. 
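Coming back to the two-energy idea from a few minutes ago, here is a minimal sketch of how the unfolding might look numerically; the energies and "measured" deflections below are invented purely to show the linear algebra:

    import numpy as np

    # The measured deflection at proton kinetic energy W is modelled as
    # alpha(W) = c_E / W + c_B / sqrt(W), where c_E and c_B are proportional
    # to the path-integrated E and B fields.  Two energies give a 2x2 system.
    e = 1.602e-19            # C
    m_p = 1.673e-27          # kg
    MeV = 1.602e-13          # J

    W1, W2 = 3.0 * MeV, 14.7 * MeV      # two example proton energies (assumed)
    alpha1, alpha2 = 2.0e-3, 0.7e-3     # "measured" deflections [rad] (made up)

    M = np.array([[1/W1, 1/np.sqrt(W1)],
                  [1/W2, 1/np.sqrt(W2)]])
    c_E, c_B = np.linalg.solve(M, [alpha1, alpha2])

    # Convert back to the path-integrated fields:
    int_E_dz = 2 * c_E / e                   # integral of E_perp dz  [V]
    int_B_dl = np.sqrt(2 * m_p) * c_B / e    # integral of (dl x B)   [T m]
    print(f"integral E dz ~ {int_E_dz:.2e} V, integral B dl ~ {int_B_dl:.2e} T*m")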
But yeah, it's interesting stuff because this technique is relatively modern. It's only within about the last 20 years that people have really been using it. And a lot of the development of this technique was led by the HED group here at MIT. So a lot of the things I'm going to tell you about are MIT diagnostic development. So we need to have a source. And for each source, there is a detector. So the first source that I'm going to talk about is a technique called TNSA. This stands for Target-- and it doesn't really matter-- Normal Sheath Acceleration. You can't quite sing it to the tune of Teenage Mutant Ninja Turtles, sadly, but it's getting pretty close. So the idea of TNSA, and I will never say target normal sheath acceleration again-- is you have a metal foil. And it's thin-- tens of microns thin. And any metal that you have is going to have absorbed on its surface lots of hydrocarbons, which means lots of little protons. So again, we're not accelerating copper or something like this here. We're still accelerating protons, it just turns out there's lots of little protons absorbed in the structure at the front here. And we take our laser beam, and we focus it down, and hit the back side of this target. And our laser beam is ultrashort pulse. So this tends to be something like a 50 Joule laser in 1 picosecond, 10 to -12 seconds. So this is over 100 terawatts of laser power. Or if you've got a really short pulse, you can do 1 Joule in 50 femtoseconds. A femtosecond is 10 to the -15 seconds. So these are seriously short laser pulses and this is a seriously intense focal spot here down to about 10 microns, we can focus the laser down to. So that's a lot of electrical energy being focused into a very small spot on a very short time scale. And what that does is it initially blasts electrons off the foil. So there is some heating, there are some strong electric fields. For reasons that we're not going to go into very much this drives electrons off the foil at a high speed, and they form a little cloud going outwards. And because there's been electrons ejected, we then have an electric field. And that electric field drags these protons off the surface like that. And this is how the protons are accelerated. So this technique, some good things about it, it gives us many protons. It's a very efficient way of accelerating the protons here. It gives us a range of energies. Now remember, we said we'd like several different energies for telling difference between electric and magnetic fields. The spot size was around about 10 microns. So as we said, that's a nice small spot. That's something we wanted in order for all our paraxial approximation and stuff to work. And it's a beam. It's in one direction. And we'll see why that's important when we compare it to other [INAUDIBLE] experiments. The big disadvantage of this technique is you need a petawatt class laser or higher. So you need to have one of these ultrashort pulse lasers. Not every facility has that. These are very expensive things to build. So it's not like you can just go down to the shop and get this. I've thought about trying to get one of these for the PUFFIN facility, and we got a quote for about $2 million for a laser just to do not very good TNSA. So these are very expensive. [INAUDIBLE] So that's your source. Your detector-- Is something which is called radiochromic film. Another acronym I'm not going to repeat, we'll call it RCF from now on. Now RCF changes color when it's hit by protons. That's the chromic part of it. 
And the more protons that hit it, the more it changes color. It sort of starts off quite clear, and it turns this beautiful ghostly blue, and then goes into a very dark, deep blue. They're very beautiful raw proton radiographic film images. Now what you end up doing is you have stacks of this film. You have multiple sheets. So imagine these are squares of film and I've just made a stack out of lots of them like this. And the reason you make lots of stacks out of these is due to a peculiarity of charged particle stopping, and this peculiarity is called the Bragg peak, which some of you may have come across before. The idea of the Bragg peak is your charged particles tend to stop in a very specific place for material, which is to do with their energy. So for example, if this is the stopping location x, we'll call this the x-coordinate. I guess I've called it z everywhere else, but whatever. You tend to have a lot of deposition of your particles all at one place, and this is energy-dependent. So this is energy-dependent deposition. The reason this is peculiar is you remember that for photons we just had this exponential decay here that was to do with the opacity of it. But that's not what we get for particles. The fact that all the particles of the same energy roughly stop in the same bit of film means that each film corresponds to a specific energy. Maybe the first film stack here corresponds to 1 MeV, and the next one to 2 3, 4, 5, 6, and so on. You can put different filters between these to select different proton energies, but it means that each of these bits of film now corresponds to a different energy, which is very useful if you want to do your multi-energy trick of unfolding the electric and magnetic fields. So this first result of this is that we have we call it 1 energy per film. It's obviously not quite true because you have some bleed through, some particles which stopped early. So there's a little bit of crosstalk between them. The other neat thing about this, if you don't want to use these in order to back out the difference between electric and magnetic fields, you can make a different argument. And you can argue, in fact, that my particles were born here, and they were born at different energies, and they streamed through this plasma here. But that means that a particle going at 10 MeV reaches the plasma at time T0. So instead of 10, I'll write 6 because I've got 6 here. So these particles all stream through at T0. But a particle at 1 MeV is going more slowly, going six times more slowly modulo, some relativistic stuff. And so that means that this particle arrived at T0 plus delta T. And so not only did the different energies maybe give you a way of unfolding the electric magnetic fields, the different energies are different slices in time. So if this is energy in this direction, it means that if you read the films back, they correspond to time in this direction. And so you can make a little movie of how your plasma evolves in time. So that's really cool. You get time resolution. And the more different bits of film you have, and the wider range of energies you have, the better time resolution you can get. Questions? Yeah. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, the radiographic film isn't sensitive to other particles. So not to neutrons, and gammas, and things like that. In this case, I think it's almost that the lower energy particles and photons you could filter out and the high energy ones, like gamma rays, will just go straight through without interacting. 
So this is pretty much just sensitive to the protons. Yeah, that's a good question. Yes. AUDIENCE: [INAUDIBLE]. PROFESSOR: It's the same thing. So the protons are all emitted in a very, very narrow burst. Yeah, that's great. Thank you. Let's have a little figure here of time and fluence. So at time 0 all of the protons are emitted. They're very narrow. Remember, this is picosecond or femtosecond here. So I can say this is picosecond. And maybe I've got two energies here the blue energy, which is 1 MeV, and the purple energy, which is 10 MeV. They have to travel some distance from the source to the plasma, and they will go through it at different times. The 10 MeV one will get there first and the 1 MeV one will get there second. And then finally, they will travel through to the detector. And they'll arrive there much later, but we don't really care. The detector is time integrating. So it will see protons, which sampled the plasma at an earlier time. Sorry, the 10 MeV film will see protons that sampled the plasma earlier than the 1 MeV film. But you do actually have to have that requirement that you have a very, very short burst of photons. Yeah. AUDIENCE: [INAUDIBLE]. PROFESSOR: Yeah, so you can only do one or the other. You can either claim that there are no electric fields, and you know it's all magnetic fields, and then you get a time series. Or you can claim that this plasma doesn't change in time, and therefore, you can use a different energy. So you've got to be a little bit careful when you see someone claiming this proton radiography. It's difficult to do both. But you might have a simulation or some other experiment that suggests the electric fields are minimal, in which case you can just say, OK, everything I see is due to magnetic fields. Now I have a time series. People do that and it may be reasonable. Was there another question do I see? OK, let's talk about this second method, which is an imploding capsule. The second method would be, for example, a D helium-3 implosion. So this is literally a little inertial confinement fusion target here filled with deuterium and helium-3. And you get a load of direct drive lasers pushing on it. I'm making a nice symmetric-ish implosion of this. And then when this implodes, it's going to send out all of the particles that come from the fusion reaction between the deuterium and the helium-3. So just to say for this, you need to have multiple, I'm going to write many, kilojoule class beams with a 1 nanosecond pulse. 1 nanosecond in the laser world is considered a long pulse. So these are much longer than picoseconds and femtosecond, but these are long pulse lasers. So the interesting thing is that this reaction produces two different energies from it, but only two. We have the 3 MeV DD protons. Remember, we're not interested in neutrons. There may be neutrons coming out, but they're not being used for this diagnostic. And we have the 14.7 MeV D helium-3 protons as well. So this could be a positive or a negative. The positive is you know exactly what your particle energies are. They're set by fusion. They're going to be shifted a little bit because there's thermal motion here, but roughly what the energies are exactly. But it's good so the positive is we know the energy. The negative is there's only two. People have recently add a little bit of spice in the form of some tritium, and then you also get the tritium helium-3 reaction, which gives you a proton at 12 MeV. So that gives you a tri particle source. So that's nice. 
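To put a number on the time-slicing idea from a moment ago, here is a minimal sketch of the arrival-time difference between the two fusion-product energies, assuming a 1 cm source-to-plasma distance (a plausible but made-up value).

```python
import numpy as np

# Minimal sketch: how much later does a low-energy proton arrive at the
# plasma than a high-energy one? Source-to-plasma distance is assumed.
c = 2.998e8          # m/s
m_p_c2 = 938.272e6   # proton rest energy in eV
L = 1.0e-2           # source-to-plasma distance in metres (assumption)

def arrival_time(W_eV):
    """Time of flight over distance L for a proton of kinetic energy W_eV."""
    gamma = 1.0 + W_eV / m_p_c2
    v = c * np.sqrt(1.0 - 1.0 / gamma**2)
    return L / v

dt = arrival_time(3.0e6) - arrival_time(14.7e6)
print(f"3 MeV protons probe the plasma {dt*1e12:.0f} ps after the 14.7 MeV ones")
```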
Now you have three particles. So that's pretty good. Another negative here is the spot size. These implosions tend to get down to slightly larger sizes than you can reach with the focal spot for TNSA, so the spot size is larger, about 40 microns or so. It's not too much worse, but it is significantly larger than the TNSA size. The other problem is that we have isotropic emission. So the nice thing about TNSA is all the particles are going roughly in the direction you want. Here, if your plasma is over here, most of your particles are just flying off in completely random directions, and you don't get to capture them at all. And so this means that we have few particles. This is in contrast to TNSA, where we were able to create a very large number of particles. But the fact that we have very few particles also leads to a change in detector. And this is the big workhorse of the MIT HED group because they realized that they could use a detector called CR-39. Now CR-39 is a polymer. The interesting thing about it is that if you have a sheet of CR-39 and you have a proton going through it, it causes a little area of damage, a little track, as the proton goes through. And you can then dip your polymer after the shot in an etching solution. And it turns each of these tracks into little holes. So now if you look at your CR-39, there's all sorts of little holes on it. It turns out the size of the hole is in some way related to the energy. So you can tell the difference between the 3 MeV and the 14.7 MeV. And theoretically, though again I'm not sure that anyone has done this since it was first suggested, if you have multiple pieces of CR-39 and protons are just going through it, you should be able to correlate the tracks in different bits of film and go, aha, a track in this bit of film is now over here, and I should be able to work out the angle of the particles. So you may be able to correlate tracks. But I'm not sure anyone's actually used that as a serious diagnostic. Just in one of the early papers. So what do you actually do with all these tracks here? Well, you set up a very good microscope, and you put one of these bits of CR-39 in it. And the microscope looks at a small square here, and it counts all of the holes in this little square. And this square represents a pixel, and it outputs, for example, a hundred holes. And now it goes to the next one and it outputs 102. So it's effectively reading off the number of protons, one by one, individually, automated, but still individually, and building up that intensity. Whereas with the radiochromic film, we don't know the exact number of protons, we just have a different darkness of the film depending on the proton fluence here. So I did a scientific poll at APS DPP 2021. Three out of three physicists preferred-- I probably spelled preferred wrong-- TNSA plus radiochromic film. However, there are many facilities which do not have a sufficiently powerful short pulse laser, but they do have an awful lot of kilojoule class laser beams floating around that they don't have to use. So for example, on OMEGA or on NIF, you have lots of these beams. You can use some of the beams to drive your target plasma, and the other beams to drive your imploding capsule, and you'll still be able to do proton radiography. So there's definitely a place for this technique. It's just that it seems like people prefer, in general, TNSA. OK, questions? Yes. AUDIENCE: [INAUDIBLE]. PROFESSOR: I don't know whether people have done that, but yeah, possibly. You need to have magnetic fields in this.
And so in general, people don't think too much about the magnetic fields in ICF, but they might be there. There are mechanisms which could self-generate the field like the [INAUDIBLE] battery. I don't know if anyone's done that, but it could be done. I imagine it's then complicated. You have to work out exactly what you're measuring. But yeah, it's an interesting idea. Yeah, sure. AUDIENCE: You mentioned [INAUDIBLE]. Why not alpha particles? PROFESSOR: Why don't we do alpha particles? AUDIENCE: Because they are lower energy [INAUDIBLE]. PROFESSOR: Yeah, I would have thought that the velocity is a fair bit lower because of the higher mass. I don't know exactly, so good question. I mean, in general, the reason why people haven't put tritium in this is you don't necessarily want to be playing with tritium. So this is relatively easy, to fill this capsule, and this is just a pain in the ass. So it's not like everyone has rushed out to start doing the tri particle thing. Yeah. OK, other questions? Anything online? OK, now we'll go through some issues with doing this diagnostic practically. We've got our source, we've got our detector, we understand how to make a map of the intensity after the plasma, but are there any problems with doing that? Yes. So issue 1: this condition that our beam was uniform. Remember, we need that because we're trying to determine things from I equals I0 over the size of the Jacobian. And remember, the Jacobian is the thing that has all the physics in it. If we want to work out what the size of the Jacobian is, we are measuring I, but we need to make some assumption about what I0 is. We can't measure it because we do the target normal sheath acceleration through the plasma. We can't put a beam splitter in and measure what the protons look like because we don't have beam splitters for protons. So knowing I0 and knowing that it's uniform is very, very difficult. And so you could say, well, what if I have reproducibility? What if I do the TNSA over and over again and I measure it without any plasma and I see that it's uniform? Well, the trouble is we see that it's not uniform. So we have poor reproducibility. So a solution to this is that you include a grid in your system. You still have your point source and your protons streaming out through your plasma, but on the other side of the plasma you put a grid like this. So in the absence of the plasma, this splits up your beam into a series of beamlets. So for example, we have regions where the protons can pass through the grid-- I'm just going to draw nine of them here-- and we effectively have little points on a grid like this. This is the case without plasma. Now when the plasma is present, these protons are going to be deflected in different ways, and we'll see these points move. So for example, this point might move to here. This point might move to here. This point might move to here. And actually, we can now directly calculate the deflection angle, alpha, and therefore back out the electric and magnetic field from this technique. So this is very good because we can calculate alpha directly. But the downside of it is that we have very low spatial resolution now, because you've gone from having a continuous image to a discrete image instead. But this technique is still very powerful because it does give you a direct measurement of alpha without needing to know what I0 was. OK. The second issue I want to talk about is if we have a uniform magnetic field as well as some varying magnetic field.
If we have a uniform magnetic field all of the particles are going to be swept in the same direction as well. It's actually extremely hard to align these detectors and know exactly where the center is because we're talking about submillimeter precision. So if you see this grid initially-- If you see a grid like that, that could be a grid that's been formed without any background magnetic field or it could be a grid that's been displaced some distance by a background magnetic field. You're not going to be able to tell the difference between those two, so you cannot measure this background magnetic field. And in many experiments where this is externally applied with some magnetic coils, they really want to be able to measure that background magnetic field externally applied. So a very neat technique that's come up recently is to use a second detector, which is sensitive to X-rays here. So the idea, once again, is that we have our source of protons, but alongside the protons we're also going to have a source of X-rays because this thing is hot. And in this case, I drew the mesh afterwards. Some of you may have realized I think the mesh needs to be before this part. Now we have our mesh again here and we have our plasma. And the neat thing is we have, for example, our RCF, which is sensitive to the protons, and we have some image plate, which is sensitive to the X-rays. Then although our protons will be swept up by magnetic fields and do all sorts of strange things, the X-rays are just going to go in straight lines, which are completely unaffected by any magnetic fields here. So we're going to use high energy X-rays here, such that the refractive index at 10 kilo electron volts is roughly 1, and so we don't see any refraction of the X-rays. And so that means what we end up with is we'll have a grid of the X-ray points. In the absence of any background magnetic field and in the absence of any plasma, we should see all of the protons end up in the same place as the X-rays, if there is some uniform magnetic field that will shift all of the protons like this, this is uniform. And then if there's a uniform magnetic field plus a perturbed magnetic field, it will further shift them on top of that. So the X-ray grid acts as a really nice fiducial. It allows us to directly measure the deflections without having to worry about the uniform magnetic field screwing things up. So this is very nice bit of work by Johnson and others. And this is in Review of Scientific Instruments published last year.
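As a minimal sketch of how those fiducials get used, the deflection angle for each beamlet comes from its displacement relative to the corresponding X-ray spot, and a path-integrated magnetic field follows from the square-root-of-energy scaling above. The geometry and spot positions below are made-up numbers, and magnification corrections are ignored for simplicity.

```python
import numpy as np

# Minimal sketch: turn measured beamlet displacements into deflection
# angles and path-integrated magnetic fields. The X-ray spots play the
# role of the undeflected fiducial positions; all numbers are made up.
e = 1.602e-19
m_p = 1.673e-27
W = 14.7e6 * e          # proton kinetic energy (J)
L_pd = 0.10             # plasma-to-detector distance in metres (assumption)

xray_spots = np.array([[0.0, 0.0], [2.0e-3, 0.0], [0.0, 2.0e-3]])      # m
proton_spots = np.array([[0.3e-3, 0.0], [2.4e-3, 0.1e-3], [0.2e-3, 2.0e-3]])

# Small-angle deflection acquired in the plasma, one row per beamlet
alpha = (proton_spots - xray_spots) / L_pd        # radians, shape (N, 2)

# Path-integrated |B x dl| implied by each beamlet,
# from alpha_B = e / sqrt(2 m W) * integral(|dl x B|)
int_B_dl = np.linalg.norm(alpha, axis=1) * np.sqrt(2 * m_p * W) / e    # T m
print(int_B_dl)
```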
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_16_Line_Broadening.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So just briefly, there was some discussion last time about Lyman sequence and the Balmer sequence and how these correspond to the K shell and the L shell. And I just looked this up, and I thought for your amusement and entertainment I would explain it. So the idea here is that we are looking at hydrogen. So the Lyman, the Balmer series only apply to hydrogen. And we have levels of principal quantum numbers, 1, 2, 3, 4, and things like that. And there were two series which are historically seen. There, first of all, was a set of lines which turned out to correspond to transitions down to-- sorry-- n equals 2. And these are the Balmer series. These are transitions to n equals 2 from some other level like this. Confusingly, the symbol that we use to describe these is h. So we have, for example, h alpha, which is n goes from 3 to 2. h beta is n goes from 4 to 2, and so on. And the reason that these were discovered first is that h alpha is actually in the red. It's in the visible. And the other lines are increasingly high energy, as you can tell, because we're doing a transition from a higher and higher energy. And in fact, after h beta, these go off into the ultraviolet. So they're very hard to see in early experiments. The other sequence of lines which seem more fundamental, but were discovered later are the transitions down to n equals 1. These are the Lyman series. So this is n goes towards 1. And these are usually called Lyman alpha, Lyman beta, and so on. So that's transitions from n equals 2 to 1, and 3 to 1, and so on. And even the lowest energy of the Lyman alpha series is deep in the ultraviolet. So it took much longer for people to spot this. And so this is the Balmer series came first, and the Lyman series came later. So you're looking at this and you're saying, OK, this does look at an awful lot like what you told us was the K shell and what you told us was the L shell here. And apparently, the pedant's answer is when we defined the K shell, the way we were thinking about it involved some other electrons here. And so you need to have greater than three electrons left for both of these definitions. And so that means that you have to have at least lithium-like atoms. But I would say, frankly, these look almost a pretty obvious continuation of the L shell and the K shell stuff. There's also a series of lines for helium lines as well, which are for two electron atoms. These are for one electron. So my short answer would be, yes, the Lyman series corresponds to the K-shell emission and the Balmer series corresponds to the L-shell. And then, of course, you have all the fun things like L shell, Lyman, H K. Spectroscopy is awful for that sort of stuff. So it's just nomenclature. Any questions on that before we move on to line broadening, the reason we're here? So we're going to be talking about line broadening. So far we've mostly been considering line emission, which is at a specific frequency. We've said that we have some lower energy level and some upper energy level. And we have some transition of electron from an upper to a lower. It could be stimulated, or it could be spontaneous emission, and emit a photon with energy H bar omega upper lower. So what we'd expect to see in our spectrum, because we have lots of different levels available, is a spectrum that has a series of very sharp lines. Their spacing is determined by these energies. And the height of them is determined by the probability of transition. 
And in our plasma also, the occupation of the upper energy level. So how many atoms are in that excited state. And you with an electron up here have a chance to decay down. Now, in reality, when we look at it, our spectra does not consist of these delta functions. It has lines, some of which may be very broad, some of which may be very narrow. And so there are several different mechanisms, which we'll go through one after the other today, which broaden out these very narrow, single energy peaks into a range of different energies here. So the first of these is the one that we absolutely can never escape. It's always going to be present in our plasma. And this is called natural broadening. The idea of natural broadening is that our electron does not immediately transition down into a lower energy level. It has some finite dwell time that it will stay up here. So the transition is not instantaneous here. So we have what we call a finite excited state lifetime. And we assume this is going to be some sort of random process. It's not like all of the excited energy levels decay simultaneously after 1 millisecond. They're going to decay at various different times. So it looks a little bit like, for example, radioactive decay, where as a function of time, the probability of being in the upper state is going to decay away. And that probability is going to be proportional to exponential of minus time over some time scale tau. And that tau here is what we refer to as the finite excited state lifetime. So different states, some may have a short tau. They may decay very quickly. Some may have a long tau. They may decay very slowly. So we want to have some handle on what this tau parameter is. And so we make a very hand-wavy appeal to a sort of uncertainty principle here, where we say the energy of our line is h nu upper lower. Again, I'm switching between this in hertz and this in radians per second. But the same thing here. And we say there's going to be some uncertainty in energy, which is going to go like h over 2 pi this excited state lifetime. So this is effectively a bit like having delta e delta t is greater than h upon 2 pi, so greater than h bar like that. So this is an extension-- this sort of is an uncertainty principle type argument. You actually don't need to invoke the uncertainty principle in order to get this. But some people this way of looking at it. So this means there's some uncertainty in the energy of the line related to the excited state lifetime. And so some of the photons will just come out with a slightly different energy. So that should be enough to explain the broadening. But we'll do it a little bit more rigorously in a moment. The other thing I want to say is, yes, how to actually calculate this tau here. This is one way. Another way to do it is to say that the intensity of the emission i is going to go like the probability of the decay squared. And that's going to give us a factor of 2 upon tau, which is equal to the sum over all of the spontaneous emission processes that take us from the upper level to the lower level, because, of course, what we're interested in is not just decay from this upper level to a specific lower level. What do I mean by that? Because I cannot work out what this means in my notes, I'm going to delete this here. Possibly, there's a decay chain that includes some other processes, and we want to sum over those. But in general, we're just going to be looking at, for example, in this case, spontaneous emission here. 
And this has units of 1 over seconds. So we can quickly identify the excited state lifetime with 1 over this. The factor of 2 just comes from the fact that when we're dealing with intensities, we square the probability. So that goes into the exponential here. And that's where this comes out here. Now, we know that this Aul has some relationship-- could someone close the doors, please? Thank you-- to the black body. This is how we were able to calculate this before. And this black body has a relationship like nu cubed. So what we're saying is for the higher energy lines here, we're going to have a shorter excited state lifetime. And we're going to have a larger energy uncertainty. So high E equals h nu gives a short excited state lifetime, and that gives a large range of energies. So in general, if you have high energy lines, they're going to have larger natural broadening. So as I said, this is a slightly unsatisfactory way of getting a relationship between energy and time. And we can do it more precisely by looking at Fourier transformations of this time series here. So if we have a system which is decaying in time-- no, I guess I've drawn it already. We have an intensity in time that has a proportionality exponential minus 2t upon tau. And we can Fourier transform this time-varying signal. And we will get out the spectrum that corresponds to that. And that spectrum, does anyone know what the Fourier transform of an exponential decay is? It's got a name. It looks like whatever the intensity is of the unshifted line here divided by 1 plus nu minus nu 0 times 2 pi tau. Where's the squared go? Here. This has a shape which is symmetric in frequency, because we're just squaring the difference between the frequencies here. And it looks like this. Does anyone know the name for this function? This is Lorentzian. Sorry? Yeah, this is Lorentzian. What this is saying is that there is some width to the distribution of frequencies, and the width in the frequency space, which is obviously also the energy space-- the full width at half maximum is equal to 1 over pi tau, which you can get by just asking what is nu when i is half of its peak value here, so making this denominator equal to 2. And so what we find is that this full width at half maximum, which is proportional to 1 upon tau, which is proportional to nu cubed from our argument over here, means that, as we said before, if we have higher energy photons, they're going to have more and more natural broadening here. So this we can't get rid of. This is just due to the uncertainty in when the electron is going to decay from the excited state to the ground state. And that naturally just leads to some broadening here. And so you're always going to have some Lorentzian broadening to your system. And then you'll also have other broadening mechanisms on top of it. But this means that when you're trying to fit, for example, trying to fit these lines with something like a Gaussian function, because you look at it, and you go, ah, that looks like a Gaussian-- well, Lorentzians are slightly different from Gaussians, and your fitting procedure won't work properly. So you need to be careful when you're fitting spectral lines not to just assume that the spread is Gaussian distributed. There are some other functions, like Lorentzian, which are more fundamental. Questions on that first natural broadening mechanism? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Do you want me to give you intuition for quantum mechanics?
[LAUGHTER] I don't think I've got a better answer to you than this mathematical derivation. I think the energy levels are not as exact. Because of this uncertainty principle, we can never be quite sure exactly what the momentum, and therefore the energy of the electron wave function is. There's some uncertainty in exactly what the energy of the function is. And that translates to this uncertainty in the frequency. AUDIENCE: [INAUDIBLE] JACK HARE: I feel like that's above my pay grade. AUDIENCE: How much [INAUDIBLE]? JACK HARE: Thank you. But if I can't answer this, then that's very embarrassing. AUDIENCE: [INAUDIBLE] the probability of [INAUDIBLE]? What's the state we're talking about here? Is that the end state? JACK HARE: No, the up-- sorry. So let's say this is 1 here at time t equals 0. We have whatever process has just promoted the electron up to here. This took place at time t equals 0. And then the down process takes place at time t greater than 0. AUDIENCE: Or up [INAUDIBLE] JACK HARE: Sorry. Yeah, so the probability that is still in that upper state. Yes, exactly. Yeah, I mean, in this-- if you're like, but it could decay to a different state, or it could go up again, then that wouldn't affect the shape of the line, because we're looking at a very specific line, which is the line with the energy h nu ul. And if you want to-- u and l-- you want to know the broadening for some other state, then you have to do this calculation with whatever other state you're going for. Any questions online or other questions in the room? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: OK. AUDIENCE: [INAUDIBLE]. That doesn't get [INAUDIBLE], does it? JACK HARE: Right. Yes, I agree. So we are dealing with an ensemble of different [INAUDIBLE]. Yes, thank you. That's actually a very good point. So the point that's being made here is that, in fact, in reality, any given photon decay, the intensity looks like this. It's a delta function at a particular time. That's when the photon was emitted. This treatment here is talking about an ensemble of particles that we've prepared into the upper state. And then we're watching the intensity as one after the other, different ones decay. More of them will decay early in time. A few of them will decay later in time. And so when we take the Fourier transform of that, we get this. You're right. Each individual photon has a specific energy. And this is made up of lots of different photons coming from particles. The ones here decay at t around 0. And the ones out here decayed at t a great deal larger than 0. Thank you. That's a better interpretation. OK, Doppler broadening. Doppler broadening, we have some intuitive understanding of it. It's to do with whether the particle is moving away from us or towards us. And we get a shift in the frequency of the light that we receive. To keep things simple here, I'm going to just stick in a non-relativistic limit. And then this limit, the shift in frequency of our light, which is defined as whatever frequency we see minus the initial frequency, the sort of, if I was in the same frame as the emitter frequency. And this is just a sign convention here. So I'm choosing it so that delta nu is greater than 1 when the light has been upshifted in frequency. And for a non-relativistic system, this is just the velocity over the speed of light times whatever the initial frequency of the light was. So I can rewrite this in terms of velocity and I get nu over nu 0 minus 1 times the speed of light. 
So the intensity of light which I see at frequency nu is going to be proportional to the number of particles traveling at speed v which cause a Doppler shift which gives a frequency nu here. So this is the distribution of particles, F of, nu over nu 0 minus 1, times c-- not enough brackets. So if we're dealing with something like a Maxwellian, where F of v is proportional to the exponential of minus 1/2 mv squared upon the temperature like this, then we will end up with an intensity which is proportional to the exponential of, nu over nu 0 minus 1, squared, times mc squared over 2T. Have I missed the minus sign? It seems likely. So this is the broadening for a Maxwellian. Now we're looking in frequency space here. So if we initially had some line at nu 0, and I'm plotting i of nu against nu with my spectrometer, what can we say about this function, any facts about it? Is it symmetric? Close to symmetric, good answer. So for delta nu over nu 0 much less than 1, so for small shifts, this is a very symmetric function. And it's peaked at nu 0 here. And it does indeed look like a Gaussian. So your normal approach when you see some function that is symmetric and peaked is to fit it with a Gaussian. And that would be a reasonable approach here. It just turns out in the wings, where probably you can't measure it because signal to noise is so bad, this may be slightly asymmetric. This is symmetric if you do the Taylor expansion around 0. And what we find here is that the full width at half maximum, total delta nu at 1/2, is equal to 2 nu 0 times, in brackets, 2T natural logarithm 2 over mc squared, all to the 1/2. The punchline of this is that the broadening is proportional to T to the 1/2 here, which goes like the thermal velocity, as you might expect. So this means if you measure the width of a line, then you may be able to infer the temperature of it. Now, you noticed here I've used delta nu to the half. I like using the full width at half maximum for these, because this is a relatively unambiguous thing to define. If you start writing down sigma, then you're intrinsically thinking about Gaussians, really. And that means that when you use this full width at half maximum here-- again, this I could also write as delta nu to the 1/2-- it makes it harder to compare. This picks up loads of extra factors of log 2 and stuff like that. So I like working with full width at half maximum. It's easy to find here. So this result is not really very surprising. Effectively, the line traces out the distribution function here. There are a few particles which are moving very fast away from you, redshifted, a few particles which are moving very fast towards you, blueshifted. But most of the particles are not moving towards or away from you. So we have most of our emission at the initial line frequency here. Any questions on Doppler broadening? AUDIENCE: Professor. JACK HARE: Yeah. AUDIENCE: So I'm assuming all these happen in laboratory plasma settings. But which one plays more of a role? Do they have equal amounts or is it one or the other that has more of an effect? JACK HARE: Yeah. So we'll go through all of these and maybe discuss that in a bit more detail. But all of these have an effect. But some of them may be negligible. And so it will depend a great deal on your plasma, and you'll want to calculate it. So for example, we did an experiment recently where all of the broadening we saw was due to natural broadening and the response of our spectrometer, and not to do with Doppler broadening.
Though we thought it was going to be Doppler broadening originally. But you can calculate it. If you think, I know what the temperature roughly is, I can calculate the amount of Doppler broadening. And then you can compare that to the natural broadening. And remember that you can calculate the natural broadening if you have access to a database with these spontaneous emission coefficients. You can quickly estimate what the lifetime of the state is. And so then you can estimate the natural broadening. And you can compare that to the Doppler broadening you expect. And depending on your plasma-- remember, the natural broadening doesn't care about the temperature. So for a colder plasma, the natural broadening will probably dominate. And for a hotter plasma, the Doppler broadening might dominate. Or there might be other mechanisms that dominate as well. I briefly mentioned it then, the response function of your spectrometer is also important, because your spectrometer has some finite size detector to it, which means that even if you give it a nice delta function here, it'll be broadened in some way. And if you're lucky, this will be Gaussian. And it'll be easy to understand. But of course, your spectrometer doesn't have to have a nice response function. It could have a response function like this due to some weird reflections inside it. So you also need to understand the response function of your spectrometer before you try and use, for example, the broadness of a line to estimate the temperature. It might just not be possible. AUDIENCE: I see. That makes sense. Thank you. JACK HARE: Yeah. I'll talk a bit more about that with the other three mechanisms as well. Other questions? AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. We've taken a one-dimensional slice of the distribution function here. And because we're doing non-relativistic, we don't really care about the motion perpendicular to us, which in the relativistic case does happen. So yeah, this is sufficient for a three-dimensional system, because the three components of the velocity are-- no, this is OK. We can use this one-dimensional approximation as long as the distribution function is isotropic. If the distribution is anisotropic, then you would see different broadening in different directions. So the next mechanism we're going to talk about is called Stark broadening. Now, the only way to do this properly is with quantum. And you can go have a look at Professor Hutchinson's book. And you can see that there's a very long derivation for this. And even in his derivation, he admits that he doesn't cover all of the possible cases. So Stark broadening in general is extremely complicated. So what I'm going to give you is a very hand-wavy motivation for why Stark broadening might be happening. And then you can either accept that, or if you need to go and do spectroscopy Stark broadening, you can go do the calculations properly. So effectively, the idea of this is we have some ion, or some atom, I guess, we've been calling them. And if this atom is alone in the entire universe and it has few enough electrons, we can just about calculate the energy levels. And then we can do all of our nice quantum mechanics that we talked about. These have wave functions that possibly we can calculate. And so we can work out what the spontaneous emission coefficient is, because it goes lower upper upper, all that sort of stuff. But this is great. But as soon as we put anything else in the universe, this treatment will stop working very well. 
And the reason is, say, we've got some sort of electron here flying around. At some snapshot in time, this electron has a radial electric field like that. And that radial electric field will distort the potential around the atom. And it, for example, will distort these energy levels. And maybe we don't know exactly how they're being distorted, but maybe some of them go up. Some of them go down. It changes all of these energy levels. And that means now when we have transitions, their energy, h u prime l prime-- this is the lower state. This is the upper state-- does not have to be the same as the original energy level hul. So the photons will come out differently. So effectively, what Stark broadening is is the effect of the electric fields of other particles on the energy levels, shifting the energy levels, giving us different energies. And we can imagine as these electrons move around, they're going to create some sort net effect, because there's not just one. There's more of them. And all of these are going to shift the energy levels in different ways. And we might imagine there's some smearing out at this point, because some atoms are going to see some electric field, and some atoms are going to see different electric field configuration, which may move the lines apart or closer together and give us that broadening. So that's the first level of hand-wavy approximation. The next level is to try and put a little bit of math to it. And we say that the change in the frequency is going to be proportional to the electric field felt by the atom. Now, I've used this symbol to mean energy a lot. In my notes, I started writing the electric field like this, simply to make it clear that there's a difference here. So I'll do that. This big curly E is the E field. I want to know-- I kind of just stated this. You might be able to motivate it by thinking about the energy of an electron in an electric field and how that energy shifts. But it turns out that, quantum mechanically, sometimes you get delta nu goes as the electric field squared. This is called the quadratic Stark effect for obvious reasons. This is the linear Stark effect, which is the one I'm going to use as my toy model today. And the only way to really find out whether you have quadratic or linear is to go do the quantum theory properly for this. My apologies. So this electric field is going to be related through the distance between-- say there's one electron that's closest that's doing the most distortion. And so that is inducing electric field by Coulomb's law that goes as 1 over r squared. There's some distance r between these. And so we'll get some shift that's related to distance to the closest electron. Well, how close are these electrons? Well, we have a density n like this. And so that density is going to give us an average spacing of 1 upon r cubed. So we can write that the other way around and say that r goes as density to the minus 1/3 like this. But this is asking for some density n, what is the average spacing between particles? So if I just plot them around in this volume, I can see as n goes up, r goes down. If I put more dots here, I get a smaller r. This is a very crude estimate. So that means that the electric field that the atoms are feeling is going to scale as n to the 2/3 like this. And so therefore, that's also our scaling with delta nu. Obviously, what I've not got is the coefficient out the front here. You need that coefficient to actually link the frequency shift back to the density. 
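Just to attach numbers to this scaling, here is a sketch of the same hand-waving estimate, with the quantum-mechanical coefficient deliberately left out, exactly as in the argument above.

```python
import numpy as np

# Minimal order-of-magnitude sketch: the typical perturbing microfield
# scales as n_e^(2/3). The overall constant linking this field to a line
# width is omitted; it has to come from a proper quantum calculation.
e = 1.602e-19
eps0 = 8.854e-12

def typical_microfield(n_e):
    """Coulomb field of the nearest perturber at spacing r ~ n_e^(-1/3), in V/m."""
    r = n_e ** (-1.0 / 3.0)
    return e / (4 * np.pi * eps0 * r**2)

for n_e in [1e24, 1e26, 1e28]:      # m^-3, example densities
    print(f"n_e = {n_e:.0e} m^-3  ->  E ~ {typical_microfield(n_e):.2e} V/m")
```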
But in general, the idea is, as we had before, that we have some line at nu 0 and that line is broadened, and the width of that broadening at full width at half maximum is proportional to the density to the 2/3. So potentially, by measuring this broadening, as long as you've got this coefficient from doing all the quantum mechanics right, you can infer the density here. Doppler broadening got us the temperature. Stark broadening might get us the density. Not very satisfactory, I appreciate it. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. These are the electrons which are inside your Debye cloud. So they are the Debye shielding electrons, but some of those electrons may still get very, very close to the ion. AUDIENCE: [INAUDIBLE] JACK HARE: I think that that's the electric field outside of the Debye cloud, but I might be wrong. I think that's the electric field which drops off far away from the atom, and then goes to 0 very, very quickly. But I think if you imagine you have your ion and you have this cloud of electrons which are shielding the charge, there's still a possibility of one electron going very close. And that sort of averaged closest approach is what we're estimating here. So you still will get some perturbation to the energy levels here and some broadening from it. Again, I think my picture with individual electrons here is not the right way to do this. To do it properly, you have to do it with a wave function. But people do this. People use Stark broadening to measure density. And so what you'd want in order for this broadening mechanism to dominate is to have a dense plasma at relatively low temperature so that you don't get the Doppler broadening, which is proportional to temperature to the half. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: OK. Maybe that's a better answer. [LAUGHTER] Any other questions on Stark broadening? Sometimes called collisional broadening as well. [INAUDIBLE] Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: I never worked out what the motional Stark effect is. It's like it's the electric field that the particle feels in its own frame. So it's actually related to the magnetic field. So you use the motional Stark effect to make local measurements of the magnetic field. But they call it Stark because it's effectively the electric field. But I don't think it's Stark in terms of density. I think it's to do with that frame change-- you transform into the co-moving frame with the particle, and it feels an electric field instead of a magnetic field. Never quite got it. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. So the question is, if you have both Doppler and Stark, are the line shapes different in such a way that you might be able to pluck out both the density and temperature? I think, in general, because all of the line shapes that we get are Gaussian-like functions-- they look pretty Gaussian. They are symmetric, and their peak's in the middle. It's quite hard to deconvolve both of those. For the natural broadening, though, we have a Lorentzian, and you end up-- if you have a Lorentzian, so if you have the natural broadening and the thermal broadening, the Doppler broadening, you end up with something called a Voigt profile, which is a well known function. And then you can fit a Voigt profile to your line. And then you can get out the thermal broadening component, which is what you actually want to do. Presumably, there is some function that we can come up with which is a convolution of whatever the Stark broadening is and whatever the thermal broadening is.
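For the natural-plus-Doppler combination mentioned just above, that convolution is the Voigt profile, and fitting it is routine. A minimal sketch with synthetic data and assumed line parameters:

```python
import numpy as np
from scipy.special import voigt_profile
from scipy.optimize import curve_fit

# Minimal sketch: fit a Voigt profile (Gaussian Doppler core convolved
# with a Lorentzian natural-broadening wing) to a measured line.
# The "data" here are synthetic; in practice x is your frequency axis.
def voigt_line(x, amp, x0, sigma, gamma):
    """Voigt profile centred at x0; sigma is the Gaussian width, gamma the Lorentzian HWHM."""
    return amp * voigt_profile(x - x0, sigma, gamma)

rng = np.random.default_rng(0)
x = np.linspace(-5, 5, 400)                        # offset from line centre (arb. units)
truth = voigt_line(x, 1.0, 0.1, 0.8, 0.3)
data = truth + 0.02 * rng.standard_normal(x.size)  # add some noise

popt, pcov = curve_fit(voigt_line, x, data, p0=[1.0, 0.0, 1.0, 0.1])
amp, x0, sigma, gamma = popt
# The Gaussian component carries the temperature information
print(f"Gaussian FWHM = {2.355 * sigma:.3f}, Lorentzian FWHM = {2 * gamma:.3f}")
```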
And maybe you could fit that. But as always, you're really at the mercy of noise. And so if you're not measuring this line very precisely, it's going to be really hard to determine the difference. By the way, if you look in Hutchinson's book, there's some really weird Stark broadening features, some of which actually have a little dip in the middle like that. So Stark broadening, depending on the quantum mechanics of the quadratic and the linear can give you really funky line shapes as well. So that could be a good way of trying to deconvolve, because then this function here looks quite different from a peaked function. Any other questions? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. I think you're right. I think any particle coming near will distort the electric field. And it could be another ion as well. I was just using electrons [INAUDIBLE]. The next mechanism is Zeeman splitting. I'm going to explain this. And you're going to say, this isn't broadening. But it is. I promise you. So Zeeman splitting is what happens in the presence of a magnetic field. So this is effectively the B field splits energy levels. So if we think of a system that has, for example, a shell with p angular momentum and s angular momentum-- if you haven't seen this in quantum. Don't worry about it. But these are letters which refer to specific amounts of orbital angular momentum. Then we can have a transition from the p level down to the s level. And this is allowed because we have a change in angular momentum of plus or minus 1, which is the selection rule that's enforced by our dipole operator here that converts from the s energy level or the p energy level down here. When we have a magnetic field, we see that this line actually splits. So the s does not split. But the p, for example, will split into three energy levels here. And these energy levels will now have different angular momentums and they will also-- we're going to give them a label of sigma minus, sigma plus, and pi. The reason for these are just obscure spectroscopy notation. But this is what they've been called. The energy is different because the magnetic field is coupled to the magnetic dipole of the electrons. So we have a change in energy-- do I want to draw that as delta e? No-- of delta e. And this change in energy, delta e, is equal to minus mu, the magnetic moment of the electron, dotted with e like this. And this mu is a combination of the orbital angular momentum of the electron and the spin of the electron. If you've ever seen this in quantum mechanics, I apologize for butchering it like this. I'm just trying very quickly to get to the result. But effectively, you can have electrons which are spin up and electrons which are spin down. And so some of them are going to have their energy shifted up and some of them are going to have their energy shifted down. And that means that we're going to be looking out for transitions. Now, as opposed to just having a single transition down, we're going to have three different transitions, all of which are allowed, and all of which have slightly different energies here. So if I now look at my initial frequency on an intensity plot here, as opposed to initially having a single line, I may now have three lines. I've labeled them with their labels. And see the sigma minus has lower energy. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah. AUDIENCE: Is there anything like [INAUDIBLE]? JACK HARE: No. So this is this s. AUDIENCE: Ah, no, no. [INAUDIBLE] JACK HARE: This doesn't, because it's a singlet state. 
But that's quantum mechanics. And if you haven't seen it, I'm not going to explain it to you now. You can, in general, have a splitting of the other state, but then often transitions in those states is forbidden by the selection rules. So you won't see those lines. And also, this can split into two, or five, or all sorts of other numbers like that. I'm just doing a case where it split into three. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: It's to do with a number of different ways you can add up the angular momentum vector and the s vector. So it is finite. There's only a finite number of them. But oh god, you've really taken me back now here. But we have some vector B here. It depends on how we can add up i and s. And because s is the spin of the electron, this can only take finite values. And so this is 1l plus s projected onto b. But we can also have s like this. And so this would be another l plus s projected onto b. So it does take discrete energy levels, and it depends on the angular momentum. And this s is actually the total spin of the electrons in this energy level. So these electrons, it doesn't have to just be one. There can be three here. And then there's lots of different ways you can get s out of that. We're really not going to go into quantum mechanics in this. You should look it up if you're interested. I realize as I try to explain it we're getting deeper and deeper. The fact is magnetic fields, I think the heuristic motivation here is that the magnetic field will shift these energy levels, and it will create new energy levels, which are-- either have their spin aligned with the magnetic field or against the magnetic field. And those energy levels will have different energies. It's like a little bar magnet rotating to align itself with the magnet. And because those energy levels have different energies, the lines will have different energies. The light that we get has different energies. And so if you're in a state where your broadening is very narrow-- so there's not very much natural broadening, or not very much thermal broadening-- you will get out these maybe three very distinct lines. And then the cool thing that you can do there is you can measure delta nu. And you can get out the magnetic field magnitude. So that's pretty neat. In many plasmas, we don't know the magnetic fields. And so Zeeman splitting is a good technique. In something like a tokamak, we tend to know the magnetic field very well. So Zeeman splitting is not that useful a technique. The trouble comes that actually a lot of the time, these lines are relatively small splitting. So the magnetic field may not be very large. And so these lines may be very close together, or they may have large thermal broadening. So if I draw this now with each of these lines having significant thermal broadening, we can see that we won't see this well resolved triplet anymore. We will instead see some big blob of lines. And so this is effectively another broadening mechanism, because we split these lines, but we don't resolve them. So the total overall shape here, delta nu 1/2 is greater than delta nu 1/2 of the Doppler. So if you interpret this line as just being Doppler broadened, you'll infer like some ridiculously high temperature. But in fact, that only looks so high because you've got some of this Doppler-- this Zeeman splitting broadening built in. Questions on this? I have another point, but I just want to pause here. Yeah. [INAUDIBLE] AUDIENCE: [INAUDIBLE] JACK HARE: Oh, sorry. Yes. 
That's from-- so you know mu because it's actually very constrained by quantum mechanics. And so if you're seeing this line, you know what the energy is. And so you know what transition that corresponds to. So you know the electronic configuration of the upper state. And then you can go do your quantum mechanics and be like, there's three levels, or there's five levels, and the spacing of those levels is this or that. So that's OK. AUDIENCE: [INAUDIBLE] JACK HARE: Yes. AUDIENCE: [INAUDIBLE] JACK HARE: Oh, yeah. Yeah, yeah. AUDIENCE: [INAUDIBLE] the bottom [INAUDIBLE]. JACK HARE: Yes. That will smear things out. As always with spectroscopy, if you're line integrating through a region that has different parameters, then there will be some smearing there. AUDIENCE: [INAUDIBLE] JACK HARE: Oh, we've got time. It's a rather famous work by the group at the Weizmann Institute in Israel, where what they do is they do Zeeman splitting spectroscopy, but they do Zeeman splitting on a range of different lines. And what they have in their Z-pinch is a temperature profile. Maybe something that looks like this, the temperature of the Z-pinch is hottest in the center. And so they make the Z-pinch out of oxygen. And the reason they do that is that as you look at different shells of the Z-pinch, different radial bins like this, for example, in this bin, it's dominated by oxygen 2. And then this bin is dominated by oxygen 3. Maybe the center is dominated by oxygen 4. And so these different ions of oxygen-- so this is oxygen that's once ionized, twice ionized, three times ionized-- has different energy levels available to it. So it has different excited states available to it. And so it has different lines available to it. And so here, for example, there's a well-defined line at this frequency. For this one, there's a well-defined line here. And for four, there's a well-defined line here going up in energy. And so they know when they see this line that it must correspond to the outside of the plasma. And they know when they see this higher energy line, it must correspond to the inside of the plasma. And so when they look at the Zeeman splitting of these lines, they can effectively localize where the magnetic field is in space, because they're able to localize where those lines come from in space. This looks a little bit like with the electron cyclotron emission all over again. So you need to have some idea of what the temperature is inside here. And this, effectively, then looks like an [INAUDIBLE] inversion problem. So if we look at the Z-pinch down from above, there are these different rings with oxygen 4, oxygen 3, and oxygen 2. And you can do spectroscopy on a series of parallel lines of sight like that, and do an [INAUDIBLE] inversion on it, and work out what regions emit oxygen four lines, three lines, two lines, and so on. So a rather nice technique that you can use. But in general, you're right. If you're looking at some astrophysical plasma where you can't do an [INAUDIBLE] inversion easily, then this may not be the best technique. A couple more things on Zeeman splitting-- interestingly, these transitions here from the sigma plus and minus and the pi levels have different polarizations. And that can be used to actually do things like measuring the field direction. So trying to find somewhere where I've got enough space to draw this. 
If we've got some magnetic field B like this and a plasma, which is emitting Zeeman split lines, then if we observe at 90 degrees, then the light has two polarizations, depending on whether it comes from the sigma plus or minus levels, or the pi levels. And so in this polarization here, we get the sigma plus or minus. And in this polarization, we get the pi. And so by using a spectrometer with a polarizer in front of it and measuring the ratio of these two lines at different polarizations, we can back out the angle of the magnetic field here. And this can actually be very useful in a tokamak, where we know what the size of B is. So we don't need the Zeeman splitting. But we do want to know what the twist of the field lines is because that tells us about how big the poloidal field, particularly a useful diagnostic on devices with large poloidal fields, like reversed field pinches. It turns out if you look along the magnetic field, by the way, then you just see the sigma plus and minus lines. But you don't see the pi line. And I did at one point understand why that was the case. And it probably comes into quantum here as well. I don't think that was a chat noise from Zoom. I think it was something else. All right, any questions? Yes. No, not for quantum mechanics. What are you talking about? Are you talking about this gyration? AUDIENCE: Right. [INAUDIBLE] JACK HARE: Is it? AUDIENCE: [INAUDIBLE] JACK HARE: The electrons are not gyrating. The ion is gyrating. It has electrons which are bound to it, which have wave functions moving around it. All the electrons are gyrating with the ions. I don't think that's going to be a big perturbation to their energy. So when we talk about hyperfine, we talk about coupling with the nucleus. And we're-- I am not talking about the hyperfine coupling here, because that's very hard to measure. So here, this is the spin angular momentum. And this is the orbital. I don't know whether someone has done the quantum mechanics on how the gyrating particle affects the energy levels. I imagine it would be small. Can we come up with an argument that shows that it is small? Or is it the case that because-- does it not change the energy levels? Because if you move into the frame of the particle, the ion-- all the energy levels are going to be shifted by the same amount. I don't know-- interesting question. Any other questions on-- sorry. I don't know the answer to that. As I said, it's been like a decade since I had to really do this math. So any other questions? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: Yeah, I guess it depends on the exposure time of your spectrometer. You're capturing this spectrum on a camera. And that camera has a shutter, which is open for some time. So if the shutter time is shorter than the time over which parameters change, you effectively got a snapshot of the plasma. So that doesn't matter. But if the plasma is changing a lot and you just have a spectrum that is collecting for all time, then it will be a problem. So for example, if you have a spectrometer that's using a bit of film without a shutter, and you're just like give me all the light over the entire course of the plasma experiment, then the time changing properties are important. And you would be better off using a camera that had a shutter. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: So I don't know where your statement comes from. You might be right, but I can't verify that. [LAUGHTER] Is it really smaller? I mean, it feels [INAUDIBLE]. 
So the field strength here, we're talking about relatively large magnetic fields. You see Zeeman splitting. And the nice thing about Zeeman splitting is it does just keep increasing with magnetic fields. So you can get very large splittings. I think it is a pretty large effect in general. The final mechanism we're going to talk about is blackbody absorption. So we talked a little bit about this before. We had this idea that if we are an optically thick plasma, where tau is much greater than 1, then we end up having an emission profile that looks like i of nu is equal to nu squared t upon c squared. This is the blackbody spectrum. Now, it turns out from thermodynamics, you can also show that for whatever plasma you have-- optically thin or optically thick-- the emission i of nu is strictly going to be less than or equal to this blackbody spectrum. You can never exceed that. And so that means if you've got a system with a very bright line, like this line here, and it's got some width to it because of all the other broadening mechanisms that we've discussed, then the intensity of this line is going to be capped by the blackbody emission spectrum. And so for example, if the blackbody emission, we're dealing with a small range of nu here. So this is roughly flat, even though it's got a nu squared dependence on it. Then our line would end up actually having a flat top to it. And if the temperature was even lower, then our line would have a broader flat top like this. So this T1, T2, but T2 is greater than T1 here. And again, because I'm-- although there's a nu square dependence here, I'm only looking at a very small range of nu. So the nu is roughly constant over this. And so we get these little flat tops for the lines. And this looks like another form of broadening, because, previously, we were defining delta nu to the 1/2, which is the full width at half maximum, which is where you take the peak value and go halfway down and define it like this. But now when you take-- if you just naively apply this formula, you're going to end up with a broader looking line. You may observe some broadening, but it's actually just using this blackbody effect. So this is only really a problem for very, very bright lines. This is usually a very large value for your plasma. But those lines will have a cut off to them where the intensity can't go any higher than the blackbody level. Any questions on that? Yeah. So I was saying, for an optically thick plasma, you get the blackbody spectrum. But from thermodynamic arguments, you also find that the spectrum from any plasma for a given frequency cannot exceed that blackbody value. If you did that, you could work out how to break the second law or something like that. So not allowed. Other questions? So maybe a couple of applications here for the spectroscopy, starting with this Doppler broadening. There's the obvious use of Doppler broadening to measure the thermal velocity. Less obviously is when the Doppler broadening actually stops you making a measurement of the overall Doppler shift. So if we have a plasma which has some bulk motion, all the particles have some bulk velocity towards you or away from you. Then you're going to have a shift of wavelength, delta lambda over lambda, which is approximately the same as delta omega over omega. And that delta omega is going to be whatever the k of your wave is dotted with v here. This is just the classical non-relativistic Doppler shift that we talked about back here. 
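As an aside on the blackbody cap described just above, before the Doppler discussion continues: here is a minimal numerical sketch (all values are made up for illustration; it assumes a Gaussian, Doppler-like line shape and treats the blackbody level as flat over the narrow line, as in the lecture). It shows how clipping a bright line at the blackbody intensity inflates the apparent full width at half maximum.

import numpy as np

# Hypothetical line: Gaussian (Doppler-like) profile centred on nu0
nu0 = 1.0e15          # line centre frequency [Hz] (made up)
dnu_doppler = 1.0e12  # Doppler width parameter [Hz] (made up)
nu = np.linspace(nu0 - 5e12, nu0 + 5e12, 2001)

I_thin = 10.0 * np.exp(-((nu - nu0) / dnu_doppler) ** 2)  # optically thin shape (arb. units)
I_blackbody = 4.0   # blackbody level, roughly flat over this narrow range (arb. units)

# The observed intensity can never exceed the blackbody level
I_obs = np.minimum(I_thin, I_blackbody)

def fwhm(x, y):
    # full width at half maximum of a single-peaked profile
    half = y.max() / 2.0
    above = x[y >= half]
    return above.max() - above.min()

print("FWHM without the cap:", fwhm(nu, I_thin))  # pure Doppler width
print("FWHM with the cap:   ", fwhm(nu, I_obs))   # broader, flat-topped line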
So this Doppler shift has a size delta omega of k dot v, and the broadening, as we've already said, has a delta omega that goes as the square root of the ion temperature. So if you have a system in which your Doppler broadening is very large, then you will not be able to measure your Doppler shift. So for example, if your Doppler shift only moves the line over by a tiny bit, it's going to be very hard to measure that on top of a very broad profile. On the other hand, if you have relatively narrow Doppler broadening, then you should be able to measure that Doppler shift very easily. So this kind of puts a limit, but the interesting thing is we can exploit the fact that this square root of temperature is really square root of temperature over the ion mass, because it's related to the ion thermal velocity here, which means that we can actually choose to look at ions with very heavy masses. So these would be impurities in a tokamak. So, for example, your hydrogen has got a very low mass. So it has very large thermal Doppler broadening. And so we can't measure the Doppler shift very easily. But if you've injected some tungsten into your plasma from the walls, for example, then you're going to have very, very small Doppler broadening. And you're going to be able to measure that Doppler shift very accurately. So this is basically using an impurity as a tracer of the flow. And if you were at APS this year, and you made it to the Tuesday morning plenary, where John Rice was talking about this, people have built their whole careers out of this idea that you can use the impurity ions, which are easy to measure Doppler shift for, as a proxy for the motion of the whole plasma. And it turns out it is more complicated than that, but the basic idea, at least, is that it's much easier to measure Doppler shifts for heavy ions than for light ions. Questions on that? Yeah. AUDIENCE: [INAUDIBLE] Doppler broadening [INAUDIBLE]. JACK HARE: So sorry. Let me be very clear then. So Doppler broadening is when you-- the Doppler shift is to do with the entire plasma movement. So imagine a tokamak. This is the toroidal velocity of the plasma moving round and round in a circle. The broadening is due to the fact that, within that moving plasma, some particles are going slightly faster and some slightly slower. So it's sort of like they're all going like this, and some of them are also going like this as well. AUDIENCE: Broadening [INAUDIBLE]. JACK HARE: Yeah, exactly. Yes, yes, yes. So in order to get the broadening, you need to know your distribution function, like a Maxwellian like this. In order to get the shift, you just need to know the velocity of the entire fluid. Yeah, that's a good way of looking at it. Any other questions? Yeah. AUDIENCE: [INAUDIBLE] broadening [INAUDIBLE]? JACK HARE: Yeah. In general, with all of those diagnostics, the answer is, in theory, yes, but the noise floor of your diagnostic will probably limit you, because there's very few of those fast particles, and so they're not making very much light. And so any noise is going to make things hard. So this is true with any diagnostic that promises to map out the distribution function. The problem is that it's usually proportional to the number of particles at that velocity, and there's not very many of those particles because our distributions are close to Maxwellian. I'll talk about another one of these diagnostics that you might be able to use as well in a moment. I want to finish up with a couple of diagnostics which are spectroscopy adjacent.
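A quick numerical sketch of the heavy-impurity point above (the temperature and flow speed are hypothetical, and this is a non-relativistic estimate only). It compares the fractional Doppler shift from a given bulk flow to the fractional thermal Doppler width for hydrogen and for a heavy impurity like tungsten at the same temperature.

import numpy as np

c = 3.0e8        # speed of light [m/s]
e = 1.602e-19    # joules per eV
m_p = 1.67e-27   # proton mass [kg]

T_eV = 1000.0    # ion temperature, made-up value [eV]
v_flow = 2.0e4   # bulk flow speed along the line of sight, made-up value [m/s]

for name, A in [("hydrogen", 1), ("tungsten", 184)]:
    v_thermal = np.sqrt(T_eV * e / (A * m_p))  # thermal speed ~ sqrt(T / m_i)
    shift = v_flow / c                         # fractional Doppler shift ~ v/c
    width = v_thermal / c                      # fractional thermal Doppler width
    print(f"{name:9s}  shift/width = {shift / width:.3f}")

With these made-up numbers the shift is only a few percent of the thermal width for hydrogen, but comparable to the width for tungsten, which is why the heavy impurity line is the easier one to use as a flow tracer.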
And these are often called active spectroscopy diagnostics. So all the other diagnostics we've been talking about, we just look at the light coming from the plasma. With these active diagnostics, we actually do something to the plasma, and then look at the light coming out of it. So it gives us a little bit more control. Active spectroscopy-- so the main one I want to talk about here is called laser-induced fluorescence-- LIF. The idea of laser-induced fluorescence is that we have a system with maybe three energy levels like this, lower, upper, and intermediate. We actively pump the atoms in our plasma up to this level u. And then we look for light emitted when the system drops from level u back down to some intermediate level i like this. So we're stimulating the absorption. And then we're looking for the fluorescence coming out. And the reason this is a powerful technique is if you have a plasma like this and you focus your laser beam through it, you can now have your viewing cord perpendicular to the laser and collect the lights like this on your spectrometer. And you have chosen this transition to be very unlikely to happen. So you're not going to randomly get that many atoms in the upper state. So you're not going to spontaneously get that many photons with this wavelength being emitted. So that means when you see any light there, you know it came from the region of the plasma, which is the intersection between where you are pumping the plasma with your laser, and where you are collecting with your spectrometer. And you remember previously the trouble with spectroscopy is you collected all the way along the line of sight. And you didn't have any localization. But this technique gives you that localization. That's very powerful. You can also-- because of this, if you have your spectrometer absolutely calibrated and you know exactly how many photons you've counted, and how many photons you've put in with the laser, you may be able to get out a measurement of the density here, of the atom density. So this can be a way of directly measuring the number of atoms in this volume. And indeed, people use this even for neutral gas, or measuring neutral gas density. An extremely clever thing you can do on top of this is to scan this frequency here. So the laser frequency, you can get a laser, which you can gently tune up and down by a very small amount. And the neat thing about that is that you can then choose which atoms are absorbing the light inside here, whether the atoms are moving towards you or away from you due to their blue shift here. So if we-- this is a little bit exciting to draw. So we'll see how it goes. If I scan my laser over a range of frequencies around omega 0, which will correspond to this transition in a stationary atom, then for these frequencies down here, the red-- the lower frequencies here, they will be resonant with the redshifted or with the-- they will be resonant with the particles which are moving away from them. They'll be resonant with these particles here. If instead I'm using a laser light, which is upshifted, then it will be resonant with the particles which are moving towards me here. So as I scan my laser, I'm exciting particles, some of which are moving towards me. Some of which are moving away from me. And then I'm going to get out a spectrum at the upper to intermediate transition. Here is the upper, the lower transition here. 
And the amount of light I get out is going to tell me how many particles there were that were resonant with this frequency here, and how many particles there were that were resonant with that frequency. And so I can actually map out the entire distribution function using this, because I'm very carefully tuning the light. And so I know when I meet the resonance condition how fast these particles are going with respect to me. You need to have a steady plasma to do this, because what you're actually doing is ramping the laser frequency in time from some initial value. You will ramp it around that, and then you'll probably sawtooth it like this. So your plasma has to be nice and steady during this time that you're ramping it. So this doesn't work on very short lived plasmas, but it does work very nicely on like low temperature plasmas that people use for various chemical processes. Any questions on that? Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: So it can perform arbitrarily well if your plasma is very, very stable, because you can just do this many times, or you can have a very long integration time. So if you run it for twice as long, you're going to get twice as many photons, and then the error goes as the square root of N kind of thing. So you can make this very good. The trouble is if your plasma is only short lived, then you can't do this many, many times. But if you've got like a glow discharge that just sits in the chamber for 24 hours, then you can just do this for 24 hours, and get very good statistics. And you really can measure the tail very, very carefully. Any other questions? There's a variant on this, which is called two-photon absorption LIF, Laser-Induced Fluorescence. And so this has the very compounded acronym TALIF. TALIF is used because in many of the interesting transitions we've talked about-- and you'll remember I talked about the Balmer series and the Lyman series earlier. And I said for the Lyman series, even the lowest one of these transitions here requires an ultraviolet photon, maybe something like the Lyman series at 121 nanometers or shorter wavelength. So if you're trying to do this LIF technique with hydrogen, which you might very well want to do, then you're not going to be able to do it, because these photons don't propagate through air. And so you can't just set up your laser outside the vacuum chamber. The air will absorb it very strongly. So tau is much, much greater than 1 for air. That's the big problem with all sorts of vacuum ultraviolet spectroscopy: a lot of those wavelengths get absorbed in air. So you either have to put your spectrometer inside the vacuum chamber or something else. You can't put a laser inside the vacuum chamber. That just simply doesn't work. So what you do instead is you take advantage of a very, very strong laser, because if you have a very strong laser, there can be a virtual energy level here, where we have a photon coming in with h nu lower-upper divided by 2, half the energy. This will excite the electron into this virtual energy level, which absolutely does not exist. But if quickly enough another photon comes along, also with the right energy, then we can successfully promote that electron all the way up into the upper energy level. Because this frequency only has to be half of this transition frequency, we can go from having something at 121 nanometers, which is in the vacuum ultraviolet, to maybe 243 nanometers, which is just in the near ultraviolet. And we can propagate that through air here.
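A quick sanity check on those wavelengths (just a sketch; it assumes hydrogen Lyman-alpha at roughly 121.6 nm as the transition of interest):

h = 6.626e-34   # Planck constant [J s]
c = 3.0e8       # speed of light [m/s]
e = 1.602e-19   # joules per eV

lambda_single = 121.6e-9              # single-photon wavelength [m], vacuum ultraviolet
E_transition = h * c / lambda_single  # transition energy [J]

# Two-photon scheme: each photon carries half the transition energy,
# so its wavelength is doubled and it can propagate through air.
E_photon = E_transition / 2.0
lambda_two_photon = h * c / E_photon

print("transition energy [eV]:", E_transition / e)              # roughly 10.2 eV
print("two-photon wavelength [nm]:", lambda_two_photon * 1e9)   # roughly 243 nm, near UV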
So this means you can still do TALIF, even if the energy levels of the atoms you want to work with aren't really playing ball. But what you do need is a very intense laser, because at this point, if you don't have another photon come along, then the electron will realize that this is not a real energy state. And it will just return back down to here. And the original photon will just go on as if nothing has ever happened. So you need to have a very high probability of two photons coming in a very short period of time. So we need a very intense [INAUDIBLE]. And you can still do exactly the same tricks we talked about here with scanning your laser in order to slowly map out the distribution function of the particles in your plasma. Any questions on that? Yes. AUDIENCE: [INAUDIBLE] JACK HARE: Not really. I mean, almost going back to what I'd erased with the natural broadening, any energy state in a particle does exist for a very, very short amount of time. The stable energy states are the solutions to Schrodinger's equation. But we can have unstable states, which have very, very short lifetimes. And so we can make an electron into this state here. It's just that this is extremely short lived. So it may be picoseconds, or something like that, before we need another photon to come along. Yeah. AUDIENCE: [INAUDIBLE] JACK HARE: No, these are not measurable light. This is why you need a very intense laser. This is not something that you can-- I don't believe that you can measure the virtual state lifetime. Oh, sorry. [INAUDIBLE] first, and then. AUDIENCE: [INAUDIBLE] JACK HARE: This is better if you have the same energy, because then it's more likely the second photon that comes along will have the right energy, and also half as many lasers, which is always good. Yes, I believe you're right. This scheme could work with various other things. I'm sure you'll get $10 million in venture capital with no problem. Good. AUDIENCE: [INAUDIBLE] JACK HARE: And I think there's even a way of looking at this where you're like, ooh, there's a nonlinearity in the system, and the medium itself does the second harmonic generation, and then absorbs that photon. So if you prefer to think of it in terms of second harmonic generation within a medium, any medium is nonlinear if you drive it hard enough. So again, a very intense laser-- you can get out the second harmonic generation from anything. So you can think about it like that, if you prefer. And then all it is is the medium converts these two photons into one photon, and then it immediately absorbs that photon. So maybe that's a nicer way to think about it. Yeah. AUDIENCE: Thank you. JACK HARE: Oh, OK, cool, great. Any other questions? Okie doke. See you on Thursday.
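To make the laser-scanning idea from this lecture a bit more concrete, here is a minimal one-dimensional sketch (all numbers are made up; it assumes, non-relativistically, that the fluorescence signal at each laser frequency is simply proportional to the number of atoms Doppler-shifted into resonance at that frequency).

import numpy as np

c = 3.0e8
omega0 = 2 * np.pi * 4.57e14   # rest-frame transition frequency [rad/s], made up
v_th = 2.0e3                   # thermal speed of the atoms [m/s], made up

# Laser scan: each laser frequency is resonant with atoms at one velocity,
# omega_laser = omega0 * (1 + v/c)  =>  v = c * (omega_laser/omega0 - 1)
omega_laser = omega0 * (1 + np.linspace(-5, 5, 201) * v_th / c)
v_resonant = c * (omega_laser / omega0 - 1.0)

# Maxwellian distribution of atom velocities along the laser direction
f_v = np.exp(-(v_resonant / v_th) ** 2)

# Fluorescence signal at each laser frequency ~ number of resonant atoms,
# so scanning the laser maps out f(v) directly
signal = f_v / f_v.max()
print("signal at zero detuning:", signal[100])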
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: Right. So we talked already about free-free radiation, bremsstrahlung, and electron cyclotron radiation. We talked a little bit about free-bound or recombination radiation. And these are all relatively simple compared to the complexity of bound-bound radiation, which is what we're going to be dealing with today and maybe for the next few classes. The bound-bound radiation is the radiation that we get when electrons move between discrete energy levels within what we're going to call an atom. So we have maybe four energy levels like this. We might have an electron in some excited state up here that drops down to this energy level, and it will emit a photon. Or there could be an electron in the same energy level here that drops down all the way to the ground state, and emits a photon of a different energy. And what we would see on our spectrometer would be some distinct lines corresponding to these different photons. And so one of the things we might want to know is how strong should these lines be? Where should they be? If I see these lines, what does it tell me about the temperature of my plasma, about the density of the plasma, things like this. And so, again, bound-bound radiation here, we might also call this line radiation. The reason being that when people first did spectroscopy on film, you would get these peaks showing up as very discrete lines on their spectra. So they're called spectroscopic lines. And just a note on notation here, or at least on the [INAUDIBLE], we're going to be using the word atom a lot here. And for the purposes of this section, an atom is a nucleus with one or more electrons. So this could be 73-times-ionized tungsten. As long as it's still got an electron, we're still going to have spectroscopic lines coming out of here. So this is not an atom as in an un-ionized, neutral species. This is a slightly different definition here. So, yeah, this could be something like argon 5 plus. That still counts as an atom. So in order to get a feel for some of the physics that's involved in these transitions, we're going to go to a very simple system. We're going to go to a two level system and consider the processes which exist in that two level system. So in this two level system, we just have a level L for lower, and another level U for upper. The upper level is at an energy EU, and the lower level is at an energy EL. So this is a two level system, and we're going to consider three processes that go on inside this two level system. OK. So two levels, three processes. And some of you may have seen this before already-- we're going to be looking at things called the Einstein coefficients. Can't be bothered to write out coefficients. OK. So the three processes that we're going to be looking at, the first process I'll just draw on here, is what is called spontaneous emission. This is probably the one that we're most often thinking about when we're thinking about spectroscopy here. We have an electron in the upper level, and it drops spontaneously to the lower level and emits a photon. The photon has an energy, h nu, which is equal to the difference in energy between the upper level and the lower level here. So this is spontaneous emission. Just want to note as an aside here that Hutchinson calls these levels I and J. This is his notation, and I've swapped it around because then you start writing things like n subscript i for the number in each state. But then we also use n subscript i for the ion density, and I found it confusing.
And I just found it easier to think of them as the lower and upper states instead. OK. And this process here has a rate at which it occurs, an Einstein coefficient that we call A subscript U subscript L. So this is the rate of spontaneous emission going from level U to level L. Of course, it's a two level system so that's the only spontaneous emission process we can have. But when we expand this to a multi-level system, we have A subscript some number, subscript some other number, and that will be the spontaneous emission from one level to another level. OK. If we have spontaneous emission, we should have a process which looks the exact opposite, and this is spontaneous absorption. We tend to drop the spontaneous bit and just call it absorption. And indeed, it looks just like the mirror of this process. We have some electron in the ground state, some photon that happens to have exactly the right energy comes in. The electron is excited up, and the photon is absorbed here. And this has a rate, which is the Einstein B coefficient between the lower and the upper level, B subscript L subscript U. But this is now multiplied, as well, by the density of photons with this specific energy. So we'll use rho to be the density-- the number of photons per meter cubed-- of photons with exactly the right energy, rho of nu UL, which can do that excitation. There's an important difference here. This happens spontaneously. It doesn't matter how many photons or electrons are around in this system. This one has to happen only when this photon comes in. And so the rate at which this process occurs is going to depend on the number of photons. If there aren't any photons around to do this, this process just won't happen. OK. And then the final process that we're going to consider, instead of spontaneous emission, is stimulated emission. Stimulated emission is a very interesting process, because here, we have an electron in the excited state up here in the upper state, and some photon comes through with energy that is the same as this energy gap here. Now, of course, this photon can't be absorbed. There's only this upper level. There's nowhere for that energy to be absorbed. What it does, instead, is it encourages this electron to drop down, and it stimulates the emission of a second photon. So the first photon comes through still at this energy, and we have a second photon at exactly the same energy. And this is the process by which lasers work. So this has Einstein coefficient BUL. So we have a transition from the upper state to the lower state. And again, the rate at which this process occurs depends on the density of photons. We have to have some photons to stimulate the emission, so it has the same factor of rho, the density of photons with that specific energy. So note these are actually different coefficients. The rate of absorption, B LU, is not the same, in general, as the rate of stimulated emission, B UL. OK. So those are the three processes we're going to consider, and we'll then start deriving what these coefficients actually are. So any questions before we keep going? Yes. AUDIENCE: [INAUDIBLE] JACK HARE: Basically, every subscript is going to be U and L. So UL here. So this nu UL is just to say it's a wave with an energy which is equal to the difference between the energies of the two levels here. So the total energy in the system is conserved. Yeah, I guess I could write these as uppercase L and uppercase U like that. Yes. Yeah. So the important thing about stimulated emission is that you stimulate a photon with exactly the same energy.
And, in fact, the photon is in the same mode of your cavity. So polarization and things like that are also the same. Yes. AUDIENCE: [INAUDIBLE] JACK HARE: Excellent question. We will get on to that when we do line broadening in a few classes. The rough answer is there's a sort of uncertainty principle type thing involved-- if the process happens quickly, you can be less sure about the energy. So, yeah. But these lines can be extraordinarily narrow, right? And so if you look at a spectrum, then these lines could be extremely narrow. That would correspond to needing very precisely the same energy. Yeah. OK. Any other questions? Yeah. That is what we're going to get onto-- you're quite right. But, no, no, you're right. They can't just all be random because in steady state, they're going to have to match up. Otherwise, our system will be driven in one direction or the other. And so we are, indeed, thank you for the transition, going to use thermodynamic arguments, which we did once before already to match the emissivity and the absorption. So now, we're going to use thermodynamics, and we're going to do that to link these coefficients AUL, BLU, and BUL. OK. I think it's best if I just cover this off and start. OK. So one thing we want to be able to write down is the number of electrons in the upper state versus the number of electrons in the lower state here. And to do this, we're going to use a Boltzmann distribution for that. So this is a Boltzmann type argument where we say that the number of electrons in the upper state-- imagine we have a large ensemble of these two level systems, non-interacting-- on average, the number of these systems which are in the upper state is going to be proportional to the exponential of minus the energy of that upper state divided by temperature. And again, we're writing temperature in energy units, like joules or eV. I folded the Boltzmann constant inside the T here so I don't have to keep writing it. This doesn't have any normalization on it. But for a two level system, the normalization is very simple. We'll just divide n upper by n lower, and we will get the exponential of minus, E upper minus E lower, upon temperature, like that. And you'll notice, of course, straight away that this energy difference is just Planck's constant times nu UL. There's a slight subtlety to this. If you have systems with degeneracy, so that there are multiple different ways the energy can be arranged in these states-- it could be spin, or something like that-- then you have an extra factor due to degeneracy at the front here. This degeneracy factor is g of U over g of L, like that. So these could be numbers like 1 or 2, things like that. So this is just making this slightly more generic for a broader range of two level systems. For now, you can just consider this factor to be one for what we're doing. But this factor is due to degeneracy. Then we also need to have an expression for the density of photons with a given frequency here. And here, we're going to go back to our blackbody spectrum. And that density of photons is going to be the standard result: 8 pi h nu cubed over c cubed, times 1 over the exponential of h nu on T, minus 1. So this is for a system at temperature T. What is the density of photons with frequency nu-- or, what is the density of photons of energy h nu? And finally, we're going to use the thermodynamic argument that these systems should be in equilibrium.
And so that says that the rate at which atoms fall from their top level to the bottom level, which is A UL plus B UL rho of nu UL, multiplied by the number of atoms in the top level, must be equal to the inverse process, which is the rate of absorption, B LU times rho of nu UL-- the density of photons with energy h nu UL-- times the number of atoms in the lower level. So this just says the rate of de-excitation is equal to the rate of excitation-- our system is in steady state. And if you put all of these together-- these effectively form a set of simultaneous equations-- you find that, indeed, our Einstein coefficient for spontaneous emission AUL is equal to 8 pi h nu UL cubed upon c cubed, times BUL, like this. This means that we have something linking the spontaneous emission to the stimulated emission. And more simply, we also find that GLBLU is equal to GUBUL. So again, this is linking the processes going up and down-- this is the absorption, and this is the stimulated emission. So similar to how we did last time, we have used a system in complete thermodynamic equilibrium with Boltzmann occupation, steady state, and blackbody photons. And we've used that to pin these different coefficients to each other. So if we calculate even one of them, like BUL, we now immediately know BLU and AUL. And so that's good because we get all of them for free. But it also means that all the processes are linked together in this system. OK. Any questions on that? I'll tell you in a moment how we actually calculate even one of these so that we can get the others. But now, it's important just to realize that those three are linked. Also, the cool thing is, the relationship between these has no material physics in it. This is related to lasers. For a laser, you need a three or a four level system because you have to pump the excited state. And then you have to have a metastable state halfway down. You can't actually get lasing in a two level system. But in a three level or four level system, you can. So the systems which are favorable to lasing have specific energy levels that are spaced in a way that you can get stimulated emission. I just mentioned stimulated emission as that sort of-- I think when you look at this diagram here, you initially think to yourself, do I need stimulated emission? It seems a bit niche, like an excessive bit of physics that I could throw in there. Can't I just get away with balancing spontaneous emission with absorption? They look quite similar. But it turns out that for the thermodynamic balance, and this is the thing that Einstein realized, which is why these are the Einstein coefficients, you have to include the stimulated emission in there. Otherwise, the theory doesn't work. You can't get them balanced properly. So I mentioned this as a thing which is important for lasers, even though in the two level system it doesn't play that role. And also, this is material independent, right? Once we've calculated one of these, the others depend purely on the energy level gaps. It has nothing to do with the crystal structure or something fun like that. Yes. Any reason there isn't stimulated absorption? What would that look like? So at the moment for spontaneous absorption, we have one photon come in, and the electron goes up. So stimulated absorption would be two photons coming in, one getting absorbed and one going forwards-- like the opposite of this process. I don't have a good answer for why that doesn't occur. But it's an interesting one.
It would go as the square of the photon density field because you'd have to have two photons in the same volume to do it. So possibly, for most reasonable systems where we don't have that much radiation density, that wouldn't be a big-- that wouldn't be a very important process. But I see what you're saying. I hadn't thought about that. Yeah. AUDIENCE: [INAUDIBLE]. JACK HARE: Not in this, because in order for that to happen, you need to have some ill-defined, metastable level halfway up like this. But there has to be some level. It has to-- it doesn't have to be exactly there. Because of the uncertainty, it could be like this, or something like that. But it has vaguely possibly exist. But in this model, I've said very clearly I only have two levels. And so I don't have to think about that. But you're right, if you-- in a real system, you can have two photon absorption where one photon takes up to there. And before the atom has time to realize that this energy level doesn't really exist, the second photon comes in and takes it up even further and then everything is OK. So there's like-- yeah. But in this, I've been very clear I only have two levels so I don't have to worry about that. But, in general, [INAUDIBLE]. Any other questions? AUDIENCE: [INAUDIBLE]. JACK HARE: Yeah. So actually, none of this is about ionization at this point. So these are just two levels within an atom. The number of electrons is conserved. And you're right, they're just energy levels. So this could be-- actually, the simplest way to think about this, maybe the simplest quantum system, would be a spin which is flipping up and down. So these could be the hyperfine splitting of the ground state, or the fine splitting of the ground state and the magnetic field, or something like that. Yeah. So that's why this is quite generic. The way I'm thinking about it is about energy levels within an atom that isn't undergoing ionization. And we'll talk about all the ionization processes in a moment. But you're right, this applies more generally to other systems that have discrete energy levels. They've got two discrete energy levels. All of this math still works. Cool. OK. So, unfortunately, if you want to work out any of these, you're going to have to do some quantum, right? And the reason for that is, if you go into Hutchinson's book and you look at an expression, for example, BUL here, you find out that it has 8pi cubed over 3H squared 4pie epsilon zero. Don't ask me where any of these factors come from. And this is times a quantity called SUL. SUL is like the dimensionless rate, and then this is all the constants that you need in order to get all of your equations to match up properly. SUL is the thing that depends on quantum. And that is to do with the transition between your two states. So we're calculate an overlap integral, the integral between your upper state and your lower state. But we're not just doing the overlap between those two things because there is a operator, which acts on the upper state to transform it into the lower state. And that, for most electromagnetic radiation, is the dipole operator, which is the charge times the position vector here. We integrate that, as we always do, over all space, and we square it because we're interested in the power here, rather than the electric field. So this-- you can think of this as representing a transition probability. How likely is it that the emission of a photon can take you from the upper level to the lower level? 
And so levels which are more alike, you're more likely to have transitions between them. Of course, there are some levels which are very alike, which we can't have a transition to, where we have SUL equals zero. And these are called forbidden transitions. And these forbidden transitions don't do things like conserve angular momentum, other important things like that. And the way that that manifests itself in the mathematics is that under the operation R on the lower energy level, the PSI of U is orthogonal to R of PSI L. And so when you do the integral, you get zero out. So effectively, this keeps track of angular momentum conservation, or momentum conservation for you. So you have SUL there equals zero. So you might have, for example, a system with three energy levels where you can have transitions like this that both produce photons with different energies. But for some reason, this transition here, as SUL, equals zero. And you can't have a transition like that at all. So these are forbidden transitions, and sometimes people call these selection rules. So if you've done any amount of quantum physics or atomic physics, I think you'll probably have come across this before. If you haven't done a class on that before, I'm just trying to give you a very brief flavor of it. In general, this is a difficult problem. For a two level system, you only have to calculate SUL once. But if you imagine for a three level system, now, we have to calculate the overlap between this level and this level, this level and this level, and this level and this level. And for a four level system-- and then you get the idea. So this quickly becomes very, very challenging. If you want to actually do this, you have to calculate this overlap integral for all the different wave functions. To make it worse, you also need to know what the wave functions are. And so you have to solve the Schrodinger equation, first of all, to know what the states are allowed in your system. But, of course, if you have more than one electron in your system, not just one more than one level, more than one electron, this is now a many body problem and so you can't even calculate the wave functions properly. So very quickly you can see that spectroscopy is going to be a horrible mess compared to all of the other types of radiation we've dealt with before. Well, line emission is going to be a horrible mess compared to all these other types of radiation. But say you have managed to overcome all of these problems, the main takeaway from this is, if we know BUL by calculating this overlap integral, then we also get the other two coefficients AUL and BLU for free. And that means that even in a system that is not in thermodynamic equilibrium where we don't-- where we can't assume Boltzmann occupation, we can still calculate now the occupation of the upper and the lower energy levels through a rate equation where these are the rates. This is effectively a steady state rate equation here. And that means we now know how many electrons are in the upper level, how many are in the lower level, and we can make a prediction of how many photons we're going to get per second. And so that means that we can predict the intensity of one of these spectral lines here. Again, this is an extremely simple system. There's already quite a lot of work to get there, but we can do that. So that's a very thorough explanation of just the two level atom before we go on to more general systems with multiple levels and multiple electrons. 
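Here is a minimal numerical sketch of the two-level bookkeeping above (the A coefficient, frequency, and temperature are made-up illustrative values, and the degeneracies are set to one). Given one coefficient, the Einstein relations fix the other two, and in a blackbody radiation field the steady-state rate balance reproduces the Boltzmann population ratio, as it must.

import numpy as np

h = 6.626e-34   # Planck constant [J s]
c = 3.0e8       # speed of light [m/s]

# Made-up two-level atom (degeneracies g_u = g_l = 1)
nu_ul = 5.0e14  # transition frequency [Hz]
A_ul = 1.0e8    # spontaneous emission rate [1/s]

# Einstein relations: A_UL = (8 pi h nu^3 / c^3) B_UL,  g_L B_LU = g_U B_UL
B_ul = A_ul * c**3 / (8 * np.pi * h * nu_ul**3)
B_lu = B_ul   # equal degeneracies

T = 1.602e-19                    # temperature in energy units: 1 eV in joules
x = h * nu_ul / T
rho = (8 * np.pi * h * nu_ul**3 / c**3) / (np.exp(x) - 1.0)  # blackbody rho(nu) from the lecture

# Steady state: n_u (A_UL + B_UL rho) = n_l B_LU rho
ratio_rate = B_lu * rho / (A_ul + B_ul * rho)   # n_u / n_l from the rate balance
ratio_boltzmann = np.exp(-x)                    # n_u / n_l from the Boltzmann factor

print(ratio_rate, ratio_boltzmann)   # the two agree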
So any questions on that before we move on? Yes, go ahead. Pointed to the wrong person. There you go. You get slightly different frames, I think. Yeah. So the question was, do you have to do this every time, or is it tabulated somewhere? So the answer is, someone may have already done the tabulation for you and made that available in what's called an atomic code. AUDIENCE: [INAUDIBLE]. JACK HARE: Oh, generally, these are experimentally-- these are theoretically predicted, and then experimentally confirmed. For some of the transitions, they're so weak it's really hard to do it experimentally. If you go on the NIST website, then NIST has a huge table of all the different transitions here, and they have things like oscillator strengths. And sometimes, they have a star next to them, being like this was theoretically predicted, or they have something else being like, no, this has actually been experimentally checked, or like someone tried to check it but we don't think it's a very good check. Because these are just-- there are so many of these lines, it's impossible to check all of them. And, of course, you can theoretically predict these for states of matter which are very hard to reach in a real plasma. So for very, very hot, very, very dense plasmas made out of plutonium, or something like that. We probably do actually have good measurements of that, but not unclassified ones. So there are things that maybe it's hard to experimentally verify. But actually, you can still do it theoretically. So, yeah, but if you are doing something where that hasn't already been tabulated, you may have to do it yourself. So you may actually have to do these calculations yourself. And there are people who build careers out of doing these calculations more and more precisely. Because as I mentioned, this is a very hard calculation to do. So there's always going to be some approximation. And so then, it's like you could make your career out of doing a slightly better approximation than previously, and you'll get slightly better results. And that will be worthwhile to some people. Yeah. You have a question? AUDIENCE: [INAUDIBLE]. JACK HARE: Yes, this is-- you should think of it as a number density of photons. We're using rho here, which often is used for mass density, but I'm using it here for number density because that's what [INAUDIBLE]. So, yeah. AUDIENCE: [INAUDIBLE]. JACK HARE: We will actually talk about that later, you're right. And unless your spectrometer is absolutely calibrated, it's very, very hard to get it just from a single measurement of a line. What's interesting is if you have two lines. This isn't the best drawing, actually, to show this. A better drawing would be if I had one of these transitions coming from this state here. So then you'll have two lines which have a downward transition, which are both emitting photons at different energies. There are two things you'll need to know then. You'll need to know what the strength of this transition is, which is the strength of the spontaneous emission AUL, which will be different for these two different lines. But then you'll also want to know what the occupation probability is of these two upper states. And that will, in general, be a function of temperature. So this state will be more likely to be occupied than this state, just from a Boltzmann argument. If you happen to be able to use a Boltzmann type argument, that's what you predict.
So that means that the ratio between the amount of light in this blue line and the amount of light in the orange line here will tell you-- once you've taken out the factor of AUL that you've precalculated-- the occupation of this upper state versus this state. And that will be related to temperature. And then you have your plasma temperature. And it's a relative ratio. You don't need to know the absolute intensity. So in reality in spectroscopy, we normally look at line ratios. They're much better to work with. There'll be many, many lines, not like in this two level system. There'll be many, many lines out there. We look at line ratios-- like a differential measurement again, the thing that we love to do as experimentalists. Does that make sense? Good. We will now go through lots of other processes which can excite and de-excite your electrons. Not just radiative processes like this. And then we will use a subset of those to define different sorts of equilibrium, which help you to calculate what this occupation probability is for the different energy levels. So it's quite an involved series of steps, and I'm trying to go through it quite slowly. Any questions before we move on to all the other processes out there that aren't just these simple radiative processes? OK. OK. So the first set of processes we're going to look at are processes which excite or de-excite electrons. So in this case, we're looking at a change in the electron energy, but the number of electrons is constant. So we are not dealing with ionization here. We're going to get on to ionization and recombination in a moment. So the first set of processes we've actually already covered. But I'm going to draw them on the board here so that we have a nice complete set. These are the radiative processes. So these are processes where we have, for example, some energy levels, again, lower and upper. We have different processes going between them. We have spontaneous emission, absorption, and stimulated emission. And again, we just covered this, but I'm just writing it down here so that we've got a full set of processes. Spontaneous emission, absorption, and stimulated emission. And I'll put the density of photon states on here as well-- rho of nu UL. Competing with these processes, or going on alongside these processes, are a set of processes which are collisional. And in some cases, the collisional processes may dominate, and in other cases, the radiative processes will dominate. And we'll talk about those in a moment once we've got through this whole zoo. So the collisional processes, again, between the lower and the upper levels like this, are called things like-- this one here has a coefficient CLU, which is proportional to the density of electrons. Because this happens when an electron comes near the atom. Maybe it's a free electron, so it's up here at some high energy. It interacts with the electric field of the ion-- of the atom-- and then flies off. But that interaction with the electric field promotes an electron from a lower energy level to a higher energy level, taking some energy from the free electron here. And this process is called electron impact-- not ionization-- excitation. So I'm just going to write that as e-minus impact excitation. And actually, exactly the same process can happen in reverse. This electron can come in, its electric field can interact with the electric field and the wave functions of the atom.
And this can drive a downwards transition, as well. And this has a coefficient CUL, which is proportional to the density of electrons, as well. And so this would be the same name as above, but de-excitation. So you can imagine in a very dense plasma where there are a lot of electrons flying around, you may well end up with these processes being very dominant. And they will dominate the population of these different excited states. And so when you're trying to calculate the excited state population, these are the ones you're going to have to focus on. We'll talk a little bit about how we actually calculate these in a moment. So those two-- well, we've seen these already. Hopefully, they make sense. Does this make sense? OK. These were the excitation, and the de-excitation. Now, we're going to have processes which actually change the number of electrons. So these are ionization. And we don't tend to call it de-ionization. We call it recombination, instead. So in general, you can think of these as processes that give an electron enough energy, some delta E, to make it free, or that absorb a free electron's energy so that our electron becomes bound. So that first process is an ionization process where we lose an electron from our atom. And the second one is a recombination process where we gain an electron onto our atom. So I'm going to draw this as one long sort of [INAUDIBLE] tapestry of various different processes here. This is the lower energy level and this is the-- now the ionization energy level I. And I'm going to shade the region above here, because if an electron is in this region, it's now a free electron. But you can think of this as the ionization energy once you get to this point. And I've called this L. This may not necessarily be the lowest energy state in the system. It's just the lower state. There could be other states down below. There could be states up above it. We're just considering one level L at the moment here. OK. So the first process we're going to consider is where we have an electron in the ground state-- oh, sorry, in the lower state down here, and we have a free electron that swings by. And it gives up enough of its energy to excite this electron, not to the upper energy level, but actually, up so that it's ionized, and then two electrons come out. And this we give a coefficient CLI. And we give it a superscript CI here. And that is for collisional ionization. And we call this electron impact ionization. You can see this is analogous to electron impact excitation here. I'm just going to make it clear that this is a lowercase l. But there's another coefficient here. What's this coefficient proportional to? Remember previously, we've had density, we've had photon density, all sorts of things like that. What's this coefficient proportional to? Yeah, exactly the same as previously. There's nothing special. Yeah. OK. Now, we have a slightly different one. We actually have two electrons coming in. As they get close to the atom, one of the electrons runs off with a load of energy, and the other electron has lost so much energy to the first one that it drops down into this energy state here. So this is a recombination process, and this is called three body recombination. And we give it a rate coefficient CIL, and a superscript three body recombination. What is this proportional to? Someone else. But I will come back to you. Yeah. What are the neutrals in this? AUDIENCE: [INAUDIBLE]. JACK HARE: Why didn't the atoms come in here, in that case? It's a good point.
In fact, all of these are proportional to the atom density, as well. I've just left it off because every single one of these is proportional to the atom density. So we don't need [INAUDIBLE]. Yeah. AUDIENCE: [INAUDIBLE]. JACK HARE: Why is that? Yeah, exactly. So we've actually got to get two electrons in the same place close to the atom. And so you're going to square the density in order to work out the probability of that happening. So this three body recombination can be a really important process in dense plasma. It's actually particularly important in dense cold plasma. So plasmas with very low ionization tend to have dominant three-body recombination. OK. The next one is also quite odd. We imagine that there's some intermediate energy level like this in which there are two electrons. And there is some spontaneous process by which this electron drops down to this energy level, which provides exactly enough energy for this electron to bump up and be free. In reality, this will happen as long as this intermediate level is somewhere above halfway, because then one electron dropping down, will have enough energy to boost this off and it will be not just a free electron but a free electron with some bonus kinetic energy, as well. This is still written as a coefficient from lower to upper, even though we should probably think of a name for this intermediate energy level. So we call it CLI. And its superscript is AI because this is a process called autoionization. Does this have any dependence on the electron density? I saw a shaking head. It doesn't have any dependence on the electron density. We don't require any free electrons to do this. This is something the atom has just decided to do by itself. Of course, for this to happen, there has to be an unoccupied lower energy state. So this has to be some excited atom, right? So some process like electron impact excitation has moved an electron from the lower level to this intermediate level here, leaving a state for this electron to fall back into. So this can't just happen in us. We're not going to suddenly start emitting electrons by autoionization. Because in general, most of our electrons, if we're well behaved, stay in their ground state, OK. But if in a plasma where these processes are happening, we very well may have a hole in a lower energy level, which allows this process to happen. Pretty cool. OK. I saw Nicola. AUDIENCE: [INAUDIBLE]. JACK HARE: Yeah. So basically, it needs to be more than halfway. It needs to be greater than energy lower ionization divided by 2, right, as you say. Otherwise, there won't be enough energy for-- I don't know if there's auto excitation that can happen. That wasn't in the book. But it seems like it might be possible, right, where you could have excitation of this electron up to some higher energy level. Maybe that doesn't happen for some reason, I don't know. Did you have a question, as well? OK. Yeah, we're coming to that. That's a question I'm about to ask in a moment. You're absolutely right, that can still happen. But it's subtly different from these four. I'm up to the fourth process in a way which I think is informative. And then we will go on and cover what you're talking about, yes. Any other questions on this before I do the full process? OK. You may have noticed there's a sort of symmetry to these processes so far. And so this is the symmetric partner of this process. This is where we have some intermediate energy level. 
We have an electron drop down out of the unbound states, and this spontaneously promotes an electron up to this intermediate state here. OK. And this is called-- well, this is, again, from-- got LI written here. I'm going to go with it. But maybe it should be IL. And this has a superscript DR, and this is called dielectronic recombination. What is it proportional to? Yeah-- the density of electrons, n e, again. It relies on there being electrons here, free electrons. When we're writing n e, obviously, we're talking about free electrons here, ones which are not bound to atoms. This process relies on that. So a couple of interesting things here. Notice that these reverse processes do not have the same scaling with density, and so we don't necessarily expect these processes to balance. In fact, in an arbitrary plasma, they will not balance. In steady state overall, you have to have balance between the upward processes and the downward processes. But it doesn't mean that each process has to be balanced by its mirror because they are not symmetric. These recombination methods here require one more electron than the ionization mechanism here. What else is interesting about all of these ionization and recombination processes from the point of view of a spectroscopist? How would-- yes, go on. They are non-radiative. Does that mean, given this is a diagnostics course, that I just wasted your time by telling you about them because we cannot diagnose them? I've wasted your time, OK. Any other votes for wasting your time? I've been known to do it before. Yeah. They are absolutely key in some plasmas for understanding the ionization state of the plasma, and also for understanding, in the case of these ones, the excitation state of each ion. And if you don't know that, you can't predict how many photons you're going to get out. So even if your signature is a radiated signature, it will be strongly modified by this. So these are still important for the population. And in this case, this is the population of excited states and ionization states. Eh, I definitely ran out of room. That says states. OK. Cool. Any questions on these before we go on to some radiative ones that we can directly [INAUDIBLE] and also [INAUDIBLE]? Yes. Yeah. I mean, you-- what I did-- what I did here without making it very explicit, and we will go on to this in a moment, is write a rate equation. And this is just a rate equation with two levels. But instead, you would write a set of coupled rate equations which has the rate equation for each pair of energy levels. And you'd have to write that for each pair of energy levels within the hydrogen or the deuterium, and each pair within the tritium. Now, deuterium and tritium are close enough that we don't have to worry about that. But you will have tungsten in your reactor, or something like that. And then you will want to solve the tungsten rate equation so you know how much ionizing radiation is coming out of the tungsten that can then reionize hydrogen in your scrape-off layer. I don't know, something like that. This stuff may be important. And so, yes, of course, as soon as you do anything more interesting than these very simple systems, you're going to have even more complicated equations. And I will write down the full-- I think-- the full rate equation towards the end of this class if we get to it. But it looks like this, but with lots of sum symbols to indicate the fact that you have to do it lots of times. Yeah. Any other questions? OK. Right.
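Since the full rate equation is just this two-level bookkeeping repeated over every pair of states, here is a minimal sketch of what solving such a system looks like in practice (a toy three-level example with entirely made-up rate coefficients; a real collisional-radiative model would include all the radiative, collisional, ionization, and recombination channels drawn above, each with its own density scaling).

import numpy as np

# Toy 3-level atom: rate[i, j] is the total rate coefficient for transitions
# from state j into state i (radiative and collisional channels lumped together, made up)
rate = np.array([
    [0.0, 5.0, 1.0],   # into level 0
    [2.0, 0.0, 4.0],   # into level 1
    [0.5, 1.0, 0.0],   # into level 2
])

# Build the rate matrix M so that dn/dt = M @ n
M = rate.copy()
M -= np.diag(rate.sum(axis=0))   # losses out of each level go on the diagonal

# Steady state: M @ n = 0 with sum(n) = 1; replace one row with the normalisation
A = M.copy()
A[-1, :] = 1.0
b = np.zeros(3)
b[-1] = 1.0
n_steady = np.linalg.solve(A, b)

print("steady-state populations:", n_steady)
print("check dn/dt is ~0:", M @ n_steady)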
Now, we're getting on to processes that, I think, Grant was asking about. So these are, again, ionization and recombination processes. So again, I have this shaded region with free electrons in it. I'm going to call the highest energy level the ionization level. And the lower level L. And now, I'm going to look at processes, for example, which involve photons. So here is a photon coming in and it's got energy h nu IL. So it's got an energy which is matched to this gap between the lower state and the excited state. And as you might expect, this takes an electron and ionizes it. OK. And so this process here is called photoionization. It has a rate coefficient C lower to ionized state. It has a superscript PI. And what is it proportional to? Yeah, thank you. The photon density with that specific energy. So rho of nu IL. OK. Now, we have the opposite of that process. We have some electron, which is in some excited state. And then it drops down to here. We've already covered this, right? This is spontaneous radiative recombination. We talked about this briefly, but now we're putting it into our more quantum picture of what's going on. And we will give it a rate, as well, so that we can include it in our rate equation here. So this is CIL radiative recombination. I'll write that up here, CIL radiative recombination. So technically, this is spontaneous radiative recombination. What's this proportional to? Thank you. Note again, big asymmetry between these two processes which look like mirrors of each other. One of them depends on the photon density, one of them depends on the electron density. There's no good reason to believe that these two processes balance each other. OK. The final process-- again, you can see these, if you want to, as the ionization-recombination analogs of the excitation and de-excitation processes. And indeed, we do have a process where we come in with a photon with energy h nu IL. We have an electron drop down, and then we get two photons out here, h nu IL. This is stimulated radiative recombination. You see, I'm getting lazier and lazier as the class goes on. I can't be bothered to write out any full words anymore. And this has a coefficient CIL stimulated radiative recombination. That is proportional not only to the electron density, but also the density of photon states with frequency nu IL, like that. And these processes, as we discussed, are radiative. So we can look out for them-- they are a nice diagnostic signature. But they are also important for setting the occupation of the different energy and ionization states. Question. AUDIENCE: You mentioned that the process of [INAUDIBLE] there is some [INAUDIBLE] by [INAUDIBLE] but like [INAUDIBLE] far above [INAUDIBLE] having [INAUDIBLE] was this really like a quality [? type? ?] JACK HARE: Yeah, absolutely. The process up to some energy state I, which could be a free state up here. So this could be some other state like that. We can think of the free states as a continuum of states. So once you get above that ionization energy, you don't have discrete states, you have a continuum of states. And that would depend on this energy density here, which of course, for very high energy photons will be very low. We don't have very many gamma rays from a black body unless it's very, very hot. The only thing I would say, and I'm not sure exactly how to make this not hand-wavy, is that your gamma rays, although they are good at ionizing, they tend to just go through stuff, right?
I mean, the point is if you put a bit of paper in the way, you're like, great. The gamma ray hits a bit of paper. It's got all the energy it needs. It should just be absorbed and ionize something. But in fact, it goes through that, and it goes through lots of other things. So I think there is a quantum mechanical effect about the interaction of this very high energy radiation with these energy levels, which makes that a less likely process. So the gamma ray, in general, is not going to have a very high cross-section for that interaction. But I don't know how to take that hand-wavy argument and turn it into something mathematical. Yeah. Yeah. Yeah. And so I'd have to average over-- integrate over all the energies, which is what I'm about to do. So, no, it's a great question. It's a great question. So for the collisional processes, so these non-radiative ones, these ones. And the ones on this side, as well, at least this set here. We actually-- we have a different way. We don't tend to write these coefficients in terms like this. We tend to think about them in terms of reactivities, right? So we have some cross section for the interaction which we could write as sigma IJ. So that is the cross section for an interaction going from state I to state J. We multiply that by the velocity of the free electron that we're working with multiplied by our distribution function of electrons. And then we integrate that up over all the electrons in our distribution function. And that will give us something that looks like, for example, n e times sigma IJ v in angle brackets-- some reactivity here-- times the density. If we're dealing with a process that involves two electrons, like three-body recombination, we'd actually have to put the distribution function in twice and integrate over the interactions between particles from distribution 1 at v1, and particles from distribution 2 at v2. So you don't just square this, you actually have to integrate and then do a double integral over these two distributions. AUDIENCE: [INAUDIBLE]. JACK HARE: Per nucleus, yeah. So the total rate would have an additional factor of n sub I. I'm going to go with I here, as in whatever. This is an electron causing a change in the atom state from I to J, and so we need to know how many atoms there were initially in state I. And in this case, when we're writing this in terms of I and J, so this is a transition from I to J, and that could be excitation or de-excitation, or it could be ionization, as well. The reaction rates, the reactivities, have the same form as that. So you'll still end up with something like that. OK. And so then, we're now in a position to write down our full balance. So we can say the change in the number of particles-- atoms in state I, which for this example is an excited state, though it could be an ionization state-- with respect to time is going to be equal to the sum over states J not equal to I, because if we have a transition from state I to state I, we don't change the number of particles in state I, so we don't want to double count. This is going to be the number of particles in I times the spontaneous emission coefficient from I to J, minus the number of particles in J times the spontaneous emission coefficient from J to I. These coefficients may be zero if spontaneous emission is forbidden. For example, if spontaneous emission would involve going up an energy level. So in this case, we're looking at spontaneous emission from I downwards and from a different J back upwards here. So this is I. This is I like that.
Then we would also have a term that's to do with our absorption and stimulated emission. So that is n i B ij minus n j B ji, times the density of photons with energy-- or with frequency-- nu ij, like that. So that covers these processes, as well. Then we have the two-body processes, which are these collisions, impact ionization, autoionization, this sort of thing. Actually, autoionization is not a two-body one. OK. We'll come back to that. Well, we're going to have the density of electrons now, the density of ions in state I, and then this sigma IJ v that we just derived, or at least handwaved, into existence up here. And that's also going to be balanced by the opposite process driving the system in the opposite direction. So sigma JI instead of sigma IJ. That should be times v. These processes are two-body. These processes on the top line here are one-body, maybe plus one photon, but that doesn't count. And then we have the three-body processes at the end here. n e squared, n I, sigma IJ v. And the meaning of the angle brackets here is actually an average over both the distribution functions, or at least over the distribution function with itself. And then, of course, there'll be the opposite process here. n e squared, n J, for the opposite process. I'm not convinced there is one. I'm going to put it in anyway because it's in my notes. I can think about that in a moment. And these are the three-body processes. Now, in general, we allow ourselves a small simplification, and we tend to look for plasmas in steady state. And that steady state doesn't have to be an absolute steady state. It can be that on some time scale of interest, the plasma does not change its occupation state very much. And so this is a reasonable approximation, quasi-steady state. Because even with this approximation, I think it's pretty clear that this is a very long equation that you would have to solve in order to work out the-- and this, I believe, is just what we need to get the excitation state of a single atom. So this is the excitation, for example, of an atom in charge state Z, like that. We'd have to have another coupled equation, which would tell us whether we're going from n I of Z to n I of Z plus 1, or n I of Z minus 1. So that'll be our ionization balance equation, as well. And you have to couple all of those equations together. And this is where I start thinking that I've mixed up the excited states and the ionization states, because if I'm talking about excited states, I don't have these three-body processes. So it's a bit of a mess. OK. But my general point is a very, very large number of equations. What we will do in the next lecture is, we will work out some types of equilibrium where some of these processes are unimportant and, therefore, we drop them. And that simplification allows us to make a calculation of the occupation states in a much simpler way that is actually tractable. But, in general, for some arbitrary plasma where we can't make some of the assumptions we'll make in the next class, you will need to solve this full rate equation in order to work out the occupation of excited states, the ionization states in your plasma and, therefore, what radiative signatures you're going to get out of it so that you can do spectroscopy. So it all gets a bit complicated. We will leave it there. Does anyone have any questions? Yeah. AUDIENCE: [INAUDIBLE]. JACK HARE: Yeah. Oh. Yeah. Thank you, that's great, good point. You were paying attention. Very good. Fixed it. Yeah. n I in this case, yes.
I would prefer to think of it as atoms in a certain electronic configuration. OK. So imagine you've got two electrons here. They're indistinguishable, so it doesn't really matter what state the electrons are in, as long as the configurations are indistinguishable. Yeah. So this is an atom in some state which has some energy. And so, for example, in a three-level system with two electrons, this would be your ground state, and this would be another state. But because we're not tracking which electron is which, this would be an identical state. So this would actually be like a degeneracy. When we were talking about that g factor before, these two states would be degenerate here. So this is-- so these are all different-- or this is, for example, N0, and this could be N3, or something like that. And maybe I would label my states in terms of their increasing overall energy. And so you can imagine there's some intermediate states with different energies like this. OK. I picked three out of a hat. I think it's probably two. Anyway, yeah, Nicola. AUDIENCE: [INAUDIBLE]. JACK HARE: Which one? AUDIENCE: [INAUDIBLE]. JACK HARE: Stimulated radiative recombination. This process is possible for any of those, but it doesn't-- there's a proportionality here, but there'll be a constant that depends on the quantum mechanics in front of it. And I would have thought for very high energy electrons, this is not going to be a very significant process. So-- yeah, that could work. But the trouble is, of course, if you make your photon energy density field very large in order for this process to be big, then your photoionization also goes back up. And so if you have a radiation-dominated plasma like that, it will still come to some equilibrium, and that equilibrium will not be everything in the ground state. It will be something different. And we'll talk about that. But, yes, you're-- I don't know anyone who's done this in the lab, just pumping in photons and somehow reducing the total energy of the system and getting light out. I think it's just one process that takes place with all the others. So you're unlikely to be able to engineer a plasma that is dominated by this. Do any of the other ones have a proportionality to both electron density and photon density? No. So there is scope here for making a very dense, very photon-dominated plasma where you could imagine that this coefficient could be larger than any of the others. But I don't know how dense and how energetic, or how dense your photon field and your electron density needs to be for that to happen. Cool question.
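Pulling the pieces of the last few boards together in one place: the reactivity defined above, and a reconstruction of the full rate equation, written with the sign convention that terms draining state i appear as losses. This is a sketch of what was on the board rather than a verbatim copy, and, as noted above, the three-body terms really belong in the ionization-balance version of the equation rather than the pure excitation one:

\[
\langle \sigma_{ij} v \rangle = \int \sigma_{ij}(v)\, v\, f(v)\, \mathrm{d}v ,
\]
\[
\frac{\mathrm{d}n_i}{\mathrm{d}t}
= \sum_{j \neq i} \Big[
-\,n_i A_{ij} + n_j A_{ji}
- \big(n_i B_{ij} - n_j B_{ji}\big)\, \rho(\nu_{ij})
- n_e \big(n_i \langle\sigma_{ij} v\rangle - n_j \langle\sigma_{ji} v\rangle\big)
- n_e^{2} \big(n_i \langle\sigma_{ij} v\rangle_{3\mathrm{B}} - n_j \langle\sigma_{ji} v\rangle_{3\mathrm{B}}\big)
\Big] \approx 0 ,
\]

where the final approximate equality is the quasi-steady-state assumption, and there is one such equation for every excitation state of every ionization stage of every species in the plasma, all coupled together.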
MIT_2267J_Principles_of_Plasma_Diagnostics_Fall_2023
Lecture_6_Refractive_Index_Diagnostics_II.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: OK, so you'll remember, in the last lecture, we discussed the technique called schlieren. In schlieren, we were relying on the fact that the plasma has refractive index gradients inside it which cause a deflection, and that deflection angle of the rays going through the plasma is going to be something like 1 over 2 times the critical density times the integral of the gradient of the electron density along the path of our probing beam. And what we did with schlieren is we filtered the rays using a lens and a stop at the focal plane, and we were able to filter out rays with certain angles. We could let through undeflected rays. We could block undeflected rays. We could do it in an isotropic fashion. We could do it with a knife edge. And this enabled us to image effectively the gradients in the electron density inside our plasma, and this is particularly useful if you've got something with sharp density gradients such as shocks. But when we were doing this, we actually made a pretty big assumption that I kind of buried here. We assumed that we have no displacement of the ray-- that the rays pick up some angle as they travel through our plasma. So I'll draw a little plasma here. We assume that the rays picked up some sort of angle but they didn't themselves get displaced. Whereas, for some realistic extended plasma, they would be displaced inside the plasma as well as picking up some angle. And so the rays themselves would not only have an angle, they would have some displacement, and when we use our imaging system with a lens, the rays would end up somewhere else on our object plane, not in the same place that they started with. For schlieren, we assumed that there was no ray displacement. And this is actually equivalent to assuming that our object was very, very thin. So if instead of having this extended object like this, we have an object that's incredibly thin like that, and you have rays of light coming in that then just get instantaneously deflected like that-- this works out OK because there's no sort of path along inside the plasma for these rays to become displaced. Whereas, they are in an extended object. So we made this assumption, but we're now going to relax that and have a look at what happens when we do true shadowgraphy, which is where we're interested not just in the angles of the rays but their displacements as well. So I'm going to try and draw a slightly better version of this image, hopefully with some nice, straight lines. Let's see how it goes. Here's my plasma again. I'm going to have a lens here. I'm going to set up my lens such that it brings into focus rays from this object plane onto some image plane back here. And what I'm going to have is the undeflected rays. So imagine there was no plasma here. The rays would just go through here. And they would all go through a focal point back here. So this again looks like the setup we had for our schlieren, but the difference is now if we consider the rays being deflected inside the plasma, they're going to pick up some sort of displacement. So for example, this ray is going to be displaced down like this. This ray maybe is also going to be displaced down a little bit, and this ray is going to be displaced upwards such that when they get to the object plane, they now have some distance that they have been translated compared to the undeflected one.
And then, all that the lens does is it puts light back on the image plane where it came from on the object plane, with some inversion, other things like that. But that means because these rays have actually been moved, because the light has been moved around, they're going to show up in different places on the image plane. And this is where I really hope I can draw this properly, because this is quite a complicated diagram, so give me a moment to focus on it. So let's start with this ray here. It's going to go down. It's going to go down below the focal point because it's being deflected downwards. And so it's going to end up here, so slightly above where the original ray was going to go. This ray here is going to get deflected downwards, and then the lens is going to push it up. And it's also going to go below the focal point here as well, so it's going to be deflected up from its original position. And this one here is going to go up. I'm going to bend that one slightly. Oh, no, I didn't want it to go over there. Ah, I got it right the first time. There we go. And this one is deflected downwards because it was deflected off of the object here. So what we're seeing is that this lens is now imaging the light into slightly different places than we had before, so we have the displacement of the rays. And you might want to think of that displacement as an intensity modulation, because, of course, there aren't just three rays. There are a large number of rays, and these all make up an image. And so initially, maybe you had a nice uniform pattern. These yellow rays would have made a nice uniform circular beam like this. And now your blue rays are going to be distorted, and there's going to be regions of very bright intensity and regions of lesser intensity and regions that are about the same. So we'll have all sorts of interesting features inside this. And one way to think about this, imagine we took our beam-- and now we're looking at the laser beam. It's coming directly towards us through the plasma. Imagine we arranged it so we have nine little dots. Maybe we've got nine little laser pointers pointing through our plasma like this. Now, as these rays come through the plasma and they're deflected onto our camera here, we would end up with the dots in different places. So we could have this one is moved up here, this one is moved like this, this one is moved like this, and so on. And you can just think there's a series of displacements for these dots, and if you're able to measure all these displacements, perhaps you could work out something about the plasma itself. And we'll talk a little bit about some more quantitative measurements for doing this, but this is just qualitatively the picture here. Another way to think about this, if you prefer-- these are just different conceptual ways of trying to think about the same thing-- imagine we initially put through this sort of tic-tac-toe grid of light here. We've sort of blocked it off so we've got two vertical lines, two horizontal lines. These lines, as they go through the plasma, the light will be deflected by different amounts. And so for example, this line here could be deflected like. This one could be deflected like this. And so you sort of see we have some sort of distortion to the grid that we're getting out of this, and our initial grid with all these nice, straight, parallel lines is now a distorted grid instead. And that's very important because shadowgraphy does not actually produce an image. 
I've called this the image plane here, but we don't actually get an image out, because we have something which is significantly distorted and it no longer preserves the things that we like to preserve in images. If we take a picture with a camera, we'd like straight lines to remain straight. Here, the straight lines do not remain straight. They don't remain parallel. We don't preserve length, anything like that. But there's still clearly information stored inside these images, pictures, that corresponds to the density inside the plasma, and we'll talk a little bit more about how to do this quantitatively. So any questions on this? Yeah? AUDIENCE: So if the plasma here is acting like a lens, what do we need the second lens for? Just as a reference? JACK HARE: Absolutely, yeah, great question. Yeah, so I put the second lens in here for a couple of reasons. First of all, I find it conceptually simpler because you can see the deflection angles at the focal point here, because when I read off the position of the rays at this focal plane, I can tell whether they've been deflected up or down, which is slightly harder to do here. You could absolutely put your detector right here, OK, so you could put a bit of film or something out here. In practice, you can't, because it's right next to the plasma. So you need some sort of lens optic. This technique actually looks a great deal like proton radiography, which we'll talk about later on here. In proton radiography, you don't have any lenses. You can't-- it's very difficult to make lenses for charged particles. And so you do, in fact, put your bit of paper, like your film in this case, very close to your plasma. You might put it a little bit further back, and we'll talk in a moment about actually how important your choice of exactly where you put your detector is, because I could put my detector here or here or here, and I will still get an image forming, but the image will look different. But yeah, we do not need the lens to do shadowgraphy, but in almost all realistic setups where we're using some probing beam through a plasma, we're going to have a lens that allows us to put our detector nice and far away and safe from the plasma. So, yeah. AUDIENCE: All right. Thank you. JACK HARE: Cool. Other questions? Anyone from Columbia? I don't see anyone, so I'm going to keep going. Cool. So let me erase this side. So as I just mentioned, then, we can actually change the shadowgraphic image we get by moving the position of our object plane. So that could be moving our detector, if we've just placed a detector here, or it could be by moving our lens, because as we move the lens, we change the place we're focusing on. Or we could move other-- anyway, you get the idea. There's lots of different ways of moving this to some other place. So let's have a look here. We've got, again, our plasma. And we've got our rays coming through it. I can draw some different planes in which we can put our detector. Let me just get them in the right places. Or maybe I can call these 1, 2, 3, and 4, like that. And so then we can have a look at what we would see for this very simple system where we've got some sort of focusing density gradients and some defocusing density gradients here. So this would be a maximum in electron density, and this would be a minimum here, right, so that we would get focusing and defocusing.
So let's draw what those patterns would look like. So if I have intensity like this-- and let's say that this is the y-coordinate, so I'll call this y here, and I'll do the first one here-- we'll get something-- we're going to have a little bit of a deficit of intensity around about here because the rays are being deflected away. So if this is the background intensity of our probing beam, we'll have some sort of drop, and then we'll have an increase because the light is being focused together in the middle here. We'll have another deficit in this region, and we'll have another little increase over there. So we get some sort of nice little modulation-- that's what it looks like. If we go to 2, we can see that the rays are now getting closer together and further apart, so you'd expect this pattern to be even more exaggerated. Maybe I should've drawn this less exaggerated to give myself more space. But let's say it looks like this, and this is bigger. If we get over to 3 now, we see something interesting. We actually have rays crossing here, so this means that a lot of light is all being piled into the same place. So we're still going to have a little bit of defocusing, but we're going to have a very sharp spike here, which goes off my page. And then we don't really have so much precise focusing down here, but maybe there's a little bit of light, so like that. And then for number 4, we've got away from having this crossing here, so we won't see it so strongly. It's going to look a little bit more like this, but something maybe a little bit like that. Now, one thing I haven't really exaggerated enough here is the fact that as these rays move outwards, the positions of these maxima and minima are changing. So in reality, maybe this one is a little bit closer in, and then I'll make these ones go further out. So you can see that actually the position of where we get peak intensity is going to change depending on how far away from the plasma the detector is, because the rays have traveled, and so their displacement has changed as well. So there's a lot going on inside here. You can see that there's a lot of richness, and you can see that although we can identify why there are light and dark regions inside our image, it's hard to map them directly back onto whatever's going on inside the plasma. And again, that's because, as I said, this is not an image. You can tell it's not an image because if you move your detector or if you move where you're taking the data, you'd get a very different picture out here. So although this maximum here corresponds to this region of the plasma, it shows up at a different position on your detector at each of these different places. So you can't really say this is a direct image. It is bijective, so this means that at least for these smaller deflections, we have what's a one-to-one mapping. That's certainly true for the small deflections, 1 and 2, like that, for these two slices here. Once we get to 3 and we have the rays crossing, you can see that at this point, the light which is coming from these two regions now maps into the same place on our image-- or on our detector, and so we no longer have that one-to-one mapping. And indeed, after the rays have crossed, these two cross over, and so it's very confusing looking at this and trying to work out where all the light has come from. Although we do have this one-to-one map for 1 and 2-- so in principle, we can work out where everything came from-- it doesn't have properties that we'd normally like to have from an image.
So for example, parallel lines do not go to parallel lines from the object plane to the image plane, as we sort of discussed over here, and so you might want that in a normal imaging diagnostic. If you take a picture of a square and it comes out looking like spaghetti, that's not really an image. And also, the lengths are not preserved. Things get even worse for 3 and 4 because now we don't even have this bijective property. We end up in a regime which is called the caustic regime, which I'll talk more about later. Caustics come up all the time in other fields. They are sometimes called optical catastrophes, and although they're very, very pretty, they do make analysis of this data very, very difficult. So what I'm trying to show here is that something as simple as moving exactly where you make this measurement makes a big difference to the data that you have. It turns out that the easiest place theoretically to analyze your data is nice and close to your plasma, but the easiest place to actually make the measurement with decent signal-to-noise is somewhere around about 3. So you can't have it all. You can't end up in a regime where you have the absolute best data. Again, this has all been very qualitative. We're about to make it quantitative by showing, at least in this small deflection regime, what you can measure. But does anyone have any questions on this general schema before we keep going? Yeah? AUDIENCE: Are you saying that there's a bijective map between your measured intensity distribution and the incident intensity distribution, or between the measured intensity distribution and the plasma density profile? JACK HARE: Really the former. AUDIENCE: OK. JACK HARE: So between the incident laser, the laser on this side, and on this side. But I would argue that if you have that, you can then infer something about the plasma that you've gone through, because you have some idea of-- if you're measuring here and you know it corresponds to here, then you know that the chord that you took for the plasma was roughly that. AUDIENCE: Right, you have-- JACK HARE: So you have some idea of what sort of plasma properties you were sensing. Or, the alternative way around, you have some idea that if you ended up here, that you must have gone through this bit of plasma, and to get that displacement, that bit of plasma must have had some certain density gradients within it. AUDIENCE: Right, like, some sort of average density gradient. OK. JACK HARE: Yes, exactly. This is all line integrated, and we'll talk about that in a moment when we-- we'll make this displacement-- I guess it's a y, isn't it, because I did choose a y-axis-- we will work out what that displacement is in terms of the plasma parameters, so we'll make that quantitative. But yeah, this is kind of what I'm getting at, is have some idea of what you're actually measuring. Once you get into this caustic regime, you don't really know exactly where everything has come from anymore. Yeah, so I saw a hand first. AUDIENCE: Yeah, is the intensity still proportional to the density gradient, as it was previously? Or is it-- JACK HARE: I'm sad that you took away from my previous lecture that initially the intensity is proportional to the density gradient. It almost never is in any realistic situation. So that's not true, and it's still not true here.
And we will derive that it's proportional to the second derivative of the density, and we'll also show that that is also not true in most reasonable cases. So please take away from this that neither of these diagnostics are easy to interpret, but you will read in the textbooks or online that they're proportional to the first or the second derivative. It's almost impossible to set up your diagnostics such that that's actually true. So, cool. AUDIENCE: So you say that if we are at, like, 2, we can guess that it's coming from the first ray, but how can we say that? Or why can't we say that it's like the lower ray, incredibly deflected off? So how can we know what the caustic regime is? JACK HARE: We will-- yeah. Yeah, so that's a great point. So it is very hard to find the caustic regime, but the caustic regime is defined by extreme intensity variations. So if you see only small intensity variations, you can't be in the caustic regime. Theoretically, this is a spike to infinity, right. Fortunately, the universe doesn't allow that to happen because your optics aren't perfect and your detector isn't perfect, but this is very, very bright. So you can tell by looking at an image, if it's got no hugely bright regions, you're not in this regime. And then your second question was, yeah, but still, it could come from somewhere else. You're right. When we talk about some of the advanced methods for, I guess, deconvolving or processing this data, we will come across some long, exciting-sounding phrases like optimal transport and Voronoi diagrams, and there you're trying to minimize how ridiculous your density distribution has to be to give you this result. And so there are very mathematically grounded ways of trying to put this back, but if you're just staring at an image and it's got small intensity perturbations, you're probably going to be like, hey, it's most likely to have come from up here. It's unlikely that it went, whoop, like that. AUDIENCE: So if we image way far to the right, then it would start to look more reasonable, right? But then the image would have-- JACK HARE: Well, yes. What I haven't drawn here is, if I draw in more rays, they will actually be more-- oh, can I do it easily here? We'll find out that there are more caustics coming in later on. So once you've gone past this point, you will always have caustics in here. I just haven't drawn enough rays to make that point really clearly. But if you can imagine drawing more in, you'll see that they will-- like, maybe there's one that doesn't cross this one here, but eventually-- no, that doesn't work because it's not a straight line. Like that, OK, and then this one, cross this here, we'd have a caustic in that region. AUDIENCE: Right. So they eventually just start crossing more and more and more. JACK HARE: They will eventually cross. And so I guess what I'd say is there is always a point, some place where you can put your detector back here where you will end up in the caustic regime. And there's indeed a dimensionless parameter that tells you whether you're in the caustic regime or not, and it's to do with the deflection angle and this distance. Thank you. Cool. Any other questions? Anything from Columbia? OK. So let's try and make this a bit quantitative, because I can see that folks want to get some numbers into this. I think here I'm basically following Hutchinson, so if you need to look up the equations in more detail, this is where you want to head.
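Before the algebra, a minimal numerical sketch of the qualitative picture above may help. This is not from the lecture, and every number and variable name in it (n_crit, n_e0, a, depth, the detector distances) is made up purely for illustration. It traces rays through a single Gaussian bump in the line-integrated electron density, lets them fly in straight lines to a few detector distances L, and bins where they land to get the relative intensity; the sign convention is chosen so that rays bend away from a density maximum.

import numpy as np

# Hypothetical numbers, purely illustrative -- not values from the lecture.
n_crit = 1e27     # critical density for the probe wavelength [m^-3]
n_e0 = 1e24       # peak electron density of the perturbation [m^-3]
a = 1e-3          # transverse size of the perturbation [m]
depth = 5e-3      # path length through the plasma [m]

# Line-integrated density: a single Gaussian bump in y (a density maximum).
y = np.linspace(-5e-3, 5e-3, 20001)
ne_dl = n_e0 * depth * np.exp(-(y / a) ** 2)

# Small-angle deflection, theta(y) = -(1 / 2 n_crit) d/dy of the integral of n_e dl,
# with the sign chosen so rays are pushed away from the density maximum.
theta = -np.gradient(ne_dl, y) / (2.0 * n_crit)

for L in [0.05, 0.2, 0.5, 1.0]:    # detector distances from the plasma [m]
    y_det = y + L * theta          # straight-line flight after leaving the plasma
    counts, _ = np.histogram(y_det, bins=200, range=(y.min(), y.max()))
    flat, _ = np.histogram(y, bins=200, range=(y.min(), y.max()))
    contrast = counts / flat       # intensity relative to the unperturbed beam
    print(f"L = {L:4.2f} m: max I/I0 = {contrast.max():5.2f}, "
          f"min I/I0 = {contrast.min():5.2f}")

For the smallest distance the contrast stays within a few percent of 1, which is the small-perturbation regime treated next; by the larger distances the ray mapping folds over and the peak contrast blows up, which is the caustic regime just described.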
We're going to be working in a regime with small angles, so small values of theta. And remember that theta is going to be equal to, I guess-- I put d dy here, though I want to make this kind of two dimensional, so I'm just going to write "gradient" here. And you can think of theta as a vector which contains the angle with respect to the x-axis and the angle with respect to the y-axis here. OK. This is the gradient of the line-integrated refractive index. So I'm still going to work in refractive index here because it's a little bit more compact, and also because this applies to any inhomogeneous medium, not just the plasma, so you could use this for air and other things like that. So we're going to write it just in terms of N, and at the end, I'll turn this into plasma density so you can see the final result. So we're going to assume that we've got our rays of light, again, incident like this onto some plane that we're going to call-- that has coordinates x and y. We're going to have some initial intensity profile that's incident on this plasma here. So our plasma is just past the plane we're doing this in. And of course, this could be something like uniform, or it could be Gaussian. It could be whatever you want, so whatever you can actually come up with for your laser probing. And our rays are going to get deflected by some angle theta, like this. And then we're going to have our detector, and that's going to be in a prime coordinate system x prime, y prime. And of course, if I don't want to put my detector just here, I can always put the lens-- no, I can put my detector here, and that would also have x prime, y prime, with maybe any magnification that the lens does, but that's not really relevant to this. That's just optics. So we're trying to work out how we get from the intensity initial to the intensity-- what am I going to call it-- I detector, x prime, y prime, like this. OK. So we can just stare at this and do some simple geometry. We can say that if we're just talking about coordinates, x prime, y prime, is going to equal wherever we started out, x plus-- ah, this is important. We need a length scale. We're going to put our detector at distance L from the plasma here. We're assuming the plasma is pretty thin still. So this is going to be x plus L times d dx of the integral of N dl, and this coordinate in y is going to be y plus L times d dy of the integral of N dl. So again, I said we're using a small-angle approximation, so we've taken the approximation that tan theta is approximately sine theta is approximately theta, so this is just a simple linear relationship here, where this is L times the angle in x and this is L times the angle in y, like that. And so we could write this more compactly in a sort of vector notation as some vector x prime is equal to some vector x plus L times the gradient operator acting on the integral of N dl, like that. So this is just another representation of this. OK. Now, one thing that we need in order to make some progress here is we need to assume that the overall intensity, so the integral of this over x and y, is equal to the integral of this over x and y. So we're sort of conserving our intensity, so we could write that down as the integral of I on the detector d x prime d y prime is equal to the integral of I incident dx dy. So the plasma is not absorbing, and I guess it's also not emitting any light in this wavelength region. Yeah, that's a pretty reasonable approximation.
And then we can skip ahead, and we can say that the light which is incident divided-- the intensity which is incident divided by the intensity on our detector is going to be equal to 1 plus gradient squared-- sorry, 1 plus L times gradient squared of the integral of the refractive index along the path. And if we work with a relatively small value of this, so if we assume that this second term is much, much less than 1, we can rewrite this in terms of the electron density. So this ratio-- which also tells us the change in intensity in our image, delta I, normalized to our initial intensity-- is going to be equal to 1 minus L over 2 times the critical density, times gradient squared of the integral of n e dl, where in this last step I've substituted out the refractive index for the expression that we had for n e much less than n critical. So again, we've made an assumption that we have small intensity variations. We've also made the assumption that n e is much, much less than n critical, so we can use this nice, linear formula for the refractive index of the plasma. That simplifies it. So this is kind of like our final nice result here, so for very small intensity variations, you do indeed get an intensity variation which is proportional to the gradient squared-- the Laplacian-- of your line-integrated electron density. So using this formula, where do we expect to have bright regions in our plasma? So where-- or, sorry, where do we expect to have bright regions in our shadowgram? What do they correspond to? Yeah? Anyone? AUDIENCE: The bright regions would be places where the second term is small, so the second derivative being small, the point of inflection of the density. JACK HARE: OK, so only where it's small? AUDIENCE: Like, where it's negative and big? JACK HARE: Beg pardon? AUDIENCE: Like, where it's negative and big? JACK HARE: Right. Yeah, exactly. So the bright regions here are going to correspond to minima in the density, and the dark regions are going to correspond to maxima. And that's because, very roughly, when we think about what the plasma is doing, we see that minima in the electron density act as focusing lenses and maxima act as diverging regions. And so that's kind of what we saw around about here. OK. AUDIENCE: How did you get the-- how did you get the I initial over I d formula, with the Laplace-- JACK HARE: I skipped a few of the steps. AUDIENCE: OK. JACK HARE: Yeah, it's not obvious, but you can go to Hutchinson's book and see if you can follow the derivation there. But yeah, there's a little bit of magic to do that step there. The thing that you want to recognize is that we're doing something that looks like it's got a Jacobian or something like that involved there, so it all eventually ends up working out. There's also a paper by Kugland that I'll mention later that's really good for this stuff, if you want to see an alternative derivation. AUDIENCE: OK. I'm wondering, how do we-- what is the precise meaning of the gradient of a line-integrated quantity? Is it like we're changing the limits of integration by an infinitesimal amount? Like, what does that mean? JACK HARE: I mean, mathematically, you can just work it out, right. There's nothing wrong with this, because, for example, if we are doing n e of x, y, and z integrated over dz, like that, then you can still take the derivative of this. You just won't have any z components anymore. You'll just have components in x and y. AUDIENCE: OK.
JACK HARE: In reality, when I see this written down, I often see people-- and I do the same thing-- who sort of sneakily move this integration sign to here, and then it makes a little bit more sense because you're actually looking-- for each step, dl, that the ray goes through the plasma, you look at what the local density gradient is. And I think these two things are different, kind of obviously, but they are pretty close for thin plasmas. And what most of the time people are doing is making the approximation that the width of this plasma, which might be a or something like that, is much, much less than L. So this is effectively assuming that the actual path that the ray takes through the plasma is unimportant. We only just care about what path it takes through free space afterwards, which is going to be in a straight line. If you don't have this condition, you actually end up in the caustic regime more easily, and again, I'll point you to some references later which talk about this in a bit more detail. For the-- in the case where the thickness of the plasma is much less than the distance between the plasma and the detector, it doesn't really matter which way around you do this operation. So, yeah, good question. Yeah? AUDIENCE: When you say "the thin plasma regime," we're sort of saying that the path inside the plasma doesn't matter. Isn't that just like the schlieren? JACK HARE: Yeah, and indeed, this effect pops up in schlieren as well. But in schlieren, the bigger effect you get is by putting the stop and blocking out the rays, but you will have shadowgraphic effects inside your schlieren imaging system, too. AUDIENCE: OK, I see. JACK HARE: So, yeah, I just introduced the schlieren first, and now we have the shadowgraphy. But these are both present here, and they're also present in some of the interferometry we'll talk about later. But in each of these, there's something that causes the biggest intensity modulations. In the schlieren, it's the schlieren stop, clearly, but here, again, we're only looking in this regime with small intensity modulation for the moment. But without the schlieren stop, this is the effect that shows up. Mm-hmm? AUDIENCE: Our signal is proportional to the Laplacian of the density, and so why is it that maxima and minima of the signal correspond to maxima and minima of the density rather than maxima and minima of the density gradient? JACK HARE: Hm-hm, hm-hm, hm. Yeah, I see what you're saying. AUDIENCE: I think it's because it's the Laplacian of the line integral of the density as opposed to the Laplacian of the density. AUDIENCE: Oh, like, the integration over space brings you back a level? AUDIENCE: Yeah. JACK HARE: I'll have a look at that and check. I do see what you're saying, and I don't know what the resolution is right at the moment, but yeah, I'll have a look and see if I can work that out. AUDIENCE: OK. Thank you. JACK HARE: Cool. So again, this looks really nice because it looks like you can get out the second derivative of n e, and then maybe you could double-integrate that and you could get out the actual line-integrated electron density. But in reality, the assumptions we'd have to make to get here mean this is really hard because the actual signal term, we have assumed, is much, much smaller than 1, which means that it's really, really hard to measure. So for any realistic system with a realistic signal-to-noise, we don't want to have this limitation. We don't want to be working at position 1 or even position 2.
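Picking up the question from a moment ago about where the incident-over-detector intensity formula comes from: one way to fill in the skipped step-- this is the flavor of the argument in Hutchinson and in the Kugland paper, sketched under the same thin-plasma, small-deflection assumptions rather than reproduced exactly-- is that conserving intensity makes the intensity ratio equal to the Jacobian of the map from (x, y) to (x prime, y prime):

\[
\frac{I_i}{I_d}
= \left| \det \frac{\partial \mathbf{x}'}{\partial \mathbf{x}} \right|
= \det\!\left( \mathbb{1} + L\, \nabla \nabla \!\int N \, \mathrm{d}l \right)
\approx 1 + L\, \nabla_{\!\perp}^{2} \!\int N \, \mathrm{d}l ,
\]

keeping only the term that is first order in the deflection; substituting N of approximately 1 minus n e over 2 n critical then gives the 1 minus (L over 2 n critical) gradient squared integral of n e dl form quoted above. With that in hand, back to the question of where you actually want to put the detector.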
We're actually going to get the best signal when we go to position 3, where all of this no longer applies and we can no longer get this nice result. So this is what I'm saying where you will find people saying shadowgraphy is proportional to the second derivative of the electron density, and that's sort of true. But I've never seen anyone do it. Like, it's not actually possible to do that measurement in a meaningful way. You have to work in a regime where you can actually measure the modulation. And I'll show you some example pictures of schlieren and shadowgraphy towards the end of this lecture so you get an idea of what it looks like in a plasma. OK. I think I've kind of said a lot of this already. If we end up in this regime here, where we have these caustics, we've clearly lost information. We can no longer do this mapping here because it's no longer unique. We've got ray crossings, so different bits of x prime and y prime are mapping onto-- or, one place in x prime, y prime, might map onto multiple x and y, and so we can't do this simple thing. And it's kind of obvious, actually, that you'll be losing information, and even a more advanced technique isn't going to be able to do the reconstruction. So I want to talk just now a little bit about some of these advanced reconstruction techniques which go beyond this, and then I'll show you these examples. So the first paper I've seen that really tackles this very nicely is Kugland, et al., in RSI, 2012. So Kugland points out pretty quickly that shadowgraphy is a direct equivalent to proton radiography. So mathematically, they both deal with the same quantity, which is a sort of deflection potential. In proton radiography, this deflection potential is to do with electric and magnetic fields-- and we'll get on to proton radiography later-- and in shadowgraphy, the deflection potential is to do with density gradients. But once you work in terms of this deflection potential and forget where it came from, you get exactly the same results mathematically, and he has this nice geometric approach. And so if you found the derivation I just did not very convincing, you can go and have a look at this. It's a little bit more rigorous, and also, he then extends it into the caustic regime and beyond, and shows what you would expect to get from a plasma where you have caustics. So this is nice, but he still doesn't really tell you how to analyze it. It's really talking about the problem where you go from knowing the density of the plasma to predicting what you're going to get out. Going the opposite direction-- going from your intensity variations to the density-- isn't particularly well developed, so this is what some people call the forward problem. And the forward problem is going from n e of x, y, and z, to intensity on your detector in x prime and y prime-- useful, but not exactly the solution. So then in 2017, there were two almost competing papers that came out about this. There was a paper by Kasim, et al., in Physical Review E, 2017. What Kasim did is he tried to reconstruct this deflection potential that Kugland had come up with. And he did this using a technique-- that was borrowed from, some would say, computer graphics, or at least some fields of applied mathematics-- which used a Voronoi diagram. Has anyone come across Voronoi diagrams before? AUDIENCE: They're huge in robotics. JACK HARE: Sorry? AUDIENCE: They're huge in robotics. JACK HARE: OK, cool.
Right, so there's like-- they basically-- I think this worked well because they took some work that other people have been doing in different fields and applied it here. So a Voronoi diagram-- ha-ha, please don't shout at me if I get this wrong-- is roughly, if you have a series of points which are randomly distributed, how do you draw polygons around them such that every point inside the polygon is closest to this point and not to any of the other points? So it's a way of tiling up a space. And from this, you can imagine that these tiles that you've produced in your shadowgraphy can then be related back to a more uniform grid of tiles which all have the same shape and the same intensity, which is your intensity beforehand, and this is your intensity at the detector. Again, this is just a very hand-wavy sketch of what they did. And they came up with an algorithm to do this, and this enabled them to do the inverse problem, which is intensity at our detector in x prime, y prime, going back towards density in x, y, and z. And as we discussed before, this is an ill-posed problem. There are a large family of possible density structures that produce the same intensity. But this Voronoi diagram is making conservative assumptions about where light has come from in order to be able to put stuff back. At almost exactly the same time, Bott-- both these groups are at Oxford. Yes? AUDIENCE: So would this be applicable to shooting an array of lasers through, or shooting in a grid? JACK HARE: Oh, so no one does that, but they should. So they did it in proton radiography in the early days. They actually had little beamlets, and then you can uniquely work out where each beamlet is deflected. But of course, you only get, like, n beamlets of points of data, and it's like, it doesn't look pretty, you can't put it in Nature, that sort of thing. So people moved very quickly away from that technique, so now we have data that's impossible to analyze but is very beautiful. Whereas, we used to have data that was entirely analyzable but not very beautiful. So I have opinions. AUDIENCE: So this actually would be just a finite number of rays that should be-- JACK HARE: No, I mean, this is entirely to do with-- this is not to do with the finite number of rays. This is still to do with our nice, initially uniform beam. It's just the way that we segment up the final image. These dots don't really exist. You're actually-- in reality, you're trying to find-- ha, this is where I'm probably going to get it wrong-- you're trying to find regions which contain the same intensity as one of these initial squares, and you're trying to sort of tile them together in such a way that you don't have to have something that looks like a congressional district, right, where it-- [LAUGHTER] OK, because that's obviously silly. That's unlikely to happen. So we're trying to-- this is-- you could maybe use this algorithm to sort out a lot of problems in this. Anyway, so at the same time, Bott was working on an algorithm that ended up doing the same thing, and this is in JPP in 2017. This paper is something like 120 pages long. It very much helps to have your thesis advisor be the editor in chief of JPP if you want to publish a paper with them. And they use a very interesting technique called the-- I'm probably going to say this wrong, and I've certainly mangled all the accents in my notes here-- Monge-Ampère optimal transport.
And this optimal transport algorithm had actually won someone the Fields Medal only a few years before, Cédric Villani, who wears these incredibly huge bow ties, and he has a wonderful book called Birth of a Theorem where he discusses how great it is to be Cédric Villani. But, so this algorithm was derived not at all to do with proton radiography, but it is to do with how we-- what is the most conservative way to map one function into another function like this. And once you derive that, you can then put it back where you started from, so this also enables you to do this same inverse problem like this. And I'm not even going to slightly go through my guess at what the Monge-Ampère equation does because I don't have one. But it seems to work. They give similar results. It turns out this is much faster. I think most people use some version of this at the moment, but this one is maybe easier to understand. So both of these give similar results with slightly different techniques. And I think, in the end, Kasim wrote a code based on Bott's paper that was faster than what Bott had done, so I think people have converged on using something to do with this. And again, these techniques were mostly invented for proton radiography, but now, as we said, the mathematics is the same for shadowgraphy. I have not seen anyone use either technique to properly analyze shadowgraphy, but it should be possible. Yes? AUDIENCE: How does this problem differ from more general tomography problems we encounter? Right, like, tomography or when people do tomographic reconstructions-- JACK HARE: There's no tomography here. We only have one line of sight. You can't do a tomographic reconstruction-- AUDIENCE: OK, so there's only one line of sight. JACK HARE: Yeah, I mean you can-- OK, so now you can ask yourself, If I have multiple lines of sight, can I do tomographic reconstruction? Which, like, yes, obviously, but it's also hard. But, you know, it might be possible. But this is a single line of sight, so we're not trying to-- ah, thank you. This is the mistake I've been making. We are not actually getting this out. We are getting our best guess at this out. Right, so this is not a full three-dimensional reconstruction. This is still a reconstruction of the line-integrated electron density here. So, yeah, yeah, that's a good point. I forgot about that. And it's sort of obvious that you should be able to do that. But there still could be multiple profiles that still produce the same intensity distribution, so it's still not particularly well posed. If you do have some caustics inside your projected image, none of these work, right, so we no longer are able to do the reconstruction. You can actually have a go at doing the reconstruction if you have some strong priors. So if you put-- you have some optimization algorithm that thinks, like, there's a shock here and that shock is going to cause caustics, and I think the caustics will look like this, you might be able to do it. But of course, you'd obviously have very strong priors. The techniques are very line integrated, as was just pointed out here, so we're not getting a full 3D structure. That's maybe a bit too much to ask from a diagnostic which is clearly line integrated, but it's still a limitation.
And then the final problem that in proton radiography is particularly profound is actually how reproducible this is, how reproducible your initial-- I'm going to run out of space-- your initial intensity is, because before you do the experiment, you fire your laser beam through the chamber, and you measure that beam profile. But then when you actually do the experiment, that beam profile changes. You know, lasers are not completely stable. The beam profile changes from time to time. And so that means what you think you're mapping from here to here is actually slightly different, and that's going to introduce some noise. This is very important for proton radiography, where it's very hard to measure the beam. For a laser shadowgraphy setup, you actually have more of a chance. You can put a beam splitter before the plasma and sample the beam itself, so you actually simultaneously measure this quantity and this quantity. So there's a lot of scope for doing some really cool stuff with shadowgraphy. Questions? Yeah? AUDIENCE: Are there obvious practical reasons why you wouldn't be measuring at multiple different distances, or even if you have-- JACK HARE: I think it's a great idea. No one's done it. I want to do it. [LAUGHTER] Yeah, absolutely. So it seems to me like if you have these images of different places, you should be able to reconstruct the trajectories of the rays, and that would give you more information. AUDIENCE: Yeah. JACK HARE: And in fact, in some of the first proton radiography papers, this is discussed, from looking at the position through multiple stacks. But I haven't seen it actually done in practice. AUDIENCE: You could kind of do it. JACK HARE: OK. AUDIENCE: But not fully, because it's-- the proton-- typically, all of your energy gets deposited first-- JACK HARE: Yeah, so I think that's the problem, but I like the idea. Like, it's a cool-- if it works, if people used this, yeah. But you could definitely do it with shadowgraphy, relatively straightforward. Yeah. The other thing you could do is you could put multiple lasers at different wavelengths through, and they'd be deflected by different angles. And then you could use those different deflections, like when you have your proton radiography and you use different particle energies to distinguish between electric and magnetic fields. Here, we don't have electric and magnetic fields. There's only one thing that can cause the deflections, which is density gradients, but those different colors would enable you to do something similar. So you could imagine that one of your other rays would get deflected less if it had a shorter wavelength, and it would take a trajectory like that. So by comparing where it ends up in one wavelength to where it ends up in the other wavelength, you should be able to actually just precisely measure the angle that's made and therefore what the density gradients are inside, but I haven't seen anyone try that yet. Yeah? AUDIENCE: When you responded to John's question, you mentioned that you only have one line of sight here. But if your detector is able to do x and y positions of your intensity, could you consider each pixel as a different line of sight? JACK HARE: But it's only a chord through a certain bit of the plasma. It's not a line of sight through the same bit of plasma. When we do tomography, you know, you imagine you've got some cloud, and you're looking through the same bit of plasma from multiple angles.
Here, you're looking at different bits of plasma, so you can't tomographically reconstruct the density there because it's literally a different place in the plasma. AUDIENCE: Sure, OK, yes. JACK HARE: Yeah? AUDIENCE: I would've expected a smaller wavelength to be deflected more because of the length scale of the plasma versus this large wavelength-- JACK HARE: We are doing geometric optics, so we don't actually care about the wavelength. AUDIENCE: Oh, oh, OK. JACK HARE: Yeah, so in systems where you're not doing geometric optics, where the wavelength is comparable to the size of the plasma, that would be a diffraction effect, and that would be more important. AUDIENCE: Got you. OK. JACK HARE: But we are actually doing geometric optics where that isn't important, but you're right. So all of the stuff I've been doing, the reason I'm drawing straight lines everywhere is I'm doing geometric optics and doing ray optics. So in that case, a different deflection angle comes about because for shorter wavelengths, the critical density is higher, and so this quantity becomes smaller. This is just a little bit of a lighthearted little picture show. And we may finish off the lecture with this, or we might get started on interferometry, depending on how I feel. But here are some nice pictures. Shadowgraphy is absolutely everywhere. You have already seen many shadowgraphs before. If you've ever seen a mirage, that is a shadowgraph. That is the natural focusing of light by refractive index gradients in hot air, right, and you see that shimmering. You see the fact that there appear to be mountains below where the mountains actually are. That's because the rays of light have been bent by the hot air back upwards into your eye. And so this is what I mean when I say that shadowgraphy is not an image. Whenever you take a shadowgraph, what you're really seeing is some sort of mirage. So the first person we know of to ever study shadowgraphy was this guy Jean-Paul Marat. That's a portrait of him. Here are his shadowgrams. He actually drew these by hand because he didn't have cameras back in the day. This is the guy. Here he is in 1793. He was actually deeply unpleasant. He was a Jacobin. He was responsible for the deaths of hundreds of thousands of people in the French Revolution, and he was eventually murdered in the bathtub. [LAUGHTER] But before he was murdered, he had a very famous guest. He had a very famous guest who came and actually sat, and he sketched the shadowgraphic effect of this-- the effect of this guest's bald head on the air around it. So does anyone know who this is? That is, of course, Ben Franklin-- so there we go-- who had made a habit of hanging around in France. But these days, we usually do things which are much more exciting than Ben Franklin's head. So here are several different models. These are models for the Gemini capsule that was part of the American space program, and they wanted to understand what the shockwaves were around it. So you can see that this capsule is coming from right to left. It's got this blunt end here, and we have this very well-defined bow-shock structure here. So we think that that's a caustic? It's a big intensity variation, right? It's extremely black here, and it's very bright around the outside here. Behind it, what do we have? We've got a set of shocks there, a shock here.
There's another set of shocks coming off here and here, which interact with the outer shocks, and then we have this beautifully turbulent flow behind it, right, and the same thing here on this more hemispherical object. And so I don't know how big these were, but you can do this in a wind tunnel. In this case, it was probably not a wind tunnel but a static tube of gas that they fired these through with a cannon. And then they would have used some sort of bright light source, probably not a laser, in order to do this. Also, when you start looking at the literature, you get lots of beautiful pictures of bullets. Bullets are particularly good subjects. So I believe, in this picture, the gun is just here, and so you can see the cloud of exhaust vapor coming out of the gun barrel. You can see this is the sound wave of the shot going off, and then supersonically moving away from the gun is the bullet. We can see the trail with the defined structure behind it and these very clearly defined shockwaves here. And again, these shockwaves have these light and dark regions corresponding to changes in refractive index. And this is, I think, a zoom-in of that photograph, but I can't be sure-- it looks like it is-- of that. So you can see you can get an extraordinary amount of qualitative detail. What you can measure from this straight away is the shock opening angle, so you can measure the Mach number. What's going to be a lot more tricky looking at these pictures is to work out what the density and temperature of the air is everywhere inside this picture. That's not really going to be doable because we're not in this small-intensity-variation regime. We've deliberately gone into a regime where we get caustics, which means we also get a strong intensity variation, so we can actually measure something. If we were looking at one of those small-intensity-variation shadowgraphs, it would be very boring. It would mostly be gray with very, very tiny modulations to it, so. Yeah? AUDIENCE: On figure B, why are the light and dark regions above and below swapped? JACK HARE: Pass. I'll put it on a little problem set. [LAUGHTER] I don't know immediately why that is. The thing is also to remember, if you end up in a regime where you have caustics and you have strong deflections of your rays, you may actually end up in a regime where the rays don't go through your first optics, that they may be deflected out of the collection volume of your first optic. And then they would just show up as dark regions. So you can also have, overlaid with the shadowgraphy effects, what are effectively schlieren-type effects, where we are rejecting rays by their angle, but that's just due to the physical size of the optics we use. So I don't know if that's the case here, but it does complicate the interpretation further. Yes? AUDIENCE: In the previous pictures with the Gemini capsule, there's the effect of pressure waves, shockwaves, and there's also supposedly intense heating that should be, like, on the surface of the sphere, and that's also going to change the refractive index. JACK HARE: Yes. AUDIENCE: Which effect of those-- like, how could we distinguish if something is a thermal refractive index change or a pressure-- JACK HARE: In air. AUDIENCE: In air? JACK HARE: Yeah, I think that's very difficult because they both are just the change in refractive index. AUDIENCE: Yeah? JACK HARE: Yeah. I don't think it's possible to tell the difference between those two, yeah. There's also some funky pictures in this book.
This looks pretty straightforward until you realize the bullet's actually flying backwards. I don't know why they did that, but there we go. This is a great book, by the way, Settles' Schlieren and Shadowgraph Techniques. It's got wonderful pictures inside it if you want to know more about this. It's in the bibliography on the syllabus, and it's a great read. But I appreciate most people are not going to be using shadowgraphy and schlieren, but it's still good stuff. But you don't have to fire bullets at things. This is actually a picture of the author of this book writing his book next to his heating unit. There he is. His head is not quite as impressive as Ben Franklin's, but there's his computer as he types away. And you can see that you can actually make these measurements even in relatively benign conditions, and that's because, again, if we put our detector further and further back, even small variations in the angle going through the refractive medium are going to be mapped into large intensity variations. So people use this technique to look for flows of air. You can look for flows of air for all sorts of reasons. A pretty benign reason would be in the HVAC industry, if you want to see whether these things are working. So there are applications for shadowgraphic techniques and schlieren techniques just in very benign conditions like this, but we, of course, are interested in plasmas. So we talked about this a little bit more in the last lecture, but remember that we need a very bright light source to overcome the self-emission from a plasma. So that means we really have to go to a laser. We just don't have any other light sources that are bright enough. Lasers are actually not ideal for any of these techniques, though. You really want to have a nice, large focal spot-- that turns out to be true for shadowgraphy as well, but we needn't really go into why-- whereas a laser focuses down to a very small spot. And so this small focal spot gives us quite limited dynamic range, not to mention the fact that we are assuming in all of this that we don't have any coherence effects so we don't have interference, and we'll talk a lot about interference in a little bit. And so really, we don't want to have that coherence when we're doing schlieren and shadowgraphy, but if you've got a laser, we tend to have coherence that we don't want. So these are not great, but you can still get some nice images out of it. Here's a device called an X-pinch. It consists of two wires that are crossed here. This is only 1 millimeter across here, so this is a pretty small object. And these are X-ray, gated X-ray images of the X-pinch. We put, in this case, 200 kiloamps through each wire, and it forms a plasma here, which pinches. The fields here are maybe a hundred or a thousand tesla. And it pinches the plasma inwards like this, compressing it up so it becomes extremely hot, and it emits a burst of X-rays, which you can then use for imaging things. So when we get on to self-emission diagnostics, X-ray diagnostics, we'll talk a little bit about this. So these are the X-ray images, but this rather beautiful schlieren image was captured of this X-pinch in 2008. Here, they used a dark-field schlieren system with a circular stop. You can tell that because it looks up-down symmetric, so we're not-- we don't have a knife edge, which would have distinguished between the different directions. And you can tell it's dark-field because outside, where there is no plasma, it's dark here.
If it was light-field, this region would be filled with laser light, and we'd have darkness wherever we have lightness here. And you can see a beautiful amount of detail. This projector isn't really doing it justice. You can see in the center here, this is the pinching region. It's far too dense for all of the laser light to make it through. It's much above the critical density, and so there's very strong refraction of the laser light outwards, dark in the center. There's jets of plasma going up and down out of this compressed region. And there's also ablation streams coming off each of the wires which have this beautiful modulated pattern, which is actually due to an instability in the wire ablation process. So this is extremely rich. There's a lot of information you can get out of this, even though it's not quantitative. Another thing that's been done is using schlieren imaging to image shocks in what's initially a gas, but quickly becomes a plasma. So what we had in these experiments was a metal liner. This is only about 5 millimeters across, so it's still pretty small. And it was filled with a gas, 8 millibars of argon, 15 millibars of nitrogen. A current was put up through this metal cylinder. Again, the cylinder is only about that tall and that wide. And as it does so, it heats the outside of the cylinder and launches a shockwave inwards. And that shockwave couples to the gas, and it launches this first shockwave in. And as the current continues to rise, you actually get above the melt point of the metal, and that launches another shockwave due to material strength that starts coming in. So we get these converging shockwaves. And the beautiful thing here is that in argon, we had a beautifully circular shock, but in nitrogen, for some reason, we got this hexagonal shape of shock which has never been explained, absolutely bizarre, because there's no sixfold symmetry in this system. It shouldn't happen. So there's some instability which is giving it this really bizarre shape. And again, this was dark-field with a circular stop. That's a pretty standard configuration. Dark-field is a bit more sensitive than light-field because if you see any light at all, you know that it's being deflected, and that's really what you're looking for here. And then, shadowgraphy, this is an image that I took of an imploding wire array. So again, this thing is only about 16 millimeters tall, 16 millimeters in diameter. We've got eight carbon rods here. Current goes up through the rods, ablates plasma off them. J-cross-B force accelerates the plasma inwards, and you get a Z-pinch column in the center here. And looking from the side using a green laser beam, we can see that we've got shadows corresponding to the four wires. So there's four wires on this side, and they're blocking the four wires on the other side. And you can see the column of plasma in the middle here, and what you can see is there's these very strong modulations to the intensity. These are caustics, OK. We also see modulations in the flow, as we saw in the X-pinch. And these caustics mean that this data, while very pretty, is pretty useless. There's not much we can actually do with it because we can't do any decent analysis. But it does tell us, because the caustics are on a large range of different spatial scales, that we must have density perturbations inside the plasma at a lot of different spatial scales. And so this means the plasma is likely to be turbulent. So this is a little turbulent Z-pinch inside a pulsed power machine.
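As a rough sketch of the kind of forward model referred to next -- rays pushed through an assumed line-integrated density in the geometric-optics, small-deflection limit and then binned at a screen -- something like the following. All numbers are invented; the critical density used is the approximate value for a 532 nm (green) laser.

```python
import numpy as np

# Minimal 1D forward model for shadowgraphy in the geometric-optics limit:
# push a bundle of initially parallel rays past an assumed line-integrated
# electron density, deflect each ray by the transverse gradient, drift to a
# screen, and histogram where the rays land.

n_crit = 4.0e27          # critical density for a 532 nm laser, per m^3 (approx)
L_screen = 0.1           # distance from plasma to detector plane, m (assumed)

x0 = np.linspace(-5e-3, 5e-3, 200_000)            # initial ray positions, m
neL = 1e23 * np.exp(-x0**2 / (1e-3)**2)           # assumed int(n_e dl), per m^2

# Refractive index N ~ 1 - n_e / (2 n_crit), so the deflection angle is
# roughly -(1 / (2 n_crit)) times the transverse gradient of int(n_e dl).
theta = -np.gradient(neL, x0) / (2.0 * n_crit)

# Drift each ray to the screen and bin the arrival positions.
x_screen = x0 + L_screen * theta
intensity, edges = np.histogram(x_screen, bins=500,
                                range=(x0.min(), x0.max()))

# Where rays pile up you get bright bands; where the deflection varies fast
# enough that neighbouring rays cross, the pile-up is extreme -- caustics.
print(intensity.max(), intensity.min())
```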
So is that it? Oh, and you can also do some pretty good 3D simulations of these things. And then you can spend a lot of time doing Monte Carlo ray tracing, tracking rays through them, and seeing what the shadowgraphy looks like. And I don't think this is a particularly bad match between what we saw from our simulations and what we saw in our actual data here. So it is possible to use computational tools to work out what we would predict. So that's it. That's all I've got on shadowgraphy and schlieren. Any questions on that? Yeah? AUDIENCE: Even if you have to worry about caustics, and you don't feel like you can get all the way back to the initial distribution, can you at least analyze it for frequency distribution or something to get, like, oh, I must have had this many spatial scales in my original plasma or something, or-- JACK HARE: Yeah, so it's very tempting, when you have an image or when you have a time-series bit of data, to Fourier transform it and look for spectral content. And in particular, when we're talking about turbulence, we might do that, and we might look for power spectra corresponding to some turbulent density fluctuation spectrum, so, like, a Kolmogorov "K to the minus 5/3" distribution. And so I did this. I did this, and of course, you get a really nice K to the minus 5/3 on this. And then I took the background image, the one without the plasma, and you also get K to the minus 5/3. And then I took a photograph of the experimental apparatus and Fourier-transformed the photograph, and you also get something like K to the minus 5/3. The trouble is, when you're doing Fourier transforms on images, you've got to think, How many pixels have I got? You've probably got, like, a thousand by a thousand pixels. And so when you're doing your Fourier transform, your dynamic range is only going to be about 10 to the 3. But at those large scales, at the smallest K, it's going to be like large-scale structure, so you wouldn't fit a power spectrum there. And at small scales, it's going to be down at pixel noise, so you wouldn't fit it there. So actually, you've only got maybe an order of magnitude, and you can fit any straight line you want to a curve and claim that you've got K to the minus 5/3 or K to the minus 3/2 or whatever, because when you do turbulence theory, they all turn out to be roughly the same. So that's one reason it's really hard just to Fourier-transform these. The second reason is, as we've talked about before, this is a mirage. It's not an image. So if I see a region like this black region here, or another region-- maybe it's easier to point out on this one. You see these sort of black voids here, and you think, OK, I could just be like, hey, this is about 2 millimeters long, this is 1 millimeter long, make a histogram, fit a power law or something to it. But these don't represent an object that is 2 millimeters long. They are a defocusing mechanism. That could be a really tiny region of very high density that defocuses the light, and then it's projected out into this larger region. So we can't-- there's no spatial information properly left inside this image. Well, there's some spatial information. This sort of structure here corresponds to this sort of structure, and you think, it's about 5 millimeters across, it's probably slightly de-magnified, because it will actually make a larger image. But each of these individual voids no longer has the same spatial size as the structure that produced it, so it's very hard to infer things about turbulence from them. But, yeah, it's a good question.
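For concreteness, here is a minimal sketch of the azimuthally averaged power spectrum being cautioned against above, applied to filtered random noise standing in for an image. The point is just how little usable dynamic range in k a thousand-by-thousand-pixel image gives you; the image and all parameters are invented.

```python
import numpy as np

# Sketch of the power-spectrum exercise: take a 1000x1000 "image", Fourier
# transform it, azimuthally average |FT|^2, and look at how few decades of k
# are actually available before large-scale structure (low k) and pixel
# noise (high k) take over.

rng = np.random.default_rng(0)
img = rng.normal(size=(1000, 1000))          # stand-in for a shadowgraph

F = np.fft.fftshift(np.fft.fft2(img))
power = np.abs(F)**2

# Radial (azimuthal) average of the 2D spectrum.
ky, kx = np.indices(power.shape)
k = np.hypot(kx - power.shape[1] / 2, ky - power.shape[0] / 2).astype(int)
counts = np.bincount(k.ravel())
radial = np.bincount(k.ravel(), weights=power.ravel()) / np.maximum(counts, 1)

# With ~1000 pixels across, k only runs from ~1 to ~500, and after discarding
# the large-scale and pixel-noise ends you have barely one decade left --
# almost any power law (-5/3, -3/2, ...) will "fit" that.
print(radial.shape)
```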
Any other questions? Mm-hmm? AUDIENCE: Can you get back into the difficulties of using Bott's method? If you run the reconstruction, even though there are the usual caustics, do you still get some information where one-- JACK HARE: So I don't think you can. I mean, when you run this algorithm on your data, shadowgraphy or proton radiography, one of these Monge-Ampere optimal transport algorithms, it will always give you an answer, right. So it returns a solution, right. But we know that these reconstruction algorithms don't work when we have the caustic regime, and so that solution is very suspect. Right, like, we don't believe we should trust it. So I don't think there's anything you can use from that solution to help you reconstruct the actual thing. I think, at that point, if you've got caustics, your best bet is having some very strong priors as to the sort of plasma you think you've got, and then pushing Monte Carlo rays or protons through it, doing the forward problem, and adjusting the forward problem until it matches some of the features you see on your actual data. I don't think you can do the inverse problem easily. But there have been a few papers where people have tried to do this, because we almost always end up in the caustic regime. So people have all this data, and they want to use it. Like, it's reasonable to try and do something with that data. It's just very hard, so, yeah. Yeah? AUDIENCE: So for those images of, like, the heads and the room, those are not super small-scale effects. So to see those, do you just push your imaging surface really far away? JACK HARE: If you take a laser pointer and make it diverge by taking off the little lens at the front of it, and you take a candle flame and you project it onto a wall, you will see this. Like, again, you see mirages just using sunlight, so, yeah, this is not a hard thing to observe. Yeah. AUDIENCE: OK, so if you see something with your naked eye, it's either a really small object or something really far away? JACK HARE: Yes. AUDIENCE: OK. JACK HARE: Yes, exactly, and usually, it's quite far away, right. I mean, if we-- you can often see even the heat rising from a vent or something like that, but not when you're right up close to it. Give it a go next time you see something, if it's safe. [LAUGHS] Put your eye right up next to it and see if it disappears, so, yeah. AUDIENCE: OK. Cool. JACK HARE: Please don't, anyway. [LAUGHTER] Any other questions? AUDIENCE: Can you give one more example of when you can see-- like, in this exact situation, is it like if you have sunlight coming through a window and over like a heater or something like that? JACK HARE: Yes, onto the far wall. AUDIENCE: Onto the far wall. OK. JACK HARE: Yes. Yeah. AUDIENCE: Yeah, exactly. That's what I'm saying. JACK HARE: Another place-- the bottom of a swimming pool. So when you have waves on the top surface of a swimming pool, you get those bright lines on the bottom. Those are caustics, right, so it's exactly the same sort of physics that produces these. So there is a beautiful book called The Natural Focusing of Light which tries to analyze caustics but mostly does it in a way that I don't think is practical. It's theoretically beautiful, and one of the things that they point out about caustics is that every time you get a bright region, you get a dark region on one side but not on the other side, and that tells you what direction the caustic came from.
And so you might be able to trace back a series of arrows around one of these caustics here and work out where the point was that the caustic originated from, but it's very tricky to do the analysis of it. But the point is the book is called The Natural Focusing of Light, so people refer to this field as natural focusing. No one has tried to make a lens. No one is trying to do any focusing. Our medium, with its inhomogeneous refractive index, has just done it for us, and then we might try and work out what we can learn about the medium from looking at the light that's gone through it. Yes? AUDIENCE: One more thing, that-- to make sure I don't try this later. So if you have an image like the one with the bullet, for instance, there are some really clear caustics, then parts of the rest of the image look like they might be fine, if you want to analyze the rest of your image, is that-- JACK HARE: Oh, yeah, you can cut out the bit with the caustics. AUDIENCE: OK. JACK HARE: Yeah, yeah, that's fine. So, I mean, almost none of this image is actually suitable, but maybe this bit would be more-- because it's not small intensity variations. You can see, if you think about this gray as, like, 0.5 and you think about the white as 1 and the black as 0, you can see that you're getting modulations on the order of 0.5 inside this image, so it's clearly not in that small regime. But it's not clear that these are caustics, so you may still be able to use one of the complicated Monge-Ampere style reconstruction techniques. You just won't be able to use the nice formula that we wrote down analytically, so. OK, you've successfully timed me out. I was going to start talking on interferometry. Well done, everyone. So we'll leave it there, and we will pick up on interferometry on Thursday. Sounds good.
Lecture_8_Refractive_Index_Diagnostics_IV.txt
[SQUEAKING] [RUSTLING] [CLICKING] JACK HARE: So we talked about interferometry. We had some sort of plasma like this. And we had a probing laser beam that went through the plasma. And we split off a fraction of that probing laser beam and sent it around the plasma, which meant that when the two beams recombined, we got some interference effect between them. And we call these beams the probe and the reference. And we said-- or we found that the phase difference between these two beams, delta phi, which is the phase accumulated by the probe beam minus the phase accumulated by the reference beam, that was going to be minus omega over 2c, divided by the critical density-- which is itself a function of the wavelength of the laser-- times the line-integrated electron density. So that's the electron density n e inside this plasma, integrated over some plasma length L, like that. And so this is the phase difference between the probe and the reference. We want to measure that. We came up with a simple system where the intensity on our detector was just going to be equal to 1 plus the cosine of this phase here. And we realized very quickly that this causes us some problems, because if we have a signal on our detector of 1 plus cos delta phi, and that signal looks something like this, so some sort of 1 plus cosine of some constant a times t like that, something that's oscillating, then we have a lot of phase ambiguity in the sense that we don't know whether the phase is going up or down. And we also can only measure the phase modulo 2 pi. And for this example here, we had a look at what possible paths we could take in our little delta phi space that would give us exactly the same signal. We said we could have one that ramps up like this. We could also have a delta phi that goes down like that. And then every time we get to some multiple of pi, we lose track of whether we are going up or down. And so we start having these multiple branching pathways and any possible path through this space is valid. It will produce the same signal on our detector. And we can't tell the difference between them. So this we then rechristened a homodyne technique. And then we looked at heterodyne techniques instead. So when we started working with the heterodyne technique, we borrowed some tricks from FM radio transmission. And we now have going through our plasma some radiation source, which has got a frequency omega 1. And now our reference beam has a frequency omega 2. So they've got some frequency shift between them. We talked about techniques for doing that. We put in some recombining beam splitter here. And we put it through to our detector. And we said, if omega 1 is equal to omega 2, we just get back our homodyne system, which is kind of obvious, because here we split it. So they will have the same frequency. But in the more interesting case where omega 1 is not equal to omega 2, we'll end up with two frequencies present in our final signal. We'll have a signal on our detector even in the absence of any plasma. So if we just leave the system running, we'll have a signal that looks a little bit like this. And there will be two frequencies inside this. There will be this envelope frequency called the beat frequency, which oscillates at the difference between the two, omega 1 minus omega 2. And then there will also be this fast frequency within it at omega 1 plus omega 2. Now, these frequencies for any radiation we're likely to use are very high. So it's very hard to get a detector to work at these frequencies.
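As a concrete numerical sketch of the mixing just described -- two assumed frequencies summed, squared as a square-law detector would do, and then averaged over a detector response time that is long compared with the sum frequency but short compared with the beat -- something like the following (the frequencies are scaled way down from optical values so the arrays stay small):

```python
import numpy as np

# Two waves at omega_1 and omega_2 are summed and squared; a crude running
# mean then plays the role of the slow detector, killing the fast terms near
# omega_1 + omega_2 while passing the beat at omega_1 - omega_2.

omega_1 = 2 * np.pi * 1.00e9        # probe frequency, rad/s (illustrative)
omega_2 = 2 * np.pi * 1.01e9        # reference, shifted to give a 10 MHz beat
t = np.linspace(0.0, 1e-6, 200_000)

signal = (np.cos(omega_1 * t) + np.cos(omega_2 * t))**2

# "Slow detector": a running mean over ~10 ns, long compared with the ~1 ns
# carrier period but short compared with the ~100 ns beat period.
window = 2_000
kernel = np.ones(window) / window
detected = np.convolve(signal, kernel, mode="same")

# 'detected' now looks like 1 + cos((omega_1 - omega_2) t), which is the
# slow envelope the lecture calls the beat frequency.
print(detected[:5])
```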
So we won't actually see this at all. Our detector will just average this out. And what we'll see is this slow beat frequency instead. And we can detune omega 1 from omega 2 to get a beat frequency that nicely falls within our detector range. So far so good, but there's no plasma physics in here. What we'll end up measuring then is something that looks like i/i0 equal to 1 plus cosine of omega 1 minus omega 2 t. That's the beat frequency. But we will also have an additional phase term, delta phi, as we had before, because that's what represents the phase of going through this plasma here, delta phi. And when we looked at this, we said, huh, that's interesting. We've got frequencies times time, and now we've just got this other term, delta phi, which means that if delta phi changes in time, we have a change in delta phi with respect to time. That's going to look like an effective frequency inside here. And then we can rewrite this as 1 plus cosine of omega 1 minus omega 2 plus partial of delta phi partial t times time like that. And what we actually measure on our detector at the end of the day is a signal, which is oscillating with some frequency, which I'll call omega prime, which is the sum of these three different frequencies here, which means that if we see a change in the frequency omega prime, we know that change is due to a change in delta phi in the phase. And so now we can measure the temporal change in phase, which is the temporal change in the electron density. The nice thing about this technique is that we found that now we can distinguish between the phase going up in time from the phase going down in time, which we weren't able to do with this technique. And that's why we got this ambiguity every time we went past pi, whether we were going up or down. With this technique, it was resolved, and we saw that we resolved that by looking at this in Fourier space. So in the case of the homodyne technique, we were detecting, effectively, this frequency here at some positive omega. So that's d phi, partial delta phi, partial t. But that gave exactly the same result as some negative frequency here. So we couldn't tell the difference between the negative version of this and the positive version. When we moved to the heterodyne technique, because we are effectively encoding our fluctuating quantity around some frequency omega 1 minus omega 2, now we can tell the difference between whether we're on the negative side of that or the positive side of that. So we've shifted this frequency a small distance by changing the phase, by having some time-changing electron density. So this enabled us to resolve the ambiguity between these two and, indeed, this technique in general helps resolve a lot of the ambiguities associated with homodyne interferometry. Though as I mentioned in the problem set, you'll come across a couple of other techniques which can do this in a slightly cheaper, but more ambiguous way. So we got through all of that. And I just want to pause here and see if there are any questions on that material before we go on and finish off the spatially heterodyne version where we do this as an imaging technique. So questions? Yeah. AUDIENCE: So I understand that the detector is not able to resolve the omega 1 plus omega 2 component. But does the time change in the phase difference contribute to that oscillation as well if you were able to resolve it? JACK HARE: Yes. But it would, if we look at it in a Fourier domain, I'm on another board. I'll just use this one briefly.
I won't exaggerate this a lot. Let's say we've got our two frequencies omega 1 and omega 2 close together, omega 1, omega 2 like that. The difference between them is down here. This is the beat frequency, omega 1 minus omega 2. And indeed, we have some shift to higher or lower frequencies. And that shift is due to the change in the phase in time. The sum frequency is up here. This is omega 1 plus omega 2. And indeed, it will also be shifted by the same amount here. But it will be at such a high frequency still that there will be no way to measure it. If you happen to have a system where your phase changes so much that you can shift this one down into a range you can measure, then I don't think you need a heterodyne technique at all at that point. But anyway, certainly, if that happens, you should make omega 1 and omega 2 larger until it doesn't happen, because you remember that we have this condition, but you only really get good results out of this if partial delta phi partial t is much, much less than omega 1 minus omega 2, which necessarily means it's much, much less than omega 1 plus omega 2. You don't want your shifted frequency getting anywhere close to 0, because then you [INAUDIBLE] again. And you won't be able to measure it. AUDIENCE: Thank you. JACK HARE: Good question. Any other questions? Well, I've either taught it very well or you're going to find the homework very hard. Let's go on to spatial heterodyne techniques. So here is the idea that we actually put an expanded beam through our plasma. So we take our laser beam, which maybe is initially relatively small, and through some beam expander, we get out a large laser beam. And we pass that through. We've expanded our beam enough that our plasma is maybe slightly smaller than the beam diameter. So there are some regions outside the plasma that we can still image where there won't be any phase shifts. This turns out to be useful for zeroing our system. We'll talk about that a little bit later on. And what we said is this beam, of course, consists of some wavefronts like this. And these wavefronts, as they go through the plasma, are going to advance, because the phase speed in the plasma is faster than the speed of light. And so the phase actually advances inside the plasma. And so the wavefront that comes out is also going to be advanced with respect to the rest of the beam. We want to measure the change in this wavefront. So one thing we could do is we could interfere it with a set of plane waves that we derive from the same laser beam. We'll put a beam splitter in somewhere down here. Send these around the plasma. Send it back in here. And then we'll have another beam splitter as we normally do that recombines these, and then we would have this nice flat phase front. And we would now have from the probe beam some phase front, which is advanced a little bit. And if we put all of this image onto a detector-- here's our camera like this-- we get a series of constructive and destructive interference fringes that maybe look like this. Drawing a set of nested contours here, we assume we've got some sort of peaked central structure here. Now, the trouble with this is this is still a homodyne system in the sense that we can't tell whether the density contours are going up or down. If I draw a random line out across this, my density could look like a peaked structure. Maybe I've got a prior that that's true.
But I can't prove to you that my density doesn't look like that instead, or any number of other different paths through phase space that will give us the same fringe pattern. And so this is still problematic. So what we want is a spatially heterodyne version of our temporally heterodyne system that we had out there. And we do our spatial heterodyning by tilting these fringes. That literally means slightly adjusting this mirror here so that the fringes come through at an absolutely tiny angle. So we're not talking about 10 degrees here. We're talking about much less than a degree. And that means that our phase fronts are now coming at some slight tilt here. And as opposed to having omega 1 minus omega 2, we now have an inbuilt phase pattern that looks like k1 minus k2. And these are vectors in the xy plane here. We don't care about the z components, where this is x and this is y. And by convention, we usually have the z coordinate going in the direction of our rays here. So we're obviously putting a camera here. We don't measure anything in z. We measure in y and x. And we're interested in the misalignment of these wavefronts in x and y. So in the absence of any plasma, this misalignment is simply going to give you a series of straight fringes evenly spaced like this. And so that is like the signal that you have, your beat signal here, the green line. In the absence of any plasma, this signal just goes on and on and on. But when we introduce the plasma into our system, each of these fringes will get distorted. And they'll get distorted by an amount that corresponds to the line-integrated electron density. And by looking at the shift between where the fringe was before we added the plasma and where the fringe is afterwards, we can then calculate the amount of density that's being added because we know that the fringe shift is linearly proportional to the density here. And once again, because we've got a heterodyne technique here, we have avoided this ambiguity. Even if these fringes overlap, even if we have a distortion so large that this fringe goes above this one, when we go out to the edge of the plasma, where the fringe shift is 0 out here, we can still uniquely identify each fringe with its background fringe. And so we can track them along, and we can say, aha, this one's done two fringe shifts or four fringe shifts. And there is a Fourier transform way to think about this as well. But now we need to have a two-dimensional Fourier transform. And what we're looking at here are now kx and ky. And so, originally, your k1 minus k2 beat frequency is maybe up here. And by symmetry, because our signal is real, it's down here as well-- so just at the negative frequency as well. And now we've distorted these fringes. It's taken this initial beat frequency. And maybe we now have Fourier components that look a little bit like this, or they could look like that. And moving around in Fourier space changes what the shape of your background fringes looks like. This here, where my components are roughly equal in kx and ky, that will correspond to fringes at 45 degrees. So if I had my beat frequency down here, this would correspond to fringes with the k vector in that direction. And that would be like that. So you can choose what your carrier frequency is. And there was a good question in the last lecture about your sensitivity to different density gradients in different directions. And indeed, you have more sensitivity in the direction perpendicular to where your carrier spatial frequency is.
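As a concrete sketch of what such a spatially heterodyned interferogram looks like, here is a synthetic example: straight carrier fringes from the tilt k1 minus k2, then the same fringes distorted by a phase computed from an assumed Gaussian blob of line-integrated density using the phase-shift formula from the start of the lecture. The wavelength, densities, and sizes are all invented for illustration.

```python
import numpy as np

# Synthetic spatially heterodyned interferogram: background carrier fringes
# plus a phase object from an assumed line-integrated electron density.

lam = 532e-9                               # probe wavelength, m (assumed)
c = 3.0e8
omega = 2 * np.pi * c / lam
n_crit = 3.9e27                            # critical density at 532 nm, m^-3

x = np.linspace(-5e-3, 5e-3, 1000)         # object-plane coordinates, m
y = np.linspace(-5e-3, 5e-3, 1000)
X, Y = np.meshgrid(x, y)

# Assumed line-integrated electron density, int(n_e dl), in m^-2.
neL = 1e22 * np.exp(-(X**2 + Y**2) / (2e-3)**2)

# Phase shift: dphi = -(omega / (2 c n_crit)) * int(n_e dl)
dphi = -omega / (2 * c * n_crit) * neL

# Carrier fringes from the small tilt between probe and reference beams,
# here with a fringe spacing of 0.2 mm in the x direction.
k_carrier = 2 * np.pi / 0.2e-3
background = 1 + np.cos(k_carrier * X)             # reference interferogram
interferogram = 1 + np.cos(k_carrier * X + dphi)   # with the plasma in place

# Each fringe displaces by (dphi / 2 pi) fringe spacings, which is how the
# line-integrated density is read off when tracing fringes by hand.
print(dphi.min() / (2 * np.pi), "fringe shifts at peak density")
```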
There's a lot going on there. I actually have a load of slides with pictures of real data on this that might make a little bit clearer. But before we get on to that, are there any questions? Yeah, uh, John. AUDIENCE: You might have answered this. So by tilting the wavefronts of the reference beam, what in essence we're doing is we're changing the wave vector component that is interfering with what's coming through the plasma. I mean, we're trying to create some beam interference here. So I guess, technically, we're not changing the magnitude of k. JACK HARE: No, because in free space, the magnitude of k is fixed-- precisely-- because the dispersion relationship for a wave in free space, not in a plasma, is omega equals ck. AUDIENCE: And so what we've done by tilting the wave is we've changed the distribution of that magnitude in either dimension. So now the component of k that is interfering with the wavefronts is slightly different. JACK HARE: Yeah. So k2 here is, for example, the reference beam here. And k1, it's been coming through, for example. AUDIENCE: So K1 is exclusively an x component, I guess. I think about that if we align our-- JACK HARE: If we aligned in this way, but if I align the fringes-- if I rotate the fringes, I can measure other components of it as well. And it turns out, as I'll show you, you still get to measure some of the y components and things like that, even if you're in this setup, which is most sensitive to the x component. But you're right, yeah. Was there another question? Yeah. AUDIENCE: So as the probe light propagates through the plasma, it'll refract and bend around in all this. Why don't you get heterodyning for free from these changes to the wave propagation as it transits the plasma? JACK HARE: So those-- so the question was, why do you get heterodyning for free as you get k changing within the plasma here? I mean, that's effectively what you're measuring here with these phase contours. It's just that they're still ambiguous. You need to shift them so that they're-- as we did in frequency space for the temporally heterodyne version, you need to shift them into such a direction that they're completely unambiguous, whether your phase shift is up or down. And here at the moment, even with this, you're going to get these ambiguous fringes. I'll show you some pictures that maybe will make it a bit clearer in a moment. Were there any questions online? AUDIENCE: Hi, yeah. JACK HARE: Yeah. AUDIENCE: So how fine can you get for the amount of x components or y components in your scan? How detailed can you get? Because I'm assuming-- because the wavefront is continuous, but I'm assuming you can't get perfectly granular understanding of the plasma density just from this [INAUDIBLE]. JACK HARE: Did everyone in the room hear the question? It sort of limits the resolution in a way. So our spatial resolution is set by our fringe spacing. So usually, we can say, this is a dark destructive interference, and this is a light constructive interference. And theoretically, you could identify the gray point halfway between light and dark, but it starts getting a bit ambiguous there. Dark and light are pretty obvious. And that means that your spatial resolution is set by the spacing between your fringes. And if I choose to have my fringes closer together-- so I choose a k1 minus k2 which is a larger number, so a higher beat frequency, then I'd have my fringes closer together like this. I would gain spatial resolution. 
And I'd be able to keep playing that game down to the resolution of my camera, where I need a certain number of pixels to be able to tell the difference between light and dark. The trouble is, as I shrink this down, I'm gaining spatial resolution, but I am losing resolution of the density, because these fringe shifts are now smaller and smaller. The fringe is now only moving a very small distance. Maybe it's only moving two pixels or one pixel. Now I've got a 50% error, because I don't know whether it's two or one pixels. So there is a tradeoff directly between the density resolution of this diagnostic and the spatial resolution of it. So that's a very good question. And that is the same. It's always the same, because mathematically it's the same. It's the same for the temporal heterodyne version as well. So you can have time resolution, or you can have density resolution. But you can't have both. They trade off against each other. AUDIENCE: I see. That makes sense. All right, thank you. JACK HARE: Thank you. Yeah. AUDIENCE: Can this be used for temporal measurements as well? You can just keep [INAUDIBLE], imaging-- JACK HARE: So can this be used for temporal measurements as well? Yes, if you have a fast camera. You can do this-- for example, I know a guy who used a CW laser beam, so basically a continuous wave laser beam at like 30 watts or something terrifying. And he had a fast camera. And so he took-- the fast camera could take 12 pictures, one every 5 nanoseconds. And he was able to make a little movie. And depending on the speed of your plasma, if you don't need every five nanoseconds, but you're working with a plasma where the timescale is milliseconds, then you can actually just have a continuous camera. So you need a nice, bright light source that is continuous enough and you need a camera which is fast enough. And that's what sets the resolution. So you can make a 2D movie of the density evolution in time. AUDIENCE: So the limitation here is just technological? JACK HARE: Yes. Yeah, yeah, yeah. This technique is-- I mean, that is time resolved, spatially heterodyned interferometry. I don't think you can do temporally and spatially heterodyne interferometry in the same diagnostic. But if you work out a way of doing it, let me know. That sounds hard, maybe not necessary, because you've already got around the ambiguity in one way. So I don't know if you need both. And if you had a homodyne system, in one sense, I think you'd be able to use lack of phase ambiguity from the heterodyne part of the system to get over that. But I haven't really thought about it that much-- interesting question. It could be a fun diagnostic-- very expensive. [LAUGHTER] So yeah, cool. Any other questions? Yeah. AUDIENCE: So in this case, it's not just we're measuring the k factor of how we're measuring the wave. We're actually deflecting the probe away a little. JACK HARE: So the question is, is the probe wave actually being deflected as it goes through the plasma? Well done for spotting that. I was going to have that as a question later on when we looked at some data. But as you pointed out, you've ruined the game. So I told you earlier that rays are always perpendicular to the phase front. And so as I'm drawing this, the rays are like this. Fine. The phase fronts are flat. But you can see here that if I drew those lines like this, I would start to get deflection. And so you will have shadowgraphy effects overlaid on top of your interferometry signal. 
It turns out the modulation from interferometry is usually stronger. And so you see that more strongly. But if these are very big, you may actually just lose the light, because your lens will be this big. And your rays will exit it. Now, in general, interferometry is so sensitive that I've drawn this in a very exaggerated way. The phase fronts can be almost perfectly planar still, and I'll still get really nice interferometry patterns. But it won't be so distorted that the light will all be spreading outwards and I won't be able to do anything about it. So yeah, you're quite right. In this picture, we should have shadowgraphy and [INAUDIBLE] and all sorts of things like that as well. Any other questions? I'll show you some pictures in a moment otherwise. AUDIENCE: I have a question. JACK HARE: Yes, please. AUDIENCE: Because of the shifting of the wavefronts, is there a possibility for interference within the same wavefront? So would k1-- could it interfere with itself if there's enough of a shift in density, the plasma? JACK HARE: In fact, that's why, when we were talking about shadowgraphy, I said we don't really want a coherent light source for shadowgraphy. So even if you take out the reference beam-- the question is, can this shift so much that it actually interferes with itself? Yes, that happens. And you can see that in shadowgraphy. And it's bad, because it's really hard to interpret. But there is a technique-- I should have read up on this more before saying this, but it is a technique called phase contrast imaging, which is used with X-rays. And that actually exploits the interference of the X-ray, which is just radiation, like all this other stuff, with itself to make very, very precise measurements of sharp density gradients. So in general, you want to avoid coherence in shadowgraphy, because it messes up your data. And it's hard to interpret. But if you can do it very precisely, you can do some very nice techniques with it. So it's not always a curse. AUDIENCE: Is there-- JACK HARE: In general, all of these effects will be overlaid on top of each other. I've just been presenting them one at a time. But they're all present in the same system. So this is a very biased sample of interferograms. And it's biased in the sense that I just went through quite a lot of papers I've written and tried to grab them because I was in a hurry. But hopefully, some of these will be informative. I tried very hard to find some temporally heterodyned interferometry. And it's actually quite hard to do-- find one where they show the raw signal, because we have such good electronics for this stuff these days that mostly you just do the signal processing on the chip and give the output of it. So you don't really digitize the raw signal. So this is my best attempt so far. This was a HeNe laser beam. So that's a green laser beam on [INAUDIBLE]. And this is from a paper from 2017, so it's relatively recent. They had-- the HeNe is obviously green, but they used the heterodyne technique to produce a probe at 40. That's the beat frequency there. And that effectively sets the temporal resolution of this. And they actually did something even more complicated than we've discussed where they heterodyned the system, and then interfered the heterodyned probe with the heterodyned reference, which is very weird. And they did it with a quadrature system where they shifted one of the signals out of phase by 90 degrees. And you'll learn about quadrature in the problem set. And then they digitized those two signals.
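Since quadrature comes up again in the problem set, here is a minimal sketch of why digitising two signals 90 degrees apart removes the up/down ambiguity. The phase history below is invented, and this is not the processing used in the paper being described; it only shows the arctangent-and-unwrap step.

```python
import numpy as np

# With both cos(phi) and sin(phi) available, phi itself is recovered with
# arctan2, and unwrap stitches the 2-pi jumps together. With only one
# channel (homodyne), cos(phi) alone cannot tell phi rising from phi falling.

t = np.linspace(0.0, 1e-3, 10_000)                     # 1 ms record (assumed)
phi_true = 6 * np.pi * np.sin(2 * np.pi * 2e3 * t)**2  # made-up phase, > 2 pi

I = np.cos(phi_true)          # in-phase channel
Q = np.sin(phi_true)          # quadrature channel (the 90-degree shifted copy)

phi_wrapped = np.arctan2(Q, I)          # phase modulo 2 pi
phi = np.unwrap(phi_wrapped)            # follow it through multiples of 2 pi

print(np.max(np.abs(phi - phi_true)))   # should be ~0
```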
And so what they saw was these signals here. And if you look at this carefully, which you can't do right now, but I'll put the slides up later, you'll find out that these signals are actually 90 degrees out of phase, which is really, really cool. And then they were able to process those together, and they could get out the phase shift. And you can see the time on the bottom here is on a sort of millisecond-ish time scale. So this is pretty fast for a tokamak. And you can see that the phase shift is going through multiples of 2 pi. So they've resolved that ambiguity. So they're saying, look, the density went up, and then it came down. And it went up again. And then it went down. So they have some confidence that this is real. So this was quite a nice example. Another example is from the pulsed power world. This was on the Z machine at Sandia, where they actually showed the raw data-- not quite the raw data. What they show is a spectrogram. So this is like if you do a very short time window Fourier transform on your temporal signal. And you plot what frequency components are present at a single time. So if I take a slice at a certain time here, I can see a dominant frequency component down here. And you can see that dominant frequency component change in time. And they've shifted this so that the beat frequency is at 0. But that would actually be gigahertz or something like that. And this would be a shift from that beat frequency of gigahertz. And they've done a technique where they shift the beat frequency in each window a different amount. And this gives them a much higher dynamic range. But effectively, this is looking at an increase in phase that's chirped in time. And the time scale on the bottom here is nanoseconds. So over about 100 nanoseconds, they've measured a significant phase shift. Now, what they're doing with this technique is not actually measuring a plasma. They're measuring the motion of a conductor. So this is photonic Doppler velocimetry for the [INAUDIBLE] kids who have heard about that before. And they're doing that to measure all sorts of cool squishing metal type things, but exactly the same physics at play, because that moving conductor just gives you a phase shift. And that phase shift could be density, or it could be some moving conductor. So it's up to you afterwards to interpret what the data looks like. And they've run out of bandwidth up here. So 25 gigahertz, they can't sample any faster, because that's already a very expensive digitizer, which is why they have this clever technique which effectively [INAUDIBLE] the signal. So it goes up on one, and the beat frequency appears to go down on the other. And then when it hits this point here, it starts going back up. And they do the same trick several times. And by sort of appropriately flipping and splicing these signals together, they'll actually get a signal that just keeps going up and up and up, and then they can measure this motion of this conductor over a very long time scale. So there's some very cool, advanced techniques in electronics involved in all of this. But this, at least, is closest to the raw data. And again, for the problem set, you'll be making your own raw data. So you can see what it looks like there. Any questions on these two temporally resolved techniques? Hmm? AUDIENCE: In the [INAUDIBLE] example, did they have any kind of spatial resolution [INAUDIBLE]? JACK HARE: No, it's just a chord. So it's a laser beam through the plasma. And that's pretty typical for tokamak plasmas.
One reason for that, which doesn't apply here, is that you often want to use microwaves because the density is more appropriate for microwaves than for lasers. And so that means it's actually quite hard to do imaging with microwaves. We tend to just have an antenna which launches microwaves, an antenna which collects them. So you tend to just have a line. With a [INAUDIBLE], that's not a limitation, but, obviously, they probably couldn't do a camera that is resolving on this time scale. Maybe they don't want to. They certainly couldn't have a camera that covered the entire tokamak cross-section on that time scale. So I think they went for this sort of time resolved, but just one point in space technique instead. We'll talk a little bit about how many chords like that you need in order to do some sort of reconstruction later on in the lecture. Other questions? Anything online? So these are examples of spatially heterodyned interferograms. This is the case with no plasma. You see you've got these nice, uniformly spaced fringes here. And some of them are light. That's constructive interference. Some of them are dark. That's destructive interference. So the probe beam is going straight into the page like this. And the reference beam is tilted at a tiny angle upwards. And we've chosen that angle to give us this nice fringe pattern here, because when we put a plasma in the way, we have plasma flows coming from the left and the right. And we can see that all these fringes are distorted. You can see most prominently the fringes all tick up in the center here. And that corresponds to an increase in the line-integrated electron density. You can also see there are regions where there's quite large fringe shift distortions around here. These are actually from plasma sources on either side here. And you can see that some of the distortions are so large we've actually formed closed fringes again. So in that place, we have violated the condition that k1 minus k2 has to be much, much larger than the spatial derivative of the phase. We've effectively recovered by accident the homodyne system, because we weren't able to keep our fringe spacing close enough together. If we made this fringe spacing even closer, these closed fringes would go away, and we'd lose that ambiguity. But we'd also be sacrificing our dynamic resolution of the electron density. So for an interferogram like this, I have very strong priors that the density is going to be higher here than here. And so I can just, when I'm doing the processing on this data, make sure that the density goes up here instead of down, effectively making choices on that decision tree that we had before. And if you spend a little while processing these interferograms, this is the raw data, and this is the line-integrated electron density here. Now, for this, the electron density is in units of 10 to the 18th per centimeter cubed. So technically, the thing that you get out of this is line-integrated electron density. So that's per centimeter squared. In this system, we have a lot of symmetry in this out-of-plane direction. And we knew how long the plasma was in that direction. So we just divided by that length to get the line-averaged plasma density. And you can see, again, although we did have these homodyne regions here where we have some ambiguity about phase, because we had strong priors, we were able to assign the correct electron density. And if we decided incorrectly, if we said it's going down, we would see a weird hole here that wasn't on the other side.
So it also helps to have a bit of symmetry in your system as well to check that you're assigning things correctly. So I won't go into the details of how you process these, though there are some vaguely involved techniques, but you can take data like that and get out some really nice pictures of the electron density in your plasma. So that was quite a nice one. You can see all the fringes are still roughly parallel. They only move a little bit. Here's some interferograms with slightly more twisted fringes. You can see the background fringes are like this on this image, and they're like this on this image. It's just how it was set up in the two experiments. And you can see that this is an example I used earlier of a B-dot probe sticking into a plasma and a bow shock forming around it. And you can see strong distortions of the interference fringes, especially very close to the bow shock, where, in fact, the refractions of the rays are so large from the density gradients that they're lost from our imaging system. And so we don't have interference fringes here, because they've been refracted out of our system. We no longer can do interferometry. So when the density gradient gets too large, it's very hard to do interferometry. And again, these images were processed, and you get nice pictures of the bow shock in these two cases here. And this paper was comparing bow shocks with magnetic fields aligned with the field of view and perpendicular to the field of view. And we have very different geometries there. So those are quite complicated. But probably the most complicated one I've ever seen traced was this one by George Swadling, 2013. So this is a 32-wire imploding aluminum Z-pinch. There's a scale bar up there, which is actually wrong. That's a millimeter. It says a centimeter. It should be a millimeter. No one's ever noticed that before. And so these are the positions of the 32 wires here, and there's plasma flowing inwards. And as the plasma flows inwards, it collides with the plasma flows from adjacent streams. And it forms a network of oblique shocks. So this is two wires here. The first oblique shock is out of the field of view and forms a plasma here. And then these two plasmas interact, and they form another oblique shock structure. And then these two interact, and they form another oblique shock structure. And there may even be like a fourth or fifth generation in here. So this is a complete mess. This is extremely complicated. But because the interferometry is very high quality, with a great deal of patience, you can follow each interference fringe all the way around. And you can work out its displacement. And you get out this rather nice map of the electron density. And we see that, despite the fact that we're using interference fringes, which limit our spatial resolution, there's still sufficient spatial resolution to resolve these very sharp shock features here. So this is a nice piece of work. And then my final example for this batch here was actually showing something we've already discussed. This is where we had plasma flows from the left and the right colliding here. And as opposed to seeing interference fringes, we just see this dark void. And you can see the fringes are beginning to bend downwards here, which will indicate enhanced electron density. But because the density gradients are too large, the probe beam has been refracted out of our collection optics, so we don't get to see anything.
Now, you could say, well, perhaps it's because the plasma is too dense. If we got to the critical density, the laser beam going through the plasma would be reflected. And so that could be the case. It's just that density is really, really high and very hard to reach, whereas we know that in this system, that density gradient is very easy to reach. And so we're pretty certain that in these experiments, it was the density gradient rather than the absolute density that caused us to lose our probing in the center here. And there's not really much you can do about that. You can go to a shorter wavelength if you've got one, so that your beam doesn't get deflected so much. You can use a bigger lens so that you collect more light. But there's only so big a lens they'll sell you. And so sometimes you just have to deal with the fact that your data has got holes in the middle. And when this image was processed in his paper, he just masked this region of the data off. And he said, we don't have any data there-- really the only thing you can do. Any questions on spatial heterodyne interferometry? Yes. AUDIENCE: So I think as you alluded to, just to make sure that I understand, in order to extract useful information from one of these pictures, you need to be able to trace each fringe from edge to edge of the picture. And if you lose that, you're in trouble. So then how do you-- I mean, is this all done with computer image processing for your ability to, say, continually trace all of these fringes? JACK HARE: So you don't have to be able to trace each fringe from side to side. But you do-- ideally, you'd be able to assign-- say you numbered each of the fringes from the bottom to the top of the image here. You'd like to be able to assign numbers to the fringes in the image without a plasma. So you need a reference interferogram. So you always have to have two pictures, because that reference interferogram gives you the background signal that you're effectively modulating here. So in temporal heterodyne interferometry, you'll have to measure the beat frequency for some time before the plasma arrives. The trouble comes that, actually, you don't need to be able to uniquely allocate each of these fringes to a fringe in the reference interferogram. You'd like to. If you can't do that, then there's some offset constant of density that you can't get rid of that's ambiguous. So you're like-- you can say that my density is going to change from here to here by 10 to 18, but it may also be like 10 to the 18 plus 10 to the 18 or 10 to the 17 plus 10 to the 18, something like that. So there's some ambiguity there. If the fringes are broken-- so in this case, some of the fringes-- well, actually, this one is simpler to see. In this case, some of the fringes on this side here, you can't actually follow them through on the other side. But you can make some pretty good guesses in the absence of the plasma, because they'll be nice straight lines. And you can trace them across like that. So for these complicated interferograms, the best process we've found is grad students, but lots of people say, I'm going to write an image processing algorithm. And indeed, they'll send students from the PSFC who were doing a machine learning course, who tried to do this. The trouble is humans have incredible visual processing. So when you look at this, you can work out what all these lines are perfectly. Every single algorithm I've seen to try to do this automatically, it starts getting hung up on the little fuzziness on this line here. 
And it's like, oh, I think that's really important, so I'm going to spend all my time trying to fit that perfectly. And so there may be techniques which can do it automatically. But at the end of the day, it really requires a human to look at this region where there's no fringes and say, ah, we've lost the fringes because I know that density is really high there. Or in fact, rounding these regions here, it's hard to see, but there's actually some very strong shadowgraphy effects. There's some brightness in this region here. And that looks like additional interference fringes. But if you've looked at these enough, you'll know that it's through shadowgraphy. So it seems to be very hard to train a computer to do it. So I'm not saying it's impossible. I just haven't seen a realistic program yet. If you have an interferogram where all the fringe shifts are relatively small and well behaved, you can do this using Fourier transforms. So there are techniques which are Fourier-transform based, which basically do a wavelet transform. So it's like a small region Fourier transformation and looks at the local frequencies there. And that's like those spectrograms that I showed you. That's the 2D equivalent of the spectrograms I showed you here, where we have the spectra each time. You want the k spectra at each position. And those do an OK job. But as soon as there starts being any ambiguous feature, or even some relatively large distortions, they also fall over really badly. So it seems like a hard problem to automate. So that was a long answer to your question. I'm sorry. Other questions? Yeah. AUDIENCE: So the image processing, it seems like choosing which areas to mask and which areas to not mask is important. JACK HARE: Yes. AUDIENCE: Is there any weird, like cut and dry rules for that, or is it all sort of related to intuition? JACK HARE: At least the way that I do it, it seems to be very intuitive. Yes, exactly. You sort of have to know what you expect to see, and then work with that. Other questions? This side of the room is much more questioning than this side of the room. Next one. AUDIENCE: From a practical standpoint, how many time points can you resolve with diagnostics such as these? JACK HARE: Yeah. So if your plasma is-- the question was, what's the sort of temporal resolution of something like this? If your plasma is only lasting for a few hundred nanoseconds, then it depends whether you can afford a camera that can take more than one picture in a few hundred nanoseconds. These were taken with off-the-shelf canon DSLR cameras bought in 2006. And there, the shutter was actually open for one second, but the laser pulse is only a nanosecond long. And that sets the time resolution. So you can get one picture in an experiment as this. And then you do the experiment again, and you hope it's reproducible enough. And you move the laser later in time. And you take another picture. And you keep doing that. Yes, it's hard. AUDIENCE: This will be a silly question. But what are those circular fringes that appear? JACK HARE: Yeah. So this is what I was saying, where we've effectively violated our-- AUDIENCE: They're very light in the background. JACK HARE: These are diffraction patterns of dust spots. AUDIENCE: Dust spots. JACK HARE: There's dust on an optic somewhere. It creates a diffraction pattern. It's out of focus. It modulates the intensity of the laser beam. It's another thing that makes it hard for automated algorithms to work. 
We tend to normalize those out, but I'm showing you-- this is the actual raw data from a camera. I haven't done anything to it. But you can do tricks to get rid of those, because it's like a slow moving, slow changing effect. You can do like a low pass filter. AUDIENCE: In these sorts of papers, how is uncertainty communicated with the result? JACK HARE: Yeah. So we tend to estimate something like your uncertainty in density is going to be about a quarter of a fringe shift. And we'll talk about-- I'll talk about what that means actually in the next bit of the chalkboard talk. So you can estimate uncertainty by saying how certain are you that the fringe has shifted up this far versus this far. So there's sort of like a pixel uncertainty, and also how good you are at assigning, this is the lightest part of the fringe, this is the darkest part, because, effectively, you're looking for the light parts and the dark parts, but there are several pixels which will be equally light, because it's near a maxima or a minima. You tend to whack a relatively high uncertainty on it and just call it a day. In this field, if we get measurements right within about 20%, we're pretty happy. So this is very different from other parts of plasma physics. Were there any questions online? I'm sorry. I can't see hands online at the moment, because I hid that little bar, and I don't know how to get it back. So if you put your hand up or something like that, I can't see it. I assume I'm still on Zoom somewhere. No idea how to get back there. AUDIENCE: Escape, I think. [INAUDIBLE] JACK HARE: Ah, OK. There was something in the chat. No questions here. All right, well, that was easy. Thank you. [LAUGHTER] I think we'll go back to the chalkboard for a moment then. I've got a few more pictures, depending on how we do for time. I guess I should look down here for the remote control at some point. So maybe just a little bit more practical stuff-- one thing that really matters is your choice of your probe wavelength. I've been talking a lot about frequency. It turns out that a lot of the time, people quote their frequencies in terms of wavelengths, and obviously, they're very intimately linked. So if you remember, we had our phase shift was minus omega over 2 c nc times the line-integrated electron density, like that. We often define a quantity called a fringe shift, which I'm going to write as capital F. And a fringe shift is just a shift of an intensity maxima or minima by an amount in time or space that makes it look like another intensity maxima or minima. Having said that out loud, I realize it's pretty incomprehensible. So let me draw the picture. Let's say that this is space or time. It doesn't really matter. And you've got some intensity here and some background fringe pattern like this. So this is the beat frequency that you're measuring, either in space or time in the absence of any plasma. And then, say, that you have some plasma signal. I'm going to draw this wrong if I don't look at my notes. Give me a moment. So in the presence of a plasma, your fringe pattern has been distorted. Why have I got that one twice? There we go. I did it. So this fringe here, you would think, should line up with this one. But in fact, it's been shifted all the way so it lines up with this one instead. So this is the case with plasma, and this is the case with no plasma. And that is the definition of one fringe shift. So it's effectively delta phi over 2 pi. We're just counting the motion of minima and maxima here.
And we can write that in practical units as 4.5 times 10 to the minus 16 lambda times the line-integrated electron density. So people often write this quantity here like this, I think, because it looks nicer on one line in an equation, because you don't have the big integral sign. But effectively, it just means the electron density averaged over some distance L like that. And all of these units here are SI. And so that means you can then work out what your line-integrated electron density is in terms of fringes. It's 2.2 times 10 to the plus 15-- that was a minus 16 up there-- over lambda times the number of fringe shifts. And that's in units of per meter squared. And so now I'm just going to give you for different lambda what this number actually is so we can have a look at some different sources. So I have a little table where I have the wavelength of the source, and then I have Ne L for F equals 1. That's in units of per meter squared. I'm just going to go down a list of sources. So if, for example, we're in the microwave range here, this might be a wavelength of 3.3 millimeters. So that would be a 90 gigahertz source. So this is relatively low density plasmas here. And that density here will be 6.7 times 10 to the 17. We could jump quite a bit and go to a CO2 laser. This is a nice infrared laser. And you can make very powerful CO2 lasers. So they're quite popular for some diagnostics. [INAUDIBLE] had a CO2 interferometer. So this is 10.6 micrometers here. So I've dropped a couple of orders of magnitude from the microwaves. And you can see that the densities that we're measuring here have gone up quite a bit by similar amounts. Ah, that should be 20. And then something like a neodymium YAG laser, this is the sort of thing I use. If we use the second harmonic, that would be 532 nanometers. That will make those beautiful green images that we looked at. And that is 4.2 times 10 to the 21. So if I see a fringe shift of one, and on those images that I showed you first, the fringe shift was maybe two or three fringes, each of those corresponds to a line-integrated electron density of 4.2 times 10 to the 21. So just by looking at the image and by eyeballing it, you can start estimating the line-integrated electron density. And then if you have some idea of how long your plasma is, you can then get a rough estimate of the electron density itself. There's something very interesting about this, which I worked out just before the lecture, and I hope I'm right, because I was very surprised by it. If we take, for example, a wavelength of 1,064 nanometers. So this is an infrared Nd:YAG laser. When we take a length of 10 to the minus 2 meters, so a centimeter-- so this is the sort of experiment that I might do-- one fringe would then correspond to a density of 2 times 10 to the 23 per meter squared-- sorry, per meter cubed. Not that interesting so far. But then you ask yourself, what is the critical density here? This turns out to be about 10 to the 27 per meter cubed. So what is the refractive index? We've been saying that it's 1 minus Ne over 2Nc. This is a very small number. This is now-- well, it's very close to 1. It's 1 minus 10 to the minus 4. So all of these interferometry effects we're looking at are to do with changes in the refractive index on the order of 10 to the minus 4 or so. I kind of find that remarkable. We're able to measure very, very, very small changes in the refractive index. It's not like N ever gets close to 0, or 2, or something bizarre like that. Anyway, I thought that was interesting.
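As a quick numerical check of the practical formula and of the numbers in that little table, here is a short calculation. The 2.2e15-over-lambda-per-fringe factor is just the inverse of the 4.5 times 10 to the minus 16 constant above, and the critical-density expression is the standard one; the wavelengths chosen are the ones from the table.

```python
import numpy as np
from scipy.constants import c, e, epsilon_0, m_e

def ne_dl_per_fringe(wavelength):
    """Line-integrated electron density for one fringe shift, from
    F = 4.5e-16 * lambda * int(ne dl), i.e. 2.2e15 / lambda [m^-2]."""
    return 2.2e15 / wavelength

def critical_density(wavelength):
    """Standard critical density n_c = eps0 m_e omega^2 / e^2 [m^-3]."""
    omega = 2 * np.pi * c / wavelength
    return epsilon_0 * m_e * omega**2 / e**2

for lam in (3.3e-3, 10.6e-6, 532e-9):    # microwave, CO2, Nd:YAG second harmonic
    print(f"{lam:9.2e} m -> {ne_dl_per_fringe(lam):.1e} m^-2 per fringe")

lam, L = 1064e-9, 1e-2                   # infrared Nd:YAG over 1 cm
ne = ne_dl_per_fringe(lam) / L           # electron density for one fringe [m^-3]
nc = critical_density(lam)               # ~1e27 m^-3 at 1064 nm
print(ne, nc, ne / (2 * nc))             # refractive index change ~1e-4
```

Running this reproduces the table values of roughly 6.7e17, 2.1e20, and 4.1e21 per meter squared per fringe, and the ratio on the last line is the order-10-to-the-minus-4 refractive index change being discussed.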
So you will pick your source to match your plasma. If you're doing low density plasmas, then if you use a 532-nanometer interferometer, you won't see any fringe shift. The fringes won't move at all. You won't be able to measure any plasma. So you need to use a long wavelength source that is more sensitive to those lower densities. And conversely, if you try to use a long wavelength source on a nice dense plasma, first of all, the beam may just get absorbed, because it'll hit the critical density, or it might get refracted out. And even if it doesn't do any of those, you'll have such huge phase shifts you won't be able to meet that heterodyne criterion. And you'll just have very complicated fringe patterns and no chance of processing. So you've got to pick very carefully what sort of source you have. And there are other ones out here as well. But I just sort of picked a range that might be relevant to some of the people in this room. Any questions on that before we move on? So I want to talk about a few extensions to this technique. And the first one we're going to talk about is called two-color interferometry. There are two reasons to do two-color interferometry, conveniently. One is to handle vibrations, and the other one is to handle neutrals. So let's have a little talk about vibrations, first of all. Your system is made up of lots of mirrors and other optics. And there are vibrations everywhere. And so all of these mirrors and optics will be vibrating slightly, which means their path length will be changing by a small amount. How big a deal is this? Well, if we imagine that you've got some mirror, for example, here, and we bounce our beam off it like that, and the mirror is oscillating with an amplitude little l here, we're going to get a phase change, very simply, just by looking at the distance that this moves, on the order of 2 pi little l upon lambda. So if the phase-- if the amplitude of these vibrations is on the order of the wavelength, you're going to get a phase shift of 2 pi, which is actually already pretty huge. That's one fringe shift. But this is a tiny number. I mean, if I'm working with green light, this means I'm sensitive to vibrations on the order of 532 nanometers. So this is extremely hard to avoid. You can't get rid of vibrations very easily, after all. So you're going to have big problems with these vibrations. And if your whole tokamak is vibrating, so you've got all these cryopumps and neutral beams and exciting things going on, this is going to be an absolute nightmare. It turns out not to be a huge nightmare for my stuff because although these are very sensitive to vibrations, the timescale over which our experiment takes place is the nanoseconds of a laser pulse, and the vibrations are not at gigahertz. You don't have mechanical vibrations at those frequencies. So we can ignore it. But if you have vibrations at kilohertz or even hertz from people walking around, this will ruin your nice steady state experiment. And what we want to notice is that this phase shift in particular, as I've just sort of alluded to, is large for small wavelengths. And this already suggests the beginnings of a scheme that we're going to use, especially because I called it two-color interferometry to deal with this. So the solution is we run two interferometers down the same line of sight with two different wavelengths. One of these wavelengths is short. And so, on a tokamak, that could be something like a laser. It could just be a HeNe laser beam. It goes straight through the plasma.
It doesn't see it at all because it's not dense enough. But that short wavelength is very sensitive to vibrations. The vibration phase is proportional to, shall we call it, a vibration length little l over lambda. So this will be a very good diagnostic, not of plasma, because it won't see the plasma, but it will be a very good diagnostic of vibrations. And the other one, the long wavelength here, which might be, on a tokamak, something like a microwave source, so the wavelengths differ by two or three orders of magnitude. That long wavelength will be very sensitive to the plasma, because my plasma phase is proportional to lambda. And so what you can do, when you measure the overall phi with these two devices-- well, I'm not going to do it explicitly-- you're going to have two sources of phase. One of them is going to be the standard plasma here, NeL. And the other one is going to be the vibrations due to little l. So you'll have two unknowns. And now you've got two measurements. And so you can use a system of equations to solve for that. And if you're very, very quick, you can even use the short wavelength to feed back onto your mirrors and stabilize the mirrors. So you can do vibration-stabilization feedback on the mirrors. So you can use this very fast, short wavelength interferometer to vibration stabilize all of your mirrors. I don't know if anyone's actually done this. It's in Hutchinson's book, so presumably someone tried it. It sounds like a lot of work. But I guess it could be very, very effective. So this is maybe more of a question mark, rather than something that everyone does. It's pretty clear that if you've digitized both these signals, you should be able to work out what was the vibration and what was the plasma. Of course, if the vibrations are huge, it might still ruin your measurements. So it might be worth doing this feedback system. So that's one use for two-color interferometry, just in case you were trying to work it out. The two colors is because we have two wavelengths, and we tend to associate wavelength with color. Any questions on that? The second thing I want to deal with is neutrals. So far our refractive index has been derived assuming a fully ionized plasma. And so in that fully ionized plasma, we just have ions. And we have electrons. Now, these have associated plasma frequencies with them, the ion plasma frequency, which is much, much less than the electron plasma frequency. And so when we write down the refractive index, we just have 1-- this is refractive index squared-- 1 minus omega pe squared over omega squared. I need the chalk there. There's technically another term in here, minus omega pi squared over omega squared. But because the ions are so much more massive, we always just ignore this term. And we subtly drop the subscript on the plasma frequency here. So this is the refractive index we've been using so far. However, when you've also got neutrals, you've got some density of these as well. And I'm going to write that N alpha, because-- no, not alpha, Na. The a is sort of for atoms. So let's go for that. Now, neutrals are much more complicated, in fact, because they have atomic transitions inside them. And those atomic transitions change the refractive index. If you're closer to an atomic transition, you have very different effects, like absorption, than you do if you're far from an atomic transition.
And so your spectra or your plot of refractive index for your neutrals here against frequency will look like some sort of spiky minefield of lines, something like that. It will depend exactly on the atomic physics. And in general, we can write down this refractive index as equal to 1-- always a good start-- plus 2 pi e squared upon me. Ignore that. It's just some constants that they're normalizing by. And then we sum over every single atomic transition in this neutral gas. So first of all, we want to sum over all of the atoms in state i here. So for example, there may be atoms which are partially ionized. They've lost one electron. And so then they have a different refractive index here. So all of the atoms in a certain state i. And then all of the transitions between that state i and some other state k as this is divided by the transition frequency, which is one of these lines here between i and k minus omega squared here. So in this case, Fik is the strength of one of these transitions, which determines how likely it is to happen, and so how strongly it shows up. And this is the frequency of one of these transitions. I misspoke earlier. I spoke about this being through ionization. It's just that it was excitation. So it's to do with whether your atom is in its ground state, or some other state, or something else like that. Now, this formula is intensely complicated. And you can spend a very long time doing quantum mechanics calculations to try and work out both of these two terms. And of course, as soon as you go to something above hydrogen, it becomes very complicated. Even for hydrogen, it's pretty complicated. But above hydrogen, it's extremely complicated, because there are multiple electrons interacting here. So you don't actually stand a chance of solving this directly. Your best bet is the fact that if you look at some parts of the spectrum or some part of the refractive index that's away from one of these lines. So this is for omega not equal to any of these transitions here. There's a general formula that works pretty well, which says the refractive index is just equal to 1 plus 2 pi alpha Na, where this Na here is still the number density of neutrals. And this new alpha is a quantity called the polarizability. [INAUDIBLE], which is easy to calculate, and also very easy to measure. So you can measure this for your different gases. And so for example, this little table of different gases, if we have it for helium, or hydrogen, or argon here, here's the 2 pi alpha in units of meters cubed. The 2 pi is just some normalization constant that comes from somewhere else in the theory. So we always quote it with the alpha. I'm just going to quote you 2 pi alpha here. But this number is like 10 to the minus 30 for helium 5 times 10 to the minus 30 for hydrogen, and 1 times 10 to minus 29. I wrote this as argon, because I missed the i in my notes. It's actually air. So there we go. And then just as a little calculation here, air at standard temperature and pressure, the number density is around about 2.5 times 10 to the 25 per meter cubed. And so therefore, the refractive index of air is about 1 plus 2.5 times 10 to the minus 4. So what began to change in the refractive index is very small compared to 1 for the neutrals. It's on the same sort of order as the change of refractive index you get for a similar sort of plasma. But crucially, the refractive index is always greater than 1 for neutrals. And it is always less than 1 for a plasma. 
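A quick sanity check of that last point, using the tabulated 2 pi alpha for air and the plasma formula. The particular densities and the approximate critical density at 532 nanometers are illustrative assumptions; the point is just the sign.

```python
# Neutral air at STP pushes the refractive index above 1; a plasma pushes it below 1.
two_pi_alpha_air = 1e-29       # [m^3], the value quoted in the table above
n_atoms = 2.5e25               # [m^-3], air at standard temperature and pressure
print(1 + two_pi_alpha_air * n_atoms)     # ~1.00025, greater than 1

n_crit_532 = 3.9e27            # approximate critical density at 532 nm [m^-3]
n_e = 2.5e25                   # an electron density of the same order [m^-3]
print(1 - n_e / (2 * n_crit_532))         # ~0.997, less than 1
```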
This is our first hint at how we're going to use two-color interferometry here. So let's have a look at how to use two-color interferometry to determine both the number density of the neutrals and the number density of the electrons. And you might come across this scenario quite a lot. If you're doing low temperature plasmas, you always come across this scenario. Even if you're doing something in a tokamak, maybe there's a region at the edge where there's a large number of neutrals, and your beam has to go through that region at the edge. And you want to ignore it. You just want to measure the core. But you're still actually picking up a big phase shift from these neutrals at the edge. So this is a big problem. And this is a hard one to solve. So let's have a look at this. So remember that when we derived the phase shift, delta phi, I did it, first of all, just in terms of a generic refractive index before specifying it to be a plasma. And that phase shift was just the integral of n minus 1 dl. There's a 2 pi over lambda at the front here. So this is just reflecting the change in the effective path length that the probing beam sees. And we also said that Na is greater than 1 and d Na by d lambda is equal to 0, which is mathematically saying that in this polarizability model here, we assume that it doesn't depend on wavelength, as long as we don't go too close to one of these transitions. So the polarizability here is the same as the polarizability here is the same as the polarizability here. So then we'll end up with a total fringe shift on our interferometer of minus 4.5 times 10 to the minus 16 times lambda times the integral of the electron density. That's the plasma component here. And we will also have a term, which is plus-- notice the difference in the sign here-- plus 2 pi alpha upon lambda integral of Na here. So they will cause fringe shifts in different directions-- one to a lower effective spatial or temporal frequency, one to a higher-- but they also have a different dependence on lambda. And this is key, because, again, as we saw with the vibrations, we have different dependencies on lambda. We can use a two-color technique to get around this. So phi plasma is big for large wavelengths. And phi neutral, the phase shift from the neutrals, is big for small wavelengths. And again, we've got two unknowns. We've got the density of the electrons and the density of the neutrals. And if we have a two-color technique, we have two equations. And so we can solve all of that. And I'm not going to write down the algebra now. It's quite boring. And you can work out uniquely, apparently, what the electron and neutral densities are. And I'll show you that this doesn't generally work in practice. So in practice, you often end up with negative predictions for your density of both the neutrals and the electrons. And this tends to be, as far as I can tell from reading the literature, that when we did this approximation of n is about 1 plus 2 pi alpha Na here, we have assumed that alpha is constant. But it doesn't have to be constant. It could change with wavelength. And if we're using two different wavelengths, and there are two different values of alpha, then that will cause chaos with your two equations, two unknowns, because alpha is actually relatively hard to pin down in some sources. You can very rarely find it for the exact wavelength you're working on. You may also have horrifically ended up using one of your wavelengths, like here, halfway up one of these resonances, or even worse, at the peak of one of the resonances.
And if you did that, your whole model is completely off. And I think this is what causes this. And I'll show you some data I took where we predicted negative densities. And I'll talk a little bit about that as well. Obviously, negative densities are unphysical. So we thought that was probably wrong, but we published it anyway, because other people were doing the technique and not pointing out they had negative numbers. And we thought it'd be nice to point out that we knew that it was wrong. So that was a quick roundup of two-color interferometry. And I have some quick slides after this showing some examples. But I'll just pause here and see if there are any questions. Yes. AUDIENCE: Does this technique work better for certain levels of [INAUDIBLE]? Is there a spectrum where it works better than others? JACK HARE: The question was, does this-- sorry. Go on. AUDIENCE: If you're using a really weakly ionized plasma, it's really not a good option as opposed to a more [INAUDIBLE]. JACK HARE: Right. Really good question. So the question was, does this work better for different levels of ionization in a plasma? So you might be thinking to yourself, if I have a very, very weakly ionized plasma, then it may be very hard to measure the electrons over the overwhelming change in refractive index from the neutrals. And that's true. It's going to be hard. But if you look at this equation here, this is the thing you're measuring, the fringe shift. You can choose your two wavelengths to optimize the wavelength sensitivity of one of them to the electrons and the sensitivity of the other one to the neutrals. So you're going to need to have some widely spaced wavelengths. So if you try and do it with just two different frequencies from the same laser, that's going to be really hard. But if you have a microwave interferometer and a HeNe green laser beam, like they do on the tokamaks for vibration stabilization, that will work much better. The difficulty there is then you have two completely different detection techniques. And so it's not easy to compare these two. But that's what you probably want to do if you're dealing with 1% ionization or something like that. You might have to do this technique. Were there other questions? Anything online? AUDIENCE: What physically is the polarizability? Is that the electric polarizability of the medium? JACK HARE: Yeah. The question was, what physically is the polarizability? This polarizability is very strongly related to how the electron wave functions are distorted by the electric fields of the electromagnetic wave. AUDIENCE: OK. JACK HARE: Yes. Which is why when you get close to a transition and the frequency of the wave is now resonant with some atomic transition, this polarizability changes dramatically. As opposed to just going through the medium, the electromagnetic wave is absorbed. I'm saying it in a very classical way, but of course, you need to start doing quantum if you want to have absorption. AUDIENCE: It's like the wave field is inducing a small dipole moment. JACK HARE: Yeah, absolutely. The wave is inducing a dipole moment, and that is slowing down the wave. Just slowing down the phase of the wave. In a plasma, remember, it always speeds up the phase of the wave. Any other questions? Maybe I should have saved all my pictures till the end and avoided having to find this thing twice. Can you see this online? AUDIENCE: Yes. JACK HARE: OK, perfect. And it's showing up slowly here. 
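Before looking at that example data, here is a minimal sketch of the two-equations, two-unknowns inversion just described. The wavelengths, the polarizability value, and the measured fringe shifts are illustrative assumptions, and the whole thing inherits the assumption that alpha is the same at both colors-- which, as just discussed, is exactly where it tends to fall over.

```python
import numpy as np

# One linear system per measurement point:
#   F(lambda) = -4.5e-16 * lambda * int(ne dl) + (2*pi*alpha / lambda) * int(na dl)
lam1, lam2 = 532e-9, 355e-9      # the two probe wavelengths [m]
two_pi_alpha = 1e-29             # assumed polarizability term [m^3], same at both colors

F1, F2 = -1.8, -1.1              # measured fringe shifts at lam1, lam2 (illustrative)

A = np.array([[-4.5e-16 * lam1, two_pi_alpha / lam1],
              [-4.5e-16 * lam2, two_pi_alpha / lam2]])
ne_dl, na_dl = np.linalg.solve(A, [F1, F2])
print(ne_dl, na_dl)              # line-integrated electron / neutral densities [m^-2]

# If alpha actually differs between the two wavelengths, this happily returns
# unphysical (for example negative) densities -- the failure mode described above.
```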
So this was a set of experiments we did with a very sexily named, but boring device called a plasma gun, which is actually just a bit of coax cable where you've chopped off the end of it. And you pulse it with some current. And the current flows up the inner conductor across the chopped off plastic insulator and back down through the outer conductor. And as it flows across here, it sends plumes of plasma outwards. And they're moving 10 kilometers a second, but this only works in a vacuum. So it's not a very good gun. Anyway, this was a fun object to study, because there was a grad student using it for his PhD thesis. And we put it on our experiments. And we did two-color interferometry. And these interferograms were made using an Nd:YAG laser, neodymium YAG. We used the second harmonic at 532 nanometers, which shows up as green here, to do one of the measurements. And simultaneously, along the same line of sight through the same bit of plasma, we used the third harmonic, which is 355 nanometers. So that's in the ultraviolet. So you can't see it by eye. You might be asking, why does it show up as orange? And this is when you remove the ultraviolet filter from your off-the-shelf Canon DSLR camera. The pixels get confused and think that this is orange. Obviously, it can't-- it can see ultraviolet, but what we can render it as on the screen. Anyway, this is 355. And these are the fringes before the plasma was there. And these are the fringes after the plasma was there. And you can see the fringe shift is very small. There's just tiny little shifts here and here. So this wasn't a very high plasma density. But what we were able to do is to infer the phase shift for these two interferograms like this. And then we combined these two together using the simultaneous equations. We also did a novel inversion, which I'll talk about in the next lecture. And we got out the electron density. And this looks quite reasonable. We get up to about 10 to the 18 here. And it falls off nicely in various directions. But we also got a prediction of the neutral density here. And these red regions are fine. These are positive numbers. But then right in the middle here, there's a big negative number. And in fact, it's so negative we're predicting many more absences of neutrals than we had electrons. So it's like clearly completely nonsense. And so we went back to some of the textbooks that explained this technique for measuring neutrals, and we found that the example data they were showing also had negative numbers in. It's just they didn't bother to mention that this was a huge problem. We think this is a huge problem. So I'm a little bit baffled by the fact that people will in a textbook say that this technique can be used to measure neutrals. But in reality, it seems to be really, really tricky to do it properly. And I think the problem is the quality of the polarizability data that we have. So we were trying to use the polarizability data, assuming that it worked at 532 and at 355 nanometers. But it was derived in the lab by some group in the '80s who published a paper on it. And they did it like at 10.6 microns in the infrared. So there's no really good reason to believe the polarizability is the same. But it's really hard to get hold of this data in a consistent fashion. So if you're going to try and use this technique, I think you should be very skeptical about the results, especially if you start seeing negative numbers. So we've only got a couple more minutes. 
I'll just take any questions on this, and then we'll do [INAUDIBLE] inversion in the next lecture. So any questions? Yes. AUDIENCE: How do you go about measuring [INAUDIBLE] parameters to get rid of any disruption? JACK HARE: You could use interferometry to measure the polarizability in the absence of any electrons. Then you'd know exactly what you were measuring. So if you were able to puff some gas in-- you need some measurement of the number density as well. So that's-- you can imagine if you've got a gas cell at a certain pressure and it's at room temperature, then from the ideal gas law, you know the number density inside that gas cell. And you know the size of the volume. And you could do interferometry on that volume, for example, and that would give you a measurement of the phase shift. And then you could back out what the polarizability must be. Maybe we should have done that here, but we didn't. [LAUGHTER] Yeah. AUDIENCE: That's just like an additional calibration step? JACK HARE: Yes. Yes, exactly. So I think it's a doable calibration. It's just quite hard. Whereas for the electron density, it's like that is all in terms of fundamental parameters, like the electron charge, and the electron mass. And you're like, OK, we know what those are. So then when you're applying those formulas, you have absolute confidence when you measure the phase change what the electron density is. It's just for this, the theory is a little bit more wobbly. Other questions? AUDIENCE: If you have good data on individual species, then you know you have a certain ratio in your plasma. Is there any reason to think that you couldn't just do an average? JACK HARE: If you think beforehand you somehow know for some reason the number of-- the electron density and the neutral density, or-- AUDIENCE: Oh, sorry, like two different neutral species. And you have good-- JACK HARE: Oh, we didn't even get into that. That's a nightmare, because then you'll have two different polarizabilities, and even more transitions that you're trying to miss. So we're already like, oh god, let's stay well away from any of these transitions. But if you have two species, you'll have an even harder time finding a region with no transition. So it'll be very hard to find on its source. Any questions online? All right, thank you very much, everyone. See you on Thursday.
YUFEI ZHAO: For the past couple lectures, we've been talking about Roth's theorem. And we showed-- so we saw a proof of Roth's theorem using Fourier analytic methods. And we saw basically the same proof but in two different settings. So two lectures ago, we saw a proof in F3 to the n. And basically the same strategy, but with a bit more work, we were able to show Roth's theorem with roughly comparable bounds over the integers. Today, I want to show you a very different kind of proof of Roth's theorem in the finite field setting. So first let me remind you, the bound that we saw last time for Roth's theorem in F3 to the n gave an upper bound on the maximum number of elements in a 3-AP-free set, which was of the form 3 to the n over n. And so this proof wasn't too bad. So we did it in one lecture. And then with a lot more work-- and people tried very, very hard to improve this-- there was a paper that got it to just a little bit more. And this was a lot of work. And this was something that people thought was very exciting at the time. And then just a few years ago, there was a major breakthrough, a very surprising breakthrough, where-- you know, at this point, it wasn't even clear whether 3 should be the right base for this exponent. That was a big open problem. And then there was a big breakthrough where the following bound was proved, that it was exponentially less than the previous bound. So this is the one that I want to talk about in the first part of today's lecture. So this development came first-- the history is a bit interesting. So Croot, Lev, and Pach uploaded a paper to the arXiv on May 5 of 2016, where they showed not exactly this theorem but the analogous statement in a slightly different setting, in the group Z mod 4 to the n instead of Z mod 3 to the n. And this was already quite exciting, getting an exponential improvement in this setting. But it wasn't exactly obvious how to use their method to get F3. But that was done about a week later. So Ellenberg and Gijswijt, they managed to modify the Croot-Lev-Pach technique to the F3 to the n setting, which is the one that we've been interested in. So there's a small difference between these two, namely this group has elements of order 2, which makes things a bit easier to work with. So this is the Croot-Lev-Pach method, as it's often called in the literature. And we'll see that it's a very ingenious use of the so-called linear algebraic method in combinatorics, in this case the polynomial method. And it works specifically in the finite field vector space. So what we're talking about in this part of the lecture does not translate whatsoever. At least, nobody knows how to translate this technique to the integer setting. So how does it work? The presentation I'm going to give follows not the original paper, which is quite nice to read, by the way. It's only about four pages long. It's pleasant to read. But there is a slightly even nicer formulation on Terry Tao's blog. And that's the one that I'm presenting. So the idea is that if you have a subset of F3 to the n that is 3-AP-free-- such a set also has a name, cap set, which is also used in the literature in this specific setting, where you have no three points on a line-- in that case, we have the following identity. So here delta is the Dirac delta. Let me write that down in a second. So delta sub a of x is the Dirac delta. It's 1 if x equals a, and 0 if x does not equal a.
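To make the blackboard identity explicit, it is the following equation, in the formulation used on Tao's blog, viewed as an equality of functions of (x, y, z) ranging over A:

```latex
% For a 3-AP-free A \subseteq \mathbb{F}_3^n, as functions on A \times A \times A:
\[
  \delta_0(x + y + z) \;=\; \sum_{a \in A} \delta_a(x)\,\delta_a(y)\,\delta_a(z).
\]
```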
So this is simply rewriting the fact that x, y, z form a 3-AP if and only if their sum is equal to 0. And because you're 3-AP-free, the only 3-AP's are the trivial ones recorded on the right-hand side. So this is simply a recording of the statement that A is 3-AP-free. And the idea now is that you have this expression up there, and I want to show that if A is very, very large, then I could get a contradiction by considering some notion of rank. So we will show that the left-hand side is, in some sense, low rank. Well, I haven't told you what rank means yet. But the left-hand side is somewhat low rank, and the right-hand side is a high-rank object. So what does rank mean. So recall from linear algebra-- so the classical notion of rank corresponds to two variable functions. So you should think of F as a matrix over an arbitrary field F. So such a function or a corresponding matrix is called rank 1 if it is nonzero and it can be written in the following form-- F of x, y is f of x g of y for some functions that are one variable each. So, in matrix language, this is a column vector times a row vector. So that's the meaning of rank 1. And to say that something is of high rank of a specific rank-- rather, the rank of F is defined to be the minimum number of rank 1 functions needed to write F as a sum or a linear combination. So this is rank 1. And if you add up r rank 1 functions, then get something that's, at most, rank r. So that's the basic definition of rank from linear algebra. For three-variable functions, you can come up with other notions of rank. So what about three-variable functions? So how do we define a rank of such a function? So you might have seen such objects as generalizations of matrices called tensors. And tensors have, already, a natural notion of rank, and this is called tensor rank. Just like how, here, F is-- we say rank 1 if it's decomposable like that, we say F has tensor rank 1 if this three-variable function is decomposable as a product of one-variable functions. The tensor rank, it turns out, this is an important notion, which is actually quite mysterious. There's a lot of important problems that boil down to us not really understanding what tensor rank, how it behaves. And it turns out, this is not the right notion to use for our problem. So we're going to use a different notion of rank. Here, rank 1 is decomposing this three-variable function into a product of three one-variable functions. But, instead, I can define a different notion. We say that F has slice rank 1-- so this is a definition that's introduced in the context of this problem, although it's also quite a natural definition-- if it has one of the following forms. So I can write it as a product of a one-variable function and a two-variable function. So one variable and the remaining two variables. But this definition should also be symmetric in the variables, so the other combinations are OK as well. So this is the definition of a rank one function, a slice rank 1. And, also, if nonzero. If it's nonzero and can be written in one of these forms. And, just like earlier, we define the slice rank of F to be the minimum number of slice rank 1 functions. Same as before, that you need to write F as a sum. So I can decompose this F into a sum of slice rank 1 functions. What's the most efficient way to do so? So that's the definition of slice rank. 
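Collecting the definitions just given in one place, for a function F on X times X times X with values in a field, they read roughly as follows:

```latex
\begin{align*}
  \text{tensor rank 1:} \quad & F(x,y,z) = f(x)\, g(y)\, h(z), \\
  \text{slice rank 1:}  \quad & F(x,y,z) = f(x)\, g(y,z), \ \text{or} \ f(y)\, g(x,z), \ \text{or} \ f(z)\, g(x,y), \\
  \operatorname{slice\ rank}(F) & = \min\Bigl\{ r : F = \sum_{i=1}^{r} F_i, \ \text{each nonzero } F_i \text{ of slice rank } 1 \Bigr\}.
\end{align*}
```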
And, you see, you can come up with this definition for any number of variables, where slice rank 1 means decomposing into two functions, where one function takes one variable, and the other function takes all the remaining variables. And, therefore, for two variables, slice rank and rank correspond to the same notion. Any questions so far? All right. So let's look at the function on the right. So think of it as a matrix, a tensor. So what is it? Well, it's kind of like a diagonal matrix. So that's what it is. It's a diagonal matrix. So what is the rank of a diagonal matrix, in this case a diagonal function? Well, you know from linear algebra that the rank of a diagonal matrix is the number of nonzero entries. So something similar is true for slice rank, although it's less obvious. It will require a proof. So if I have this three-variable function defined by the following formula. So, in other words, it's a diagonal function where the entries on the diagonal are the Ca's. So what is the rank of this function? So the slice rank of F. In the matrix case, it will be the number of nonzero entries, and it's exactly the same here. So the number of nonzero diagonal entries. That turns out to be the slice rank. Let's see a proof. So we go back to the definition of slice rank. And we see that one of the directions is easy. So this less than or equal to, greater than or equal to-- so which one is easy? So, you see, the right-hand side is a sum of-- well, this many slice rank 1 functions. So this direction is-- so this direction is clear, just looking at the definition. I can write F explicitly as that many slice rank 1 functions. So the tricky part is greater than or equal to. And for the greater than or equal to, let's assume that all the diagonal entries are nonzero. So why can we do this? If one of them is zero, I claim that we can remove this element from A. If Ca is 0, then I remove a from the set. And doing so cannot increase the rank. A priori, the rank might go down if you get rid of an entry. Because if you add an entry, even though the function doesn't change on the original set, if you increase your set, maybe you have more space, maybe you have more flexibility to work with. But, certainly, if you remove an element, the rank cannot go up. Now, so suppose the slice rank of F is strictly less than the size of A. So all these Ca's are nonzero. So suppose, for contradiction, that there is some different way to write the function F that uses fewer terms. So what would such a sum look like? So I would be able to write this function F in a different way. Like that. And then, now, I look at these-- the other types of functions using different combinations of the variables. So suppose there were a different way to write this function F that uses fewer terms. So I assume it uses exactly the size of A minus 1 terms, padding with zero functions if you like. So now I claim that there exists a function h on the set A whose support-- so the support is the number of entries that give nonzero values. The support of h is bigger than m, such that the following sum is 0. So I claim that we can find a function h such that-- I think of it as being in the kernel of some of these f's. So this is a linear algebraic statement. Yes. AUDIENCE: What is h sub [INAUDIBLE]?? YUFEI ZHAO: Ah, sorry. It's just h. Thank you. It's a single function h such that this equation is true for all x. AUDIENCE: [INAUDIBLE] h of x minus the sum of all [INAUDIBLE]. YUFEI ZHAO: You are right.
So what do I want to say here? So we want to find a function h such that the support of h is at least m. So what do we want to say? I want to say that-- yes, so you're right. This is not what I want to say. And, instead, it's something-- mm-hmm. Yes, good. So, let's see. So here we have some number of functions. Here, we have some number of functions. And for each a, I have-- or for each-- let's see. Umm, hmm. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: I'm sorry? AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: No. So I do want to show-- no, there's no induction, because I'm in three variables, and I want to get rid of-- so the point is-- so let's see where we're going eventually, and then we'll figure out what happened up there. So we want to consider-- so I would like to eventually consider the following sum. So I want to consider this sum, which comes from-- so you look at-- wait, no. That's not the sum I want to consider. So let's look at this F of x, y, z, so F being that sum. No. So take that F up there. And let me consider, basically, taking the inner product of this function viewed as a function in z. So consider this inner product. And if I-- ah. I think-- so what I want to say is not this. So what I want to say is, if I look at an inner product of h with the-- so take one of these f's-- take one of these f's and look at the bilinear form relating each in f. So I want to show that this sum vanishes for all i between m plus 1 and the size of A minus 1. So this row, I want it to vanish when being taken bilinear form with h. So that makes sense now. OK, good. So the fact that such a nonzero h exists simply is a matter of counting parameters. It's a linear algebraic statement. You have some number of freedoms. You have some number of constraints. So the set of such h satisfy all of these constraints. So there are this many constraints. Well, each one of them could carry down to one dimension less, but the set of such h is a linear subspace of dimension bigger than m, because I have A dimensions, and I have these many constraints. So the set of such h is-- there are a lot of possibilities. And, furthermore, it is also true that-- and this is a linear algebraic statement-- that every subspace of dimension m plus one has a vector whose support has size at least m plus 1. I'll leave this as a linear algebraic exercise. It's not entirely obvious, but it is true. When you put these two things together, you find that there is some vector-- so I think of the corners of the vectors as indexed by the set A-- there is some vector whose support is large enough. So we prove the claim. Let's go back to this lemma about this diagonal function having high rank. Take h from the claim. So let's take h from the claim. Then let's consider this sum over here. On one hand, what this sum is-- you can do the sum on the right-hand side. We see that it's like multiplying a diagonal matrix by a vector. So what you get, following the formula on the right-hand side, is the following. Let me rewrite this part. Sum over a of C sub a h of a delta sub a of x delta sub a of y. Just looking at the formula from the right hand side. On the other hand, if you had a decomposition up there, doing this sum and noting the claim, we see that the third row is gone. So what you would have is a sum over these z's of-- so let me write that like this. So you would have a sum that is of the form f1 of x and g tilde 1 of y, where g tilde is basically the inner product of g1 as a function of z with h. So fl of x gl of y. 
And then, also, functions like that. So there exists some functions g, which come from g tilde, which come from the g's up there, such that this is true. But now we're in the world of two-variable functions. So left and right-hand side are two-variable functions. And for two-variable functions, you understand what is the rank of a diagonal function. So the left-hand side has more than m diagonal entries, because h has support. So the number of diagonal entries is just the support of h. Whereas the right-hand side has rank-- so now a linear algebraic matrix rank-- at most, m. And that's a contradiction. Yes. AUDIENCE: So you can show a similar statement where [INAUDIBLE]. YUFEI ZHAO: Great. So we can show a similar statement for arbitrary number of variables by generalizing this proof and using induction on the number of variables. But we only need three variables for now. Any questions? Just to recap, what we proved is the generalization of the statement that a diagonal matrix has rank equal to the number of nonzero diagonal entries. But the same fact is true for these three-variable functions with respect to slice rank. So this is intuitively obvious, but the execution is slightly tricky. All right. So now we have the statement here. Let's proceed to analyze this function which comes from-- so this relationship here coming from set A that is 3-AP-free. So suppose now I'm in-- so let me-- so everything so far was generally with any A. But now let me think about, specifically, functions on the finite field vector space, F3 to the n. So it's a function taking value F3. And this function is defined to be the left-hand side of that equation over there. So the claim is that-- so the left-hand side claim that this function has low rank. So we claim that a slice rank of this function is, at most, 3M, where M is the sum of, essentially, this multinomial coefficient. So we'll analyze this number in a second, but this number is supposed to be small. So we want to show that this function here has small rank. So let's rewrite this function in a form explicitly as a sum of products by expanding this function after writing it in a slightly different form. So in F3, in a three-variable-- in characteristic-- so in F3, you have this equation. You can check that it's true for x equal to 0, 1, or 2. So picked that, and plug it in over here. So we find-- so now x, y, z are in F3 to the n. So we find that, applying this guy here coordinate-wise, you have this product. Great. Now let's pretend we're expanding everything. This is a polynomial in 3n variables, 3n variables. It's degrees is 2n. So if we expand, we get a bunch of monomials. And the monomials will have the following form. So the x's, which-- whose exponents I call i, the y's, whose exponents I call j, and the z's, whose exponents I call k, where-- so I get a sum of monomials like that, where all of these i, j's, and k's are either 0, 1, or 2. So I get this big sum of monomials, and I want to show that it's possible to write this sum as a small number of functions that can be written as a product, where one of the factors only involves one of x, y, z. So what we can do is to group them. So group these monomials by the-- so, for example, I'm going to group these monomials by using the following observation. So by pigeonhole, at least one of the exponents of x, or the exponents of y, or the exponents of z, at least one of these guys is, at most, 2n over 3. So I group these monomials by the-- one of x, y, z that has the smallest exponent. 
So the contributions to the rank or the slice rank from monomials with the degree of x being, at most, 2n over 3, well, I can write such contributions in the form like that, where this f of x is a monomial, and the g is a sum of whatever that could come up. This is a sum, but this is a monomial. So the number of such terms-- so the number of such terms is the number of monomials corresponding to choices of i's, the sum to 2n over 3, and individual i's coming from 0, 1, or 2. And that number is precisely M. So M counts the number of choices of 0, 1, 2's. There are n of them. And the sums of the i's is, at most, 2n over 3. So these are contributions coming from monomials where the degree of x is, at most, 2n over 3. And, similarly, with degree of y being 2n over 3, and also degree of z being, at most, 2n over 3. So. all the monomials can be grouped in one of these three groups, and I count the contribution to the slice rank. AUDIENCE: Do we have a good idea as to how sharp this bound is? YUFEI ZHAO: So the question is, do we have a good idea as to how sharp this bound is? That's a really good question. I don't know. Yes. Great. So that finishes the proof of this lemma. So now we have this lemma. I can compare-- so we have these two lemmas. One of them tells me the rank of the right-hand side, which is A. Let's compare ranks, the slice rank. So the left-hand side, we know it is, at most, this quantity. And the right-hand side is equal to A. So we automatically find this bound. So now we want to know how big this number M is. So there's actually-- this is a fairly standard problem to solve to estimate the growth of this function M. So let me show you how to do it, and this is basically the universal method. Notice that I can-- if I look at this number here, where if-- so now x is some real number between 0 and 1. Then I claim the following is true. And this is because if you expand the right-hand side and count your monomials-- so you can just keep track of which monomials occur, and there are M of them, where you can lower bound by this quantity here. So this is kind of related to things in probability theory on large deviations, to the Cramér's theorem. But that's what you can do. So this is true for every value of x, so you pick one that gives you the best bound. So M is, at most, the inf of this quantity here. And to show you any bound, I just have to plug in some value. So if I plug in, for example, x being 0.6, I already get a bound which is the one that I claimed. And it turns out this step here is not lossy. As in, basically, up to 1 plus little o1 in the exponent, this is the correct bound. And that follows from general results in large deviation theory. And that finishes the proof. Alternatively, you can also estimate M using Sterling's formula. But this, I think, is cleaner. Great. Any questions? Yes. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: Ah, OK. So why is this step true? So if you expand the right-hand side, you see that the right-hand side is upper bounded by all these a, b, c, as in-- same as over here, x to the b plus 2c. And because how many terms-- and, also, there's a binomial coefficient term. So, basically, I'm doing the multinomial expansion, except I toss out everything which is not part of the index. And because b plus 2c is, at most, 2n over 3, I get M times x to the 2n over 3. OK? AUDIENCE: Yes. YUFEI ZHAO: Now I want to convey a sense of mystique about this proof. This is a really cool proof. So because you're seeing a lecture, maybe it went by very quickly. 
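To make the counting just described concrete, here is a quick computation of M-- the number of exponent vectors with entries in {0, 1, 2} and total degree at most 2n/3-- against the bound at x equal to 0.6. The function names are illustrative only; the check is just that the bound holds and grows like roughly 2.76 to the n.

```python
def M(n):
    """Number of exponent vectors (i_1, ..., i_n) with each i_j in {0, 1, 2}
    and i_1 + ... + i_n <= 2n/3: sum the low-degree coefficients of
    (1 + t + t^2)^n, computed here by repeated convolution."""
    coeffs = [1]
    for _ in range(n):
        new = [0] * (len(coeffs) + 2)
        for deg, c in enumerate(coeffs):
            new[deg] += c
            new[deg + 1] += c
            new[deg + 2] += c
        coeffs = new
    return sum(coeffs[: (2 * n) // 3 + 1])

def bound(n, x=0.6):
    """The bound  x^(-2n/3) * (1 + x + x^2)^n  discussed above."""
    return x ** (-2 * n / 3) * (1 + x + x ** 2) ** n

for n in (3, 9, 30, 90):
    print(n, M(n), f"{bound(n):.3e}", f"{3.0 ** n:.3e}")
# The bound grows roughly like 2.76^n, so the cap set bound |A| <= 3*M(n)
# is exponentially smaller than 3^n once n is large.
```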
But when this proof came out, people were very shocked. They didn't expect that this problem would be tackled, would be solved using a method that is so unexpected. And this is part of this power of the algebraic method in combinatorics, where we often end up with these short, surprising proofs that take a very long time to find. But they turn out to be very short. So this is very short. This was basically a four-page paper. But when they work, they work beautifully. They work like magic. But it's hard to predict when they work. And, also, these methods are somewhat fragile. So, unlike the Fourier analytic methods that we saw last time, with that method, it's very analytic. It works in one situation, you can play with it, massage it, make it work in a different situation. Here, we're using something very implicit, very special about these many variables. And if you try to tweak the problem just a little bit, the method seems to break down. So, in particular, it is open how to extend this method to other settings. It's not even clear what the results should be. So it's open to extend it to, for example, 4-AP. So we do not know if the maximum size of 4-AP-free subset of F5 to the n is less than some constant, 4.99 to the n. So that's very much open. By the way, all of this 3-AP stuff, right now I've only done it in F3, but it works for 3-AP in any finite field. It also is open to extend it to corners. So you can define a notion of corners. So, previously, we saw corners in integer grid. If I replace integer by some other group, you can define a notion of corners there. So not clear how to extend this method to corners. And, also, is there some way to extend some ideas from this method to the integers? It completely fails, so this method is not clear at all how you might have it work in a setting where you don't have this high dimensionality. I mean, the result will be different, because, integers, we know that there's no power saving, but maybe you can get some other bounds. Any questions? OK. great. Let's take a break. So in the first part of today's lecture, I showed you a proof of Roth's theorem. In F3 to the n, that gave you a much better bound than what we did with Fourier. Second part, I want to show you another proof. So yet another proof of Roth in F2 to the n, and this time giving you a much worse bound. But, of course, I do this for a reason. So it will give you the new result. So it will give you some more information about 3-AP's and F3 to the n. But the more important reason is that in this course I try to make some connections between graph theory on one hand and additive combinatorics on the other hand. And, so far, we've seen some analogies. Well, in the proof of Szemeredi's graph regularity lemma versus the proof-- the Fourier analytic proof of Roth's theorem, there was this common theme of structure versus pseudorandomness. But the actual execution of the proofs are somewhat different. Because, on one hand, in regularity lemma, you have energy increment. You have partitioning and energy increment. And, on the other hand, with Roth, you have density increment. Or you're not partitioning. You're zooming in. Take a set, find some structure, zoom in, find some structure, zoom in. You'll get density increment. So it's similar, but differently executed. So, today-- I mean, this second half, I want to show you how to do a different proof of Roth's theorem that is much more closely related to the regularity proof, so that has this energy increment element to it. 
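[Aside, not from the lecture: before moving on, here is a quick numerical sanity check of two ingredients used above. It is only a sketch of mine: it verifies the characteristic-3 identity that 1 minus a squared is the indicator of a being 0 mod 3, and it computes M, the number of exponent vectors in {0, 1, 2}^n with coordinate sum at most 2n/3, against the bound x^(-2n/3) (1 + x + x^2)^n at the value x = 0.6 mentioned in the proof.]

```python
def check_identity():
    # in F_3, 1 - a^2 equals 1 when a = 0 and 0 when a = 1 or 2,
    # so the product over coordinates of 1 - (x_i + y_i + z_i)^2
    # is the indicator that x + y + z = 0 in F_3^n
    return all((1 - a * a) % 3 == (1 if a == 0 else 0) for a in range(3))

def M(n):
    # number of exponent vectors (i_1, ..., i_n) with each i_j in {0, 1, 2}
    # and i_1 + ... + i_n <= 2n/3, computed by dynamic programming over the sum
    counts = {0: 1}
    for _ in range(n):
        new = {}
        for s, c in counts.items():
            for step in (0, 1, 2):
                new[s + step] = new.get(s + step, 0) + c
        counts = new
    return sum(c for s, c in counts.items() if 3 * s <= 2 * n)

def upper_bound(n, x=0.6):
    # M <= x^(-2n/3) * (1 + x + x^2)^n for any 0 < x < 1
    return x ** (-2 * n / 3) * (1 + x + x * x) ** n

if __name__ == "__main__":
    assert check_identity()
    for n in (6, 12, 24, 48):
        m = M(n)
        print(n, m, round(m ** (1 / n), 4), round(upper_bound(n) ** (1 / n), 4))
```

The exact counts stay below the analytic bound, and both n-th roots settle around 2.76, comfortably less than 3, which is exactly the power saving over 3^n that the slice rank argument needs.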
And I show you this proof because it also gives you a stronger consequence. And, namely, we'll get that there is also not just 3-AP's but 3-AP's with popular difference. So here's the result that we'll see today. So it's proved by Ben Green. That for every epsilon, there exists some n0 such that every A in subset of F3 to the n with density alpha, there exists some nonzero y such that the number of 3-AP's with common difference y-- so let's think about what's going on here. So if I just give you a set A and ask you how many 3-AP's are there, and compare it to what you get from random, random meaning if A were a random set of the same density. So question is, can the number of 3-AP's be less than the random count? And the answer is yes. So, for example, you could have-- in the integers, you can have a barren type construction that has no 3-AP's. So, certainly, that's fewer 3-AP's than random. And you can do similar things here. But what Green's theorem says is that there exists some popular common difference-- so this is a popular common difference-- such that the number of 3-AP's in A with this common difference is at least as much as what you should expect in a random setting, up to a minus epsilon. So this is the theorem. So let me say the intuition again. It says that, given an arbitrary set A, provided the space dimension is large enough, there exists some popular common difference, where popular means that the number of 3-AP's with that common difference is at least roughly as many as random. In particular, this proves Roth's theorem, because you have at least some 3-AP's. But it tells you more. It tells you there's some common difference that has a lot of 3-AP's, even though, on average, if you just take an average, if you take a random y, this is false. Any questions about the statement? So Green developed an arithmetic analog of Szemeredi's graph regularity lemma in order to prove this theorem. So starting with Szemeredi's graph regularity lemma, he found a way to import that technique into the arithmetic setting, in F3 to the n. So I want to show you how, roughly, how this is done. And just like in Szemeredi's graph regularity lemma, there were unavoidable bounds which are of power type, the same thing is true in the arithmetic setting. So Green's proof shows that the theorem is true, with n0 being something like tower in-- a tower of twos. The height of the tower is a polynomial in 1 over epsilon. So just like in regularity lemma for graphs. So this was recently improved in a paper by Fox and Pham just a couple of years ago, where-- and this is the proof that I will show you today-- where you can take n0 to be slightly better but still a tower, but a tower of now height log in 1 over epsilon. So it's from a really, really big tower to slightly less big tower. But, more importantly, it turns out-- so they also showed that this is tight. You cannot do better. There exists constructions, there exist sets A for which you-- I mean, this theorem is false if you replace the big O by less than some very small constant. So many applications of the regularity lemma. That first proof, maybe using regularity, is difficult. Well, it gives you a very poor bound. But, subsequently, there were other proofs, better proofs, that give you non-tower type bounds. But this is the first application that we've seen where, it turns out, the regularity lemma gives you the correct bound. So it's really-- you need a tower-type bound. I mean, we know the regularity lemma itself needs tower-type bounds. 
But it turns out this application also needs tower-type bounds. That's quite interesting. So, here, the use of regularity is really necessary in this quantitative sense. So let's see the proof. So let me first prove a slightly technical lemma about bounded increments. So this is-- corresponds to the statement that if you have energy increments, you can not increase too many times, but in a slightly different form. So suppose you have numbers alpha and epsilon bigger than 0. And if you have this sequence of a's between 0 and 1, and such that a0 is at least alpha, then there exists some k, at most log base 2 of 1 over epsilon, such that 2 a sub k minus a sub k plus 1 is at least alpha cubed minus epsilon. So don't worry about this form. We'll see shorty why we want something like that. But the proof itself is very straightforward. Because, otherwise-- so you start with a0. Now, then, if this is not true for k equals to 0, then a1 is at least 2 a0 minus epsilon cubed plus epsilon. So a0 is at least alpha cubed. So if-- otherwise, you have some lower bound on alpha 1, which is at least alpha cubed plus epsilon. And, likewise, you have some lower bound on alpha 2. You have some lower bound on-- sorry-- alpha 2, and this lower bound is plus 2 epsilon. So you keep iterating. You see the next thing is 4 epsilon, and so on. So if you get to more than this many iterations, you go more than 1. So alpha k is bigger than 1 if k is ceiling of log base 2 of 1 over epsilon. And that will be a contradiction to the hypothesis. So this is a small variation on this fact that you cannot increment too many times. Each time, you go up by a bit. Whereas, we save a little bit because the number of iterations is now logarithmic. So you double in epsilon each time. If I give you a function f on F3 to the n, and U is a subspace-- so this notation means subspace. Let me write f sub U to be the function obtained by averaging f on each U coset. So you have some subspace. You partition your space into translates of that subspace, and you replace the value of f on each coset by its average on that coset. So this is similar to what we did with graphons. You're stepping. So you're averaging on each block. So now let me prove something which is kind of like an arithmetic regularity lemma. And I mean, this statement will be new to you, but it should look similar to some of the statements we've seen before in the course. And the statement is that, for every epsilon, there exists some m which is a function of epsilon. And, in fact, it will be bounded, in terms of tower of height, at most order logarithmic in 1 over epsilon. Such that for every function f on F3 to the n that are values bounded between 0 and 1, there exists subspaces W and U, where the codimension of W is, at most, m. So you should think of this as the course partition and the fine partition in the partition regularity lemma. And the codimension is-- corresponds to the number of pieces. So three ways to codimension is the number of cosets. So you have bounded many parts, and have two partitions. And what I would like is that the number-- so if I-- I want f to be pseudorandom after doing this partitioning, so to speak. And this corresponds to the statement that if I look f minus fW, then the maximum Fourier coefficient is quite small, where quite small means, at most, epsilon over the size of U complement. So size of U perp. And, also, there is this other condition which tells you that the L3 norms between f sub U and f sub W are related in this way. 
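[Aside, not from the lecture: to make the averaging operator f_U concrete, here is a minimal sketch of mine in Python. The subspace generators and the random set are arbitrary choices, just to have something to average; the point is only that f_U is constant on each coset x + U and has the same overall mean as f.]

```python
import itertools
import random

def add(x, y):
    # coordinate-wise addition in F_3^n
    return tuple((a + b) % 3 for a, b in zip(x, y))

def span(generators, n):
    # the subspace U of F_3^n spanned by the given generator vectors
    U = {(0,) * n}
    for g in generators:
        U = {add(u, tuple((c * gi) % 3 for gi in g)) for u in U for c in range(3)}
    return U

def average_over_cosets(f, U, n):
    # f: dict from points of F_3^n to reals; returns f_U, constant on each coset x + U
    f_U = {}
    for x in itertools.product(range(3), repeat=n):
        if x in f_U:
            continue
        coset = [add(x, u) for u in U]
        avg = sum(f[y] for y in coset) / len(coset)
        for y in coset:
            f_U[y] = avg
    return f_U

if __name__ == "__main__":
    random.seed(0)
    n = 3
    points = list(itertools.product(range(3), repeat=n))
    A = set(random.sample(points, 10))               # an arbitrary set of density 10/27
    f = {x: float(x in A) for x in points}
    U = span([(1, 1, 0), (0, 0, 1)], n)              # an arbitrary 2-dimensional subspace
    fU = average_over_cosets(f, U, n)
    print(sum(f.values()) / 27, sum(fU.values()) / 27)   # averaging preserves the mean
```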
So we haven't seen this before. In fact, specifically, this inequality is very ad hoc to the application of popular difference in 3-AP's. But we have seen something similar, where this relationship is replaced by something that accounts for the difference between L2 norms. So if you go back to your notes, when we discussed regularity lemma in a more analytic fashion, we have that. And you should think of this-- when we discussed strong regularity lemma, this definition here, this roughly corresponds to definition that in the fine partition versus the course partition the edge densities are roughly similar, that when you do the further partitioning, you're not changing densities up by very much. So that's the arithmetic regularity lemma. And once you have the statement-- I mean, I think the hardest part is writing down the statement. Once you have the statement, the proof itself is kind of this follow your nose approach, where you first define the sequence of epsilons. Epsilon 0 is 1, and epsilon sub k plus 1-- and don't worry about this for now. You will see in a second why these numbers are chosen. Let me write R sub k to be the set of r's-- so there will be characters-- such that the Fourier coefficient fr is at least epsilon sub k. So the r's are supposed to identify how we're going to do the partitioning. Now, the size of this R is bounded. So I claim that the size of R is, at most, 1 over epsilon sub k squared. And that's because there is this parsable identity, which tells you that the L2 sum of the Fourier coefficients is equal to the L2 of the function, which is at most 1. So the number of Fourier coefficients that exceed a certain quantity cannot be too many. So let U now be the subspace defined by taking the orthogonal complement of these r's. And let's note that if we take alpha sub k to be the-- if we take alpha sub k to be the L3 norm cubed of the function derived from averaging f along the U's, and then looking at the third moment of these densities. So these alphas, we can apply the increment lemma initially to deduce that there exists-- so, in particular, this number here is at least alpha cubed by convexity. So by the previous lemma, there exists some k, no more than on the order of 1 over-- of log 1 over epsilon, such that 2 alpha sub k minus alpha sub k plus 1 is at least the density of f cubed minus epsilon. So this alpha is supposed to be the density of f. So we find this k. And we have this bound over here from satisfying that inequality. So this is the density increment argument, the energy increment argument. So we're doing the energy increment argument, basically the same argument as the one that we did when we discussed graph regularity lemma, but now presented in a slightly different form and a different order of logic. But it's the same argument. And what we would like to show is that you also have this pseudorandomness condition about having small Fourier coefficients. So what's happening here with the Fourier coefficients? Now, how is the Fourier coefficient of an average f related to the original f? So that's something you want to understand up there. And that's something that's not hard to analyze. Because if you have a function U or W-- so either one-- then the Fourier coefficients of this average version is very much related to the original function. It turns out that if you take an r which is in the orthogonal complement, then the Fourier coefficient doesn't change. 
And if you are not in the orthogonal complement, then the Fourier coefficient gets zeroed out. So that's something that's not too hard to check, and I urge you to think about it. So, with that in mind, let's go back to verify this over here. So what we have now is that the-- so this quantity, which measures the largest Fourier coefficient, the difference between f and U sub k plus 1, is, at most-- and what U sub k plus 1 is doing is we're looking at possible large Fourier coefficients, and we are getting rid of them. So we're zeroing out these large Fourier coefficients, so that the remaining Fourier coefficients are all quite small. But we chose our R so that if-- so this big R-- so that if your little r is not in big R, then the Fourier coefficient must be small. That's how we chose the big R. So we have this bound over here. And by the definition of the epsilon, we have that bound. And, also, we're combining with this estimate, upper bound estimate on the size of R sub k. So point being we have that. So now take W to be U sub k plus 1, and U to b U sub k, and then we have everything that we want. Question, yes. AUDIENCE: Why is the codimension of W small? YUFEI ZHAO: Question is, why is the codimension of W small? So what is the codimension of W? So we want to know that the codimension of W is bounded. So the codimension of W is-- I mean, the codimension of any of these U sub k's is, at most, 3 raised to the number of r's that produce it. And the size of R is bounded. So if we pick m so that it uniformly bounds the size of R, then we have a bound on the codimension. So that's important. So we need to know that the codimension is small. Otherwise, if you don't have the bound on codimension you can just take the zero subspace, and, trivially, everything's true. We have a regularity lemma, and what comes with a regularity lemma is a counting lemma. So let me write down the counting lemma, and I'll skip the proof. So the counting lemma tells you that if you have f and g both functions on F3 to the n, and U is a subspace F, then-- so let me define-- so the quantity that I'm interested in is-- so I'm interested in understanding 3-AP's where the common difference is in a particular subspace. So we claim that the 3-AP count of f with common difference restricted to the subspace U-- so it's similar between f and g if f and g are close to each other in Fourier. Well, not quite, because-- so something like this, we saw earlier in the proof of Roth's theorem if we don't restrict the common difference. Turns out, if you restrict the common difference, you lose a little bit. So you lose a factor which is basically the size of the complement of U. So I won't prove that. But now let me go on to the punch line. So if we start with, again, f function in your space, taking bounds between 0 and 1, and I have subspaces U and W, I claim that the-- if I look at f averaged through W, and I consider 3-AP counts with common difference restricted to U, then this quantity here is lower bounded by this difference between L3 norms. So I claim this is true. So this is just some inequality. This is some inequality. So of all the things that I did back in high school doing math competitions, I think the one skill which, I think, I find most helpful now is being able to do inequalities. 
And I thought I would never see these three-variable inequalities again, but when I saw this one-- so Fox and Pham, when they first showed me a somewhat different proof, an approach that didn't go through this specific inequality, I told them, hey, there's this thing I remember from high school. It's called Schur's inequality. And I thought I would never see it again after high school, but apparently it's still useful. So what Schur's inequality says-- this is one of those three-variable inequalities that you would know if you did math olympiads-- is that you have an inequality between non-negative reals. Actually, it's true for real numbers as well, but let's say non-negative real numbers. So that's Schur's inequality. So if you look at the left-hand side, the left-hand side can be written in the following way. It's the expectation over x, y, z that are 3-AP's in the same U coset. So I'm counting 3-AP's with common difference restricted to U-- 3-AP's in the same U coset. And I am looking at the product of f sub W evaluated on this 3-AP. So what I would like to do now is apply Schur's inequality to a, b, and c being these three numbers. The point is you have this abc on the left, and then everything on the right involves only a subset of a, b, c, and they simplify. So if I do this, then I lower bound this quantity by twice the expectation, over x and y in the same U coset, of f sub W of x squared times f sub W of y-- maybe I should have taken two other terms, but they're all symmetric with respect to each other-- and minus the term that corresponds to the sum of cubes. So like that. So this is a consequence of Schur's inequality applied with a, b, c like this. But now you see, over here, I can analyze this expression even further. Because if I let y vary within the same U coset, then, over here, it averages out to f sub U. So U is bigger than W. So what we have over here is that it is at least twice the expectation of f sub W squared times f sub U, minus the expectation of f sub W cubed. And I can use convexity to lower bound that first term by the expectation of f sub U cubed, which is what we're looking for. So the last step is convexity. I'm running through this a little bit quickly here because we're running out of time, but all of these steps are fairly simple once you observe that the first thing you can do is Schur's inequality. And we're almost there. We're almost done. So from that lemma up there, I claim now that, for every epsilon, there exists some m, which is a tower of height logarithmic in 1 over epsilon, such that if f is a function on F3 to the n taking values between 0 and 1, then there exists a subspace U of codimension, at most, m such that the 3-AP density with common difference restricted to U is at least the random bound minus epsilon. Why is this true? Well, we put everything together, and choose U and W as in the regularity lemma. By the counting lemma, the 3-AP density of f, with common difference restricted to U, is at least the corresponding 3-AP density of f sub W minus a small error which we can control. So this step is counting. And now we apply that inequality up there. And finally, we chose our U and W in the regularity lemma so that this difference here is controlled. So it is controlled by the random bound minus epsilon. And that's it. You end up with 4 epsilon instead of epsilon, but we can rescale. And that's it.
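[Aside, not from the lecture: since Schur's inequality did the heavy lifting in that step, here is a tiny numerical check, a sketch of mine, of both the inequality itself and the expanded form that gets applied to the three values f_W(x), f_W(y), f_W(z).]

```python
import random

def schur(a, b, c, t=1):
    # Schur's inequality for non-negative a, b, c and t > 0:
    #   a^t (a - b)(a - c) + b^t (b - a)(b - c) + c^t (c - a)(c - b) >= 0
    return a**t * (a - b) * (a - c) + b**t * (b - a) * (b - c) + c**t * (c - a) * (c - b)

def expanded_form_holds(a, b, c):
    # the t = 1 case, expanded:
    #   a^3 + b^3 + c^3 + 3abc >= a^2 (b + c) + b^2 (a + c) + c^2 (a + b)
    lhs = a**3 + b**3 + c**3 + 3 * a * b * c
    rhs = a**2 * (b + c) + b**2 * (a + c) + c**2 * (a + b)
    return lhs >= rhs - 1e-9          # small tolerance for floating point

if __name__ == "__main__":
    random.seed(0)
    for _ in range(10**5):
        a, b, c = (random.random() for _ in range(3))
        assert schur(a, b, c) >= -1e-9
        assert expanded_form_holds(a, b, c)
    print("Schur's inequality held on all random samples")
```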
So we have the statement that you have this subspace of bounded codimension where you have this popular difference result. It doesn't quite guarantee you a single common difference, because, well, you don't really want it to be the case where U is just a single point because I want a nonzero common difference. But if U is large enough-- if n is large enough at bounded codimension, so, then, the size of U is large enough. So, then, there exists some nonzero common difference. You pick some nonzero element of U. On average, this should work out just fine. So I'll leave that detail to you. One more thing I want to mention is that all of this machinery involving regularity and Fourier, as with things we've done before, carries over to other settings-- general Abelian groups, and also the integers. And you may ask, well, we have this for 3-AP's. What about longer arithmetic progressions? In the integers, it turns out it is also true, that Green's statement, in the integers if you replace 3-AP by 4-AP. That's a theorem of Green and Tao involving higher-order quadratic analysis-- quadratic Fourier analysis. However, and rather surprisingly, 4-AP, it's OK. But 5-AP and longer, it is false. The corresponding statement about popular differences for 5-AP in the integers is false. There are counterexamples. So it's really a statement about 3-AP's and 4-AP's, and there's some magic cancellations that happen in 4-AP's that make it true. OK, great. So that's all for today.
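[Aside, not from the lecture: as a small appendix to this first half, here is a brute-force computation of the quantity in Green's theorem for a random set in F_3^4. The parameters n = 4 and density 1/3 are arbitrary choices of mine, and the theorem only promises anything once n is very large; this just shows what "count of 3-AP's with a fixed common difference y" means and how it compares to the random benchmark alpha^3 times 3^n.]

```python
import itertools
import random

def popular_difference_stats(A, n):
    # for each nonzero y in F_3^n, count the x with x, x+y, x+2y all in A;
    # return the best count over y and the average count over y
    points = list(itertools.product(range(3), repeat=n))
    add = lambda u, v: tuple((a + b) % 3 for a, b in zip(u, v))
    best, total = 0, 0
    for y in points:
        if all(c == 0 for c in y):
            continue
        cnt = sum(1 for x in points
                  if x in A and add(x, y) in A and add(add(x, y), y) in A)
        best = max(best, cnt)
        total += cnt
    return best, total / (3**n - 1)

if __name__ == "__main__":
    random.seed(1)
    n = 4
    points = list(itertools.product(range(3), repeat=n))
    A = set(random.sample(points, 27))          # density alpha = 1/3
    alpha = len(A) / 3**n
    best, avg = popular_difference_stats(A, n)
    # Green's theorem (for large n) promises some y with count >= (alpha^3 - eps) * 3^n
    print(best, avg, alpha**3 * 3**n)
```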
YUFEI ZHAO: OK. I want to begin by giving some comments regarding the Wikipedia assignment. So I sent out an email about this last night. And so first of all, thank you for your contributions to this assignment, to Wikipedia. It plays a really important role in educating a wider audience about what this subject is about, because, as many of you have experienced, the first time you hear of some term and have no idea what it is, you put it into Google, and often Wikipedia's entry is one of the top results that come up. And what gets written in there actually plays a fairly influential role in educating a broader audience about what this topic is about. And so I want to emphasize that this is not simply some homework assignment. It's a real contribution, and it's something that contributes to the dissemination of knowledge. And for that, it is really important to do a good job, to do it right, to do it well, so that next time someone-- maybe even yourselves-- has forgotten what the subject is about and wants to look it up again, there is a useful resource to look into. But also, let's say someone wants to find out: what is extremal graph theory about? What is additive combinatorics about? You want them to land on a page that points them to the right type of places, that points them to useful resources, that opens doors so that they can explore further. And some of the contributions, indeed, serve that purpose well. It opens doors to many things. And part of the spirit of this assignment is for you to do your own research, do your own literature search, to learn more about a subject-- more than what has been taught in these lectures-- so that you can write about it on Wikipedia, link to more references, and show the world what the subject is about. OK, continuing with our program: we spent the past few lectures developing tools regarding the structure of set addition so that we can prove Freiman's theorem. That's been our goal for the past few lectures, and today we'll finally prove Freiman's theorem. But let me first remind you of the statement. In Freiman's theorem, we would like to show that if you have a set A in the integers that has bounded doubling-- doubling constant K-- then the set must be contained in a generalized arithmetic progression of bounded dimension and of size only a constant factor larger than A. We developed various tools over the past three lectures, building up various intermediate results, and we collected this very nice set of tools for proving Freiman's theorem. So let me review some of them, which we'll encounter again today. The Plünnecke-Ruzsa inequality tells you that if you have a set with small doubling, then the further iterated sumsets are also controlled. I want you to think of these parameters as follows: K is a constant, so K to some power is still a constant, and I don't really care about polynomial changes in K. We should ignore polynomial changes in K and view such a constant more or less as the original K itself. So if the sumset A plus A is around the same size as A, then further iterated sumsets also do not change the sizes very much. Ruzsa covering lemma: this was the statement that if X plus B looks like it could be covered by copies of B, just in terms of their sizes alone, then, in fact, X can be covered by a small number of translates of the slightly larger set B minus B. And here B can be any set.
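[Aside, not from the lecture: a quick sanity check of the forward direction that makes the statement of Freiman's theorem plausible. In this sketch of mine, a two-dimensional GAP has doubling constant close to 2^2 = 4, while a random set of the same size spread over a long interval has doubling constant comparable to half its size; the steps 1 and 1000 and the range 10^7 are arbitrary choices.]

```python
import random

def doubling(A):
    # doubling constant |A + A| / |A|
    S = {a + b for a in A for b in A}
    return len(S) / len(A)

if __name__ == "__main__":
    random.seed(0)
    # a proper 2-dimensional GAP: {7 + i*1 + j*1000 : 0 <= i, j < 30}
    gap = {7 + i + 1000 * j for i in range(30) for j in range(30)}
    # a random set of the same size inside a long interval
    rand = set(random.sample(range(10**7), len(gap)))
    print("GAP doubling:   ", doubling(gap))     # close to 4 = 2^dimension
    print("random doubling:", doubling(rand))    # close to |A|/2, i.e. huge
```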
We had a thing called the Ruzsa modeling lemma. In particular, a consequence of it is that if A has small doubling, then there exists a prime N that's not too much bigger than the size of A, and a very large piece of A-- a subset A prime of size at least an eighth of A-- such that this subset A prime is Freiman 8-isomorphic to a subset of Z mod N. So even though you start with a set that's potentially very spread out, provided it has small doubling, I can pick out a pretty large piece of it and model it by something in a fairly small cyclic group. And here the modeling is an 8-isomorphism, so it preserves sums of up to eight terms. We had Bogolyubov's lemma. So now we have a large subset of a small cyclic group, and Bogolyubov's lemma says that 2A minus 2A contains a large Bohr set-- a large structured set. And last time we showed, using the geometry of numbers-- Minkowski's second theorem-- that every Bohr set of small dimension and large width contains a proper GAP that's pretty large. So putting the last two things together, we obtain that if you have a subset A of the cyclic group Z mod N, where N is prime-- here, as in the previous statement, N needs to be prime-- and A has density alpha, then 2A minus 2A contains a proper generalized arithmetic progression of dimension at most alpha to the minus 2 and size at least 1 over 40 to the d times N. So, just starting from the density of A, 2A minus 2A contains a pretty large GAP. So we're going to put all of these ingredients together and show that you can now contain the original set A in a small GAP, just from knowing that 2A prime minus 2A prime contains this large GAP. We're going to use it to boost ourselves up to a cover. So now let's prove Freiman's theorem. Using the modeling lemma-- the corollary of the modeling lemma-- we find that, since A plus A has size at most K times the size of A, there exists some prime N at most 2 K to the 16 times the size of A. I'm just copying the consequence of the modeling lemma here. So I find a pretty large subset A prime of A such that A prime is Freiman 8-isomorphic to a subset of Z mod N. Now, applying the final corollary with alpha being the density of this A prime in Z mod N-- which is at least an eighth of the size of A divided by N, which is at least 1 over 16 K to the power 16, so all constants-- we see that 2A prime minus-- so let me actually change notation and call it B, so I don't have to keep on writing primes. OK, so 2B minus 2B now contains a large GAP. And the GAP has dimension d bounded: the dimension is bounded by alpha to the minus 2, so it's some constant. And the size is pretty large: the size is at least 1 over 40 to the d times N. If you only care about constants, just remember that everything that depends on K or d is a constant. OK. And to be careful about the letters-- ah, sorry. A prime is a subset of A, and B is the subset of Z mod N that A prime is 8-isomorphic to. So since B is 8-isomorphic to A prime-- if you think about what an 8-isomorphism preserves, you find that 2B minus 2B must be Freiman 2-isomorphic to 2A prime minus 2A prime. So the point of Freiman isomorphism is that we just want to preserve enough additive structure.
Well, we're not trying to preserve all the additive structure-- just enough additive structure to do what we need to do. And being able to preserve an arithmetic progression, or in general a generalized arithmetic progression, requires you to preserve a Freiman 2-isomorphism. And that's where the 8 comes in. I want to analyze 2B minus 2B, and I want that to be a 2-isomorphism, so initially I want B to satisfy an 8-isomorphism. So 2B minus 2B is Freiman 2-isomorphic to 2A prime minus 2A prime. So the GAP, which we found earlier in 2B minus 2B, is mapped via this Freiman isomorphism to a proper GAP, which we'll call Q, now sitting inside 2A minus 2A and preserving the same dimension and size. So Freiman isomorphisms are good for preserving these partial additive structures like GAPs. Yes? AUDIENCE: So are we using this smaller structure to be [INAUDIBLE]? YUFEI ZHAO: Correct. So the question is-- because we have to pass to 2B minus 2B, we want 2B minus 2B to be Freiman 2-isomorphic to 2A prime minus 2A prime. That's why in the proof I want B to be 8-isomorphic to A prime. I'm skipping the details of this step, but if you read the definition of a Freiman s-isomorphism, you see that this implication holds. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: No. 2-isomorphism is a weaker condition. 2-isomorphism just means that you are preserving two-fold sums. So think about the definition of Freiman 2-isomorphisms. In particular, if two sets are Freiman 2-isomorphic and you have an arithmetic progression in one, then that arithmetic progression is also an arithmetic progression in the other. So it's just enough additive structure to preserve things like arithmetic progressions and generalized arithmetic progressions. OK. So we found this large GAP in 2A minus 2A. So this is very good. We wanted to contain A in a GAP, and it seems that by now we're doing something slightly in the opposite direction: we found a large GAP within 2A minus 2A. But, as we've seen before, we're going to use this to boost ourselves to a covering of A via the Ruzsa covering lemma. Once you find this large structure, you can try to take translates of it to cover A. And if there's any takeaway from the spirit of this proof, it's this idea: even though I want to cover the whole set, it's OK-- I just find a large structure within it and then use translates to cover. How do we do this? Since Q is contained in 2A minus 2A, we find that Q plus A is contained in 3A minus 2A. Therefore, by the Plünnecke-Ruzsa inequality, the size of Q plus A is at most the size of 3A minus 2A, which is, at most, K to the fifth power times the size of A. And I claim that this final quantity is also not so different from the size of Q, because-- the point here is, we are doing all of these transformations, passing down to subsets, passing to something bigger, passing to something smaller, but each time we only lose something that is polynomial in K. We're not losing much more-- sometimes it's a bit more than polynomial, but in any case, we're losing only a constant factor at each step. So in particular: N is at least the size of A prime-- here N is the modulus of the cyclic group Z mod N where we ended up embedding-- and A prime is at least a constant fraction of A, and the size of Q is at least 1 over 40 to the d times N. So we can go back to that upper bound on Q plus A from earlier.
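[Aside, not from the lecture: since the Ruzsa covering lemma is about to be applied, here is a sketch of mine of the greedy argument behind it, run on a toy example in the integers; the sets Q and A below are arbitrary choices.]

```python
def ruzsa_cover(A, Q):
    # greedy selection from the proof of the Ruzsa covering lemma:
    # pick a maximal set X of elements of A whose translates x + Q are pairwise disjoint;
    # then |X| <= |A + Q| / |Q|, and A is contained in X + Q - Q
    X, used = [], set()
    for a in sorted(A):
        translate = {a + q for q in Q}
        if translate.isdisjoint(used):
            X.append(a)
            used |= translate
    return X

if __name__ == "__main__":
    Q = set(range(0, 50))                       # an AP, i.e. a 1-dimensional GAP
    A = set(range(0, 1000, 7))                  # some structured set to be covered
    X = ruzsa_cover(A, Q)
    QmQ = {q1 - q2 for q1 in Q for q2 in Q}
    cover = {x + d for x in X for d in QmQ}
    assert A <= cover                           # A is covered by |X| translates of Q - Q
    # compare |X| with the bound |A + Q| / |Q|
    print(len(X), len({a + q for a in A for q in Q}) // len(Q))
```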
We can write that upper bound in terms of the size of Q: the size of Q plus A is at most K prime times the size of Q, where K prime is what you get when you put all of these numbers together. What it is specifically doesn't matter so much, other than that it is a constant-- d is polynomial in K, so what we have here is something that is exponential in a polynomial of K. OK, so now we're in a position to apply the Ruzsa covering lemma. Look at that statement up there. What it is saying is that A plus Q looks like it could be covered by copies of Q, just in terms of size. So I should expect to cover A by a small number of translates of Q minus Q. So by the covering lemma, A is contained in X plus Q minus Q for some X, a subset of A, where the size of X is at most K prime. So we've covered A by something: Q is a GAP, X is a bounded-size set, and I claim that this is the type of object that we're happy to have. Just to spell out some details, first note that X is contained in a GAP of dimension the size of X-- or the size of X minus 1-- with length 2 in each direction. So add a new direction for every element of X. It's wasteful, but everything's constant. And recall that the dimension of Q as a GAP is d. So X plus Q minus Q is contained in a GAP. OK, so what's the dimension? When I take Q minus Q, it's like taking a box and doubling its side lengths-- I'm not changing the number of dimensions. So the dimension of Q minus Q is still d. The dimension coming from X is, at most, the size of X. All of these things are constants. So we're happy. But to spell it out, the constant here is-- well, K prime is what we wrote up there, so this is a constant. And the size-- so what is the size of the GAP that contains this guy here? I'm expanding X to a GAP by adding a new direction for every element of X, and I might expand the size a little bit, but the size of this GAP that contains X is no more than 2 raised to the size of X. What is the size of the GAP Q minus Q? Q is a GAP of dimension d, and we know that a GAP of dimension d has doubling constant at most 2 to the d, so Q minus Q has size at most 2 to the d times the size of Q. OK. And because Q is contained in 2A minus 2A, the size of Q is at most the size of 2A minus 2A, which by Plünnecke-Ruzsa is at most K to the 4 times the size of A. And 2 raised to the size of X-- well, I know what the size of X is bounded by: the size of X is at most K prime. You put everything together, and we find that this bound here is doubly exponential in a polynomial in K. And that's it. This proves Freiman's theorem. Now, to recap, we went through several steps. First, using the modeling lemma, we know that if a set A has small doubling, then we can pass a large part of A to a relatively small cyclic group. We're going to work inside that cyclic group. Using Bogolyubov's lemma and its geometry-of-numbers corollary, we find that, inside the cyclic group, the corresponding set, which we called B, is such that 2B minus 2B contains a large GAP. We pass that GAP back to the original set A-- because we are preserving Freiman 8-isomorphisms in the Ruzsa modeling lemma, we can pass back to the original set A and find a large GAP in 2A minus 2A. Once we find this large GAP in 2A minus 2A, we use the Ruzsa covering lemma to contain A inside a small number of translates of this GAP. You put all of these things together, with the appropriate bounds coming from the Plünnecke-Ruzsa inequality, and you get the final theorem. And this is the proof of Freiman's theorem. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: OK.
The question is how do you make it proper? Up until the step with q, it is still proper. So the very last step over here it is-- you might have destroyed properness. So this proof here doesn't give you properness. So I mentioned at the beginning that in Freiman's theorem, you can obtain properness of the additional arguments. So that I'm not going to show. There's some more work which is related to geometry of numbers. So for example, you can look up in the textbook by Tao and Vu, and see how to get from a GAP to contain it in a proper GAP, without losing too much in terms of size. So think about it this way-- when do you have something which is not proper? When you have-- and if some linear dependence, you have some integer linear dependence. And in that case, you kind of lost a dimension. When you have improperness, you actually go down a dimension. But then you need to salvage the size, make sure that the size doesn't blow up too much. And so there are some arguments to be done there. And we're not going to do it here. AUDIENCE: Well, I guess my [INAUDIBLE] do they change [INAUDIBLE] within the proof, like say [INAUDIBLE] q or whatever? Or do they use the same proof, but later on, say that [INAUDIBLE]? YUFEI ZHAO: OK, good. Yeah, so the question is, to get properness, do I have to modify the proof, or can I use Freiman's theorem as witness of a black box. So my understanding is that I can use the statement as a black box and obtain properness. But if you want to get good bounds, maybe you have to go into the proof, although that, I'm not sure. OK, any more questions? So this took a while. This was the most involved proof we've done in this course so far, in proving Freiman's theorem. We had to develop a large number of tools. And we came up-- so we eventually arrived at-- it's a beautiful theorem. So this is a fantastic result that gives you an inverse structure, something-- we know that GAP's have small doubling. And conversely, if something has a small doubling, it has to, in some sense, look like a GAP. So you see that the proof is quite involved and has a lot of beautiful ideas. In the remainder of today's lecture, I want to present some remarks on additional extensions and generalizations of Freiman's theorem. And while we're not going to do any proofs, there's a lot of deep and beautiful mathematics that are involved in the subject. So I want to take you on a tour through some more things that we can talk about when it comes to Freiman's theorem. But first, let me mention a few things that I mentioned very quickly when we first introduced Freiman's theorem, namely some remarks on the bounds. So the proof that we just saw gives you a bound, which is basically exponential in the dimension and doubly exponential for the size blow-up. They're all constants, so if you only care about constants, then this is just fine. But you may ask, are we losing too much here? What is the right type of dependence? So what is the right type of dependence? So we saw an example. So we saw an example where if you start with A being a highly dissociated set, where there is basically no additive structure within A, then you do need-- so this example shows that you cannot do better than polynomial-- well, actually, than linear in K, in the dimension, and exponential in the size blow-up. So in particular, you do need to blow up the size by some exponential quantity in K. So here, K is roughly the size of A over 2 in this example. 
And you can create modifications of the example to keep K constant and A getting larger. But the point is that you cannot do better than this type of dependence simply from that example. And it's conjecture that that is the truth. We're almost there in proving this conjecture, but not quite, although the proof that we just gave is somewhat far, because you lose an exponent in each bound. There is a refinement of the final step in the argument, so let me comment on that. So we can refine the final step or the final steps in the proof to get polynomial bounds. And to get a polynomial bounds, which is much more in the right ballpark compared to what we got. And the idea is basically over here, we used the Ruzsa covering lemma. So we started with that Q up there. So up until this point, you should think of this step as everything coming from Bogolyubov and its corollary. So that stays the same. And now the question is starting with our Q, what would you use? How would you use this Q to try to cover it? Well, what we do, we apply Ruzsa covering lemma. Remember how the proof of Ruzsa covering lemma goes. You take a maximal set of translates, disjoint translates. And if you blow everything up a factor 2, then you've got a cover. But it turns out to be somewhat wasteful. And you see, there was a lot of waste in going from x to 2 to the x. So you could do that step more slowly. So starting with Q, cover now some, not all of A. So cover parts of A by translates of 2 minus Q, say. So we do Ruzsa covering lemma, you don't cover the whole thing, but nibble away, cover a little bit, and then look at the thing that you get, which is that Q will become some new thing, let's say Q1. And now cover more by Q1 minus Q1. So apparently, if you do the covering step more slowly, you can obtain better bounds. And that's enough to save you this exponent, to go down to polynomial-type bounds for Freiman's theorem. So I'm not giving details, but this is roughly the idea. So you can modify the final step to obtain this bound. The best bound so far is due to Tom Sanders, who proved Freiman's theorem for bounds on dimension that's like K times poly log K, and the size blowup to be E to the K times poly log K. So in other words, other than this polylogarithmic factor, it's basically the right answer. And so this proof is much more sophisticated. So it goes much more in depth into analyzing the structure of set addition. So Sanders has a very nice survey article called "The structure of Set Addition" that analyzes some of the modern techniques that are used to prove these types of results. There is one more issue, which I want to discuss at length in the second half of this lecture, which is that you might be very unhappy with this exponential blowup, because if you think about what happens in these examples-- I mean, not the examples, but if you think about what happens, like the spirit of what we're trying to say, Freiman's theorem is some kind of an inverse theorem. And to go forward, you're trying to say that if you have a GAP of dimension d, then the size blowup is like 2 to the d. So we want to say some structure applies small doubling, and Freiman's theorem tells the reverse, that you have small doubling, then you obtain this structure. And seems like you are losing. Getting from here to here, there is a polynomial type of loss, whereas going from here to here, it seems that we're incurring some exponential type of loss. 
And it'll be nice to have some kind of inverse theorem that also preserves these relationships qualitatively. So that may not make sense in this moment, but we'll get back to it later this lecture. Point is, there's more, much more to be said about the bounds here, even though right now it looks as if they're very close to each other. One more thing that I want to expand on is, we've stated and proved Freiman's theorem in the integers. And you might ask, what about in other groups? We also proved Freiman's theorem in F2 to the m, or more generally, groups of bounded exponent or bounded portion, so abelian groups of bounded exponent. For general abelian groups, so Freiman's theorem in general abelian groups, you might ask what happens here? And in some sense what is even the statement of the theorem? So we want something which combines, somehow, two different types of behavior. On one hand, you have z, which is what we just did. And here the model structures are GAP's. And on the other hand, we have, which we also proved, things like F2 to the m, where the model structures are subgroups. And there's a sense in which these are not the GAP's and subgroups. They have some similar properties, but they're not really like each other. So now if I give you a general group, which might be some combination of infinite torsion or very large torsion elements versus very small torsion elements-- so for example, take a Cartesian product of these groups. Is there a Freiman's theorem? And what does such a theorem look like? What are the structures? What are the subsets of bounded doubling? So that's kind of the thing we want to think about. So it turns out for Freiman's theorem in general abelian groups-- so there is a theorem. So this theorem was proved by Green and Ruzsa. So following a very similar type of proof framework, although the individual steps, in particular the modeling lemma needs to be modified. And let me tell you what the statement is. So the common generalization of GAP's and subgroups is something called a "co-set progression." So a co-set progression is a subset which is a direct sum of the form P plus H, where P is a proper GAP. So the definition of GAP works just fine in every abelian group. You start with the initial point, a few directions, and you look at a grid expansion of those directions. P is a proper GAP, and H is a subgroup. And here, the direct sum refers to the fact that every-- so if P plus H equals to P prime plus H prime for some P and P prime in the set P, and H and H prime in the set H, then P equals to P prime and H equals to H prime. So every element in here is written in a unique way as some P plus some H. So that's what I mean by "direct sum." For such an object, so such a co-set progression, I call its dimension to be the dimension of the GAP, P. And its size in this case, actually, is just the size of the set, which is also the size of P times the size of H. So the theorem is that if A is a subset of an arbitrary abelian group and it has bounded doubling, then A is contained in a co-set progression of bounded dimension and size, bounded blowup of the size of A. And here, these constants D and K are universal. They do not depend on the group. So there are some specific numbers, functions you can write down. They do not depend on the group. So this theorem gives you the characterization of subsets in general abelian groups that have small doubling. Any questions? Yes? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: That's a good question. 
So I think you could go into their paper and see that you can get polynomial type bounds. And I think Sander's results also work for this type of setting to give you these type of bounds. But I-- yes, so you should look into Sanders' paper, and he will explain. I think in Sanders' paper he walks in general abelian groups. The next question I want to address is-- well, what do you think is the next question? Non-abelian groups, so Freiman's theorem in non-abelian groups, or rather the Freiman problem in non-abelian groups. So here's a basic question-- if I give you a non-abelian group, what subsets have bounded doubling? Of course, the examples from abelian groups also work in non-abelian groups, where you have subgroups, you have generalized arithmetical progressions. But are there genuinely new examples of sets in non-abelian groups that have bounded doubling? So think about that, and let's take a quick break. Can you think of examples in non-abelian groups that have small doubling, that do not come from the examples that we have seen before? So let me show you one construction. And this is that important construction for non-abelian groups. So it has a name. It's called a discrete Heisenberg group, which is the matrix group consisting of matrices that look like what I've written. So you have integer entries above the diagonal, 1 on the diagonal, and 0 below the diagonal. So let's do some elementary matrix multiplication to see how group multiplication in this group works. So if I have two such matrices, I multiply them together. And then you see that the diagonal is preserved, of course. But this entry over here is simply addition. So this entry here is just addition. This entry over here is also addition. And the top right entry is a bit more complicated. It's some addition, but there's an additional twist. So this is how matrix multiplication works in this group. I mean, this is how matrix multiplication works, but in terms of elements of this group, that's what happens. So you see it's kind of like an abelian group, but there's an extra twist, so it's almost abelian, so the first step you can take away from abelian. And there's a way to quantify this notion. It's called "nilpotency." And we'll get to that in a second. But in particular, if you set S to be the following generators-- so if you take S to be these four elements, and you ask what does the r-th power of S look like, so I look at all the elements which can be written by r or at most r elements from S, what do these elements look like? What do you think? So if you look at elements in here, how large can this entry, the 1, comma, 2 entry be? r. So each time you do addition, so it's at most r. So let me be a bit rough here, and say it's big O of r. And likewise, the 2, 1, 2, 3, entry is also big O of r. What about the top right entry over here? So it grows like r squared, because there is an extra multiplication term. So you can be much more precise about the growth rate of these individual entries. But very roughly, it looks like this ball over here. So the size of S, the r-th ball of S, is roughly, it's on the order of 4th power of r. So in particular, the doubling constant, if r is reasonably large, is what? What happens when we go from r to 2r? The size increases by a factor of around 16. So that's an example of a set in a non-abelian group with bounded doubling, which is genuinely different from the examples we have seen so far. So that's non-abelian. Yeah. 
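[Aside, not from the lecture: to see the r^4 growth numerically, here is a sketch of mine implementing the group law just described-- an element is stored by its three above-diagonal entries-- and computing ball sizes for the four generators.]

```python
def mul(g, h):
    # multiply two elements of the discrete Heisenberg group; (a, b, c) stands for
    # the matrix [[1, a, c], [0, 1, b], [0, 0, 1]], and the product rule is
    # (a, b, c) * (a', b', c') = (a + a', b + b', c + c' + a * b')
    a, b, c = g
    a2, b2, c2 = h
    return (a + a2, b + b2, c + c2 + a * b2)

def ball_sizes(S, radius):
    # sizes of the word-metric balls of radius 1, 2, ..., radius (identity included)
    ball = {(0, 0, 0)} | set(S)
    sizes = [len(ball)]
    for _ in range(radius - 1):
        ball = ball | {mul(g, s) for g in ball for s in S}
        sizes.append(len(ball))
    return sizes

if __name__ == "__main__":
    S = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0)]   # the four generators
    sizes = ball_sizes(S, 12)
    print(sizes)
    # the ratio sizes(2r)/sizes(r) tends to 2^4 = 16 as r grows,
    # consistent with ball volume growing like r^4
    for r in (3, 4, 5, 6):
        print(r, round(sizes[2 * r - 1] / sizes[r - 1], 2))
```

For comparison, the same experiment in Z^3 would give ratios tending to 2^3 = 8; the extra power of r comes from the top-right entry growing quadratically.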
AUDIENCE: [INAUDIBLE] YUFEI ZHAO: The question is, is the size-- we've shown the size is-- I'm not being very precise here, but you can do upper bound and lower bound. So size turns out to be the order of r to the 4. So you want to show that there are actually enough elements over here that you can fill in, but I'll leave that to you. Can you build other examples like this one? Yeah. AUDIENCE: How do we know that this isn't similar to a co-set, the direct sum [INAUDIBLE]?? YUFEI ZHAO: Question is, how do we know this isn't like a co-set sum or a co-set progression? For one thing, this is not abelian. S, if you multiply entries of S in different orders, you get different elements. So already in that way, it's different from the examples that we have seen before. But no, you're right. So maybe we can write this a semi-direct product in terms of things we have seen before. And it is, in some sense, a semi-direct product, but it's a very special kind of semi-direct product. From that example, you can build bigger examples, of course with more entries in the matrix. But more generally, these things are what are known as "nilpotent groups." So that's an example of a nilpotent group. And to remind you, the definition of a nilpotent group is a group where the lower central series eventually terminates. In particular, inside that if you look at-- so this is the commutator of G, so look at all the elements that we recognize x, y, x inverse, y inverse-- the set of elements that can be written this way. So that's a subgroup. And if I repeat this operation enough times, I eventually would get just the identity. And you could trade on that group. If you do the commutator, so essentially you get rid of abelian-ness and you move up the whole diagonal, you create a commutator, you'd get rid of these-- all these two entries. So you get z alone. If you do it one more time, you zero out that entry. And so more generally, all of these nilpotent groups have this phenomenon, have the polynomial growth phenomenon. So if you take a set of generators and look at a ball, and look at the volume of the ball, how does the volume of the ball grow with the radius? It grows like a polynomial. And so let me define that. So given G, a finitely generated group, so generated by set S, we say that G has polynomial growth if the size S to the r grows like at most a polynomial in r. It's worth noting that this definition is really a definition about G. It does not depend on the choice of generators. You can have different choices, generators for the group. But if it has polynomial growth with respect to one set of generators, then it's the same. It also has polynomial growth with regards to every other set. So we've seen an example of groups with polynomial growth. Abelian groups have polynomial growth. So if you think of polynomial growth, think lattice or z to the m. So if you take a ball growing, so it has size growing like r to the dimension. But nilpotent groups is another example of groups with polynomial growth. And these are, intuitively at least for now, related to bounded doubling. If it's polynomial growth, then it has bounded doubling. So is there a classification of groups with bounded-- with polynomial growth? So if I tell you a group-- so an infinite group always, because otherwise if finite, then it maxes out already at some point. So I give you an infinite group. I tell you it has polynomial growth. What can you tell me about this group? Is there some characterization that's an inverse of what we've seen so far? 
And the answer is yes. And this is a famous and deep result of Gromov. So Gromov's theorem on groups of polynomial growth from the '80s. Gromov showed that a finitely generated group has polynomial growth if and only if it's virtually nilpotent, where "virtually" is an adverb in group theory where you have some property like "abelian," or "solvable," or whatever. So virtually P means that there exists a finite index subgroup with property P. So "virtually nilpotent" means there is a finite index subgroup that is nilpotent. So it completely characterizes groups of polynomial growth. So basically, all the examples we've seen so far are representative, so up to changing by a finite index subgroup, which as you would expect, shouldn't change the growth nature by so much. There are some analogies to be made here with, for example in geometry, you ask in Euclidean space, how fast is the ball of radius r growing? In dimension d, it grows like r to the d. What about in the hyperbolic space? Does anyone know how fast, in a hyperbolic space, a ball of radius r grows? It's exponential in the radius. So for non-negatively curved spaces, the balls grow polynomially. But for something that's negatively curvatured, in particular the hyperbolic space, the ball growth might be exponential. You have a similar phenomenon happening here. The opposite of polynomial growth is, well, super polynomial growth, but one specific example is that of a free group, where there are no relations between the generators. In that case, the balls, they grow like exponentially. So the balls grow exponentially in the radius. Gromov's theorem is a deep theorem. And its original proof used some very hard tools coming from geometry. And Gromov developed a notion of convergence of metric spaces, somewhat akin to our discussion of graph limits. So starting with discrete objects, he looked at some convergence to some continuous objects, and then used some very deep results from the classification of locally compact groups to derive this result over here. So this proof has been quite influential, and is related to something called "Hilbert's fifth problem, which concerns characterizations of Lie groups. So all of these are inverse-type problems. I tell you some structure has some property. Describe that structure. What does this all have to do with Freiman's theorem? Already you see some relation. So there seems, at least intuitively, some relationship between groups of polynomial growth versus subsets of bounded doubling. One implies the other, although not in the converse. And they are indeed related. And this comes out of some very recent work. I should also mention that Gromov's theorem has been made simplified by Kleiner, who gave an important simplification, a more elementary proof of Gromov's theorem. So let's talk about the non-abelian version of Freiman's theorem. We would like some result that says that is it true that every set, most every set of-- so previously, we had small doubling. You want to have some similar notion, although it may not be exactly small doubling, but let me not be very precise and to say, "small doubling." In literature, these things are sometimes also known as "approximate groups." So if you look this up, you will get to the relevant literature on the subject. Most every set of small doubling in some non-abelian group behaves like one of these known examples, something which is some combination of subgroups and nilpotent balls. So these combinations are sometimes known as "co-set nilprogressions." 
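[Aside, not from the lecture: a tiny numerical contrast between the two growth regimes mentioned above-- balls in Z^2 grow polynomially, like r^2, while balls in the free group on two generators grow exponentially. For the free group I use the standard closed-form count of reduced words rather than enumerating them.]

```python
def ball_in_Z2(r):
    # |B(r)| for Z^2 with generators (+-1, 0), (0, +-1): the l1 ball, of size 2r^2 + 2r + 1
    return sum(1 for x in range(-r, r + 1) for y in range(-r, r + 1) if abs(x) + abs(y) <= r)

def ball_in_free_group(r):
    # |B(r)| for the free group on two generators: reduced words of length <= r,
    # namely 1 + sum over l >= 1 of 4 * 3^(l-1), which equals 2 * 3^r - 1
    return 2 * 3**r - 1

if __name__ == "__main__":
    for r in (1, 2, 4, 8, 16):
        print(r, ball_in_Z2(r), ball_in_free_group(r))
    # polynomial (~2r^2) versus exponential (~2 * 3^r) growth
```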
So this was something that was only explored in the past 10 years or so in a series of very difficult works. Previously, it had been known, and still was being investigated for various special classes of matrix groups or special classes of groups like solvable groups and whatnot, that are more explicit or easier to handle or closer to the abelian analog. There was important work of Hrushovski, which was published about 10 years ago, who showed using model theory techniques, so using methods from logic, that a weak version of Freiman's theorem is true for non-abelian groups. And later on, Breuillard, Green, and Tao, building on Hrushovski's work-- so this actually came quite a bit later, even though the journal publication dates are the same year-- were able to build on Hrushovski's work, greatly expand on it, and go back to some of the older techniques coming from Hilbert's fifth problem, and as a result, proved an inverse structure theorem that gave some kind of answer to this question of non-abelian Freiman. So we now do have some theorem which is like Freiman's theorem for abelian groups that says in a non-abelian group, if you have something that resembles small doubling, then the set must, in some sense, look like a combination of subgroups and nilpotent balls. But let me not be precise at all. The methods here build on Hrushovski. And Hrushovski used model theory, which is kind of-- it's something where-- in particular, one feature of all of these proofs is that they give no bounds. Similar to what we've seen earlier in the course, in proofs that involved compactness, what happens here is that the arguments use ultrafilters. So there are these constructions from mathematical logic. And like compactness, they give no bounds. So it remains an open problem to prove Freiman's theorem for non-abelian groups with some concrete bounds. Question. AUDIENCE: [INAUDIBLE] nilpotent ball? YUFEI ZHAO: What is a nilpotent ball? I don't want to give a precise definition, but roughly speaking, it's balls that come out of those types of constructions. So you take a nilpotent group. You look at an image of a nilpotent group into your group, and then look at the image of that ball, so something that looks like one of the previous constructions. So that's all I want to say about non-abelian extensions of Freiman's theorem. Any questions? AUDIENCE: Would you say one more time what you mean by "approximate group?" YUFEI ZHAO: So what I mean by-- you can look in the papers and see the precise definitions, but roughly speaking-- there are different kinds of definitions and most of them are equivalent. But one version is that you have a set A such that A times A is coverable by K translates of A, so it's a bit more than just the size information, but it's actually related to size information. So we've already seen in this course how many of these different notions can go back and forth from one to the other, covering to size, and whatnot. The final thing I want to discuss today is one of the most central open problems in additive combinatorics going back to the abelian version. So this is known as the "polynomial Freiman-Ruzsa conjecture." So we would like some kind of a Freiman theorem that preserves the constants up to polynomial changes without losing an exponent. Now, from earlier discussions, I showed you that the bounds that we almost proved are close to the truth. You do need some kind of exponential loss in the blowup size of the GAP.
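To make the covering condition concrete (a toy example of mine, in the abelian group Z rather than a general group): for an interval A, the sumset A + A is covered by just two translates of A, which is the sense in which an interval behaves like a 2-approximate group.

```python
def covered_by_translates(A, translates):
    """Check that A + A is contained in the union of t + A over the given translates."""
    A_plus_A = {a + b for a in A for b in A}
    union = {t + a for t in translates for a in A}
    return A_plus_A <= union

A = set(range(100))                        # an interval
print(covered_by_translates(A, [0, 99]))   # True: two translates suffice
```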
But it turns out those kind of examples are slightly misleading. So let's look at the examples of the constructions again. So if A-- so just for simplicity in exposition, I'm going to stick with F2 to the n, at least initially. So if A is an independent set of size n, then K, being the doubling constant of A, is roughly like n over 2. And yet the subgroup that contains A has size 2 to the something on the order of K times A. So you necessarily incur an exponential loss over here. Now, you might complain that the size of A here is basically K. But of course, I can blow up this example by considering what happens if you take each element here, and blow it up into an entire subspace. So the e's are the coordinate vectors. So now I'm sitting inside F2 to the m plus n. And that gives me this set. The doubling constant is still the same as before. And yet, we see that the subgroup generated by A still has this exponential blowup in this constant, exponential in the doubling constant. But now you see in this example here, even though the subgroup generated by A can be much larger than A, so everything's still constant, so much larger in terms of as a function of the doubling constant, A has a very large structure. So A contains a very large subspace. By "subspace," I mean affine subspace. And the subspace here is comparable to the size of A itself. So you might wonder, if you don't care about containing A inside a single subspace, can you do much better in terms of bounds? And that's the content of the polynomial Freiman-Ruzsa conjecture. The PFR conjecture for F2 to the m says that if you have a subset of F2 to the m and A plus A is size at most K times the size of A, then there exists a subspace V of size at most A such that V contains a large proportion of A. And the large here-- we only lose something that is polynomial in these doubling constants. So that's the case. It's over here. So instead of containing A inside an entire subspace, I just want to contain a large fraction of A in a subspace. And the conjecture is that I do not need to incur exponential losses in the constants. AUDIENCE: So V is an affine subspace? YUFEI ZHAO: V is-- question is, V is an affine subspace. You can think of V as an affine subspace. You can think of V as a subspace. It doesn't actually matter in this formulation. There's an equivalent formulation which you might like better, where you might complain, initially, PFR is initially-- Freiman's theorem is about covering A. And now we've only covered a part of A. But of course, we saw from earlier arguments, you can use Ruzsa's covering lemma to go from covering a part of A to covering all of A. Indeed, it this the case that this formulation is equivalent to the formulation that if A is in F to the n and A plus A size at most K times A, then there exists some subspace V with the size of V no larger than the size of A, such that A can be covered by polynomial in K many co-sets of V. We see that here. Here A has doubling constant K, which is around the same as n. And even though I cannot contain A by a single subspace of roughly the same size, I can use K different translates to cover A. Any questions? So I want to leave it to you as an exercise to prove that these two versions are equivalent to each other. It's not too hard. It's something if I had more time, I would show you. It uses Ruzsa covering lemma to prove this equivalence. 
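Here is a small sketch (parameters made up by me) of the two examples just described, with vectors in F2 to the n encoded as bitmasks, so that addition is XOR.

```python
def sumset_f2(A, B):
    """Sumset in F_2^n, with vectors encoded as integer bitmasks (addition is XOR)."""
    return {a ^ b for a in A for b in B}

n, m = 8, 6
# Example 1: A = {e_1, ..., e_n}, an independent set.
A = {1 << i for i in range(n)}
print(len(A), len(sumset_f2(A, A)))     # doubling constant is about n / 2 ...
print(2 ** n)                           # ... but the subgroup generated by A is huge

# Example 2: blow each e_i up into a coset of a fixed m-dimensional subspace W.
W = {w << n for w in range(2 ** m)}     # W sits in the top m coordinates
A2 = {(1 << i) ^ w for i in range(n) for w in W}
print(len(A2), len(sumset_f2(A2, A2)))  # same doubling constant as before
# A2 contains whole cosets of W of size 2^m = |A2| / n, so a large structured
# piece survives after only a polynomial-in-K loss, which is what PFR predicts.
```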
The nice thing about the-- so the polynomial Freiman-Ruzsa conjecture, PFR conjecture, is considered a central conjecture in additive combinatorics, because it has many equivalent formulations and relates to many problems that are central to the subject. So we would like some kind of an inverse theorem that gives you these polynomial bounds. And I'll mention a couple of these equivalent formulations. Here is an equivalent formulation which is rather attractive, where instead of considering subsets, we're going to formulate something that has to do with approximate homomorphisms. So the statement still conjecture is that if F is a function from a Boolean space to another Boolean space is such that F is approximately a homomorphism in the sense that the set of possible errors-- so if it's actually a homomorphism, then this quantity is always equal to 0-- but it's approximately a homomorphism in the sense that the set of such errors is bounded by K in size, the conclusion, the conjecture claims that then there exists an actual homomorphism, an actual linear map G, such that F is very close to G, as in that the set of possible discrepancies between F and G is bounded, where you only lose at most a polynomial in K. So if you are an approximate homomorphism in this sense, then you are actually very close to an actual linear map. Now, it is not too hard to prove a much quantitatively weaker version of this statement. So I claim that it is trivial to show upper bound of at most 2 to the K over here. So think about that. So if I give you an F, I can just think about what the values of F are on the basis, and extend it to a linear map. Then this set is necessarily a span of that set, so has size at most 2 to the K. But it's open to show you only have to lose a polynomial in K. There is also a version of the polynomial Freiman-Ruzsa conjecture which is related to things we've discussed earlier regarding Szemeredi's theorem. And in fact, the polynomial Freiman-Ruzsa conjecture kind of came back into popularity partly because of Gowers' proof of Szemeredi's theorem that used many of these tools. So let me state it here. So we've seen some statement like this in an earlier lecture, but not very precisely or not precisely in this form. And I won't define for you all the notation here, but hopefully, you get a rough sense of what it's about. So we want some kind of an inverse statement for what's known as a "quadratic uniformity norm," "quadratic Gowers' uniformity norm." So recall back to our discussion of the proof of Roth's theorem, the Fourier analytic proof of Roth's theorem. We want to say that-- but now think about not three APs, but four APs. So we want to know if you have a function F on the Boolean cube, and this function is 1 bounded, and-- I'm going to write down some notation, which we are not going to define-- but the Gowers' u3 norm is at least some delta. So this is something which is related to 4 AP counts. So in particular, if this number is small, then you have a counting lemma for four-term arithmetic progressions. If this is true, then there exists a quadratic polynomial q in n variables over F2 such that your function F correlates with this quadratic exponential in q. And the correlation here is something where you only lose a polynomial in the parameters. So previously, I quoted something where you lose something that's only a constant in delta, and that is true. That is known. But we believe, so it's conjecture, that you only lose a polynomial in these parameters. 
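The trivial 2-to-the-K argument for approximate homomorphisms is easy to experiment with; in this sketch (my own, with a randomly chosen linear map corrupted at a few inputs) the error set is computed directly, and the repair is the linear map obtained by extending f's values on the basis.

```python
import random

n, m = 8, 6
true_cols = [random.randrange(2 ** m) for _ in range(n)]

def linear(cols, x):
    """The F_2-linear map sending the basis vector e_i to cols[i]."""
    out = 0
    for i in range(n):
        if (x >> i) & 1:
            out ^= cols[i]
    return out

noise = {random.randrange(1, 2 ** n): random.randrange(1, 2 ** m) for _ in range(3)}

def f(x):
    # A linear map corrupted at a few inputs: an approximate homomorphism.
    return linear(true_cols, x) ^ noise.get(x, 0)

errors = {f(x ^ y) ^ f(x) ^ f(y) for x in range(2 ** n) for y in range(2 ** n)}
print("K =", len(errors))

# Repair: extend f's values on the basis vectors to a genuine linear map g.
g_cols = [f(1 << i) for i in range(n)]
diffs = {f(x) ^ linear(g_cols, x) for x in range(2 ** n)}
print(len(diffs), "<=", 2 ** len(errors))   # the difference set lies in the span of the errors
```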
So this type of statement-- remember, in our proof of Roth's theorem, something like this came up. So something like this came up as a crucial step in the proof of Roth's theorem. If you have something where you look at counting lemma, and you exhibit something like this, then you can exhibit a large Fourier character. And in higher order Fourier analysis, something like this corresponds to having a large Fourier transform. It turns out that all of these formulations of polynomial Freiman-Ruzsa conjecture are equivalent to each other. And they're all equivalent in a very quantitative sense, so up to polynomial changes in the bounds. So in particular, if you can prove some bound for some version, then that automatically leads to bounds for the other versions. The proof of equivalences is not trivial, but it's also not too complicated. It takes some work, but it's not too complicated. The best bounds for the polynomial Freiman-Ruzsa conjecture, and hence for all of these versions, is again due to Tom Sanders. And he proved a version of PFR with quasi-polynomial bounds, where by "quasi-polynomial bounds," I mean, for instance over here, instead of K. He proved it for something which is like e to the poly log K, so like K to the log K, but K to the poly log K. So it's almost polynomial, but not quite there. And it's considered a central open problem to better understand the polynomial Freiman-Ruzsa conjecture. And we believe that this is something that could lead to a lot of important new tools and techniques that are relevant to the rest of additive combinatorics. Yeah. AUDIENCE: Using the fact that all of these are equivalent, is it possible to get a proof of Freiman's theorem using the bound of 2 to the K to be approximate [INAUDIBLE]?? YUFEI ZHAO: OK, so the question is, we know that that up there has 2 to the K, so you're asking can you use this 2 to the K to get some bound for polynomial, for something like this? And the answer is yes. So you can use that proof to go through some proofs and get here. I don't remember how this equivalence goes, but remember that the proof of Freiman's theorem for F2 to the n wasn't so hard. So we didn't use very many tools. Unfortunately, I don't have time to tell you the formulations of polynomial Freiman-Ruzsa conjecture over the integers, and also over arbitrary abelian groups. But there are formulations over the integers, and that's one that people care just as much about. And there are also different equivalent versions, but things are a bit nicer in the Boolean case. Yeah. AUDIENCE: You said [INAUDIBLE]? YUFEI ZHAO: I'm sorry, can you repeat the question? AUDIENCE: [INAUDIBLE]. Yeah, what does that mean? YUFEI ZHAO: Are you asking what does this mean? AUDIENCE: Yeah. YUFEI ZHAO: So this is what's called a "Gowers' uniformity norm." So something I encourage you to look up. In fact, there is an unassigned problem in the problem set that's related to the Gowers' uniformity norm before you U2, which just relates to Fourier analysis. But U3 is related to 4 AP's and quadratic Fourier analysis.
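Since the U2 norm came up, here is a short check (mine) on a random function on F2 to the 4 that the defining average for the fourth power of the U2 norm agrees with the sum of fourth powers of the Fourier coefficients, which is the sense in which U2 "just relates to Fourier analysis."

```python
import random

n = 4
N = 2 ** n
f = [random.uniform(-1, 1) for _ in range(N)]   # a 1-bounded function on F_2^n

# Definition: ||f||_{U^2}^4 = E_{x,a,b} f(x) f(x+a) f(x+b) f(x+a+b)
u2_fourth = sum(f[x] * f[x ^ a] * f[x ^ b] * f[x ^ a ^ b]
                for x in range(N) for a in range(N) for b in range(N)) / N ** 3

# Fourier side: ||f||_{U^2}^4 = sum_r fhat(r)^4
def fhat(r):
    return sum(f[x] * (-1) ** bin(x & r).count("1") for x in range(N)) / N

print(u2_fourth, sum(fhat(r) ** 4 for r in range(N)))   # the two numbers agree
```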
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
25_Structure_of_set_addition_V_additive_energy_and_BalogSzemerédiGowers_theorem.txt
YUFEI ZHAO: For the past few lectures, we've been discussing the structure of set addition, and which culminated in the proof of Freiman's theorem. So this was a pretty big and central result in additive combinatorics, which gives you a complete characterization of sets with small doubling. Today, I want to look at a somewhat different issue also related to sets of small doubling, but this time we want to have a somewhat different characterization of what does it mean for a set to have lots of additive structure. So in today's lecture, we're always going to be working in an Abelian group. Let me define the following quantity. Given sets A and B, we define the additive energy between A and B to be denoted by E of A and B. So A and B are subgroups. They're subsets of this arbitrary Abelian group. So E of A and B is defined to be the number of quadruples, a1, a2, b1, b2, where a1, a2 are elements of A, and b1, b2 are elements of B, such that a1 plus b1 equals to a2 plus b2. So the additive energy is the number of quadruples of these elements where you have this additive relation. And we would like to understand sets with large additive energy. So, intuitively, if you have lots of solutions to this equation in your sets, then the sets themselves should have lots of internal additive structure. So it's a different way of describing additive structure, and we'd like to understand how does this way of describing additive structure relate to things we've seen before, namely small doubling. When you have not two sets but just one set-- slightly easier to think about-- we just write E of A. I mean E of A comma A. And these objects are analogous to 4 cycles in graph theory. Because if you about this expression here in a Cayley graph, let's say over F2, then this is the description of a 4 cycle. You go around 4 steps, and you come back to where you started from. So these objects are the analogs of 4 cycles. And we already saw in our discussion of quasi-randomness, and also elsewhere, that 4 cycles play an important role in graph theory. And, likewise, these additive energies are going to play an important role in describing sets with additive structure. Consider the following quantity. We're going to let r sub A comma B of x to be the number of ways to write x as a plus b. So x equals to a plus b. So r sub A comma B of x is the number of ways I can write x as a plus b, where a comes from big A, little b comes from big B. Then, reinterpreting the formula up there, we see that the additive energy between two sets A and B is simply the sum of the squares of A-- r sub A comma B. As x ranges over all elements of the group, we only need to take x in the sumset A plus B. So the basic question, like when we discussed additive combinatorics, in the sense of when we discussed sets of small doubling, there we asked, if you have a set A of a certain size, how big can a plus a be? Here, let's ask the same. If I give you set A of a certain size, how big or how small can the additive energy of the set be? What's the most number of possible number of additive quadruples. What's the least possible number of additive quadruples? There's some trivial bounds, just like in the case of sumsets. So what are some trivial bounds? On one hand, by taking a1 equal to a2, and b2 equal to b2, we see that the energy is always at least the square of the size of A. On the other hand, if I fix three of the four elements, then the fourth element is determined. So the upper bound is cube of the size of A. 
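Here is a direct computation of additive energy (a sketch of mine, with arbitrary example sets): an arithmetic progression sits near the cubic upper bound, while a spread-out random set of the same size sits near the quadratic lower bound.

```python
import random
from collections import Counter

def additive_energy(A, B):
    """Number of quadruples (a1, a2, b1, b2) with a1 + b1 = a2 + b2."""
    r = Counter(a + b for a in A for b in B)          # r_{A,B}(x) for each x in A + B
    return sum(mult * mult for mult in r.values())    # E(A, B) = sum of r^2

N = 200
ap = set(range(N))                              # an arithmetic progression
rnd = set(random.sample(range(N ** 2), N))      # a spread-out random set

print(additive_energy(ap, ap), N ** 3)          # same order as the maximum, N^3
print(additive_energy(rnd, rnd), N ** 2)        # same order as the minimum, N^2
```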
And you can convince yourself that, except maybe up to constant factors, this is the best possible general upper and lower bound. Similar situation with sumsets, where you have lower bound linear, upper bound quadratic. Which is the side with additive structure? So if you have lots of additive structure, you have high energy. So this range is when you have lots of additive structure. And we would like to understand, what can you say about a set with high additive energy? Well, what are some examples of sets with high additive energy? It turns out that if you have a set that has small doubling, then, automatically, it implies large additive energy. So, in particular, intervals, or GAPs, or large subsets of GAPs, or all these examples that we saw-- in fact, these are all the examples coming from Freiman's theorem. Also, in arbitrary groups, you can have subgroups. And so all of these examples have large additive energy. So let me-- I'll show you the proof just in a second. It's not hard. But the real question is, what about the converse? So can you say much in the reverse direction? But, first, let me show you this claim that small doubling implies large additive energy. Well, if you have small doubling, if A plus A has size, at most, k times the size of A, then it turns out the additive energy of A is at least A cubed divided by k. So that's within a constant factor of the maximum. It's pretty large. If you have small doubling, then large additive energy. So let's see the proof. So you can often tell how hard a proof is by how simple the statement is, although that's not always the case, as we've seen with some of our theorems, like Plunnecke's inequality. But in this case, it turns out to be fairly simple. So we see that r sub A comma A is supported on A plus A. So we use Cauchy-Schwarz to write-- so, first, we write additive energy in terms of the sum of the squares of these r's. And now, by Cauchy-Schwarz, we can replace the sum of the squared r's by the square of the sum of the r's. But now the key point here is that we divide by this factor coming from Cauchy-Schwarz, which is only the size of A plus A. So if the support size is small, we gain in this step. But what is the sum of r's? I mean, r of x is just the number of ways to write x as little a1 plus little a2. So if I sum over all x, we're just looking at ways of picking an ordered pair from A. So this last expression is equal to the size of A to the power 4 divided by the size of A plus A. And now we use that A has small doubling to conclude that the final quantity is at least A cubed divided by k. So we see small doubling implies large additive energy. And this kind of makes sense. If your set doesn't expand, then there are many collisions of sums. And so you must have lots of solutions to that equation up there. But what about the converse? If I give you a set with large additive energy, must it necessarily have small doubling? Oh. Let me show you an example. So, well-- so a large additive energy, does it imply small doubling? So consider the following example, where you take a set A which is a union of a set with small doubling-- say an interval-- plus a bunch of elements without additive structure. So I take a set with small doubling plus a bunch of elements without additive structure. Then it has large additive energy, just coming from this interval itself. So the energy of A is order N cubed, where N is the number of elements. What about A plus A?
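The Cauchy-Schwarz bound and the interval-plus-random counterexample are both easy to see numerically; this sketch (mine, with made-up sizes) builds such a set and checks that the energy is cubic while the sumset is quadratic, and that the inequality E(A) >= |A|^4 / |A+A| still holds.

```python
import random
from collections import Counter

def energy(A):
    r = Counter(a + b for a in A for b in A)
    return sum(m * m for m in r.values())

N = 300
interval = set(range(N))                                       # the structured half
junk = set(random.sample(range(10 * N ** 2, 11 * N ** 2), N))  # no additive structure
A = interval | junk

E = energy(A)
S = len({a + b for a in A for b in A})
print(E, len(A) ** 3)               # energy is of order |A|^3 (large energy) ...
print(S, len(A) ** 2)               # ... yet the sumset has quadratic size
print(E >= len(A) ** 4 / S)         # Cauchy-Schwarz bound, always True
```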
Well, for A plus A, this part doesn't-- that's the part that contributes, or the part of this A without additive structure. And we see that the size of A plus A is quadratic in the size of A. So, unfortunately, the converse fails. So you can have sets that have large additive energy and also large doubling. But, you see, the reason why this has large additive energy is because there is a very highly structured additively structured piece of it. And, somehow, we want to forget about this extra garbage. And that's part of the reason why the converse is not true. So we would like a statement that says that if you have large additive energy, then it must come from some highly structured piece that has small doubling. And that is true, and that's the content of the Balog-Szemeredi-Gowers theorem, which is the main topic today. So the Balog-Szemeredi-Gowers theorem says that if you have a set-- so we're working always in some arbitrary Abelian group. If you have a set with large energy, then there exists some subset A prime of A such that A prime is a fairly large proportion of A. And here, by large I mean up to polynomial changes in the error parameters. So this A prime is such that A prime has small doubling. If you have large additive energy, then I can pick out a large piece with small doubling constant, and I only loose a polynomial in the error factors. So that's the Balog-Szemeredi-Gowers theorem, and it describes this example up here. Any questions about the statement? So what I will actually show you is a slight variant, actually a more general statement, where, instead of having one set, we're going to have two sets. So here's Balog-Szemeredi-Gowers theorem version 2, where now we have two sets. Again, A and B are-- I'm not going to write any-- I'm not going to write it in this lecture, but A and B are always subsets of some arbitrary Abelian group. So A and B both have size of, at most, n, and the energy between A and B is large. Then there exists a subset A prime of A, B prime of B such that both A prime and B prime are large fractions of their parent set, and such that A prime plus B prime is not too much bigger than n. It's not so obvious why the second version implies the first version. So you can say, well, take A and B to be the same. But then the conclusion gives you possibly two different subsets, A prime and B prime. But the first version, I only want one subset that has small doubling. So, fortunately, the second version does imply the first version. So let's see why. The second version implies the first version because, if we-- so there's a tool that we introduced early on when we discussed Freiman's theorem, and this is the Ruzsa triangle inequality. So the spirit of Ruzsa triangle inequality is it allows you to relate, to sort of go back and forth between different sumsets in different sets. So by Ruzsa triangle inequality, if we apply the second version with A equals to B, then-- and we pick out this A prime and B prime, then we see that A prime plus A prime is, at most, A prime plus B prime squared over B prime. Well, actually, this uses the-- vice versa it uses a slightly stronger version that we had to use Plunnecke-Ruzsa key lemma to prove. But you can come up-- I mean, if you don't care about the precise loss in the polynomial factors, you can also use the basic Ruzsa triangle inequality to deduce a similar statement. This is easier to deduce. So you have that. 
And now, the second version tells you that the numerator is, at most, poly kn, and the denominator is, at most-- at least, n divided by poly k. Remember, over here, to get this hypothesis, we automatically have that the size of A and B are not two much smaller than n. Or else this cannot be true. So putting all these estimates together, we get that. So these two versions, they are equivalent to each other. Second version implies the first. The second one is stronger. The first one is slightly more useful. They're not necessarily equivalent, but the second one is stronger. Any questions? All right. So this is a Balog-Szemeredi-Gowers theorem. So the content of today's lecture is to show you how to prove this theorem. A remark about the naming of this theorem. So you might notice that these three letters do not coming in alphabetical order. And the reason is that this theorem was initially approved by Balog and Szemeredi, but using a more involved method that didn't give polynomial high bounds. And Gowers, in his proof of Szemeredi's theorem, his new proof of Szemeredi's theorem with good bounds, he required-- well, he looked into this theorem and gave a new proof that resulted in this polynomial type bounds. And it is that idea that we're going to see today. So this course is called graph theory and additive combinatorics. And the last two topics of this course-- today being Balog-Szemeredi-Gowers, and tomorrow we're going to see sum-product problem-- are both great examples of problems in additive combinatorics where tools from graph theory play an important role in their solutions. So it's a nice combination of the subject where we see both topics at the same time. So I want to show you the proof of Balog-Szemeredi-Gowers, and the proof goes via a graph analog. So I'm going to state for you a graphical version of the Balog-Szemeredi-Gowers theorem. And it goes like this. If G is a bipartite graph between vertex sets A and B-- and here A and B are still subsets of the Abelian group-- we define this restricted sumset, A plus sub G of B, to be the set of sums where I'm only taking sums across edges in g. So, in particular, if G is the complete bipartite graph, then this is the usual sumset. But now I may allow G to be a subset of the complete bipartite graph. So only taking some but not all of the-- only taking-- yes, some of this sums but not all of them. The graphical version of Balog-Szemeredi-Gowers says that if you have A and B be subsets of an Abelian group, both having size, at most, n, and G is a bipartite graph between A and B, such that G has lots of edges, has at least n squared over k edges. If the restricted sumset between A and B is small-- So here we're not looking at all the sums but a large fraction of the possible pairwise sums. If that sumset has small size, this is kind of like a restricted doubling constant. Then there exists A prime, subset of A, B prime, subset of B, with A prime and B prime both fairly large fractions of their parent set, and such that the unrestricted sumset between A prime and B prime is not too large. So let me say it again. So we have a fairly dense-- so a constant fraction edge density, a fairly dense bipartite graph between A and B. A and B are subsets of the Abelian group. Then-- and such that the restricted sumset is small. Then I can restrict A and B to subsets, fairly large subsets, so that the complete sumset between the subsets A prime and B prime is small. 
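The restricted sumset is straightforward to compute; a minimal sketch (mine, with an arbitrary subgraph), shown next to the full sumset for comparison:

```python
def restricted_sumset(G):
    """A +_G B: sums a + b taken only over the edges (a, b) of the bipartite graph G."""
    return {a + b for (a, b) in G}

A = set(range(10))
B = set(range(0, 100, 10))
G = {(a, b) for a in A for b in B if (a + b) % 3 == 0}   # keep only some of the pairs
print(len(restricted_sumset(G)), len({a + b for a in A for b in B}))
```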
Let me show you why the graphical version of BSG implies the version of BSG I stated up there. But, so why do we care about this graphical version? Well, suppose we-- so we have all of these hypotheses. Let's write-- so we have all of those hypotheses up there. So let's write r to be r sub A comma B, so I don't have to carry the subscripts all around. What do you think-- so I start with A and B up there, and I need to construct that graph G. So what should we choose as our graph? Let's consider the popular sums. So the popular sums are going to be the elements in the complete sumset that are represented as a sum in many different ways-- say, in at least n over 2k different ways. And we're going to take edges that correspond to these popular sums. So let's consider the bipartite graph G such that a comma b is an edge if and only if a plus b is a popular sum. So let's verify some of the hypotheses. So we're going to assume graph BSG, and let's verify the hypotheses of graph BSG. On one hand, because each element of S is a popular sum, if we consider its multiplicity, we find that the size of S multiplied by n over 2k is at most the size of A times the size of B. So if you think about all the different pairs in A and B, each sum here, each popular sum, contributes this many times to this A cross B. So, as a result, because the size of A and the size of B are both, at most, n, we find that the size of S is, at most, 2kn. And if you think about what G is, then this implies also that the restricted sumset of A and B across this graph G-- which only involves the popular sums; the restricted sumset is precisely the set of popular sums-- is not too large. OK, good. So we got one of the conditions, that the restricted sumset is not too large. And now we want to show that this graph has lots of edges. It has lots of edges. And here's where we would need to use the hypothesis that, between A and B, originally there is large additive energy. And the point here is that these unpopular sums cannot contribute very much to the additive energy in total, because each one of them is unpopular. So the dominant contributions to the additive energy are going to come from the popular sums, and we're going to use that to show that G has lots of edges. So let's lower bound the number of edges of G by first showing that-- so we'll show that the unpopular sums contribute very little to the additive energy between A and B. Indeed, the sum of the squares of the r's, over x not in the popular sums, is upper bounded by-- well, I claim that it is upper bounded by the following quantity, n over 2k times n squared. Because I can take out one factor r, upper bound it by n over 2k just by the definition of unpopular, and the sum of the remaining r's is at most n squared. So you have this additive energy between A and B. I know that it is large by hypothesis. Whereas, I also know that I can write it as a sum of the squares of the r's, which I can break into the popular contributions and the unpopular contributions. And, hopefully, this should all be somewhat reminiscent of basically all these proofs that we did so far in this course, where we separate a sum into the dominant terms and the minor terms. This came up in Fourier analysis in particular. So we do this splitting, and we upper bound the unpopular contributions by the estimate from just now. So, as a result, subtracting this small error term doesn't cancel much of the energy. So we still have a lower bound on the sum of the squares of the r's over the popular sums.
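A small sketch (mine, with an arbitrary test set) of the reduction just described: build the graph of popular sums, with the threshold n over 2k used above, and check that the restricted sumset it produces has size at most 2kn.

```python
from collections import Counter

def popular_sum_graph(A, B, k):
    """Edges (a, b) whose sum is represented at least n / (2k) times, n = max size."""
    n = max(len(A), len(B))
    r = Counter(a + b for a in A for b in B)
    return {(a, b) for a in A for b in B if r[a + b] >= n / (2 * k)}

A = B = set(range(50)) | {1000 + 7 * i * i for i in range(50)}   # half structured, half not
k = 4
G = popular_sum_graph(A, B, k)
S = {a + b for (a, b) in G}
n = max(len(A), len(B))
print(len(S), "<=", 2 * k * n)    # the set of popular sums is small
print(len(G))                     # number of edges kept in the graph
```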
But I can also give a fairly trivial upper bound to a single r, namely it cannot be bigger than n. And so the number of edges of G-- so what's the number of edges of G? Look at that. Each x here contributes rx many edges. So the number of edges of G is simply the sums of these rx's. Which is quite large. So the hypothesis of graph BSG are satisfied. And so we can use the conclusion of graph BSG, which is the conclusion that we're looking for in BSG. Any questions? Good. So the remaining task is to prove the graphical version of BSG. So let's take a quick break, and when we come back we'll focus on this theorem, and it has some nice graph theoretic arguments. OK, let's continue. We've reduced the proof of the Balog-Szemeredi-Gowers theorem to the following graphical result. Well, it's not just graphical, right? Still-- we're still inside some an Abelian group, still looking at some set in some Abelian group, but, certainly, now it has a graph attached to it. Let me show this theorem through several steps. First, something called a path of length 2 lemma. So the path of length 2 lemma, the statement is that you start with a graph G which is a bipartite graph between vertex sets A and B. And now A and B no longer need-- they're just sets. They're just vertex sets. We're not going to have sums. And the number of edges is at least a constant fraction of the maximum possible. Then the conclusion is that there exists some U, a subset of A, such that U is fairly large. And between most pairs of elements of U-- so between 1 minus epsilon fraction of pairs of U-- there are lots of common neighbors. So at least epsilon delta squared B over 2 common neighbors. So you start with this bipartite graph A and B. Lots of edges. And we would like to show that there exists a pretty large subset U such that between most pairs-- all but an epsilon fraction-- of ordered pairs-- they could be the same, but it doesn't really matter-- the number of paths of length 2 between these two vertices is quite large. So they have lots of common neighbors. Where have we seen something like this before? There's a question? AUDIENCE: Is there a [INAUDIBLE] epsilon? YUFEI ZHAO: Ah, yes. So for every epsilon and every delta. So let epsilon, delta be parameters. Where have we seen something like this before? So in a bipartite graph with lots of edges, I want to find a large subset of one of the parts so that every pair of elements, or almost every pair of elements, have lots of common neighbors. Yes. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: Dependent random choice. So in the very first chapter of this course, when we did extremal graph theory forbidding bipartite subgraphs, there was a technique for proving the extremal number, upper bounds, for bipartite graphs of bounded degree. And there we used something called dependent random choice that had a conclusion that was very similar flavor. So there, we had every pair-- so a fairly large, but not as large as this-- a fairly large subset where every pair of elements had lots of common neighbors. For every couple, every k couple of vertices, have lots of common neighbors. So it's very similar. In fact, it's the same type of technique that we'll use to prove this lemma over here. So who remembers how dependent random choice goes? So the idea is that we are going to choose U not uniformly at random. So that's not going to work. Going to choose it in a dependent random way. So I want elements of U to have lots of common neighbors, typically. 
So one way to guarantee this is to choose U to be a neighborhood from the right. So pick a random element in B and choose U to be its neighborhood. So let's do that. So we're going to use dependent random choice. See, everything in the course comes together. So let's pick v an element of B uniformly at random. And let U be the neighborhood of v. So, first of all, by linearity of expectations, the expected size of U is at least delta times the size of A. So because the average degree from the right, from B, is at least delta times the size of A, just based on the number of edges. If you have two vertices a and a prime in A with a small number of common neighbors, then the size of-- so sorry. Let me-- I skipped ahead a bit. So if a and a prime have a small number of common neighbors, then the probability that a and a prime both lie in U should be quite small. Because if they both had-- if a and a prime have a small number of common neighbors, in order for a and a prime to be included in this U, you must have chosen-- so suppose this were their common neighbor. Then in order that a and a prime be contained in U, you must have chosen this v to be inside the common neighborhood of a and a prime. Which is unlikely if a and a prime had a small number of common neighbors. So this probability is, at most, epsilon delta squared over 2. Just think about how U is constructed. So if we let x be the number of pairs a and a prime in U cross U with, at most, epsilon delta squared over 2 times the size of B common neighbors, then, by linearity of expectations, the expectation of x is-- well, by summing up all of these probabilities of a and a prime both being in U-- at most, epsilon delta squared over 2 times the size of A squared. So, typically, at least in expectation, you do not expect very many pairs of elements in U with few common neighbors. But we can also turn such an estimate into a specific instance. And the way to do this is to consider the quantity size of U squared minus x over epsilon. Well, first of all, we can lower bound this quantity in expectation, because the second moment of the size of U is at least the square of its first moment. And we also know that x in expectation is not very large. So the whole expression, in expectation, can be lower bounded by delta squared over 2 times the size of A squared. Therefore, there is some concrete instance of this randomness resulting in some specific U such that this inequality holds. So there exists some U such that this inequality holds. And, in particular, we find that the size of U is at least-- just forget about this minus term-- at least the square root of that right-hand side. So, in particular, the size of U is at least delta over 2 times the size of A. And, just looking at the left-hand side, which must be a non-negative quantity because the right-hand side is non-negative, we find that x is, at most, an epsilon fraction of the size of U squared.
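The two expectation bounds in this argument can be checked by brute force on a toy graph (this simulation is mine; the graph is dense on a structured half and sparse elsewhere), averaging over every possible choice of the random vertex v.

```python
import random

random.seed(1)
nA, nB, eps = 200, 200, 0.5
A, B = range(nA), range(nB)
adj = {a: {b for b in B if random.random() < (0.8 if a < 100 and b < 100 else 0.05)}
       for a in A}
delta = sum(len(adj[a]) for a in A) / (nA * nB)          # actual edge density
codeg = {(a, a2): len(adj[a] & adj[a2]) for a in A for a2 in A}
threshold = eps * delta ** 2 * nB / 2                    # "few common neighbors" cutoff

tot_U, tot_bad = 0, 0
for v in B:                   # average over every choice of the random vertex v
    U = [a for a in A if v in adj[a]]
    tot_U += len(U)
    tot_bad += sum(1 for a in U for a2 in U if codeg[a, a2] < threshold)

print(tot_U / nB, ">=", delta * nA)                        # E|U| >= delta |A|
print(tot_bad / nB, "<=", eps * delta ** 2 * nA ** 2 / 2)  # E[bad pairs] is small
```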
Because for them to both appear in U, your v have been selected inside the common neighborhood of a and a prime, which is unlikely if a and a prime have few common neighbors. So, as a result, the expected number of pairs in U with small number of common neighbors is small. And, already, that's a very good indication that we're on the right track. And, to finish things off, we look at this expression, which we can lower bound by convexity. And we know the size of U in expectation is large. And, also, the size of x, that we just saw, is small in expectation. So you have this inequality over here. And because there's an expectation, it implies that there's some specific instance such that, without the expectation, the inequality holds. So take that specific instance. We obtain some U such that this inequality is true, which simultaneously implies that U is large and x, the number of bad pairs, is small. So that was dependent random choice. Any questions? All right. So that was the path of length 2 lemma. So it tells us I can take a large set with lots of paths of length 2 between most pairs of vertices. Let's upgrade this lemma to a path of length 3 lemma. So, in the path of length 3 lemma, we start with a bipartite graph, as before, between A and B. So G is a bipartite between A and B. And, as before, we have a lot of edges between A and B. It's the delta fraction of all possible edges. Then the conclusion is that there exists A prime in A and B prime subset of B such that A prime and B prime are both large fractions of their parent set. And now, the-- and, furthermore, every pair between A prime and B prime is joined by many paths of length 3. So a path of length 3 means there's 3 edges. And, here, this eta is basically the original error term up to a polynomial change. So starting with this bipartite graph that's fairly dense, the lemma tells us that we can find some large A prime and large B prime so that between every vertex in A prime and every vertex in B prime, there are lots of paths of length 3 between them. Every time. So we should think about all of these constants as-- plus you only make polynomial changes in the constants, we're happy. Here, eta is a polynomial change in the delta. There's a convention which I like which is not universal, but it's often solved, unlike this convention. It's the difference between the little c and the big C is that a little c is better if you make it smaller, and a big C is better-- I mean, it's better in the sense that if this is true for little c and big C, and you make little c smaller and big C bigger, then it is still true. So big C is a sufficiently large constant, and little c is a sufficiently small constant. Just a-- So let's see the path of length 3 lemma, see it's proof. We're going to use the path of length 2 lemma, but we need a bit of preparation first. So the proof has some nice ideas, but it's also-- some parts of it are slightly tedious, so bear with me. So we're going to construct a chain of subsets A-- inside A. So A1, A2, A3. And this is just because there's a few cleaning up steps that need to be done. Let's call two vertices in A friendly if they have lots of common neighbors. And, precisely, we're going to say they're friendly if they have more than delta squared over 80 times the size of B common neighbors. Let me construct this sequence of subsets as follows. First, let A1 be all the vertices in A with degree not too small. So this is in preparation. So it will make our life quite a bit easier later on. 
Let's just trim all the really small degree vertices so that we don't have to think about them. So you trim all the small degree vertices. And think about how many edges you trim. You cannot trim so many edges, because each time you trim such a vertex, you only get rid of a small number of edges. So, in the end, at least half of the original set of edges must remain. And, as a result, the size of A1 is at least a delta over 2 fraction of the original vertex set. Otherwise, you could not have contained half of the original set of edges. So this is the first trimming step. So we got rid of some edges, but we got rid of fewer than half of the original edges. And because now you have a minimum degree on A1, the number of edges between A1 and B is still quite large. So think about passing down to A1 now. In the second step, we are going to apply the path of length 2 lemma to this A1. So A2 is going to be constructed from-- so using the path of length 2 lemma, specifically with parameter epsilon being delta over 10. Although, remember, now the density of the graph went from delta to delta over 2. Again, if you don't care about the specific numbers, they're all polynomials in delta. So don't worry about them. Everything's poly delta. So we're going to apply the path of length 2 lemma to find this subset A2. And it has the property that A2 is quite large, and all but a small fraction of pairs in A2 are friendly. So we passed down to, first, trimming small degree vertices, and then passed down further to A2, where all but a small fraction of the pairs in A2 are friendly to each other, meaning they have lots of common neighbors. And now let's look at the other side. Let's look at B. So we're in this situation now where you have-- so we're now in a situation where you've passed down to A2 and in B, where, because of what we did initially, every vertex in here has large degree. So there's this minimum degree condition from every vertex on the left. So the average degree is still very high. As a result, the average degree from B is going to be quite high. So let's focus on the B side and pick out vertices in B that have high degree. So let B1 denote the vertices in B such that the degree from B to A2 is at least half of what you expect based on the average degree. And, as before, the same logic as the A1 step. We see that B1 has large size, is a large fraction of B. And now we pass down to this B1 set. Now, finally, let's consider A3 to be the vertices in A2 that are friendly to most of A2. So vertices a in A2 such that a is friendly to at least a 1 minus delta over 5 fraction of A2. So we saw that, in A2, most pairs of vertices are friendly. So most, meaning all but a delta over 10 fraction. So if we consider vertices which are unfriendly to many other vertices in A2, there aren't so many of them. If there were many of them, you couldn't have had that. So that's why I constructed this set A3 consisting of elements in A2 that are friendly to many elements. And the size of A3 is at least half of that of A2. So we have this A3 inside. All right. And now I claim that we can take A3 and B1 as our final sets, and that between every vertex in A3 and every vertex in B1, there must be lots of paths of length 3. But, first, let's check their sizes. I mean, the sizes all should be OK, because we never lost too much at each step. If you only care about polynomial factors, well, you already see that we never lost anything more than a polynomial factor.
But just to be precise, the size of A3 is at least-- so if you count up the factors lost at each step, it's 1/2 times delta over 4 times delta over 2. So it's at least a delta squared over 16 fraction of the original set A. And now, if we consider a comma b to be an arbitrary pair in A3 cross B1, I claim that there must be many paths. Because by using-- so what properties do we know? We know that b is adjacent to a large fraction. So here large means at least delta over 4-- so bounded below-- a large fraction of A2. Yes. So I apologize. When I say the word large, depending on context it can mean bigger than delta, or it could mean at least 1 minus delta. So you look at what I write down. So b is adjacent to at least a delta over 4 fraction of A2. At the same time, we know that a is friendly to at least a 1 minus delta over 5 fraction of A2. So these two sets, they must overlap by at least a delta over 20 fraction. So let's take a vertex b. So you-- so it's adjacent to many vertices here. And if you look at a vertex in A, it's friendly to a large fraction. So, in particular, it's friendly to all these elements over here. So, to finish off, what does it mean for a-- this is-- this vertex is a. This vertex is b. What does it mean for a to be friendly to all of these shaded elements? It means that there are lots of paths from a to each of these elements. And then you can finish off the paths going back to b. Yes. AUDIENCE: The shaded stuff is allowed to be outside of A3? YUFEI ZHAO: No. The shaded-- the question is, is the shaded stuff allowed to be outside of A3? No. The shaded things are inside A3. So we're looking at intersections within A3. No, sorry. Actually, no, you're right. So the shaded things can be outside A3. So shaded things can be outside A3. I apologize. So everything now is in A2. So b is adjacent to a large fraction of A2. And a here is friendly to some part of the neighbors of b. So you can complete paths like that. Yes. So only the starting and ending points have to be in A prime and B prime. Everything else can go outside of A prime and B prime. Yes, thank you. So the number of paths from a to B to A2 back to b is-- let's see if I can stay within B1-- so is at least-- yes. So it's-- sorry. This is B. So it's at least delta over 20 times the size of A2, times delta squared over 80 times the size of B. So if you don't care about polynomial factors in delta, then you see that-- the point is there are a lot of paths. So there are a lot of paths between each little a and each little b by the construction we've done. So let me just do a recap. So there were quite a few details in this proof, and some of them have to do with cleaning up. Because it's not so nice to work with graphs that just have large average degree. It's much nicer to work with graphs with large minimum degree. So there are a couple of steps here to take care of vertices with small degrees. So we started with, between A and B, lots of edges. And we trim vertices from A with small degree. So we get A1. And then we apply the path of length 2 lemma to get A2. So inside A2, most pairs of vertices have lots of common neighbors, but not all. We then go back to B to get B1, which has large minimum degree to A2. And then A3 consists of the vertices in A2 with many friendly companions in A2. And A3 is large, and I claim that between every vertex in A3 and every vertex in B1, you have many paths of length 3. Because if you start with a vertex in A3, it has many friendly companions.
So many here means at least 1 minus delta over 5 fraction. Whereas every vertex in B1 has lots of neighbors in A2, where lots means at least delta over 4. So there's necessarily an overlap of at least delta over 20. And for that overlap, we can create lots of paths going through this overlap from A to B. Any questions? OK, great. So let's put everything together to prove the graphical version of Balog-Szemeredi-Gowers. So we'll prove the graphical version of Balog-Szemeredi-Gowers. So by-- so, first, note that the hypothesis of Balog-Szemeredi-Gowers already implies that the size of A and the size of B are not too small. Because, otherwise, you couldn't have had n squared over k edges to begin with. So by the path of length 3 lemma, there exists A prime in A and B prime in B with the following properties. That A prime has a large fraction of-- so A prime and B prime are both large in size. And for all vertices a in A prime and vertices b in B prime, there are lots of paths of length 3 between these vertices. So there are at least k to the minus little o1-- to the minus big O1 times n squared pairs of intermediate vertices a1, b1 in A cross B, such that a b1 a1 b is a path in G. So let me draw the situation for you. So we have A and B. And so inside A and B, we have this fairly large A prime and B prime, such that for every little a and little b, there are many paths like that going to b1 and a2. Let me set-- so let me set x to be a plus b1, that sum, y to be a1 plus b1, and z to be a1 plus b. So now notice that we can write this a plus b in at least k to the minus big O1 times n squared ways as x minus y plus z by following this path, where x, y, and z all lie in the restricted sumset, because that's how the restricted sumset is defined. So if you have an edge, then the sum of the elements across on the two ends, by definition, lies in the restricted sumset. So the path of length 3 lemma tells us that every pair a and b, their sum can be written in many different ways as this combination. As a result, we see that A prime plus B prime-- so this sum, if we consider sum along with its multiplicity-- so now we're really looking at all the different sums as well as ways of writing the sum as this combination-- we see that it is bounded above by the restricted sumset raised to the third power. Because each of these choices, x, y, and z, they come from the restricted sumset. But the hypothesis of Balog-Szemeredi-Gowers, the graphical version, is that the restricted sumset is small in size. So we can now upper bound the restricted sumset by, basically, the-- within a constant, within a factor of the maximum possible. And now we are done, because we have deduced that the complete sumset between A prime and B prime is, at most, a constant factor with change in constant by a polynomial. So a constant factor more than the maximum possible. So it's, at mostly, k to the big O1 poly k times n. So that proves the graphical version of Balog-Szemeredi-Gowers. And because we showed earlier that the graphical version of Balog-Szemeredi-Gowers implies Balog-Szemeredi-Gowers, this shows the Balog-Szemeredi-Gowers theorem. So let me recap some of the ideas we saw today. And so the whole point of Balog-Szemeredi-Gowers and all of these related lemmas and theorems and variations is that you start with something that has a lot of additive structure. Well, after we passed down to graphs just a lot of edges. So you start with a situation where you have kind of 1% goodness. 
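The path-to-representation dictionary in this step is worth seeing explicitly; here is a minimal sketch (mine, on a toy complete bipartite graph) that lists, for a fixed pair (a, b), one representation a + b = x - y + z per path of length 3.

```python
def representations_from_paths(a, b, nbrs_of_A, nbrs_of_B):
    """For each path a - b1 - a1 - b, record (x, y, z) = (a + b1, a1 + b1, a1 + b)."""
    reps = []
    for b1 in nbrs_of_A[a]:
        for a1 in nbrs_of_B[b1]:
            if b in nbrs_of_A[a1]:
                x, y, z = a + b1, a1 + b1, a1 + b
                assert x - y + z == a + b        # the identity behind the counting
                reps.append((x, y, z))
    return reps

A, B = [0, 1, 2], [10, 20]
nbrs_of_A = {a: set(B) for a in A}               # complete bipartite graph
nbrs_of_B = {b: set(A) for b in B}
print(representations_from_paths(0, 20, nbrs_of_A, nbrs_of_B))
```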
And you want to show that you can restrict to fairly large subsets, so that you have perfection. So you have complete goodness between these two sets. And this is what's going on in both the graphical version and the additive version. So back to the graph path of length 3 lemma. So we were able to boost the path of length 2 lemma, which tells us something about 99% of the pairs having lots of common neighbors, to 100% of the pairs having lots of path of length 3. And in the additive setting, we saw that by starting with a situation where the hypothesis is somewhat patchy, so like a 1% type hypothesis, we can pass down to fairly large sets, where the complete sumset, starting with just the restricted sumset being small, can pass down to large sets where the complete sumset is small. And this is an important principle, that, often, when we have some typicality by an appropriate argument-- and, here, it's not at all a trivial argument. So there's some cleverness involved, that by doing some kind of argument, we may be able to pass down to some fairly large set where it's not typically good, but everything's perfectly good. That's the spirit here of the Balog-Szemeredi-Gowers theorem. So, next time, for the last lecture of this course, I will tell you about the sum-product problem, where the-- there are also some graph-- very nice graph theoretic inputs.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
8_Szemerédis_graph_regularity_lemma_III_further_applications.txt
PROFESSOR: So we've been discussing Szemeredi's regularity level for the past couple of lectures. And one of the theorems that we proved was Roth's theorem, which tells us how large can a subset of 1 through N be if it has no 3-term arithmetic progressions. And I mentioned at the end of last time that we can construct fairly large subsets of 1 through N without 3-term arithmetic progressions. I want to begin today's lecture with showing you that example, showing you that construction. But first, let's try to do something that is somewhat more naive, somewhat more straightforward, to try to just construct greedily 3-AP-free sets of 1 through n. And recall from last time, we showed Roth's theorem that such a set must have size little o of N. So what's one thing you might do? Well, you can try to greedily construct such a set. It will just make our life a little bit easier if we start with 0 instead of 1. So you put one element in, and you keep putting in the next integer, as long as it doesn't create a 3-term arithmetic progression. So you keep doing this. Well, you can't put 2 in, so let's skip 2. So we go to the next one, 3. So 4 is OK. So we skip 5. We have to skip 6 as well because 0, 3, 6, that's a 3-AP. So we keep going. So what's the next number we can put in? AUDIENCE: 9. PROFESSOR: Up. Go to 9. Then the next one is 10. What's the next one we can put in? So we can play this game. Find out what is the next number that you can put in. Greedily, if you could put it in, put it in in a way that generates a subset of integers without 3-term arithmetic progressions. So this actually has a name, so this is called a Stanley sequence. And there's an easier way to see what the sequence is. Namely, if you write the sequence in base 3, you find that these numbers are 0; 0, 1; 1, 0; 1, 1. 1, 0, 0. 1, 0, 1. 1, 1, 0. 1, 1, 1. So these are just numbers whose base 3 representation consists of zeros and ones. So I'll leave it to you as an exercise to figure out why this is the case, if you generate the sequence greedily this way, this is exactly the sequence that you obtain. But once you know that, it's not too hard to find out how many numbers you generate. So up to-- suppose N is equal to 3 to the k. We get 2 to the k terms, which gives you N raised to the power of log base 3 of 2. So you can figure out what that number is, but some numbers strictly less than 1. And actually, for a very long time, people thought this was the best construction. So this construction was known even before Stanley, so it was something that is very natural to come up with. And it was somewhat of a surprise when in the '40s, it was discovered that you can create much larger subsets of the positive integers without arithmetic progressions. So that's the first thing I want to show you today. So these sets were first discovered by Salem and Spencer back in the '40s. And a few years later, Behrend gave a somewhat improved and simplified version of the Salem-Spencer construction. So these were all back in the '40s. So these days, we usually refer, at least in the additive combinatorics community, to this construction as Behrend's construction. And this, indeed, what I will show you is due to Behrend, but somehow, this Salem-Spencer name has been forgotten and I just wanted to point out that it was Salem and Spencer who first demonstrated that there exists a subset of 1 through N that is 3-AP-free and has size N to the 1 minus little o 1, that there's no power saving. So there exists examples with no power saving. 
And this is important because as we saw last time, the proof of Roth's theorem-- and we basically spent two full lectures proving Roth's theorem. And it's somewhat involved, right. It wasn't this one line of inequality you can just use to deduce the result. And that is part of the difficulty. So having such an example is indication that Roth's theorem should not be so easy. That's not a rigorous demonstration, but it's some indication of the difficulty of Roth's theorem. So let's see this construction due to Behrend. So let me write down the precise statement. There exists a constant, C, such that there exists a subset of 1 through N that is 3-AP-free and has size at least N times e to the minus C root log N. Perhaps somewhat amazingly enough, this bound is still the current best known. We do not know any constructions that is essentially better than Behrend's construction from the '40s. There were some small recent improvements that clarified what the C could be, but in this form, we do not know anything better than what was known back in the '40s. And the construction, I will show yow-- hopefully you should believe that it's quite simple. So I will show you what the construction is. It's clever, but it's not complicated. And it's a really interesting direction to figure out is this really the best construction out there. Can you do better? So let's see. We're going to set some parameters to be decided later, m and d. Let me consider x to be this discrete box in d dimensions. So this is the box of lattice points in d dimensions, 1 through m raised to d. And let me consider an intersection of this box by a sphere of radius root L. So namely, we look at points in this x, such that the sum of their squares is equal to exactly L. You take a bunch of these spheres, they partition your set x, so the smallest possible sum of squares, largest possible sum of squares. So in particular, there exists some L such that x of L is large, just by pigeonhole principle. You can probably do this step with a bit more finesse, but it's not going to change any of the asymptotics. The intuition here-- and we'll come back to this in a second-- is that xL lies on a sphere. So this is the set of lattice points that lies in a given sphere. And because you are looking at a sphere, it has no 3-term APs. So we're going to use that property, but right now it's not yet a subset of the integers. So what we're going to do is to take this section of a sphere and then project it to one dimension, so that it is now going to be a subset of integers. And we're going to do this in such a way that it does not affect the presence of 3-term APs. So let us map x to the integers by base expansion. So this is base 2m expansion. All right. So what's the point of this construction here so far? Well, you verified a couple of properties, the first being that this construction here is injective, this map. So if you call this map phi, this phi is injective. Well, it's base expansion, so it's injective. But also a somewhat stronger claim is that if you have three points-- so if you have three points in x, that map to a 3-AP in the integers, then the three points originally must be a 3-AP in x. Again, this is not a hard claim. Just think about it. Here, because we're using base 2m expansion and you're only allowed to use digits up to m, you don't have any wrap around effects. You don't have any carryovers when you do the addition. So combining these two observations, we find that the image of xL is a 3-AP-free set off 1 through N. 
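Here is a minimal Python sketch of the construction just described, for small, unoptimized values of m and d (the optimization of the parameters comes next); the function name and the sanity check at the end are my own additions.

from itertools import product
from collections import defaultdict

def behrend_set(m, d):
    # Points of {1, ..., m}^d grouped by their squared length; keep a largest sphere.
    spheres = defaultdict(list)
    for x in product(range(1, m + 1), repeat=d):
        spheres[sum(c * c for c in x)].append(x)
    best_sphere = max(spheres.values(), key=len)      # pigeonhole: some L gives a big slice
    # phi(x) = sum_i x_i (2m)^i. Since all digits are at most m, phi(x) + phi(z) = 2 phi(y)
    # forces x + z = 2y coordinate-wise, and a sphere contains no such collinear triple.
    return sorted(sum(c * (2 * m) ** i for i, c in enumerate(x)) for x in best_sphere)

A = behrend_set(m=5, d=3)
print(len(A), max(A) < (2 * 5) ** 3)                  # all elements are below N = (2m)^d

S = set(A)
has_3ap = any((a + c) % 2 == 0 and (a + c) // 2 in S and a != c
              for a in S for c in S)
print(has_3ap)                                        # False: the set is 3-AP-free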
So what is N? I can take N to be, for instance, 2m raised to the power d, so all the numbers up there are less than this quantity. And the size is the size of x sub L, which is at least m to the d divided by d times m squared, by pigeonhole, since there are at most d times m squared possible values of L. And now you just need to find the appropriate choices of the parameters m and d to maximize the size of x sub L. And that's an exercise. For instance, if you take m to be e to the root log N and d to be root log N, then you find that the size here is the bound that we claim. And that finishes the construction of Behrend, giving you a fairly large subset of 1 through N without 3-term arithmetic progressions. And the idea here is you look at a higher dimensional object, namely a higher dimensional sphere, which has this property of being 3-AP-free, and then you project it onto the integers. Any questions so far? OK. So we have some proofs, we have some examples. So now let's go on to variations of Roth's theorem. And I want to show you a higher dimensional version of Roth's theorem. So I mentioned in the very first lecture-- you have this whole host of theorems in additive combinatorics-- Roth, Szemeredi, multi-dimensional Szemeredi. So this is, in some sense, the simplest example of the multi-dimensional Szemeredi theorem. And this example is known as corners. So what's a corner? A corner is-- well, we're working inside two dimensions, so a corner is simply three points, positioned like that, such that these two segments are parallel to the axes and have the same length. So that's, by definition, what a corner is. And the question is, if you give me a subset of a grid that is corner-free, how large can this subset be? Here's a theorem. If A is a subset of the N by N grid-- 1 through N, squared-- with no corners, in particular, no three points of the form x comma y; x plus d comma y; and x comma y plus d, where d is some positive integer, then the size of A has to be little o of N squared. Question. AUDIENCE: Do we only care about corners oriented in that direction? PROFESSOR: OK, good question. So that's one of the first things we will address in this proof. Your question was, do we only care about corners oriented in the positive direction. You can have a more relaxed version of this problem where you allow d to be negative as well. The first step in the proof is we'll see that that constraint doesn't actually matter. Yes. OK. Great. Let's get started. So as Michael mentioned, we do have this constraint over here that d is positive. And it's somewhat annoying, because if you remember, in our proof of Roth's theorem, positive or negative didn't play a role. So let's try to find a way to get rid of this constraint so that we are in a more flexible situation. So the first step is that we'll get rid of this d being positive requirement. Here's a trick. Let's consider the sumset A plus A. Sumset here means I'm looking at all pairwise sums, as a set. So you don't keep track of duplicates; you keep it as a set. And we're living inside this grid, but now somewhat wider in width. Then, by pigeonhole, there exists an element of this domain of the sumset that is represented in many different ways. So there exists a z represented as a plus b in at least size of A squared divided by 2N, quantity squared, different ways, just by looking at how many things come up in that representation and using pigeonhole. And now let's take A prime to be A intersect z minus A. So what's happening here? So you have this set A.
And basically what I want to do is look at the-- so suppose A looks like that. So look at minus A and then shift minus A over so that they intersect in as many elements as you can. And by pigeonhole, I can guarantee that their intersection is fairly large. So because the size of A prime, it's essentially the number of ways that z can be represented in the aforementioned manner. So it suffices to show that A prime is little o of N squared. If you show that, then you automatically show A is little o of N squared. But now A prime is symmetric. A prime is symmetric around 0 over 2. So this is centrally symmetric about z over 2, meaning that A prime equals to z minus A prime. And so you see now we've gotten rid of this d positive requirement because if A prime had a positive corner, then it has a negative corner and vice versa. So no corner in A prime with d positive implies that no corner with d negative. So now let's forget about A and A prime and just replace A by A prime and forget about this d positive condition. So let's forget about this part, but I do want d not equal to 0. Otherwise, you have trivial corners, and, of course, you always have trivial corners. All right. So let's remember how the proof of Roth's theorem went. And this relates to the very first lecture where I showed you this connection between additive combinatorics on one hand and graph theory on the other hand, or you take some arithmetic pattern and you try to encode it in a graph so that the patterns in your arithmetic set correspond to patterns in the graph. So we're going to do the same thing here. So we're going to encode the subset of the grid as a tripartite graph in such a way that the corners correspond to triangles in the graph. So I'll show you how to do this and it will be fairly simple. And in general, sometimes it takes a little bit of ingenuity to figure out what is the right way to set up the graph. So one of an upcoming homework problem will be for you to figure out how to set up a corresponding graph when the pattern is not a corner, but a square. If I add one extra point up there, how would you do that? So what does this graph look like? Let's build a tripartite graph where-- so this should be somewhat reminiscent of the proof of Roth's theorem-- where I give you three sets, x, y, and z; and x and y are both going to have N elements and z is now going to have 2N elements. x is supposed to enumerate or index all the vertical lines in your grid. So you have this grid of N by N. So x has size 4. Each vertex in x corresponds to a vertical line. y corresponds to the horizontal lines and z corresponds to these negatively sloped diagonal lines-- slope minus 1 lines. And of course you should only take lines that affect your N by N grid. So that's why there are this many of them for each direction. So what's the graph? I join two vertices if their corresponding lines meet in a point of A. So I might have-- this may be A, a point of A. So I put an edge between those two lines-- one for x, one for z-- because their intersection lies in A. And more explicitly-- I mean, that is pretty explicit-- and alternatively, you could also write this graph by telling us how to put in the edge between x and z. Namely, you do this if x comma z minus x lies in A; you put an edge between x and y if x comma y lies in A; and likewise, you put in the final edge if z minus y comma y lies in A. So two equivalent descriptions of the same graph. Any questions? All right. 
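To make the construction concrete, here is a small Python sketch of the auxiliary tripartite graph just described; the set A below is a toy example of my own, and it contains one genuine corner so that you can see the corresponding non-trivial triangle appear.

def corner_graph(A):
    # X = vertical lines (x-coordinates), Y = horizontal lines (y-coordinates),
    # Z = slope -1 lines x + y = z; join two lines iff they meet in a point of A.
    edges_xy, edges_xz, edges_yz = set(), set(), set()
    for (x, y) in A:
        z = x + y
        edges_xy.add((x, y))
        edges_xz.add((x, z))
        edges_yz.add((y, z))
    return edges_xy, edges_xz, edges_yz

def triangles(edges_xy, edges_xz, edges_yz):
    # A triangle (x, y, z) corresponds to the corner (x, y), (z - y, y), (x, z - x);
    # it is a trivial corner exactly when z = x + y.
    return [(x, y, z) for (x, y) in edges_xy for (x2, z) in edges_xz
            if x2 == x and (y, z) in edges_yz]

A = {(1, 1), (3, 1), (1, 3), (2, 2)}     # contains the corner (1,1), (3,1), (1,3), with d = 2
print(triangles(*corner_graph(A)))       # the non-trivial triangle (1, 1, 4) shows up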
So the rest of the proof is more or less the same as the proof of Roth's theorem we saw last time. So we need to figure out how many edges are there. Well, every element of A gives you three edges, one between each pair of parts. And most importantly, the triangles in this graph, I claim, correspond to corners. So what is a triangle? A triangle corresponds to a horizontal line, a vertical line, and a slope minus 1 line that pairwise intersect in elements of A. If you look at their intersections, that's a corner. And conversely, if you have a corner, you build a triangle. And because your set is corner-free, well, you don't have any triangles. Actually, no, that's not true. Like we saw in Roth's theorem, you have some triangles, but they correspond to trivial corners. So triangles correspond to trivial corners, because your set A is corner-free. Trivial corners, meaning d equal to 0, so all three points are the same point-- it's not a genuine corner like that. And in particular, in this graph, every edge is in a unique triangle. And so we're in the same situation. By the corollary of the triangle removal lemma, we find that the number of edges must be little o of the number of vertices squared. The number of edges must be subquadratic, so it must be little o of N squared. And so that implies that the size of A is little o of N squared. So that proves the corners theorem. Any questions? So once you set up this graph, the rest is the same as Roth's theorem. So this connection between graph theory and additive combinatorics-- well, you need to figure out how to set up this graph. And sometimes it's clear, but sometimes you have to really think hard about how to do this. All right. What does corners have to do with Roth's theorem, other than that they have very similar looking proofs? Well, actually, you can deduce Roth's theorem from the corners theorem. And to show you this precisely, let me use r sub 3 of N to denote the size of the largest 3-AP-free subset of 1 through N. So this notation, the r sub 3, is actually fairly standard. The next one's not so standard, but let's just do that. So that's not an L, but that's a corner. So that is the size of the largest corner-free subset of the grid, 1 through N, squared. So we gave bounds for both quantities, but they are actually related to each other through this fairly simple proposition: if you have an upper bound for corners, then you have an upper bound for Roth's theorem. Indeed, given A a 3-AP-free subset of 1 through N, let me build for you a corner-free subset of the grid that is a fairly large subset of that grid. I can form B by setting it to be the set of pairs inside this grid whose difference, x minus y, lies in A. So what does this look like? This is the grid of size 2N. And if I start with A that is 3-AP-free, what I can do-- so over here-- is look at the lines like that, putting in all of those points. And you see that this set of points should not have any corners, because if it had corners, then the corners would project down to a 3-AP. So B is corner-free. To recap, if we have an upper bound on corners, we have an upper bound on Roth's theorem. But you also know that if you have a lower bound for Roth's theorem, then you have a lower bound for corners. So the Behrend construction we saw at the beginning of today extends, in exactly this way, to a fairly large corner-free subset. And that's more or less the best thing that we know how to do.
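Here is a small Python sketch of this last reduction, going from a 3-AP-free set A inside {1, ..., N} to a corner-free subset B of the 2N by 2N grid; the brute-force corner check and the particular choice of A are my own illustrative choices.

def corner_free_from_ap_free(A, N):
    A = set(A)
    return {(x, y) for x in range(1, 2 * N + 1)
                   for y in range(1, 2 * N + 1) if x - y in A}

def has_corner(B):
    # Look for (x, y), (x + d, y), (x, y + d) in B with d != 0 (d may be negative).
    B = set(B)
    return any((x, y + bx - x) in B
               for (x, y) in B for (bx, by) in B
               if by == y and bx != x)

A = [1, 2, 4, 5, 10, 11, 13, 14]       # a 3-AP-free subset of {1, ..., 14}
B = corner_free_from_ap_free(A, 14)
print(len(B), has_corner(B))           # at least N * |A| points, and False: no corner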
In fact, there aren't that many constructions in additive combinatorics that's known. Almost everything that I know how to construct that's fairly large come from Behrend's construction or some variant of Behrend's construction. So it looks pretty simple. It comes from the '40s, yet we don't really have too many new ideas besides playing and massaging Behrend's construction. Let me tell you what is the best known upper bound on the Corners theorem. This is due to Shkredov. So the proof using triangle removal lemma, it goes to Szemeredi's regularity lemma. It gives you pretty horrible bounds, but using Fourier analytic methods-- so you see if you have upper bound on Roth, it doesn't give you an upper bound on corner, so you need to do something extra. And so the best known bound so far is of the form, N squared divided by polylog log N, so log log N raised to some small constant, C. Any questions? Last time, we discussed the triangle counting lemma and the triangle removal lemma. Well, it shouldn't be a surprise to you that if we can do it for triangles, we may be able to do it for other subgraphs. So that's the next thing I want to discuss-- how to generalize the techniques and results that we obtain for triangles to other graphs, and what are some of the implications if you combine it with Szemeredi's regularity lemma. So let's generalize the triangle counting lemma. So the strategy for the triangle counting lemma, let me remind you, was that we embedded the vertices one by one. Putting a vertex and a typical vertex here should have many neighbors to both vertex sets. So these two guys should have sizes typically roughly the same as a fraction of them corresponding to the edge density between the vertex parts. And if they're not too small, then from the epsilon regularity of these two sets, you can deduce the number of edges between them. So that was the strategy for the triangle counting lemma. So you can try to extend the same strategy for other graphs. So let me show you how this would have been done, but I don't want to give too many details because it does get somewhat hairy if you try to execute it. So the first strategy is to embed the vertices of H one at a time. So my H now is going to be, let's say, a K4. And I wish to embed this H in this setting where you have these four parts in the regularity partition, and they are pairwise epsilon regular with edge densities that are not too small. Well, what you can try to do, mimicking the strategy over there, is to first find a typical image for the top vertex. And a typical image over here, minus some small bad exceptions, will have many neighbors to each of the three parts. Next, I need to figure out where this vertex can go. I'm going to embed this vertex somewhere here. Again, a typical place for this vertex, modulo some small fraction, which I'm going to throw away. So now you see you need somewhat stronger hypotheses on the epsilon regularity, but they are still all polynomial dependent, so you just have to choose your parameters correctly. So this typical green vertex should have lots of neighbors over here. So you just keep embedding. So it's almost this greedy strategy. You keep embedding each vertex, but you have to guarantee that there are still lots of options left. So embed vertices one at a time. And I want to embed each vertex so that the yet to be embedded vertices have many choices left. So epsilon regularity guarantees that you can always do this. 
And you do this all the way until the end, and you arrive at some statement. And depending on how you do this, the exact formulation of the statement will be somewhat different, but let me give you one statement, which we will not prove, but which you can deduce using this strategy. And this is known as a graph embedding lemma. And again, as I mentioned when I started discussing Szemeredi's regularity lemma, the exact statements are fairly robust, and they are not as important as the spirit of the ideas. So if you have some application in mind, you might have to go into the proof and tweak a thing here and there, but you get what you want. So the graph embedding lemma says, for example, that if H is r-partite, with maximum degree at most delta-- so H has maximum degree at most delta-- and suppose you have vertex sets V1 through Vr, such that each vertex set is not too small, these vertex sets are pairwise epsilon regular, and the density is not too small-- so here I'm assuming some lower bound on the density, which depends on your epsilon-- then the conclusion is that G contains a copy of H. I just want to give a few remarks on the statement of this theorem. Again, we will not discuss the proof. So what does this hypothesis of H being r-partite have to do with anything? Here, as an example, when r equals 4, instead of this K4, maybe I also care about that graph over there-- maybe some more vertices, some more edges, but still 4-partite. And the point is that I want to embed the vertices in such a way that that top vertex goes to the part that it's supposed to go into. So I'm embedding this configuration in a way that corresponds to a proper coloring of H. If you do this, there's still enough room to go, as long as you have some lower bound on the edge density between the individual parts, and you really depend not on the number of edges of H, but on the maximum degree. Because if you look at how many times each vertex's set of possibilities can be shrunk, it's at most delta times. Now, this graph embedding lemma-- I give you this statement here, but it's a fairly robust statement. And if you want to get, for example, not just a single copy of H but many copies of H, you can tweak the hypotheses and tweak the proof somewhat to get what you want, again following what we did for the triangle counting lemma. Question. AUDIENCE: Is the bound on the H edge density between partitions correct? So if the maximum degree increases, the lower bound decreases? PROFESSOR: If the maximum degree increases, this number goes up. AUDIENCE: Oh, OK. PROFESSOR: I want to show you a different way to do counting that does not go through this embedding of vertices one by one, but instead analyzes what happens if you take out the edges of H one by one. And that's an alternative approach which I like more. It's somewhat less intuitive if you are not used to thinking about it, but the execution works out to be much cleaner. And it's also in line with some of the techniques that we'll see later on when we discuss graph limits. Let's take a quick break. Any questions so far? We will see a second strategy for proving a graph counting lemma. And the second strategy is more analytic in nature: we will analyze what happens when we take out one edge of H at a time. So let me give you the statement first.
So the graph counting lemma says that if you have a graph H with vertex set 1 through k-- I also have an epsilon parameter, and I have a graph G and subsets V1 through Vk of G, such that Vi, Vj is epsilon regular whenever ij is an edge of H. So here, the setup is slightly different from the picture I drew up there. What's going on here is that you have, let's say, H being this graph. And suppose you are in a situation where I know that five of these pairs are epsilon regular, and what I really want to do is embed this H into this configuration-- so 1 into V1, 2 into V2, and so on. And I want to know how many ways you can embed this way. The conclusion is that the number of tuples little v1 through little vk, with each little vi in big Vi, such that little vi, little vj is an edge of G for all ij an edge of H-- so exactly as shown in that picture-- is within a small error of the number you would predict if all of your bipartite graphs were actually random: the error is at most the number of edges of H, times epsilon, times the product of the vertex set sizes, and the prediction is the product of the edge densities times the product of the vertex set sizes. So like in the triangle counting lemma, if you look at the edge densities in this configuration and predict how many copies of H you would get, that's the number you should write down. And this counting lemma tells you that the truth is not so far from the prediction. Any questions? So we will prove the graph counting lemma in an analytic manner. It will be convenient for me to rephrase the result just a little bit in a probabilistic form. So it is equivalent to show that if you have uniformly randomly chosen vertices, little v1 in big V1, and so on, up to little vk in big Vk-- so they're independent and uniformly random-- then the probability-- so basically, I am putting down a potential image for each vertex of H and asking what's the probability that you actually have an embedding of H-- so the probability that little vi, little vj is an actual edge of G for all ij an edge of H-- differs from the prediction, which is simply the product of all the edge densities, by a fairly small amount. So I haven't done anything; I'm just rephrasing the problem. Instead of counting, now we're looking at probabilities. As I mentioned, we'll take out one edge at a time. So relabeling if necessary, let's assume that 1, 2 is an edge of H. So now we will show the following claim, which I'll denote star: if you compare this quantity over here to the quantity where you pull out the edge density between V1 and V2 as a separate factor and put in a similar probability, but now only considering all of the edges of H except for 1, 2, then I claim that this difference is, at most, epsilon. So you can think of that second quantity as the same quantity as the green one, except not on H, but on H minus the edge 1, 2. To show this star claim, let us couple the two random processes choosing these little vi's. By that I mean: here you have random little vi's and here you have random little vi's, but we use the same little vi's in both probabilities. So they are two different events, but defined on the same random choices. So what is this random process? You pick little v1, you pick little v2, you pick little v3, all of them independently, uniformly at random.
But if I show you the inequality under the further constraint that little v3 through little vk are arbitrary, then that's even better. So with little v3 through little vk fixed arbitrarily, and only little v1 and little v2 random. You can phrase this in terms of conditional probabilities, if you like. You're comparing these two probabilities, where now I fix little v3 through little vk, and I just let little v1 and little v2 be random. And if, conditioned on little v3 through little vk, you have this inequality, then letting little v3 through little vk be random as well, by the triangle inequality, you obtain the bound that we're looking for. Any questions about this step? If you're confused about what's happening here, another way to bypass all the probability language is to go back to the counting language. In other words, we're trying to count embeddings. And I'm asking: if you arbitrarily fix little v3 through little vk, how many different choices are there for little v1 and little v2 in the two different settings? OK. So let A1 be the set of places where little v1 can go, if you already knew where little v3 through little vk went. So you look at all the neighbors of 1 in H, except for 2. And I want to make sure that little v1, little vi, as i ranges over all such neighbors, is indeed an edge of G. I'll draw a picture in a second. And A2, likewise, is the same quantity, but with 2 instead of 1. So for example, if you're trying to embed a K4, what's happening here is that you have this V1, this V2, and somebody already arbitrarily fixed the locations where vertices 3 and 4 are embedded. And you're asking, well, how many different choices are now left for little v1. It's the common neighborhood of little v3 and little v4 inside V1. So that's A1. And likewise, the common neighborhood of little v3 and little v4 inside V2 is A2. OK. So with that notation, what is it that we're trying to show over here? If you rewrite this inequality with little v3 through little vk fixed, you find that what we're trying to show is the following claim-- and this claim implies star. The first term is the number of edges between A1 and A2 as a fraction of the product of the sizes of V1 and V2. And the second term uses the prediction: the probability that little v1 lies in A1, times the probability that little v2 lies in A2, times this extra constant, namely the edge density d of V1, V2. So we're trying to show that this difference is small. And the claim is that this difference is, indeed, always small. It is always at most epsilon, for every A1 inside V1 and A2 inside V2. And here, in particular, this statement looks somewhat like the definition of epsilon regularity, but there are no restrictions on the sizes of A1 and A2-- they don't have to be big. And as you can imagine, we're not really using all that much. All we're assuming is the epsilon regularity between V1 and V2. So we will deduce this inequality from the hypothesis of epsilon regularity between V1 and V2. So let's check. We know that the pair V1, V2 is epsilon regular by hypothesis. So if either A1 or A2 is too small-- if A1 is too small or A2 is too small-- then we see that both of these terms here are, at most, epsilon. So if the A's are too small, then neither of these terms can be too large. Here, each is bounded by the product of the sizes of A1 and A2 over the product of the sizes of V1 and V2, and likewise over there. So in this case, their difference is, at most, epsilon, and we're good to go.
Otherwise, if A1 and A2 are both at least an epsilon fraction of their respective vertex sets, then what happens? By the hypothesis of epsilon regularity, we find that d of V1 and V2 differs from the number of edges between A1 and A2, divided by the product of their sizes-- so that's just d of A1 and A2-- by at most epsilon, which then implies the inequality up there. Here we're using that the size of A is, at most, the size of V. So we have this claim. And that claim proves the inequality in star. And basically, what we've done is we showed that if you take out a single edge, you change the desired quantity by, at most, an epsilon, essentially. So now you do this for every edge of H. Alternatively, you can do induction on the number of edges of H. So to complete the proof of the counting lemma, we do induction on the number of edges of H. When H has exactly one edge, well, that's pretty easy. But now if you have more edges, you apply the induction hypothesis to the graph H minus the edge 1, 2. And you find that this quantity here differs from the predicted quantity by the number of edges of H, minus 1, times epsilon. In other words, you run this proof that we just did, one edge at a time. So each time you take out an edge, you use epsilon regularity to show that the effect of taking that edge out of H does not have too big an effect on the actual number of embeddings. Do this one edge at a time, and eventually you prove the graph counting lemma. So this is one of those proofs which may be less intuitive compared to the one I showed earlier, in the sense that there's not as nice a story you can tell about putting in one vertex at a time. But on the other hand, if you were to carry out that earlier proof and bound each time how big the sets have to be, it gets much hairier over there. Here, the execution is much cleaner, but maybe less intuitive, unless you're willing to be comfortable with these calculations. And it's really not so bad. And the strengths of these two results are somewhat different. So again, it's not so much the exact statements that matter, but the spirit of these statements, which is that if you have a bunch of epsilon regular pairs, then you can embed while pretending that everything behaves roughly like random. Any questions? So now that we have Szemeredi's graph regularity lemma, the graph counting lemma, and embedding lemmas, we can use them to derive some additional applications that don't just involve triangles. When we only had a triangle counting lemma, we could only do the triangle removal lemma, but now we can do other removal lemmas. In particular, there's the graph removal lemma, which generalizes the triangle removal lemma. In the graph removal lemma, the statement is that for every H and epsilon, there exists a delta, such that every N vertex graph with fewer than delta times N to the number of vertices of H copies of H-- so it has very few copies of H-- can be made H-free by removing a fairly small number of edges, at most epsilon N squared of them. All right, so it's the same statement as the triangle removal lemma, except now for a general graph H. And as you would expect, the proof is more or less the same as that of the triangle removal lemma, once we have the H counting lemma. So let me remind you how this goes. It's really the same proof as triangle removal, where there was this recipe for applying the regularity lemma from last time. So what is it? What's the first step when you do regularity?
You partition. So let's do the partition. OK. So apply the regularity lemma to do the partitioning. And what's the second step? We clean this graph. And you do the same cleaning procedure as in the triangle removal lemma, except maybe you have to adjust the parameters somewhat: remove edges in low density pairs, in irregular pairs, and those touching small vertex sets. And the last step-- OK, so what do we do now? You count. So if there were any copy of H left, then the counting lemma shows you that you must have lots of copies of H left. So now let me show you how to use this strategy. Now that we have this general graph counting lemma, we'll prove the Erdos-Stone-Simonovits theorem, whose proof we omitted from the first part of the course. So to remind you, the Erdos-Stone-Simonovits theorem says that if you have a graph H, then the extremal number of H is equal to 1 minus 1 over the chromatic number of H minus 1, plus little o of 1, all times N squared over 2-- a quantity which depends only on the chromatic number of H. The lower bound comes from taking the Turan graph. If you take the Turan graph, you get this lower bound, so it's really the upper bound that we need to think about. All right. So what's the strategy here? The statement really is about a graph G on N vertices whose number of edges is at least that much. OK, so I fix an epsilon bigger than zero, a positive epsilon. So the claim, what we're trying to show with Erdos-Stone-Simonovits, is that if you have a graph G with too many edges-- too many meaning at least 1 minus 1 over r plus epsilon, times N squared over 2, edges-- then G contains a copy of H if N is sufficiently large. OK. So let's use the regularity method, applying this three-step recipe. First, we partition. So partition the vertex set of G into m pieces, in such a way that it is eta regular, for some parameter eta that we'll decide later. The second step is cleaning. The cleaning step, again, is the same kind of cleaning as we've done before. So let's remove an edge from Vi cross Vj if any of the following hold: if Vi, Vj is not eta regular, if the density is too small, or if either of the two sets is too small. So it's the same cleaning as before. And we can check that the number of edges removed is not too large. In the first case-- again, it's the same calculation as last time-- the number of edges removed is, at most, eta N squared. And we'll choose eta to be less than epsilon over 8, although it will actually be significantly smaller, as you will see in a second. For the second type, same as what happened in the triangle removal lemma, the number of edges removed is, at most, that amount, still a very small number. And the third type, edges touching one of the vertex sets that is too small-- there are at most m such sets-- is also a very small number. And so the total is, at most, epsilon over 2, times N squared, edges removed. And I would like the number of remaining edges to still be strictly bigger than the Turan threshold. So now, after removing these edges from G, we have this G prime, which has strictly more than 1 minus 1 over r, times N squared over 2, edges. So now what do we do? We know from Turan's theorem that if your graph has strictly more than this number of edges, you must have a K sub r plus 1. So even after deleting all these edges, G still has lots of edges left; in particular, Turan's theorem implies that G prime contains a clique on r plus 1 vertices. So here I should say that r is the chromatic number of H minus 1.
So I find one copy of this clique, but what does that copy look like? I find this one copy of a clique. Let's say r equals to 4. And the point, now, is that the counting lemma will allow me to amplify that clique into H. So it will allow me to amplify this clique into a copy of H. So, for example, if H were this graph over here, so then you would find a copy of H in G, which is what we want. So why does the counting lemma allow you to do this amplification? So it's this point, the ideas are all there, but there's a slight wrinkle in the calculations. I mean, there's, in the executions, I just want to point out, just in case some of the vertices of H end up in the same vertex in G. But that turns out not to be an issue. So by counting lemma, the number of homomorphisms from H to G prime, where I'm really only considering homomorphisms that map each vertex of H to its assigned part. It's at least this quantity where I'm looking at the predicted density of such homomorphisms, and all of these edge densities are at least epsilon over 8. So it's at least that amount minus a small error that comes from the counting lemma. And, well, all of the vertex parts are quite large, so all of the vertex parts are of size like that. So that's the result of the counting lemma combined with information about the densities of the parts and the sizes of the parts that came out of cleaning. So setting eta to be an appropriate value, we see that for sufficiently large N, this quantity here, is on the order of N to the number of vertices of H. But I'm only counting homomorphisms, and so it could be that some of the vertices of H end up in the same vertex of G. And those would not be genuine subgraphs, so I shouldn't consider those as subgraphs. Because otherwise, if you were to allow those, then if you found this K4, then you found all four chromatic graphs. So you shouldn't consider copies that are degenerate, but that's OK because the number of maps from the vertex set of H to the vertex set of G that are non-injective is of a lower order. The number of non-injective maps from the vertex set, well, you have to pick two vertices of H to map to the same vertex, and then the number of choices, you have one order less. So there are negligible fraction of these homomorphisms. And the conclusion, then, is that G prime contains a copy of H, which is what we're looking for. If G prime contains copy of H, then G contains a copy of H, and that proves the Erdos-Stone-Simonovits theorem. You get a bit more out of this proof. So you see that not only does G contain one copy of H, but the counting lemma actually shows you it contains many copies of H. And this is a phenomenon known as supersaturation, which you already saw in the first problem set, that often when you are beyond a certain threshold, an extremal threshold, you don't just gain one extra copy, but you often gain many copies. And you see this in this proof here. So to summarize, we've seen this proof of Erdos-Stone-Simonovits, which comes from applying regularity and then finding a single copy of a clique from Turan's theorem, and then using counting lemma to boost that copy from Turan's theorem into an actual copy of H. So in the second homework, one of the problems is to come up with a different proof of Erdos-Stone-Simonovits that is more similar to the proof of Kovari-Sos-Turan, more through double-counting like arguments. And that is closer in spirit, although not exactly the same as the original proof in Erdos-Stone. 
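To see the last two points numerically-- part-respecting homomorphism counts landing near the density-product prediction, with degenerate (non-injective) maps contributing only a lower-order term-- here is a small brute-force Python sketch; the graph G is a random toy example of my own, not the graph from the proof.

from itertools import product
import random

def count_part_homs(H_edges, parts, G_adj):
    # Maps sending vertex i of H into parts[i] so that every edge of H lands on an edge of G.
    total = injective = 0
    for image in product(*parts):
        if all(image[j] in G_adj[image[i]] for i, j in H_edges):
            total += 1
            injective += len(set(image)) == len(image)
    return total, injective

random.seed(0)
n, p = 30, 0.5
V0, V1 = list(range(n)), list(range(n, 2 * n))
G_adj = {v: set() for v in V0 + V1}
for u in V0:                                   # a random bipartite graph of density about p
    for w in V1:
        if random.random() < p:
            G_adj[u].add(w)
            G_adj[w].add(u)

# H is the path 0 - 1 - 2; its two endpoints are assigned to the same part,
# mirroring how several vertices of H share a color class in the argument above.
H_edges = [(0, 1), (1, 2)]
parts = [V0, V1, V0]
total, injective = count_part_homs(H_edges, parts, G_adj)
print(total, p ** 2 * n ** 3)                  # actual count vs. the density-product prediction
print(total - injective)                       # degenerate maps: a lower-order fraction of the total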
So this regularity proof, I think it's more conceptual. You get to see how to do this boosting, but it gets a terrible bound. And the other proof that you see in the homework gives you a much more reasonable bound on the dependence between how N grows versus how quickly this little o has to go to zero.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
1_A_bridge_between_graph_theory_and_additive_combinatorics.txt
YUFEI ZHAO: OK, let's, get started. Welcome to 18.217. So this is combinatorial theory, graph theory, and additive combinatorics. So course website is up there. So all the course information is on there. So after around the middle of the class, I'll say a bit more about various course information, administrative things. But I want to jump directly into the mathematical content. So this course roughly has two parts. The first part will look at graph theory, in particular problems in extremal graph theory. In the second part, we'll transition to additive combinatorics. But these are not two separate subjects. So I want to show you this topic in a way that connects these two areas and show you that they are quite related to each other. And many of the common themes that will come up in one part of the course will also show up in the other. So the story between graph theory and additive combinatorics began about 100 years ago with Schur, the famous mathematician, Isaai Schur. Well, he was like many mathematicians of his era trying to prove Fermat's Last Theorem. So here's what's Schur's approach. He said, well, let's look at this equation that comes up in from Fermat's Last Theorem. And, well, one of the methods of elementary number theory to rule out solutions to an equation is to consider what happens when you mod p. If you can rule out for infinitely many values p, possible non-trivial solutions to this equation mod p, then you will rule out possibilities of solutions to Fermat's Last Theorem. OK, so this was Schur's approach. As you can guess, unfortunately, this approach did not work. And Schur proved that this method definitely doesn't work. So that's the starting point of our discussion. So it turns out that for every value of n, there exists non-trivial solutions for all p sufficiently large. So thereby, ruling out the strategy. So let's see how Schur proved his theorem. So that will be the first half of today's lecture. So this seems like a number theory question. So what does it have to do with graph theory? So I wanted to show you this connection. Now, Schur deduced his theorem from another result. That is known as Schur's Theorem, which says that if be positive integers is colored using finitely many colors, then there exists a monochromatic solution to the equation x plus y equals to z. So if you give me 10 colors and color the positive integers using those 10 colors, then I can find for you a solution to this equation where x, y, and z are all of the same color. Now, this statement-- OK, so it's a perfectly understandable statement. But let me rephrase it in a somewhat different way. And this gets to a point that I want to discuss where many statements in additive combinatorics or just combinatorics in general have different formulations, one that comes in an infinitary form, which is more qualitative so to speak and another form that is known as finitary. And that's more quantitative in nature. So Schur's Theorem is stated in a infinitary form. So it tells you if you color using finitely many colors, then there exists a monochromatic solution. So many, but not all, statements of that form have an equivalent finitary form that is sometimes more useful. And also, once you stay the right finitary form, you can ask additional questions. So here's what Schur's Theorem looks like in the equivalent finitary form. You give me an r. 
For every r, there exists some N as a function of r, such that if the numbers 1 through N-- so throughout this course, I'm going to use this bracket N to denote the integers up to N-- are colored using r colors, then necessarily there exists a monochromatic solution to the equation x plus y equals z, where x, y, and z are in the set that is being colored. So it looks very similar to the first version I stated, but now there are some more quantifiers: for every r, there exists an N. So why are these two versions equivalent to each other? It's not too hard to deduce their equivalence, so let me do that now. The fact that the finitary version implies the infinitary version should be fairly obvious. Once you know the finitary version, if you give me a coloring of the positive integers, well, I just have to look far enough, up to this N, and I get the conclusion I want. But now, in the other direction, suppose I fix this r. OK, so I assume the infinitary version, and I want to deduce the finitary version. I start with this r, and let's suppose the conclusion were false-- namely, for every N there exists some coloring, which we will call phi sub N, that avoids monochromatic solutions to x plus y equals z. So I'm going to use this as shorthand for monochromatic. So suppose such colorings exist. Now I want to take this collection of colorings and produce for you a coloring of the positive integers. And you can do this basically by a standard diagonalization trick. Namely, we take an infinite subsequence of the phi sub N's such that phi sub N of k stabilizes along the subsequence for every k. You can do this simply by a diagonalization trick. And then we see that, along this subsequence, phi sub N converges pointwise to some coloring of the entire set of positive integers. And this coloring avoids monochromatic solutions to x plus y equals z, because if there were a monochromatic solution in this coloring of the entire integers, then I could look back to where that came from, and that would have been a monochromatic solution in one of my phi sub N's. So this is an argument that shows the equivalence between the finitary form and the infinitary form. But now, when we look at the finitary form, you can ask additional questions, such as, how big does this N have to be as a function of r? It turns out those kinds of questions are in general very difficult. We know some things-- for this type of question, we usually know some bounds-- but the truth is usually unknown. And there are major open problems in combinatorics of this type. So there's still a lot that we do not understand. OK, so now we have Schur's theorem in this form. Let me show you how to deduce his conclusion about ruling out this approach to proving Fermat's Last Theorem. The claim is the following: if you have a positive integer n, then for all sufficiently large primes p, there exist x, y, and z, all belonging to the integers from 1 up to p minus 1, such that x to the n plus y to the n is congruent to z to the n, mod p. So it's a solution to Fermat's equation mod p. All right, so how can we deduce this from what we said about coloring? What is the coloring? OK, so here's what Schur did-- a proof assuming, for now, Schur's theorem.
So let's look at the multiplicative group of non-zero residues mod p. We know it's a cyclic group, because there's a generator-- there's a primitive root. Let H denote the subgroup of n-th powers. Well, H is a pretty big subgroup. So what's the index of H in this multiplicative group? It's at most n. Think about representing this as a cyclic group using a generator. Then H contains all the elements whose exponent is divisible by n. So the index is at most n. It could be smaller, but it's at most n. And so, in particular, I can use the H cosets to partition the multiplicative group of non-zero residues. And this is a coloring-- a partition is the same thing as a coloring. There is a bounded number of colors, but I let p be large. So by Schur's theorem, if p is sufficiently large, then one of my cosets contains a solution to x plus y equals z. What does that look like? That one H coset contains x, y, z such that x plus y equals z as integers, and they belong to the same coset. So x, y, and z belong to some coset of H, which means that x equals a times some n-th power, y equals a times some n-th power, and z equals a times some n-th power, all with the same a. Plug these into the equation x plus y equals z. Now, mod p, I can cancel the a's. And this produces a non-trivial solution to Fermat's equation mod p. OK, so this was the proof of the claim that this method does not work for solving Fermat's Last Theorem. But, you know, we assumed this claim of Schur's theorem, that every finite coloring of the positive integers contains a monochromatic solution to x plus y equals z. So we still need to prove that claim. So we still need to prove this combinatorial claim. And that's what we're going to do now. This is where graph theory comes in. So let me state a very similar-looking theorem about graphs. This is known as Ramsey's theorem, although Ramsey's theorem actually historically came after Schur's theorem. Here, we're going to use it specifically in the case of triangles. So what does it say? If you give me an r, the number of colors, then there exists some large N such that if the edges of the complete graph K sub N, on N vertices, are colored using r colors, then there exists a monochromatic triangle somewhere. Any questions so far about any of these statements? So let's see how Ramsey's theorem for triangles is proved. By the way, I want to give you a historical note about Frank Ramsey. He's someone who made significant contributions to many different areas, not just in mathematics. He contributed seminal works in mathematical logic, where this theorem came from, but also in philosophy and in economics, before his untimely death at the age of 26 from liver-related problems. So he's someone whose very short life contributed tremendously to academics. So let's see how Ramsey's theorem, in this case, is proved. We'll do induction on r, the number of colors. So for every r, I need to show you some N such that the statement is true. In the first case, when r equals 1, there's not much to do. Just one color: if I have three vertices, that's already OK-- three vertices, that's already a monochromatic triangle. So from now on, let r be at least 2. And suppose the claim holds for r minus 1 colors, with N prime being the corresponding number of vertices for r minus 1 colors. So now let me pick an arbitrary vertex v and look at what happens.
So here's v. And let me look at the outgoing edges. So we'll show that N being r times N prime minus 1, plus 2, works. Now we have a lot of outgoing edges. In particular, we have r times N prime minus 1, plus 1, outgoing edges. So by the pigeonhole principle, some color-- so there exist at least N prime outgoing edges with the same color, let's say yellow. So suppose yellow is that outgoing color. And let me call the set of vertices on the other end of these edges V0. So now let's think about what happens in V0. Either V0 contains a yellow edge, in which case you get a yellow triangle, or we lose that color inside V0, so the number of colors goes down. In the latter case, V0 has at most r minus 1 colors, and V0 has at least N prime vertices. So by induction, V0 has a monochromatic triangle in the remaining colors. So that completes the proof of Ramsey's theorem, in this case, for triangles. And if you wish to find out what bound comes out of this argument, well, you can chase through the proof and get some bound. The remaining question now is, what does this all have to do with Schur's theorem? So far, we've talked about some number theory, we've talked about some graph theory, and now we want to link these two things together. And I think this is a great example-- a fairly simple example, which I'm about to show you-- of how to link these two ideas together. And this connection we'll see many times in the rest of this course. I don't want to erase Schur's theorem. So let's prove Schur's theorem. Let's start with a coloring phi of 1 through N. And I want to form a graph with colors on the edges that are somehow derived from this coloring of these integers. Here's what I'm going to do. Let's color the edges of the complete graph on the vertex set having N plus 1 vertices, labeled by the positive integers 1 up to N plus 1. By the Ramsey result we just proved, if N is large enough, then there exists a monochromatic triangle. So what does it look like? Let me draw for you a monochromatic triangle. Well, I haven't told you what the coloring is yet. The coloring is that I color the edge between i and j using the color obtained by applying phi to the number j minus i-- namely, the length of that segment if I lay out all the vertices on the number line. So now I have an r-coloring of this complete graph. So Ramsey tells us that there exists a monochromatic triangle. The triangle sits on vertices i, j, and k. And the rule tells us that the colors are phi of k minus i, phi of j minus i, and phi of k minus j. So these three numbers have the same color. But look: if I set these numbers to be x, y, and z-- so x being j minus i and y being k minus j, say, with z being k minus i-- then x plus y equals z, and they all have the same color. So this monochromatic triangle gives us a monochromatic solution to the equation x plus y equals z, thereby concluding the proof of Schur's theorem. OK, so this rounds out the discussion for now: we started with a statement about number theory, then we took a detour to graph theory, looking at Ramsey's theorem for monochromatic triangles, and then we went back to number theory and proved the result that Schur did. So how does going to graphs help? Why was this advantageous? What do you guys think? So I claim that by going to graphs, we added some extra flexibility to what we can play with.
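Here is a minimal Python sketch of the reduction just carried out: given a coloring phi of {1, ..., N}, color the edge between i and j of the complete graph on {1, ..., N + 1} by phi(j - i), and read off a monochromatic solution of x + y = z from a monochromatic triangle. The example coloring phi is an arbitrary choice of mine, and I am taking for granted that N is large enough for a monochromatic triangle to exist.

from itertools import combinations

def monochromatic_schur_triple(phi, N):
    for i, j, k in combinations(range(1, N + 2), 3):        # i < j < k
        if phi(j - i) == phi(k - j) == phi(k - i):          # a monochromatic triangle
            return j - i, k - j, k - i                       # x, y, z with x + y = z, same color
    return None

phi = lambda n: bin(n).count("1") % 2      # some 2-coloring of the positive integers
print(monochromatic_schur_triple(phi, 8))  # e.g. (1, 1, 2): 1 + 1 = 2, all the same color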
For example, we started out with a problem where there were only N things being colored. And then we moved to graphs where about-- well, N choose 2 or N squared objects are being colored. And then we did an induction argument. So remember in the proof of Ramsey's theorem up there, there was an induction argument taking all vertices. And that argument doesn't make that much sense if you stayed within the numbers. Somehow moving to graphs gave you that extra flexibility allow you to do more things. And this is one of the advantages of moving from problem about numbers to a problem about graphs. And we'll see this connection later on as well. Yeah? AUDIENCE: Sort of related to that. Are there better bounds known for this specific, like Schur's result of that power on e, because the N's here would be pretty bad. YUFEI ZHAO: Right, so Ashwan asked, so what about bounds? So what do we know about bounds? So I don't know off the top of my head the answers to those questions. But in general, they're quite open. So there are exponential gaps between lower and upper bounds on our knowledge of what is the optimal N you can put in the theorem. Any more questions? All right so, I think this is a good point for us to-- so usually when I give 90-minute lectures, I like to take a short 2-minute break in between. So I want to do that. And then in the second half, I want to take you through a tour of additive combinatorics. So tell you about some of the modern developments. Now, this is an exciting field where it started out, I think, roughly with Schur's theorem that we just discussed. That started about 100 years ago. But a lot has taken place in the past century. And there's still a lot of ongoing exciting research developments. So in the second half of this lecture, I want to give you a tour through those developments and show you some of the highlights from additive combinatorics. So let's take a quick 2-minute break. And feel free to ask questions in the meantime. So another part of the writing assignment in addition to course notes is a contribution to Wikipedia, which is, you know, nowadays, of course, you know, if you hear some word like Szemeredi 's regularity lemma the first thing you do is type into Google. And more often than not the first link that comes up is Wikipedia. And, you know, some of the articles, they are all right, and some of them are really not all right. And it would be fantastic for future students and also for yourselves if there were better entry points to this area by having higher quality Wikipedia articles or articles that are simply missing about specific topics. So one of the assignments-- again, this can be collaborative. So I'll give you more information how to do that later-- is to contribute to Wikipedia and roughly contribute one high quality article or edit some existing articles so that they become high quality. Yep. AUDIENCE: Can we something similar to LMDB with creating a website that has all the information needed in combinatorics? YUFEI ZHAO: So we can talk about that. So if there are other ideas about how to do this, we can definitely open the chatting about that. So the other thing is that instead of holding the usual office hours, what I like to do is-- so this class ends at 4:00 PM. So after 4:00, I'll go up to the Math Common Room, which is just right upstairs and hang out there for a bit. If you have questions, you want to chat, come talk to me. I'd be happy to chat about anything related or not related to the course. 
And before homeworks are due, I will try to set up some special office hours for you, in case you want to ask about homework problems. And if you want to meet with me individually, please just send me an email. Oh, one more thing about the course notes. Because I want to do quality control, here is the process that will happen with the course notes. The first lecture is already online, so you can already see it-- I've written up the lecture notes for the first lecture, and you can use that as an example of what I'm looking for. I'm looking for people to sign up starting from the next lecture, and I will send out a link tonight. For future lectures, whoever writes up a given lecture: within one day, so by the end of the day after the lecture, it would be good if there were already at least some sketch, some rough draft, at least containing the theorem statements and whatnot from the day's lecture, so that the next person can start writing afterwards. But once you feel that you have a polished version of the write-up, ideally within four days of the lecture-- in terms of expectations and timelines, again, all of this information is online-- send me an email, both co-authors if there are two of you, and I will schedule an appointment, about half an hour, where I will sit down with you to go through what you've written and give you some comments, so you can go back and polish it further. And hopefully, that will just be a one-round thing. If more rounds are needed, well, it's not ideal, but we'll make it happen until the notes are ready to use for future generations. OK, any questions about any of the course logistics? All right, so in the second half of today's lecture, I want to take you through a tour of modern additive combinatorics. This is an area of research which I am actively involved in, and it's something that I am quite excited about. And part of the reason why I teach this course-- this course is something that I developed a couple of years ago when I taught it for the first time-- is because I want to introduce you guys to this very active and exciting area of research. Now, what is additive combinatorics? The term itself is actually fairly new. The term, additive combinatorics, I believe was coined by Terry Tao back in the early 2000s, as somewhat of a rebranding of an area that already existed, but that then got a lot of exciting developments in the early 2000s. It's a deep and far-reaching subject with many connections to areas like graph theory, harmonic analysis or Fourier analysis, ergodic theory, discrete geometry, logic and model theory-- it has many connections all over the place, and also many deep theorems. So let me take you through a historical tour of, I think, some of the major milestones and landmarks in additive combinatorics. After Schur's theorem, which we discussed in the first half of today's lecture, the next big result, I would say, is Van der Waerden's theorem, which was 1927. Van der Waerden's theorem says that every coloring of the positive integers using finitely many colors contains arbitrarily long monochromatic arithmetic progressions. We'll see arithmetic progressions come up a lot, so from now on we'll abbreviate them by AP-- AP stands for arithmetic progression. So instead of Schur's theorem, where you just find a single solution to x plus y equals z, now we're finding a much bigger structure.
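As a small finitary illustration of this statement, here is a brute-force Python check of the first non-trivial case-- 3-term progressions and two colors-- where the relevant van der Waerden number is 9.

from itertools import product

def every_2_coloring_has_mono_3ap(N):
    # Does every 2-coloring of {1, ..., N} contain a monochromatic 3-term AP?
    for coloring in product((0, 1), repeat=N):               # coloring[i - 1] is the color of i
        if not any(coloring[a - 1] == coloring[a + d - 1] == coloring[a + 2 * d - 1]
                   for a in range(1, N + 1)
                   for d in range(1, (N - a) // 2 + 1)):
            return False                                     # found a coloring with no monochromatic 3-AP
    return True

print(every_2_coloring_has_mono_3ap(8))   # False: 8 is not enough
print(every_2_coloring_has_mono_3ap(9))   # True: every 2-coloring of {1, ..., 9} works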
Keep in mind, a novice mistake people make is to confuse arbitrarily long arithmetic progressions with infinitely long ones. So these are definitely not the same. I'll leave it to you as an exercise-- it's also a homework exercise-- that you can color the integers with just two colors in a way that destroys all possible infinitely long monochromatic arithmetic progressions. So arbitrarily long is very different from infinitely long. Now, so this was a great result, but it provokes more questions. So Erdos and Turan in the '30s, they asked-- well, they conjectured that the true reason in Van der Waerden's theorem for having long arithmetic progressions is not so much the coloring. It's just that if you use finitely many colors, then one of the color classes must have fairly high density. So one of the classes, if you use r colors, has density at least 1 over r. And they conjectured that every subset of the positive integers, or the integers, with positive density contains arbitrarily long arithmetic progressions. You may ask, what does it mean, density? So you can define density in many different ways. And it doesn't actually really matter that much which definition you use. But let me write down one definition. So given a subset A of the integers, you can define the upper density, or rather, let me just say that A has positive upper density, if when we take the lim sup as n goes to infinity of the fraction of the window 1 through n that lies in A, this number, this lim sup, is positive. So that's one definition of positive density. There are many other definitions, sometimes known as the Banach density. And you can take variations. I mean, for the purpose of this discussion, they're all roughly equivalent. So let's not worry too much about which definition of density we use here. All right, so Erdos and Turan conjectured that the true reason for Van der Waerden's theorem is that one of the color classes has positive density. And this turned out to be an amazingly prescient conjecture, and one had to wait several decades. So this conjecture was made in the '30s, in 1936. So you had to wait several decades before finding out what the answer is. So in a foundational theorem in the subject, known as Roth's theorem-- Roth proved it in 1953, in the '50s-- the case k equals 3 is true, where k refers to the statement that the set contains k-term arithmetic progressions. That is, Roth proved that every positive density subset contains a 3-term arithmetic progression. And already, Roth introduced very important ideas that we will see in this course in two different forms. So in the first half of the course, we'll see a graph theoretic proof of Roth's theorem that was found later, in the '70s. And then in the second half, we'll see Roth's original proof that used Fourier analysis. So Fourier analysis in number theory is also known as the Hardy-Littlewood circle method. It's a powerful method in analytic number theory. But there are very interesting new ideas introduced by Roth as well in developing this result. The full conjecture was settled by Szemeredi. It took another couple of decades. So in the 1970s, Szemeredi proved his landmark theorem that confirmed the Erdos-Turan conjecture. Szemeredi's theorem is a deep theorem. The original combinatorial proof is a tour de force.
And you can look at the introduction of his paper, where there is an enormously complex diagram-- so you can see this in the course notes-- that lays out the logical dependencies of all the lemmas and propositions in his paper. And even if you assume every single statement is true, looking at that diagram, it's not immediately clear what is going on because the logical dependencies are so involved. So this was a really complex proof. But not only that, Szemeredi's theorem actually motivated a lot of subsequent research. So later on, researchers from other areas came in and found also sophisticated proofs of Szemeredi's theorem from other areas and using other tools, including-- and here are some of the most important perspectives, later perspectives, on Szemeredi's theorem. So there was a proof using ergodic theory that followed fairly shortly after Szemeredi's original proof. This is due to Furstenberg. And initially, it wasn't clear, because all of these proofs were so involved. It wasn't clear if the ergodic theoretic proof was genuinely something new, or it was a rephrasing of Szemeredi's combinatorial proof. But then very quickly it was realized that there were extensions of Szemeredi's theorem, other combinatorial results that the ergodic theorists could establish using their methods, so using the same methods or extensions of the same methods, that combinatorialists did not know how to do. And to this date, there are still theorems for which the only known proofs use ergodic theory, so extensions of Szemeredi's theorem. And I will mention one later on today. So that's one of the perspectives. The other perspective that was also quite influential there is something known as higher order Fourier analysis, which was pioneered by Tim Gowers in around 2000. So Gowers won the Fields Medal, partly for his work on Banach spaces but also partly for this development. So higher order Fourier analysis is in some sense an extension of Roth's Fourier analytic approach. So anyway, Roth also won a Fields Medal, although this is not his most famous theorem. I'd say it's his second most famous theorem. So Roth used this Fourier analysis in the sense of Hardy-Littlewood to control 3-term arithmetic progressions. But it turns out that that method, for very good fundamental reasons, completely fails for 4-term arithmetic progressions. So we'll see later in the course why that's the case, why it is that you cannot do Fourier analysis to control 4-term APs. But Gowers managed to find a way to overcome that difficulty. And he came up with an extension, with a generalization of Fourier analysis, very powerful, very difficult to use, actually. But that allows you to understand longer arithmetic progressions. Another very influential approach is called hypergraph regularity. So the hypergraph regularity method was also discovered in the early 2000s independently by a team led by Rodl and also by Gowers. So the hypergraph regularity method is an extension of what's known as Szemeredi's regularity, Szemeredi's graph regularity method. And this is the method that will be a central topic in the first half of this course. And it's a method that is quite central, or at least some of its ideas are quite central, to Szemeredi's proof. And Ruzsa and Szemeredi gave an alternative proof of Roth's theorem using graph theory. And for a long time, people realized that one could extend some of those ideas to hypergraphs.
But working out how that proof goes actually took an enormous amount of time and effort and resulted in this amazing theorem on hypergraphs. Let me mention these are not the only methods that were used to extend Szemeredi's theorem or give alternate proofs. There are many others. For example, you may have heard of something called the polymath project. Raise your hand if you've heard of the polymath project. OK, great. So maybe about half of you. So this is an online collaborative project started by Tim Gowers and also famous people like Terry Tao. And they were all quite involved in various polymath projects. And the first successful polymath project produced a combinatorial proof of something known as the density Hales-Jewett theorem. So I won't explain what it is here. So it's something which is related to tic tac toe. But let me not go into that. So it's a deep combinatorial theorem that had been known earlier via ergodic theoretic methods, but they gave a new combinatorial proof, which in particular gave some concrete bounds on this theorem, and the theorem in particular also implies Szemeredi's theorem. So this gave a new proof. And as a result, they-- it's an online collaborative project-- so they published this paper under the pseudonym DHJ Polymath, where DHJ stands for Density Hales Jewett. And they kept the same name for all of the subsequent papers published by the polymath project. So you see through all of these examples that there was a lot of work motivated by Szemeredi's theorem. This is truly a foundational result, a foundational theorem that gave way to a lot of important research. And Szemeredi himself received the Abel Prize for his seminal contributions to combinatorics and also theoretical computer science. We still don't, in some sense, completely understand Szemeredi's theorem-- for example, we do not understand the optimal bounds. And also more importantly, conceptually, we don't really understand how these methods are related to each other. So there's some vague sense that they all have some common things. But there is a lot of mystery as to what these methods coming from very different areas-- ergodic theory, harmonic analysis-- what do they all have to do with each other? But there is a central theme. And this is also going to be a theme in this course, which goes under the name-- and I believe Terry Tao is the one who popularized this name-- the dichotomy between structure and randomness, structure and pseudorandomness. Somehow it's a really fancy way of saying signal versus noise. So I give you some object, I give you some complex object, and there is some mathematical way to separate the structure from some noisy aspects, which behave random-like. So there will be many places in this course where this dichotomy will play an important role. Any questions at this point? I want to take you through some generalizations and extensions of Szemeredi's theorem. So first, let's look at what happens if we go to higher dimensions. Suppose we have a subset of the d-dimensional lattice. So we can also define some notion of density. Again, it doesn't matter precisely which notion you use. For example, we can say that it has positive upper density if the analogous lim sup, now taken over d-dimensional boxes, is positive. So Szemeredi's theorem in one dimension tells us that if you have some sort of positive density, then I can find arbitrarily long arithmetic progressions. So what should the corresponding generalization be in higher dimensions?
Well, here's a notion that I can define, namely that we say that A contains arbitrary constellations to mean that-- so what does that mean? So a constellation, you can think of it as some finite pattern, so a set of stars in the sky, so some pattern. And I want to find that pattern somewhere in A, where I'm allowed to dilate. So I'm allowed to multiply the pattern by some number and also translate. So, on the finite pattern-- what I mean precisely is that for every finite subset of the grid, there exists some translation and some dilation, such that once I apply this dilation and translation to my pattern F, meaning I'm looking at the image of this F under this transformation, then this set lies inside A. So you see that an arithmetic progression is the constellation given, in one dimension, by just the numbers 1 through k. So that's a definition. And the multi-dimensional Szemeredi's theorem-- so the multi-dimensional generalization of Szemeredi's theorem says that every subset of the d-dimensional lattice of positive density contains arbitrary constellations. You give me a pattern, and I can find this pattern inside A, provided that A has positive density. So in particular, if I want to find a 10 by 10 square grid, so meaning suppose I want to find a pattern which consists of something like that, a 10 by 10 square grid, where all of these lengths are equal, but I don't specify what they are. But as long as they are equal, then the theorem tells me that as long as A has positive density, then I can find such a pattern inside A. So this theorem was proved by Furstenberg and Katznelson. So you see that it is a generalization of Szemeredi's. So the one-dimensional case is precisely Szemeredi's theorem. So Furstenberg and Katznelson, using ergodic theory, showed that one can generalize Szemeredi's theorem to the multi-dimensional setting. However, the combinatorial approaches employed by Szemeredi did not easily generalize. So it took another couple of decades at least for people to find a combinatorial proof of this result. And namely that happened with the hypergraph regularity method. So this was one of the motivations of this project. And you say, OK, what's the point of having different proofs? Well, for one thing it's nice to know different perspectives on an important theorem. But there's also a concrete objective. In particular, it turns out that if you prove something using ergodic theory, because-- we will not discuss ergodic theory in this course. But roughly, one of the early steps in such a proof applies compactness. And that already destroys any chance of getting concrete quantitative bounds. So you can ask, if I want to find a 10 by 10 pattern and I have density 1%, how large do I need to look? How far do I have to look in order to find that pattern? So that's a quantitative question that is actually not at all addressed by ergodic theory. So the later combinatorial methods gave you concrete bounds. And so there are some concrete differences between these methods. So this theorem reminds me of the scene from the movie A Beautiful Mind, which is one of the greatest mathematical movies in some sense. And so there's a scene there where Russell Crowe, playing John Nash-- so they were at this fancy party. And Nash was with his soon to be wife, Alicia. And he points to the sky and tells her, pick a shape. Pick a shape and I can find it for you among the stars. And so this is what the theorem allows you to do. So give me a shape and I can find that constellation inside A.
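To make this definition of containing a constellation concrete, here is a small illustrative Python sketch (an addition to the transcript, not something from the lecture): given a finite pattern F in the two-dimensional grid and a set A, it searches over translations z and dilations k for a copy z + k*F inside A. Since it only searches a finite window, it can find patterns but of course can never certify that a set has none.

```python
def find_constellation(A, F, max_dilation, window):
    """Search for a translation z and dilation k >= 1 with z + k*F contained in A.

    A: set of integer points in Z^2; F: finite pattern (list of integer points);
    window: candidate translations to try. Returns (z, k) or None.
    """
    A = set(A)
    for k in range(1, max_dilation + 1):
        for z in window:
            if all((z[0] + k * x, z[1] + k * y) in A for (x, y) in F):
                return z, k
    return None

# Toy example: A consists of the points (x, y) with x + y even inside a box.
A = {(x, y) for x in range(30) for y in range(30) if (x + y) % 2 == 0}
F = [(0, 0), (1, 0), (0, 1), (1, 1)]    # a unit square as the pattern
print(find_constellation(A, F, max_dilation=5,
                         window=[(x, y) for x in range(20) for y in range(20)]))
# Prints ((0, 0), 2): no dilation-1 copy of the square fits in A, but the doubled square does.
```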
Let's look at other generalizations. So far, we are looking at linear patterns. So we're looking at linear dilations and translations. But what about polynomial patterns? So here's a question. Suppose I give you a dense subset, a positive density subset of integers. Can you find two numbers whose difference is a perfect square? So this question was asked by Lovasz. And a positive answer was given in the late '70s by Furstenberg and Sarkozy independently. So Furstenberg and Sarkozy, they showed using different methods-- so one ergodic theoretic and the other more harmonic analytic-- that every subset of the integers, so every subset of the positive integers, with positive density contains two numbers differing by a perfect square. So in other words, we can always find the pattern x, x plus y squared. So what about other polynomial patterns? Instead of this y squared, suppose you just give me some other polynomial or maybe a collection of polynomials. So what can I say? Well, there are some things for which this is not true. Can you give me an example where, if I put in the wrong polynomial, it's not true? What if the polynomial is the constant 1? If you take the even numbers, that has density 1/2, but it doesn't contain any pattern x and x plus 1. So I need to put some hypotheses on these polynomials. So a vast generalization of this result, known as the polynomial Szemeredi theorem, says that if A is a subset of integers with positive density, and if we have these polynomials, P1 through Pk, with integer coefficients and zero constant terms, then I can always find a pattern. So there exists some x and positive integer y such that this pattern, x plus P1 of y, x plus P2 of y, and so on, x plus Pk of y, they all lie in A. So in other words, succinctly, every subset of integers with positive density contains arbitrary polynomial patterns. So this was an important result proved by Bergelson and Leibman using ergodic theory. And so far, for this general statement, the only known proof uses ergodic theory. There have been some recent, pretty exciting developments showing that in some specific cases, where you have some additional restrictions on the P's, there are other methods, Fourier analytic, harmonic analytic methods, that give you a different proof and allow you to get some bounds. Remember, the ergodic proof gives you no bounds. But so far, in general, the only method known is ergodic theoretic. And actually, Bergelson and Leibman proved something which is more general than what I've stated. So this is also true in a multidimensional setting. I won't state that precisely, but you can imagine what it is. Let me mention one more theorem that many of you, I imagine, have heard of. And this is the Green-Tao theorem. So the Green-Tao theorem says that the primes contain arbitrarily long arithmetic progressions. So this is a famous theorem. And it's one of the most celebrated results of the past couple of decades. And it resolved some longstanding folklore conjectures in number theory. The Green-Tao theorem, well, you see that in form it looks somewhat like Szemeredi's theorem. But it doesn't follow from Szemeredi's theorem. Well, the primes, they don't have positive density. The prime number theorem tells us that their density decays like 1 over log n. So what about quantitative versions of Szemeredi's theorem?
It is possible-- although we do not know how to prove such a statement-- that the density of the primes alone might guarantee the Green-Tao theorem, in that it is possible that Szemeredi's theorem is true for any set whose density decays like that of the prime numbers, like 1 over log n. But we're quite far from proving such a statement. And that's not what Green and Tao did. Instead, they took Szemeredi's theorem as a black box and applied it to some variant of the primes and showed that inside this variant, Szemeredi's theorem is also true, and that the primes sit inside this variant of the primes, known as pseudoprimes, as a set of relative positive density, thereby transferring Szemeredi's theorem from the dense setting to a sparser setting. So this is a very exciting technique. And as a result, Green-Tao proved not just that the primes contain arbitrarily long arithmetic progressions, but that every relatively dense, so relatively positive density, subset of the primes contains arbitrarily long arithmetic progressions. To prove this theorem they incorporated many different ideas coming from many different areas of mathematics, including harmonic analysis, some ideas coming from combinatorics, and number theory as well. So there were some innovations at the time in number theory that were employed in this result. So this is certainly a landmark theorem. And although we will not discuss a full proof of the Green-Tao theorem, we will go into some of the ideas through this course. And I will show you bits and pieces that we will see throughout the course. So this is meant to be a very fast tour of what happened in the last 100 years in additive combinatorics, taking you from Schur's theorem, which was really about 100 years ago, to something that is much more modern. But now, instead of being up in the stars, let's come back down to Earth. And I want to talk about what we'll do next. So what are some of the things that we can actually prove that don't involve taking up 50 pages and a complex logical diagram, as Szemeredi did in his paper? So what are some of the simple things that we can start with? Well, first, let's go back to Roth's theorem. So Roth's theorem, we stated it up there. But let me restate it in a finitary form. So Roth's theorem is the statement that every subset of the integers 1 through n that avoids 3-term arithmetic progressions must have size little o of n. So earlier we gave an infinitary statement, that a positive density subset of the integers contains a 3-AP; this is an equivalent finitary statement. Roth's original proof used Fourier analysis. And a different proof was given in the '70s by Ruzsa and Szemeredi using graph theoretic methods. So what does graph theory have to do with this result? And this shouldn't be surprising at this point, given that we already saw how we used Ramsey's theorem, a graph theoretic result, to prove Schur's theorem, which is something that is number theoretic. So something similar happens. But now, the question is what is the graph theoretic problem that we need to look at? So for Schur's theorem it was Ramsey's theorem for triangles. But what about for Roth's theorem? A naive guess is the following. So what's the question that we should ask? Here's a somewhat naive guess, which turns out not to be the right question, but still an interesting question, which is what is the maximum number of edges in a triangle-free graph on n vertices?
Now, this is not totally a stupid guess, because as you imagine from what we said with Schur's theorem, somehow you want to set up a graph so that the triangles correspond to the 3-term arithmetic progressions. And you want to set it up in such a way that this question about what's the maximum size subset of 1 through n without 3-APs translates into some question about what's the maximum number of edges in a graph that has some property. So what is that property? So this is not a totally stupid guess. But it turns out this question is relatively easy. Still, it has a name. So this was found by Mantel about 100 years ago, so it's known as Mantel's theorem. And the answer, well, we'll see a proof. So the first thing we'll do in the next lecture is prove Mantel's theorem, but I don't want to hold you in suspense. I mean the answer, it turns out to be fairly simple to describe. Namely, you split the vertices into two basically equal halves. And you join all the possible edges between the two halves. So it's this complete bipartite graph with two equal-sized parts. And it turns out this graph, you can see it's triangle-free, and it also turns out to have the maximum number of edges. Yeah, question. AUDIENCE: What are the asymptotics for 3-term arithmetic progressions of-- YUFEI ZHAO: Let me get to that in a second. OK, so I'll talk about asymptotics in a second. So it turns out that this is not the right graph theoretic question to ask. So what is the right graph theoretic question to ask? I'll tell you what it is. I mean it shouldn't be clear to you at this point. It still seems like an interesting question, but it's also somewhat bizarre to think about if you've never seen this before. So what is the maximum number of edges in an n vertex graph, where every edge lies in exactly one triangle? So I want a graph with lots and lots of edges where every edge sits in exactly one triangle. Now, you might have some difficulty even coming up with good graphs that have this property. And that's OK. These are very strange things to think about. But we'll see many examples of it later on. We'll also see how Roth's theorem is connected to this graph theoretic question. Just to give you a hint, you know, where does exactly one triangle come from? It's because even if you avoid 3-term arithmetic progressions, there are still these trivial 3-term arithmetic progressions, where you keep the same number three times. And in the graph theoretic world, that corresponds to the unique triangle that every edge sits on. So to address the question about quantitative bounds for Roth's theorem, it turns out that we have upper bounds and lower bounds. And it is still a wide open question as to what these things should be. And roughly speaking, the best lower bound comes from a construction, which we'll see later in this course, achieving sets of size around n divided by e to the c root log n. And the best upper bound is of the form roughly n over log n. It's maybe a little bit hard to think about how these numbers behave. So if you rewrite both denominators in the form e to the something, then it's maybe easier to compare. But it's still a pretty far gap. So still a pretty big gap. There's a famous conjecture of Erdos some of you might have heard of, that if you have a subset of the positive integers with divergent harmonic series, then it contains arbitrarily long arithmetic progressions. That's a very attractive statement. But somehow I don't like the statement so much, because it seems to make it look too pretty.
And the statement really is about what the bounds are in Roth's theorem and in Szemeredi's theorem. And having divergent harmonic series is roughly the same as trying to prove Roth's theorem slightly better than the bound that we currently have, somehow breaking this logarithmic barrier. So that conjecture, that having divergent harmonic series implies 3-term APs, is still open. The required bound is very close to what we can prove, but it is still open. For this question, we will see later in this course, once we've developed Szemeredi's regularity lemma, that we can prove an upper bound of little o of n squared. And that will suffice for proving Roth's theorem. It turns out that we don't know what the right answer should be. We don't know what the best such graph is. And it turns out the best known construction for this graph comes from over here, from the best lower bound construction of a large set without 3-term arithmetic progressions. So I'm giving you a preview of more of these connections between additive combinatorics on one hand and graph theory on the other hand that we'll see throughout this course. Any questions? OK. So just to tell you what's going to happen next, the next thing that we're going to discuss is basically extremal graph theory. And in particular, if you forbid some structure, such as a triangle, maybe a four cycle, maybe some other graph, what can you say about the maximum number of edges? And there are still a lot of interesting open problems, even for that. I forbid some H. What's the maximum number of edges? So the next few lectures will be on that topic.
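As a concrete footnote to the finitary form of Roth's theorem and the bounds discussed above (this sketch is an addition to the transcript, not part of the lecture), here is a tiny Python brute force that computes the largest 3-AP-free subset of {1, ..., n} for very small n. It is only meant to make the quantity concrete; the asymptotics mentioned above are, of course, far beyond any brute force.

```python
from itertools import combinations

def is_3ap_free(S):
    """True if the set S contains no nontrivial 3-term arithmetic progression."""
    return not any(2 * b == a + c for a, b, c in combinations(sorted(S), 3))

def max_3ap_free(n):
    """Size of the largest 3-AP-free subset of {1, ..., n} (exponential time, tiny n only)."""
    for size in range(n, 0, -1):
        if any(is_3ap_free(S) for S in combinations(range(1, n + 1), size)):
            return size
    return 0

for n in range(1, 11):
    print(n, max_3ap_free(n))
# For instance, {1, 2, 4, 5, 10, 11, 13, 14} is a 3-AP-free subset of {1, ..., 14}.
```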
MIT 18.217 Graph Theory and Additive Combinatorics, Fall 2019. Lecture 17: Graph limits IV, inequalities between subgraph densities.
PROFESSOR: We spent the last few lectures developing the theory of graph limits. And one of the motivations I gave at the beginning of the lecture on graph limits was that there were certain graph inequalities. Specifically, if I tell you that your graph has edge density one half, what's the minimum possible C4 density? So for those kinds of problems, graph limits gives us a very nice language for describing what the answer is, and also sometimes for solving these problems. So today, I want to dive more into these types of problems. Specifically, we're going to be talking about homomorphism density inequalities. Homomorphism. So trying to understand what is the relationship between possible subgraph densities or homomorphism densities within a large graph. We've seen these kind of problems in the past. So one of the very first theorems that we did in this course was Turan's theorem and Mantel's theorem. So specifically, for Mantel's theorem, it tells us something about the possible edge versus triangle densities in the graph, which is something that I want to spend the first part of today's lecture focusing on. So what is the possible relationship? What are all the possible edge versus triangle densities in the graph? Mantel's theorem tells us something-- namely, that if your edge density exceeds one half, then your triangle density cannot be zero. So that's what Mantel's theorem tells us. And let me write it down like this. So the statement I just said, the one about Mantel's theorem, and more generally for Turan's theorem tells us that if the Kr plus 1 density in W is 0, then necessarily the edge density is at most 1 minus 1 over r. So this is what it tells us. It gives us some information about what are the possible densities. But I would like to know more generally, or a good complete picture, of what is a set of edge versus triangle density inequalities. So let me draw a picture that captures what we're looking for. So here on the x-axis, I have all the possible edge densities, and on the vertical axis, I have the triangle density. And I would like to know what is a set of feasible points in this box. Mantel's theorem tells us already something-- namely, when can you-- so this region, the horizontal line at zero extends at most until the halfway point. Beyond this point, it's not a part of the feasible region. So far that's the information that we know. Our discussion about graph limits, and in particular-- so let me first write down what is the question. So if you look at the set of possible edge versus triangle densities, so there is this region here. What is this region? It's a subset of this unit square. We would like to understand what is the set of all possibilities. The compactness of the space of graphons tells us that this region is compact. So let me call this region D23 for edge versus triangle. So D23 is compact because the space of graphons is compact under the cut metric, and densities are continuous under cut distance. So in particular, if you have some limit point of some sequence of graphs, that limit point's achieved by a corresponding limit graphon. So you really have a nice closed region over here. So we don't have to-- I should be able to tell you the answer. This is the region. There should not be any additional quantifiers. There's no optimizer zero, missing this point and missing that point. It's a closed region. So what is this closed region? Equivalently, we can ask the following question. Suppose I give you the edge density. 
In other words, look at a particular horizontal place in this picture. What are the maximum and minimum possible triangle densities? So I tell you that the edge density is 0.75. What are the upper and lower boundaries of this region? I want you to think about why this region is-- the vertical cross-section is a line segment. You cannot have any holes. So that requires an argument, and I'll let you think about that. So I want to complete this picture, and I'll show you some proofs. And at the end of-- well, by the middle of today's lecture, we'll see a picture of what this region looks like. All right. First, let me do the easier direction, which is to find the upper boundary of this region. So what is the maximum possible triangle density for a given edge density? And the answer-- the result I will tell you is a special case of what's called Kruskal-Katona. Think about it this way. Suppose I give you a very large number of vertices and I give you some large number of edges, and I want you to put the edges into the graph in a way that generates as many triangles as possible. Intuitively, how should you put the edges in to try to make as many triangles as you can? AUDIENCE: Clique. PROFESSOR: In a clique. So you put all the edges as closely together as possible, try to form a clique. So you maximize the number of triangles by forming a clique. And that is indeed the answer. And this is what we'll prove, at least in the graph densities version. So we will show that the upper boundary is given by the curve y equals x to the 3/2. So don't worry about the specific function. But what's important is that the upper bound is achieved by the following graphon. Namely, this graphon corresponding to a clique. For this graphon here, the edge density is a squared, and the triangle density is a cubed. And it turns out this graphon is the best that you can do with a given edge density in order to generate as many triangles, the most triangle density possible. In other words, what we'll prove is the following inequality on the triangle density, where W will always be a graphon, so taking values between 0 and 1: the K3 density in W is at most the K2 density in W raised to the power 3/2. So let's prove it. First let me draw you what this shape looks like. Because of the relationship between graphs and graph limits, for any of these inequalities about graph limits, about graphons, it's sufficient to prove the corresponding inequality for graphs, because the set of graphs is dense within the space of graphons according to the topology-- namely, the cut metric that we discussed. So it suffices to show the corresponding inequality about graphs-- namely, that the K3 density in a graph is at most the K2 density in a graph raised to the power 3/2. So let me belabor this point just a little bit more. This inequality is a special case of the one up there because graphs sit inside the space of graphons. But because they sit inside as a dense subset, if you know this inequality and everything is continuous, then you know that inequality. So these two are equivalent to each other. Now, with graphs-- and specifically, these counts here, so triangle densities and edge densities-- they correspond to counting closed walks in the graph. So in particular, if we're interested in the number of K3 homomorphisms in a graph, this is the same as counting closed walks of length 3.
And there was an important identity we used earlier, when we were discussing the proof of quasi-random graphs, that for counting closed walks you should look at the spectral moments-- namely, here, the sum of the third powers of the eigenvalues of the adjacency matrix of G. I claim that this sum here is upper bounded by a corresponding sum of squares raised to the power 3/2, the power that makes both sides scale the same way. The first time I saw this I was a bit confused because I remembered the power mean inequality-- shouldn't it go the other way? But actually, no, this is the correct direction. So let me remind you why. So if you have t at least 1, and you have a bunch of non-negative reals, then the claim is that the t-th power sum is less than or equal to the t-th power of the sum. Now, there are several ways to see why this is true. You can do induction. But let me show you one way which is quite neat. Because it's homogeneous in the variables, I can assume that the sum is 1, in which case the left-hand side is equal to this sum of t-th powers. Because I assumed that everything is non-negative, all these a's are between 0 and 1. So now, this sum is less than or equal to the same sum without the t's, because a to the t is at most a whenever a is between 0 and 1 and t is at least 1. And that's equal to 1, which is the right-hand side. So this is true. And now we have the sum of the squares of the eigenvalues, which is also a moment of the eigenvalues-- namely, corresponding to K2. So the same inequality is true for graph homomorphisms. And to get to the inequality for densities, we just divide both sides by the number of vertices raised to the third power, and we get the inequality that we're looking for. So that's the proof of the upper bound. Any questions? There is something that bothers me slightly about this proof. Look, it's a correct proof. So there is nothing wrong with this proof. Everything is kosher. Everything is correct. You might ask, is there a way to do this spectral argument in graphons without passing to graphs? And yes, you can, because for graphons you can also talk about a spectrum. It turns out to be a compact operator, so that spectrum makes sense. You have to develop a little bit more theory about the spectrum of compact operators, but everything, more or less, works exactly the same way. It's just easier to talk about graphs. But what bothers me about this proof is that we started with what I would call a physical inequality, meaning that it only has to do with the actual edges and subgraph densities. But the proof involved going to the spectrum. And that bothers me a little bit. There's nothing incorrect about it, but somehow in my mind a physical inequality deserves a physical proof. So I use the word physical in contrast to frequency, which is coming from Fourier analysis. And that's the next thing we'll do in this course. But this proof goes to the spectrum. It goes to something beyond the physical domain. OK. It's neat. But I want to show you a different proof that stays within the physical domain. And this other proof-- I mean, it's always nice to see some different proofs because you can use them to apply to different situations. And there are some situations where you might not be able to use this spectral characterization. For example, what if your K3 is now K4? A similar inequality is true, but this proof doesn't show it, at least not directly. You have to do a little bit of extra work.
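As a quick numerical sanity check of this spectral argument (an illustration added to the transcript, not part of the lecture), the following Python snippet uses numpy to compute the homomorphism densities t(K2, G) = trace(A^2)/n^2 and t(K3, G) = trace(A^3)/n^3 from the eigenvalues of the adjacency matrix of a random graph, and verifies the inequality t(K3, G) <= t(K2, G)^(3/2).

```python
import numpy as np

rng = np.random.default_rng(0)

def clique_density(A, r):
    """t(K_r, G) for r = 2, 3: hom(K_r, G) = trace(A^r) = sum of eigenvalues^r,
    normalized by n^r. (For r >= 4, trace(A^r) counts closed walks, not K_r copies.)"""
    n = A.shape[0]
    eigs = np.linalg.eigvalsh(A)          # eigenvalues of the symmetric adjacency matrix
    return float(np.sum(eigs ** r)) / n ** r

# Sample a G(n, 1/2) random graph as a symmetric 0/1 matrix with zero diagonal.
n = 200
upper = np.triu(rng.random((n, n)) < 0.5, k=1).astype(float)
A = upper + upper.T

t2 = clique_density(A, 2)                 # edge homomorphism density
t3 = clique_density(A, 3)                 # triangle homomorphism density
print(t3, t2 ** 1.5, t3 <= t2 ** 1.5)     # the bound t(K3) <= t(K2)^(3/2) holds
```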
So let me show you a different proof of the upper bound. And we'll prove a slightly stronger statement. Namely, not just for all graphons-- it's not so important-- but for all symmetric measurable functions from the unit square to R, one has the following inequality-- namely, that the K3 density in W is upper bounded by the K2 density of W squared, raised to the power 3/2. Here, the square is meant to be a pointwise square. So a couple of things. If your W is a graphon or a graph, then-- if it's a graph, then it's 0/1 valued, so taking this pointwise square doesn't do anything. If it's a graphon, you can always put in one more inequality that replaces it by the thing that we're looking for, because W is always bounded between 0 and 1. So it's a slightly stronger inequality. Let me show it to you by writing down a series of inequalities and applying the Cauchy-Schwarz inequality repeatedly. So it's, again, an exercise in using Cauchy-Schwarz. And we will apply three applications of Cauchy-Schwarz. Essentially, three applications-- one corresponding to every edge of this triangle. So let me begin by writing down the expression in graphons corresponding to the K3 density. I'm going to apply Cauchy-Schwarz by-- so I'm going to apply Cauchy-Schwarz to the variable x, holding all the other variables constant. So hold y and z constant, and apply Cauchy-Schwarz in the x variable. You see there are two factors that involve the variable x. So apply Cauchy-Schwarz to them, and you split each of them into an L2. So one of these factors becomes that. By the way, all of these are definite integrals. I'm just omitting the domains of integration. All the integrals are integrated from 0 to 1. So the second application-- sorry, the second factor becomes like that. And the third factor is left intact. So that's the first application of Cauchy-Schwarz. You apply it with respect to dx to these two factors. Split them like that. AUDIENCE: There's a normalization missing. PROFESSOR: Thank you. There is a normalization missing. OK. Guess what the second step is? Going to apply Cauchy-Schwarz again, but now to dy, to one more variable. Cauchy-Schwarz with respect to dy. There are two factors now that involve the letter y. So I apply Cauchy-Schwarz and I get the following. The first factor now just becomes the L2 norm of W. The second factor does not involve y, so it is left intact. And the third factor is again integrated with respect to y after taking the square. And there's now dz that remains. Last step. You can guess, you integrate with respect to dz and apply Cauchy-Schwarz. Apply Cauchy-Schwarz to the last two factors. And there, actually, the outside integral goes away. OK. So you get this product. And you see every single term is just the L2 norm of W. So you have that, which is the same as what I wrote over here. Any questions? Yeah. AUDIENCE: Where do you use the fact that W is symmetric? PROFESSOR: Great question. So where do I use the fact that W is symmetric? So let's see. In some sense, we're not using the fact that W is symmetric, because there is a slightly more general inequality you can write down. And actually, the question gives me a good chance to do a slight diversion into how this inequality is related to Holder's inequality. So this is actually one of my favorite inequalities for these kinds of combinatorial inequalities on graphons. So many of you may be familiar with Holder's inequality in the following form.
If I have three functions and I integrate their product, then you can upper bound this integral by the product of the L3 norms. And likewise, if you have more functions. So if you apply just this inequality directly, you get a weaker estimate. So you don't get anything that's quite as strong as what you're looking for over there. So what happens is that if you know-- so if f, g, and h each depends only on a subset of the coordinates in the following way, that f depends only on x and y, g depends only on x and z, and h depends only on y and z, then if you repeat that proof verbatim with three different functions, you will find that you can upper bound this product, this integral, by the product of the L2 norms. So L2 norms are in general less than or equal to the L3 norms. So here we're inside a probability measure space. So the entire space has volume 1. So this is a stronger inequality, and this is the inequality that comes up over there. Yeah. AUDIENCE: Is there an entirely graph theoretic proof of this-- say, for graphs instead of graphons-- that doesn't involve going to the spectrum? PROFESSOR: Great. So the question is, is there an entirely graph theoretic proof of this? So the reason why I mentioned that this result is a special case of Kruskal-Katona-- so Kruskal-Katona actually is a stronger result, which tells you precisely how you should construct a graph. So given exactly m edges, what's the maximum number of triangles? And the statement there is actually-- it's a very precise result. It tells you, for example, if you have K choose 2 edges, you have at most K choose 3 triangles. It's not just at the density level but exact. And even if the number of edges is not of the form K choose 2, it tells you what to do. And actually, the answer is pretty easy to describe. It's almost intuitive: if I give you a bunch of matchsticks and ask you to construct a graph with as many triangles as you can, what should you do? You start with one, two, filling a triangle. Start filling a triangle. Another vertex. 1, 2, 3, 4. You keep going. And that's the best way to do it. And that's what Kruskal-Katona tells you. So that's a more precise version of this inequality. And Kruskal-Katona, the combinatorial version, is proved via a combinatorial shifting argument, also known as a compression argument. Namely, if you start with a given graph, there are some transformations you do to that graph to push your edges in one direction that keep the number of edges exactly the same while not decreasing the number of triangles at each step. And eventually, you push everything into a clique. So it's something you can read about. It's a very nice result. Other questions? So we've established the upper bound. And from the clique example and from this proof, we see that it really is the upper boundary. Now let me tell you a fairly general result that says something about graph theoretic inequalities, but for a specific kind of linear inequalities. So here's a theorem due to Bollobas. So I'm interested in an inequality of this form, where I have a bunch of real coefficients, and I'm looking at a linear combination of the clique densities. I would like to know if this inequality is true. So somebody gives us this inequality, whatever the numbers may be. You can also have a constant term. The constant term corresponds to r equals 1. So the point density. That's the constant term. And they ask you to decide whether this inequality is true. And if so, prove it. If not, find a counterexample.
So the theorem tells you that this is actually not hard to do. So this inequality holds for all G if and only if it holds whenever G is a clique. Maybe somebody gives you this inequality about-- it's a linear inequality about clique densities. Then, to check this inequality, you only have to check it over all cliques G, which is much easier than checking for all graphs. For each clique G this is just some specific expression you can write down, and you can check. So I want to show you the proof of Bollobas' theorem. It's a quite nice result. But before that, any questions about the statement? All right. So the reason I say that this is very easy to check, if I actually give you what the numbers are, is because this inequality for cliques-- so the inequality is equivalent to just the statement of the inequality that I'm writing down now, where I tell you precisely what the r-clique density is in an n-clique. Because that's just some combinatorial expression. So to check whether this inequality is true for all graphs, I just have to check the specific inequality for all integers n, which is straightforward. All right. So let's see how to prove that inequality up there. And here we're-- I mean, we're not going to exactly use the theorems about graphons, but it's useful to think about graphons. Of the if and only if, one of the directions is trivial-- so let's get that out of the way first. The only if direction is clear. So for the if direction, first note that this is true for all graphs if and only if it is true for all graphons, where I replace G by W. By the general theory of graph limits and whatnot, this is true. So in particular, there is one class that I would like to look at-- namely, I want to consider the set of node weighted-- so I want to consider the set of node weighted simple graphs. So node weighted simple graphs, by this I mean a graph where some of the edges are present and I have a node weight-- a weight for each node. And to normalize things properly, I'm going to assume that the node weights add up to 1. Now, you see that each graph like that, you can represent by a graphon where-- so you can have a graphon. So they're not meant to be the same picture, but you have some graphon like this, which corresponds to a node weighted graph. And the set of such node weighted graphs is dense in the space of graphons. In particular, as far as graph densities are concerned, they include all the simple graphs. So it suffices-- I mean, it's equivalent to-- the inequality is equivalent to it being true for all node weighted simple graphs. But for this space of graphs, suppose that the inequality fails. Suppose that inequality is false. Then there exists a node weighted simple graph. I'm going to actually drop the word simple from now on. So a node weighted graph H, such that f of H, being the above sum, is less than zero. And there could be many possibilities for such an H. But let me choose. So among all the possible H's, let's choose one that is minimal in the sense that it has the smallest possible number of nodes. So with this minimum-- it has a minimum number of nodes. And furthermore, among all H with this number of nodes, choose the node weights, which we'll denote by alpha 1 through alpha n, summing to 1. Choose the node weights so that this expression, the sum, is minimized. And by compactness-- and now we're not even talking about compactness in the space of graphons. You have a finite number of parameters. It's a continuous function.
So just by compactness, there exists such an H for which the minimum is achieved. This is minimizing over integers. And here, minimizing over a finite set of bounded real numbers. So the name of the game now is we have this H, which is minimizing. And I want to show that H has certain properties. If it doesn't have these properties, I can decrease those values. So let's see what properties this H must have if it has the minimum number of nodes and f of H is minimum possible. So first I claim that all the node weights are positive. If not, I can delete that node and decrease the number of nodes. I would like to claim that H must be a complete graph because if some ij is not edge of H-- so here i is different from j. I do not allow loops. It's just simple. Then let's think about what this expression f of H should be. So I don't want to write this down, but I want you to imagine in your head. So you have this graphon H. I'm Looking at the clique density. It's some polynomial. In fact, it's some multilinear-- it's some polynomial in these node weights. So I want to understand what is the shape of this polynomial as a function of the node weights. And I observe that it has to be multilinear in-- has to be multilinear in particular in alpha i and alpha j. It's a polynomial. That should be clear. It is multilinear because, well, you have-- why is it multilinear? Why do I not have alpha i squared? Either of you. AUDIENCE: It says the 0 is not [INAUDIBLE].. PROFESSOR: So we're forbidding-- so here's alpha 1, alpha 2, alpha 3, alpha 4, alpha 1, alpha 2, alpha 3, alpha 4. So understand what the triangle density-- if you write down the triangle density as an expression in terms of the parameters, think about what comes out, what it looks like. And they essentially consist of you choosing a subgraph, which you cannot have repeats. So it's multilinear. So it's multilinear in particular in alpha i and alpha j. So no term has the product alpha i alpha j in it because ij is not an edge. So here's where we're really using that we're only considering clique densities. So the theorem is completely false without the assumption of clique densities. If we have a general inequality, general linear inequality, then the statement is completely false. So it's multilinear. So if we now fix all the other variables and just think about how to optimize, how to minimize f of H by tweaking alpha i and alpha j, well, it's linear, so you should minimize it by setting one of them to be zero. And that would then decrease the number of nodes. So can shift alpha i and alpha j while preserving alpha i plus alpha j and not changing-- so not increasing f of H. And then we get either alpha i to go to zero or alpha j to go to zero, in which case we decrease the number of nodes, thereby contradicting the minimality assumption. So this argument here then tells you that H must be a clique. So hence, H is complete. And if H is complete, then as a polynomial in these alphas, what should f look like? Well, it has to be symmetric with respect to all these alphas. So in particular, it has to be-- so since H is complete, we see that, in fact, now you can write down exactly what f of H is in terms of the parameters described in the problem. Namely, it's Cr times r factorial times Sr, where Sr is a symmetric polynomial where you look at-- you choose r of the terms, r of these alphas for each term in this sum. It's just elementary symmetric polynomial. 
And I would like to know, given such a polynomial, how to minimize this number by choosing the alphas. But think about what happens if you fix again everything but two of the alphas-- so by fixing all of, let's say, alpha 3 to alpha n, we find that-- so as a function in just alpha 1 and alpha 2, f of H has the following form. And because it's symmetric, these two B's are actually the same. So if we now vary alpha 1 and alpha 2 while fixing everything else, because alpha 1 plus alpha 2 is constant, I can even get rid of this linear part. So that linear part is fixed as a constant. I want to minimize this expression with alpha 1 plus alpha 2 fixed. So there are two possibilities depending on whether C is positive or negative or, I guess, 0. So now you're here. So depending on whether C is positive or negative, it's minimized either by making the two alphas equal to each other or by making one of the two alphas zero. The latter cannot occur because we assumed minimality. So the first must occur. And hence, by symmetry, if you apply the same argument to all the other alphas, all the alphas are equal to each other, which means that H is a simple clique. It's basically an unweighted clique. So in other words, if this inequality fails for some H, some node weighted H, then it must fail for a simple clique H. And that's the claim above. Yeah? AUDIENCE: So in the statement, there are two n's, are those two n's different n's then? PROFESSOR: OK. Question. There are two n's. Yeah. Thank you. So these are two-- yeah. So these are two different n's. Great. Yeah. AUDIENCE: I have a question. Which is the node weight such that f of H [INAUDIBLE]?? PROFESSOR: The question is, why can we assume that you can choose H so that f of H is minimized? It's because once-- OK. So you agree you can minimize the first thing, because the number of nodes is a positive integer. So if there's a counterexample, choose the minimum counterexample. Now, you fix that number of vertices, and the number of-- then this is an optimization problem. It's minimizing a continuous function of a finite number of bounded variables. So it has a minimum just by compactness and continuity. So I choose that minimizer. Any more questions? So we have this rather general looking theorem. So in the second part of today's lecture, after taking a short break, I want to discuss what are some of the consequences and also variations of that statement up there. And I want to also show you what the rest of this picture looks like. So let's continue to deduce some consequences of this theorem up there, which tells us that it is pretty easy to decide linear inequalities between clique densities. Namely, to decide it, just check the inequalities on cliques. So as a corollary, for each n-- yes, for each n, the extremal points-- so the extremal points of the convex hull of this set where I record the clique densities over all graphons W. So think about this set as the higher dimensional generalization of that picture I drew up there. Previously we had n equals 3, and we're still interested in n equals 3. But in general, you have this set sitting in this box. And so it's some set. And if I take the convex hull of the set, what that theorem tells us-- and it requires maybe one bit of extra computation. But what it tells us is that the extremal points are precisely the points given by W equals Km for all m greater than or equal to 1. So evaluate, find what this point is for each m, and you have a bunch of points. And those are the extreme points of the convex hull.
So I'll illustrate by drawing what the points are for the picture over there. But it essentially follows from Bollobas' theorem with one extra bit of computation to make sure that all of these are actually extremal points of the convex hull. None of them is contained in the convex hull of the other points. So for example, we can also deduce very easily Turan's theorem. So what does Turan's theorem tell us? It tells us that if the r plus 1 clique density is zero, then the K2 density is at most 1 minus 1 over r. So why does Turan's theorem follow from the above claims? It should follow because all the data here has to do with clique densities. And everything we saw so far says that if you just want to understand linear inequalities between clique densities, it's super easy. Maybe I'll draw the picture for triangles, and then you'll see what it's like. So the corollary tells us, for this picture, corresponding to n equals 3, what the extreme points of the convex hull are. So let me draw these points for you. So one of these points is this 1/2 comma 0. So that corresponds to Mantel's theorem. Now, if you go to the other values of m, you find that those points-- so the extreme points-- they are of the form (m minus 1 over m, (m minus 1)(m minus 2) over m squared) for positive integers m. So for m equals 2, that's the point that we just drew. And the next two points, for m equals 3 and m equals 4, are at, if you plug it in, 2/3 and 3/4. They correspond to 2/9 and 3/8. So let me show you where these points are. So they are at here and over there. And you have this sequence of points going up. So this is the convex hull. And from that information, you should already be able to deduce Mantel's theorem because this right half is not part of this convex hull. So that's Mantel's theorem. And similarly, the deduction of Turan's theorem also follows by a similar logic. OK. So you have this sequence of points. Now, it happens that all of these points lie on a curve. So let me try to draw what this extra curve is. So there is some curve, like that. So there's some curve like that. The equation of this curve happens to be y equals x times (2x minus 1). And because the region is contained in the convex hull of the yellow points, it certainly lies above this convex red curve. You've seen this red curve before. From where? So what is that saying? It's saying that if your edge density is above one half, then you have some lower bound on the triangle density. Where have we seen this before? Problem set one. There was a problem on problem set one that says exactly this inequality. So go back and compare. But of course, the convex hull result tells you even a little bit more-- namely, that you can draw line segments between these convex hull points. So you have some polygonal region that lower bounds the actual region. So what is the actual region? Not to leave you in suspense, let me tell you what the actual region is now. It turns out, and it's beautiful and quite deep, that the region is now completely understood. And it's a fairly recent result, only from about 10 years ago. The lower boundary is a sequence of concave curves, scallops, going up to the top right corner. And this is now understood to be the complete region between these lower and upper curves. So this is the complete feasible region for edge versus triangle densities. So this lower curve is a difficult result due to Razborov.
And I want to give you a statement what this curve is. And Razborov came up with this machinery, this technique, known by the name of flag algebra. So actually, he came up with this name. So I won't really tell you what flag algebra is, but it's kind of a computerized way of doing Cauchy-Schwarz inequalities. So many of our proofs for this graph through inequalities, they go through some kind of Cauchy-Schwarz or sum of squares equivalently. But there are some very large or difficult inequalities you can also prove this way. But it may be difficult to find exactly what is the actual inequality-- the chain of Cauchy-Schwarz or the sum of squares that you should write down. So this machinery, flag algebra, is a language, is a framework for setting up those sum of squares inequalities in the context of proving graph theoretic inequalities. So it can be used in many different ways. And notably, a lot of people have used serious computer computations. If I want to prove something is true, I plug it into what's called a semidefinite program that allows me to decide what kinds of Cauchy-Schwarz inequalities I should be applying to derive the result I want to prove. So that's what flag algebra roughly is. So what Razborov proved is the following. So Razborov's theorem, which is drawn up there-- that's the lower curve-- is that for fixed-- so for a fixed value of edge densities, if it lies between two specific points, drawn above, the minimum value of triangle density with a fixed value of edge density is attained via the following construction. It's attained by the step function of the graphon corresponding to a K clique. So a complete graph on K vertices with node weights alpha 1 through alpha K summing to 1, and such that the first K minus 1 of the node weights are equal. And the last one is Smaller All right. And the point here is that if you are given a specific edge weight, edge density, then there is a unique choice of these alphas that achieve that edge density. And that is the graphon you should use that minimizes the triangle density-- describes the lower curve. So you can write down specific equations for the lower curve, but it's not so important. This is a more important description. These are the graphs that come out. And what is something that is actually quite-- I mean, why you should suspect this theorem is difficult is that unlike Turan's theorem-- so Turan's theorem, which corresponds to all those discrete points. In Turan's theorem, the minimizer is unique. I tell you the number-- I tell you that the edge density is 2/3, and I want you to minimize the number of triangles. Not from Turan's theorem, but it turns out that this extremal point is unique. Essentially corresponds to a complete three partite graph. But for the intermediate values, the constructions are not unique. So unless the K2 density is exactly of this form, the minimizer is not unique. And the reason why it is not unique is that you can replace-- so what's going on here? So you have this graphon. Alpha 1, alpha 2, alpha 3. I can replace this graphon here by any triangle free graphon of the same edge density. And there are lots and lots of them. And the non-uniqueness of the minimizer makes this minimization problem much more difficult. So Razborov proved this result for edge versus triangle densities. And this program was later completed to K4, and more generally, to Kr So K4 is due to a result of Nikiforov, and the Kr result of Reiher So a similar picture. 
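Before moving on, here is a small numerical sketch of the lower scallop curve implied by Razborov's description above, assuming that description: for a given edge density, take the weighted clique K_k with k - 1 equal node weights and one smaller weight, solve for the weights, and read off the triangle density. The function name and the bisection approach are my own; the lecture does not write down a closed form.

```python
def razborov_min_triangle_density(d: float) -> float:
    """For edge density d in (1/2, 1), the minimum triangle (homomorphism) density
    according to Razborov's description: the minimizer is the weighted clique K_k
    with k - 1 equal node weights a and one smaller weight b = 1 - (k - 1)a.
    This sketch solves for a numerically by bisection."""
    # find the k with (k-2)/(k-1) < d <= (k-1)/k
    k = 2
    while (k - 1) / k < d:
        k += 1

    def edge_density(a: float) -> float:
        b = 1 - (k - 1) * a
        return 1 - (k - 1) * a * a - b * b   # sum of w_i * w_j over ordered pairs in distinct parts

    lo, hi = 1.0 / k, 1.0 / (k - 1)          # edge density is (k-1)/k at lo and (k-2)/(k-1) at hi
    for _ in range(200):                      # bisection: edge_density is decreasing in a here
        mid = (lo + hi) / 2
        if edge_density(mid) > d:
            lo = mid
        else:
            hi = mid
    a = (lo + hi) / 2
    w = [a] * (k - 1) + [1 - (k - 1) * a]
    p2 = sum(x * x for x in w)
    p3 = sum(x**3 for x in w)
    return 1 - 3 * p2 + 2 * p3               # sum of w_i w_j w_l over ordered triples in distinct parts

# Sanity checks: at the clique points the minimum matches the clique densities.
print(razborov_min_triangle_density(2 / 3))   # close to 2/9
print(razborov_min_triangle_density(3 / 4))   # close to 3/8
print(razborov_min_triangle_density(0.7))     # strictly between the two values above
```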
It's more or less that picture up there but with the actual numbers shifted. Instead of edge versus triangle, it is now edge versus Kr. I should say that it's worth-- so this is a picture that I drew up there, and this is roughly the picture that you see in textbooks-- how they draw these scallops. I once plotted what this picture looks like in Mathematica, just to see for myself where the actual graph is. And it doesn't actually look like that. The concaveness is very subtle. If you draw it on a computer, they look like straight lines. So in some sense, that's a cartoon. So the concaveness is caricatured. So it's not actually as concave as it is drawn, but I think it's a good illustration of what's happening in reality. Questions? So on one hand, every polynomial graph inequality-- so what do I mean by a polynomial graph inequality? So something like-- suppose I have some inequality of this form. And I want to know, is this true? It turns out that I don't actually need these squares in some sense because I can always replace them by what happens if you take disjoint unions. So all I'm trying to say is that every polynomial graph inequality can be written as a linear graph inequality of densities. But nevertheless, this still captures a very large class of graph inequalities. And if I just give you some arbitrary one that is not of that form, it can be often very difficult to decide whether it is true or not. So over here it's not so hard. You just plug it in, and then you can decide whether it is true. I mean, it turns out to decide whether this inequality is true, it's really a polynomial. And then you just check. It's not too hard to do. But in general, suppose I give you an inequality of this form. So some generalized version of a linear inequality, like that. It's even decidable if the inequality holds. Decidable in the sense of Turing halting problem. So is there some computer program give you this inequality is true? I wonder, can you write a computer program that decides the truthfulness? It turns out-- OK. So before telling you what the answer is, let me just put it in some context. What about more classical questions before we jump into graph theory? If I give you some polynomial p over the real numbers and I want to check is that true-- so this is not too hard. So this is not too hard. But what if you have multivariate for all real? Does anyone know the answer? Is this decidable? So as you can imagine, these things were studied pretty classically. And so it turns out that every first word or theory over the real numbers is decidable. So this is a result of Tarski. In particular, such questions are decidable. And in fact, there is a very nice characterization of-- so there's a result called Artin's theorem that tells you that every such polynomial, if it is non-negative, then if and only if, it can be written as a sum of squares of rational functions. So there's a very nice characterization of positiveness of polynomials over the reals. But now I change the question and I ask, what about over the integers? So if I give you a polynomial, is it always non-negative if I have integer entries? Is this decidable? So turns out, this is not decidable. And this is related. So it's more or less the same as the undecidability of Diophantine equations, which is also known as Hilbert's tenth problem. So there is no computer program where we give you a Diophantine equation and solves the question or even tells you whether the equation has a solution. 
And this is part of what makes number theory, makes Diophantine equations interesting. So it's undecidable, but we talk about it. So undecidability is a famous result due to Matiyasevich. So what about graph theoretic inequalities? So is a graph homomorphism inequality decidable? I mean, the question you should ask yourself is, which one is it closer to? Is it closer to deciding the positiveness of polynomials over reals or over integers? On one hand, you might think that it is more similar to the question of polynomials over real. So first of all, why it's similar to polynomials, I hope that's at least intuitively-- nothing's a proof, but intuitively it feels somewhat similar to polynomials. And all of these guys you can write down as polynomial-like quantities. And we saw this earlier in the proof of Bollobas' theorem. So you might think it's similar to reals because, well, for graphons, you can take arbitrary real weights. So it feels like the reals. So it turns out, due to a theorem of Hatami and Norine, that the answer is no. It is not decidable. And roughly the reason has to do with this picture. Even though the space of graphons is not discrete, it's a very continuous object, even if you just look at this picture here, you have a bunch of discrete points along this scallop. So here's a potential strategy for proving the undecidability of graph homomorphism inequalities. I start by just restricting myself to this curve. I restrict myself to the red curve. If you restrict yourself to the red curve, than the set of possibilities-- it's now a discrete set, which is like the positive integers. And now I start with-- I can reduce the problem to the problem of decidability of integer inequalities. I start with an integer inequality. I convert it to an inequality about points on this red curve. And that turns into a corresponding graph inequality, which must then be undecidable. So this undecidability result is related to the discreteness of points on this red curve. So general undecidability results are interesting. But often, we're interested in specific problems. So I give you some specific inequality and ask, is it true? And there are a lot of interesting open problems of that type. My favorite one, and also a very important problem in extremal graph theory, is known as Sidorenko's conjecture. So the main cause conjecture-- it's a conjecture-- says that if H is bipartite, then the H density in G or W is at least the edge density raised to the power of the number of edges of H. So we saw one example of this inequality when H is the fourth cycle. So when we discussed quasi-randomness we saw that this is true. And in the homework, you'll have a few more-- so in the next problem homework, you'll have a few more examples where you're asked to show this inequality. It is open. We don't know any counterexamples. And the first open example, it's known as something called a Mobius strip. So the Mobius strip graph, which is a fancy name for the graph consisting of taking a K55 and removing a 10 cycle. So that's the graph. It is open whether this inequality holds for that graph there. And this is something of great interest. So if you can make progress on this problem, people will be very excited. Now, why is this called a Mobius strip? This took me a while to figure out. So there are many different interpretations. I think the reason why it's called a Mobius strip is that if you think about the usual simplicial complex for a Mobius strip. 
And then this is the face vertex incidence bipartite graph. So five vertices, one for each face. Five vertices, one for each vertex. And if you draw the incident structure, that's the graph. I'm not sure if this topological formulation will help you improving Sidorenko's conjecture or disprove it, but certainly that that's why it's called a Mobius strip. And there are some people believe who believe that it may be false. So it's still open. It's still open. The one last thing I want to mention is that even though the inequality written up there in general is undecidable, if you only want to know whether this inequality is true up to an epsilon error, then it has decidable. In fact, there is an algorithm that I can tell you. So there exists an algorithm that decides, for every epsilon, that resides-- so I just want to know whether that inequality is true. But I allow an epsilon error, meaning it decides correctly this inequality is true up to an epsilon error for all G or outputs a G such that the sum here is negative. So up to an epsilon of error, I can give you an algorithm. And the algorithm follows-- I mean, it's not too hard to describe. Basically, the idea is that if I take an epsilon regular partition, then all the data about edge densities can be encoded in the epsilon regular partition. So apply even the weak regularity lemma is enough. And then we can test the bounded number of possibilities with some fixed number of parts. And by the counting lemma, you lose some epsilon error if I check over all weighted graphs on some bounded number of parts whose edge weights are multiples of epsilon, let's say, whether this is true. If it's true, then it is true with this epsilon. If it is false, then I can already output a counterexample. So there is only finitely many possibilities as a result of weak regularity lemma. And therefore, this version here is decidable. So today, we saw many different graph theoretic inequalities and some general results. And there are lots of open problems about graph homomorphism inequalities. So this concludes roughly the extremal graph theory section of this course. So starting from next lecture, we'll be looking at Roth's theorem. So looking at the Fourier analytic proof of Roth's theorem.
YUFEI ZHAO: All right. We are going to continue our discussion of extremal graph theory. Last time, we discussed what happens when we exclude a triangle or more generally a clique. And we wish to find a graph that maximizes the number of edges. And at the end of the lecture, I stated the theorem of Erdos-Stone-Simonovitz, which says, recall, that if you have a fixed H and I wish to understand the maximum number of edges in an n vertex graph that is H-free-- so remember this definition, this is the maximum number of edges in an n vertex H-free graph. So we are going to be looking at this quantity in the next few lectures. So the Erdos-Stone-Simonovitz theorem tells us that, perhaps, quite surprisingly, this quantity is largely covered by the chromatic number of H, even though H itself might be quite involved. So if you knew the chromatic number, you already know a lot of information about the growth rate of this function. And in particular, as long as it's just not bipartite, so that the chromatic numbers at least 3, we already know the first order asymptotics from the Erdos-Stone-Simonovitz theorem. However, when H is bipartite so that the chromatic number is exactly 2, then the theorem tells us only that this quantity is little o of n squared. Which is some useful information, but it doesn't tell us the whole story. And in the next several lecturers, I want to explore what more can we say about this quantity here for bipartite graphs H. And it turns out that there is a lot that we do not know, that there are lots of open problems in this area having to do with trying to pin down the growth rate of this function. And, in particular, for bipartite graphs, there's a bipartite graph that places somewhat special rule, namely the complete bipartite graph. So K st, being the complete bipartite graph, with s vertices on one side and t vertices on the other side-- so this is a very nice bipartite graph. And just to understand, the extremal number for this graph is a famous open problem in this area, and it has the name of Zarankiewicz problem, which is to determine or estimate the extremal numbers for these complete bipartite graphs. So I'll tell you pretty much all we know about this problem. And there are some interesting things. But we do not know all that much. Now, every bipartite graph is a subset. It's a subgraph of such a complete bipartite graph. So every bipartite H it is a soft graph of some K st. And we know that if H is a subgraph of this K st, then the extremal number for H if your graph is H-free, then automatically it has to be K s,t-free. So there is this bound between these two extremal numbers. So in particular, if you have some upper bound on K st, then you have some upper bound on bipartite graphs. Although, for specific by bipartite graphs H, maybe we can do better than using this bound. And we'll see examples of that later in the course as well. So what can we say about the extremal numbers of these K s,t's? So the most important theorem in this area of this problem is the result due to Kovari-Sos-Turan. So the Kovari-Sos-Turan theorem tells us that for every fixed integers s and t-- s at most t-- there exists some constant C, such that the extremal number is upper-bounded by something which is on the order of n to the 2 minus 1 over s. So it gives you some upper bound, showing you that it's not only subquadratic, but there is a gap from 2 in the exponent. 
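Before the proof, it may help to make the notion of being K_{s,t}-free concrete with a tiny brute-force helper (my own naming, not from the lecture). The criterion is the one used informally throughout the lecture: a graph contains K_{s,t} exactly when some s vertices have at least t common neighbors.

```python
from itertools import combinations

def common_neighbors(adj, subset):
    """Vertices adjacent to every vertex in `subset` (adj is a dict mapping a vertex to its neighbor set)."""
    result = set.intersection(*(adj[v] for v in subset))
    return result - set(subset)

def is_Kst_free(adj, s, t):
    """A graph contains K_{s,t} iff some s vertices have at least t common neighbors,
    so it is K_{s,t}-free iff every s-subset has at most t - 1 common neighbors.  Brute force."""
    return all(len(common_neighbors(adj, S)) <= t - 1 for S in combinations(adj, s))

# Tiny example: the 5-cycle has no K_{2,2} (two vertices share at most one neighbor),
# while the complete bipartite graph K_{2,3} obviously contains one.
C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
K23 = {0: {2, 3, 4}, 1: {2, 3, 4}, 2: {0, 1}, 3: {0, 1}, 4: {0, 1}}
print(is_Kst_free(C5, 2, 2), is_Kst_free(K23, 2, 2))   # True, False
```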
Now, in combinatorics, as is with many fields of mathematics, it can be somewhat intimidating to newcomers when you see a lot of names. So every theorems seems to be named after a string of mathematicians, some of whom you may have heard of, some you may not. And I agree, it may be difficult to remember who is who and what is what, but I think this theorem has a very nice way to remember, which is that it's a theorem about K st, and this is the K s,t theorem. [LAUGHTER] OK. So this is the K s,t theorem. I want to begin by showing you how to prove the K s,t The proof is via a nice and not too difficult double-counting argument. And I show you some applications. Let's prove this theorem, and it's by double-counting argument. What are we going to count? Well, let's start with the object that we're working with. So there's going to be a graph G that has n vertices, m edges, and K s,t-free. And let's count the number of stars, K s,1 in this graph G. So we're counting configurations like that. I'll do an upper bound and a lower bound. So let's start with the upper bound. On one hand, every subset of s vertices in the graph G has at most t minus 1 common neighbors. Because if they had t common neighbors, then you get a K st. So that's one down. On the other hand, let's see what happens to the number of common neighbors if you knew that this graph has a lot of edges. So the number of stars-- well, I can calculate this quantity explicitly by running over all the vertices of G. For each vertex, look at its neighborhood and choose s vertices from its neighborhood. So for each v I need to find a subset of s vertices from its neighborhood. By convexity, I can lower bound the sum by the average, in some sense, because-- let me first write down the expression. So I can do a convexity argument that gives me this lower bound. Here, I'm abusing notation somewhat and writing binomial coefficients with a real entry on top, where I mean the expression as you would expect, treating this guy as a polynomial in x. And the key fact we're using here is that this function here is convex for x at least and minus 1. So, In particular, if you think about this function here, you have a lot of 0's, and then it becomes convex afterwards. So you can even think of extending this function as 0 to the left of n minus 1. So this is a convex function, and you can apply convexity to deduce this inequality. But you know the sum of the degrees. That's just essentially twice-- I mean, that is the twice the number of edges. So we have this expression right here. So you have an upper bound on the number of s stars coming from K s,t-freeness and lower bound on the number of s stars coming from just having lots of edges and applying convexity. Putting these two things together, we find that there's this inequality here. Here, we are thinking of s and t as fixed, and we are trying to understand how m and n depend on each other as they get large. So it will be helpful to use the asymptotics that n choose s grows like n to the s divided by s factorial for a fixed s. So looking at that expression and applying this asymptotics to both sides, we have this inequality here, So I've eliminated s factorial from both sides. So now rearrange, clean things up. We find the following upper bound-- m the number of edges in G. And that's the expression. So for fixed s and t it grows like n to the 2 minus 1 over s. Any questions? Yeah. AUDIENCE: Where does the right side of the inequality come from? 
Are you counting that from the different cycles [INAUDIBLE]?? YUFEI ZHAO: So the question is where the right side of inequality-- which inequality? This one here? AUDIENCE: Yeah. YUFEI ZHAO: Right. So here we are counting the number of K s1's. AUDIENCE: Oh, OK. YUFEI ZHAO: So here I am upper and lower-bounding the number of K s1's using what we derived earlier. Yes. AUDIENCE: Do you actually care that s is less than or equal t in the argument? YUFEI ZHAO: The question is, do we care that s is less than or equal to t in the argument? The argument doesn't care. But, of course, if you want the better asymptotics for fixed s and t as n gets large, you should take s to be less than or equal to t. Question? AUDIENCE: What happens when t equals 1? YUFEI ZHAO: What happens when t equals to 1. Well, that's a great question. I'll leave you to think about it. If you know that your graph has maximum degree n most t, what can you tell me about the number of edges in the graph? [LAUGHTER] OK. Any more questions? We'll come back to this theorem. In fact, this will occupy us for at least a couple of lectures. Basically, is this theorem tight? And it is conjectured to be, although that is a major open problem extremal graph theory. We only know a small number of values. Well, for most values of s and t, in particular when s and t are both equal to 4, we do not know if this bound is tight. But there are some values of s and t for which we do know that it is tight. For example, 2 and 2, 3 and 3, 4 and 7, s and if t is really, really large. And I will show you some constructions later on, creating graphs G that are K s,t-free for those parameters that matches this K st bound up to a constant factor. Yes? AUDIENCE: For a fixed s, what's the bound on t-- if there a bound on t for t with equality cases? YUFEI ZHAO: Right. So the question is, for a fixed value of s, is there some bound on t for which we get equality cases? There is a conjecture that this bound is sharp up to a constant factor for every s and t. But we only know how to prove that conjecture-- and I will tell you much more about it later on-- when t is much larger compared to s. So there is a lot of unexplored territory on this problem. Any more questions? Before diving into the Kovari-Sos-Turan further, I just want to show you some neat applications. And I want to begin with a geometric application. There is a classic problem asked by Erdos, back in the '40s, called the unit distance problem. Which asks, what is the maximum number of unit distances formed by n points in the plane. Let me give you some examples. If you have three points, you put them in equilateral triangle, and all three distances are unit. Great. If you have four points, you cannot place them so that all six distances are units if you're staying in the plane. So the best thing you can do is something like this-- so I'm drawing all the edges that are unit distances. If you have one more point, it turns out that's the best thing you can do with five points. With six points there are some more possibilities. Let me draw for you some possibilities. You can extend the previous configuration by adding more triangles. And there are many different ways that you can attach an extra triangle. There's actually one more way to do it with six points. Namely, I can put them like a projection of a prism. So all of these configurations have the maximum number of unit distances obtained if you draw six points. If you draw seven points, it turns out this is the best way to do it. 
So you can go on for a while. And people have. So you can try to tabulate for every n what's the maximum number of unit distances you can generate having n points in the plane. And the question is, what is the answer for n points? And it turns out, for this problem, and for many problems like it in combinatorics, you can have a lot of fun with playing with small examples, but they are often misleading. It doesn't really tell you what the overall structure should be like. And it turns out this is the major opium problem for which we do not understand what the structure is like for large values of n. So think n large. What are some possible ways to generate many unit distances? Yep. AUDIENCE: Drawing triangles? YUFEI ZHAO: Great. So one way is to draw lots of triangles. So extend this figure forward. So let's do that. Well, actually, let me give you something even simpler to begin with. I can just put the n points on a line, equal spaced, OK, I get n minus 1 distances-- n minus 1 unit distances. If I put them like that, how many unit distances do I get? AUDIENCE: 2 times n minus 1. YUFEI ZHAO: Good. 2 times n minus 1. So each new point gives you two extra new distances. And you can try to think about how to do better. Well, if you follow that sequence of examples, maybe you're limited to such ideas. Yes? AUDIENCE: Are we allowed to put points in the same place? YUFEI ZHAO: You are not allowed to put points in the same place. That's a great question. But you're not allowed to put points on the same place. Yes? AUDIENCE: Is for something to keep doubling the number of points, the degrees get big? YUFEI ZHAO: So I didn't quite understand-- so you say if you put-- AUDIENCE: [INAUDIBLE] something. YUFEI ZHAO: Yeah. AUDIENCE: Like keep translating in different directions. YUFEI ZHAO: So you want to take this example and keep translating in some direction I think you'll run out of room pretty quickly. AUDIENCE: No, I mean just copy it and put it over [INAUDIBLE].. YUFEI ZHAO: I see. You want to take this configuration and translate it in some unit direction. Well, then you double the number of points, but you don't actually increase the number of units distances all that much. AUDIENCE: Don't we get each point having one extra unit distance added? YUFEI ZHAO: So the suggestion is we take some configuration, let's say this graph G, and I form two copies of G by translating G in some unit directions-- some generic unit direction. OK, great. So what happens to the number of vertices? So n goes to 2m and the number of edges goes from m to 2m plus n. Is that right? OK, great. So if you do that, what do you get? AUDIENCE: Log n. YUFEI ZHAO: n log. OK. That's much better than before. OK, good. Good. Very nice. Yes? AUDIENCE: That picture [INAUDIBLE].. Let's start with the construction for n over 3 and then replace each point with a triangles-- equilateral triangle. YUFEI ZHAO: Right. So the suggestion is to start with some graph G and then replacing-- so let's, as an example, look at that one-- each vertex here by an equilateral triangle. But I want to maintain the same unit distance. So the-- uh-huh. AUDIENCE: So you just choose [INAUDIBLE].. Or you choose [INAUDIBLE] YUFEI ZHAO: Is this similar to taking this graph G and then translating it in two different directions that form an equilateral triangle? I think it's-- yeah. So it's a very similar idea. And maybe you do a little bit better in terms of constant. All great suggestions. Actually, this is really nice. 
AUDIENCE: [INAUDIBLE] two times [INAUDIBLE].. YUFEI ZHAO: Two times-- you want to make a constant correction? OK, let me just do that. [LAUGHTER] Any more suggestions? OK. Yeah, all of this is really nice. So let me tell you what is the best construction that people are aware of, and this is A construction due to Erdos. And the idea is to think big, not build from small examples. So what Erdos did is to consider a square grid of root n by root n. When I see something that's root n that's non-integer, I just think round down to the nearest integer. I have a square grid. Now, if you take these distances as unit distances, well, you get something which is linear in m. So you don't gain that much. But you can take any specific distance as your unit distance. So what we can do is take a distance that is represented many times in this grid. So let's take the unit distance to be a distance root r where r is some integer that can be represented as a sum of two squares in many different ways. So, for example, if we-- so we can take some r so that it has many appearances. And if you know some elementary number theory, then you might know that the best way to do this is to take r to be a product of primes that are 1 mod 4. In any case, you can do this and you can use some analytic number theory to calculate if you choose the best possible value of r, how many distances do you get. So that calculation was done, and it turns out you get n raised to the power of 1 plus some constant C over log log n unit distances. So this is better than the constructions we've seen before. So what can we say about this problem? So this is a construction. Well, then you want to understand some upper bounds. What can we say about the upper bounds to this problems? And let me show you a fairly easy upper bound, which can be deduced very quickly from the Kovari-Sos-Turan theorem, that every set of n points in the plane has at most on the order off n to the 3/2 unit distances. So not quite this bound, but it's some bound. So the trivial upper bound is n choose 2. So it's much better than the trivial upper bound. So here's the proof. So let's consider the unit distance graph, which is basically the graphs I've been drawing, where you have the vertices, the points, and I join an edge between two vertices if and only if their distance is exactly 1. I claim this graph is K 2,3 free Why is that? Because if you have two points, what are their common neighbors? Their common chambers must all be at distance 1 from each of the two points. So their common neighbors must land in the intersections of the unit circles centered at these two points, and they meet at most two points. So it's K 2,3 free. Therefore, the Kovari-Sos-Turan theorem tells us that the number of edges in this G is upper bounded by n to the 3/2. And that's it. That gives you some upper bound. Any questions? So it's an application of Kovari-Sos-Turan, where we design the graph that has to be K 2,3, free and the extremal number tells us some information about this graph. What is the best bound that we know? So it turns out we do know how to do a little bit better, but not anywhere close to what we believe to be the truth. So the current best bound is on the order of n to the 4/3. And we'll actually see a proof of this later in the course, once we've developed something called the crossing numbers inequality. It's also a very short, very neat proof. 
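Here is a brute-force illustration of Erdos's grid idea for small grids (my own script, not from the lecture). For the grid sizes a laptop can handle you only see the count creeping above a constant times n, since the n^(1 + c / log log n) behavior kicks in extremely slowly; still, the most popular squared distance does tend to be a product of primes that are 1 mod 4.

```python
from itertools import combinations
from collections import Counter

def grid_distance_counts(k: int) -> Counter:
    """For each squared distance, count how many pairs of points in the k-by-k
    integer grid realize it (brute force, so keep k small)."""
    pts = [(x, y) for x in range(k) for y in range(k)]
    counts = Counter()
    for (x1, y1), (x2, y2) in combinations(pts, 2):
        counts[(x1 - x2) ** 2 + (y1 - y2) ** 2] += 1
    return counts

# In a k x k grid (n = k^2 points), take the "unit" to be sqrt(r) for the most
# popular squared distance r.
for k in [10, 20, 30]:
    n = k * k
    counts = grid_distance_counts(k)
    r, best = counts.most_common(1)[0]
    print(f"n = {n:4d}: best squared distance r = {r:4d}, "
          f"repeated {best} times  (about {best / n:.2f} n)")
```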
I want to tell you about another problem which looks superficially similar and also a really interesting geometric problem. And this is the Erdos distinct distances problem, which asks for the maximum number of distinct distances among n points in the plane. The minimum-- thank you. The minimum number of distinct distances. So maximum would be very easy. Again, you have lots of examples. In fact, you can look at basically all the examples that we've given earlier I mean, these two problems are very much related to each other. If you want lots of repeat distances or if you want very few distinct distances, whether they seem like they are very much related. And, of course, you can also write down the inequality that relates the two of them, because the number of distinct distances is at least n choose 2 divided by the maximum number of unit distances-- in other words, the number of distances that can be repeated in any single configuration. So each of the examples that I just erased-- so, for example, if you put points on the line, you have n minus 1 distinct distances. And we already saw the square grid, so that might be a good candidate to look at as well. And there it takes some more number theory to figure out how many distinct distances there are, and this roughly corresponds to the number of integers that can be written as the sum of two squares. So this has been calculated, and the result is n divided by square root of log n. So that's the order. And Erdos conjectured that this example should be more or less the best you can do. So these problems look very similar. But, actually, this problem here was solved in a spectacular fashion about a decade ago by an important paper of Guth and Katz. So Larry Guth is in our department. And they showed that every set of n points in the plane generates at least on the order of n over log n distinct distances. So not quite what Erdos conjectured, but nearly there. And, in fact, all the previous results were much worse. So they were off in the exponent. But this one more or less got to the truth. So this is a very sophisticated result that used lots of amazing techniques, ranging from the polynomial method in combinatorics to some algebraic geometry. And that's-- I just wanted to mention it for cultural value. OK. Any questions? Yes. AUDIENCE: For the unit distance problem, what is believed to be the bound? YUFEI ZHAO: Yeah. So for the unit distance problem, and actually for the distinct distances problem, the question is, what was is believed to be the bound? And it's the square grid. So maybe you can do slightly better, but it is conjecture that you cannot do much better, let's say, by more than a constant factor compared to the square grid. Yes. AUDIENCE: What is Katz's first name? YUFEI ZHAO: Sorry? AUDIENCE: What is Katz's first name? YUFEI ZHAO: So this is Nets Katz. He's at CalTech. I want to take a short break now. And when we come back, I want to show you some ways to generate lower bounds to the extremal number. In the Kovari-Sos-Turon theorem, we understood some upper bounds for the extremal number. So I want to turn our attention now to lower bounds. Which means constructing, in some sense, graphs that have lots of edges, at the same time being K s,t-free. There are many classes of construction, so there are several techniques for doing this. And I want to introduce you to some of these techniques. 
And the first one comes from the probabilistic method, where we use a randomized construction by picking a graph at random and then modifying it a little bit to suit our purposes. This method is very powerful. It is very general. It is applicable in a lot of situations. But, unfortunately, for the problem of getting tight bounds, it rarely allows you to get tight bounds. So it is very robust, but somehow it's often not sharp. Another class of constructions is algebraic-- all the sharp examples come from algebraic constructions. And there, we're able to use some nice ideas from algebra or from algebraic geometry to get tight constructions. And you can wonder, is there some way to combine the best of both worlds? And it turns out-- this was an important recent development, found just several years ago-- that one can combine ideas from the two and obtain a randomized algebraic construction. And that leads to a new source of constructions for large H-free graphs with many edges. So the plan is to show you how these constructions work. First, let's begin with randomized constructions. As I mentioned, this construction is very general, it's very robust, and we can use it to obtain H-free graphs for every graph H. Of course, when H is not bipartite, then we saw from the Turan graph that that's pretty much the right thing to do. So you should think of bipartite graphs H. So fix a graph H with at least two edges. Otherwise, the problem is trivial. The claim is that there is some constant C such that for every n-- you should think of this n as being large-- there exists an H-free graph on n vertices having lots of edges. And we will show that you can obtain the following number of edges: a constant times n to the power 2 minus (v(H) minus 2) divided by (e(H) minus 1), where v(H) and e(H) are the numbers of vertices and edges of H. Don't worry about this expression in the exponent; it will come out of the proof. In other words, the extremal number is at least this quantity here. How good is this? So it's some scary looking expression in the exponent. So let me just give you some special cases for comparison to the Kovari-Sos-Turan theorem. For K s,t, this construction gives you a lower bound which is of the form n to the 2 minus (s plus t minus 2) divided by (st minus 1). So I use this tilde symbol to denote dropping constant factors. In particular, setting s and t to be the same, we find that the K s,s extremal number is lower-bounded by this quantity here. So how does that compare to Kovari-Sos-Turan? In Kovari-Sos-Turan we saw that the K s,s extremal number is at most on the order of n to the 2 minus 1 over s. So there's a bit of a gap, in particular even for 2 and 2. These results tell you a lower bound on the order of n to the 4/3 and an upper bound on the order of n to the 3/2. So I'm just doing this explicitly to show you that even in the simplest case there is a gap between these two bounds. Later on, we'll see a construction that shows that the right-hand side is tight. So that is the truth. The randomized construction gives you something, but it doesn't give you the truth. On the other hand, I want you to notice that if t gets very large as a function of s, this exponent here approaches 2 minus 1 over s as t goes to infinity. So for very large values of t, at least in the exponent, it's not that far off. Although you never get the right exponent for any specific s and t. So this is some limitation of the randomized method, but it's very robust.
And, in fact, you can use this result to bootstrap it to a slightly better one for some graphs H. So, for example, one might do better by replacing this H by a subgraph. Because if you have a subgraph, H prime, and you construct your G to be H prime-free, then it is automatically H-free. But maybe this theorem actually gives you a better construction when restricted to H prime. And what can you do better? Well, you can do better if h prime is such that whatever the quantity that comes up in the exponent-- so let me write it like this-- is superior to the exponent you would obtain by just looking at H. So let me give the name to this notion here. So let me call it the 2-density of the graph to be whatever quantity here that would be the maximum if you allow to pass down to subgraphs. So the 2-density of a graph H-- so denoted in literature often as m sub 2-- to be the maximum where you are allowed to look at subgraphs, let's say, on at least three vertices to avoid degeneracies of this ratio that comes up in the expressions. So then the theorem-- that construction there-- implies that the extremal number is at least not just the expression I've written, but maybe sometimes you can do slightly better by passing to a subgraph. So let me give you a concrete example. Suppose my H is this graph here. So you can run the calculation and find what these numbers are. So the number of vertices, edges, and this ratio here to be-- so 5 vertices, 8 edges, and 7/3. And the idea here is that it's more helpful. So I want to create something which is H-free, but dense things are easier to avoid. So if I have something which has a fairly dense core, that's easier to avoid. So maybe it's better to, instead of looking at the whole H, look at this K 4. So if you can avoid K 4, of course, you avoid H, and maybe it's easier to just avoid K 4 and not worry about some of the extra decorations. So if you look at this H prime and go through the parameters, you find that it is like that. And it is somewhat denser in this sense, compared to looking at the whole graph H. So you can improve on this theorem for some graphs H where you can pass to a denser core. Any questions so far? Yes. AUDIENCE: Why is this method called the 2-density? YUFEI ZHAO: The questions is, why is this measure called a 2-density? So that's a name given in literature. It partly has to do with these extra ratios. So there are other notions of densities that actually we'll see later on. So this term we'll only see today. So it's more of an ad hoc term for the purpose of this course. Later on, we'll see notions of density that are more relevant for our discussions. Any more questions? So let me show you how to prove this theorem. The proof is very intuitive. The idea is you take something at random and then you fix it. That's it. Let's consider a random graph. The Erdos-Rényi random graph-- so whenever I say random graph, I almost always refer to this one here. And the Erdos-Rényi random graph is obtained by considering n vertices and each possible edge appearing independently and uniformly with probability p. And we're going to decide this p later on. So let me not tell you what p is for now. I'm interested in avoiding H. So this random graph may have some copies of H. Let me count the number of copies of H. We can compute the expected number of copies of H by linearity of expectations. For every possible placement, look at what's the probability that that placement generates an H. 
Namely, look at all the different possibilities for choosing the possible vertices of H. I need to divide by a factor that accounts for the number of automorphisms of H-- but that's just a constant factor, don't worry about it. And for each of these possible placements, H appears with probability exactly p to the number of edges of H, and then we just add these up by linearity of expectation. And I can upper-bound this quantity very crudely by p to the number of edges of H, times n-- the number of vertices of G-- raised to the number of vertices of H. On the other hand, I also want a graph G that has lots of edges, because that's what we're trying to do. We're trying to generate a graph with lots of edges that's H-free. So the number of edges of G, that's also easy to compute. It's a binomial distribution, and it has expectation p times n choose 2. And, basically, I want this quantity to be much larger than that quantity. So I choose an appropriate p. Namely, by comparing these two quantities, we can choose p to be, let's say, 1/2-- the 1/2 is not important-- times n to the power minus (v(H) minus 2) divided by (e(H) minus 1). So the exponent comes out of comparing these two expressions-- that's where the minus 2 and the minus 1 come from. Once you have this p, consider the difference-- take the number of edges of G minus the number of copies of H. I know the expectation of both. I can look at their difference with this value of p. We find that, in expectation, it is at least half the expected number of edges. So in expectation, you don't lose too much. So p is chosen so that this inequality is true. So you set what p is. We find that this quantity here is at least some constant times n to the 2 minus (v(H) minus 2) divided by (e(H) minus 1). We're still working with a random graph, and we know that this quantity here is in expectation at least that number-- so not too small. Therefore, there exists some instance of this randomness, some G, such that the quantity above for that specific random instance is at least its expectation. So this gives us a graph G which, on one hand, has lots of edges, but also has very few copies of H relative to the number of edges. So we can now get rid of all the copies of H by removing one edge from each copy of H in G. And now we obtain an H-free graph. How many edges are there in this graph? Well, we removed at most one edge for each copy of H. So the number of edges is at least this quantity here, which is what we wrote just now. And that's it. So now we've obtained our graph on n vertices, with lots of edges meeting the claimed bound, that's H-free. So this is the probabilistic method. You start with something random. You try to fix it. And this method sometimes goes by the name of the alteration method. And this is a very important idea and one of the key ideas in the probabilistic method, which I encourage you to go and learn more about. We'll also see this method later on when we discuss the randomized algebraic construction. AUDIENCE: Just to clarify, none of the copies of H [INAUDIBLE].. YUFEI ZHAO: So the question is-- yes-- what do I mean by the number of copies of H? I mean every instance of H you see. So there could be intersecting copies. I'm not asking for the copies to be disjoint. AUDIENCE: Sure. YUFEI ZHAO: So a complete graph on n vertices has n choose 3 triangles. Any more questions? OK. So now we've seen that the probabilistic method gives you some bound. And it's not too hard to apply, but it doesn't give you the right bound. It doesn't give you the truth.
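As a sanity check of the alteration argument, here is a small simulation (a sketch with my own parameter choices) for the case H = C4 = K_{2,2}, where v(H) = e(H) = 4, so p = (1/2) n^(-2/3) and the guaranteed number of edges is on the order of n^(4/3). Rather than actually deleting edges, the script just reports (number of edges) minus (number of copies of C4), which is exactly the lower bound used in the proof.

```python
import random
from itertools import combinations

def count_C4(adj):
    """Number of 4-cycles: each C4 is determined by a pair of opposite vertices
    plus a pair of their common neighbors, and is counted twice that way."""
    total = 0
    for u, v in combinations(adj, 2):
        c = len(adj[u] & adj[v])
        total += c * (c - 1) // 2
    return total // 2

def alteration_bound(n, seed=0):
    """One run of the alteration argument with H = C4: sample G(n, p) with
    p = 0.5 * n^(-2/3), and return (#edges) - (#copies of C4), which lower-bounds
    the number of edges left after deleting one edge from each copy."""
    random.seed(seed)
    p = 0.5 * n ** (-2 / 3)
    adj = {i: set() for i in range(n)}
    edges = 0
    for i, j in combinations(range(n), 2):
        if random.random() < p:
            adj[i].add(j)
            adj[j].add(i)
            edges += 1
    return edges - count_C4(adj)

for n in [100, 200, 400]:
    lb = alteration_bound(n)
    print(n, lb, f"ratio to n^(4/3): {lb / n ** (4 / 3):.3f}")
```

The printed ratio to n^(4/3) should stay roughly constant as n grows, which is the point of the theorem, up to the unspecified constant.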
So now, I want to show you a different type of constructions, namely algebraic constructions that do allow you to get the truth, but they work in only a small number of cases. And so it's more magical, but they work better when the magic happens. So let's discuss algebraic constructions. In particular, I want to show you how to obtain the type bound on the extremal number for K 2,2, namely a fourth cycle. So this is a result due to Erdos-Rényi-Sos, and it tells us that the extremal number for K 2,2 is at least 1/2 basically up to asymptotics times n to the 3/2. Actually, if you look at the constant that came out of the proof of the Kovari-Sos-Turon theorem, It is also 1/2. So as a corollary, we see that the extremal number is like that. So this is one of extremely few cases where we know the extremal number so well. So if you go back to the proof of Kovari-Sos-Turon, you see that the constant actually there is 1/2. So I want to construct for you a graph that has no fourth cycles and has lots of edges. So that's the name of the game. And I'll describe this graph for you explicitly. So this graph has a name. It's called a polarity graph. Let's suppose that n is a number such that 1 bigger than this number is a square of a prime. So our construction will use some finite fields. I'll explain a bit. If n is not of this form, then you can change n to a number of very close to of this form, and everything will be OK. The graph will be constructed as follows-- the vertex set will be the plane over Fp. Let's remove the origin. So it has n points exactly. And the edges are such that I put an edge between x,y and a,b, if and only if the equation ax plus by equals to 1 holds. And this equation is meant to be read in Fp. So that's the graph. That's an explicit description of this graph. So I need to show you two things-- one, that has lots of edges, the claimed number of edges. And two, it has no fourth cycles. So let's start with not having fourth cycles. So y is a K 2,2-free So what would the K 2,2, be? So let's consider two points. And I want to understand the number of common neighbors of these two points. Well, look at the description for the edges. What are the neighbors? The neighbors correspond to solutions to this system of equations. And the basic claim is that there is at most one solution-- x,y. So it's basic fact linear algebra. You have to be slowly careful in case a,b is a multiple of a prime b prime. But, actually, in that case, you have no solutions anyway. The second claim then is that this graph has lots of edges. Well, actually, that's not too hard to show either. So I claim that every vertex has degree-- so how many edges come out of every vertex? I give you a common b, so how many x comma y satisfy that equation up there? Basically, for-- so 1 of x and a and b is non-zero. So let's say a is non-zero. Therefore, whatever value of y you set, I can find a unique x that solves the equation. I have to be slightly careful, because I don't allow loops in my graph. So I might lose one edge because of that. In any case, every vertex has to agree exactly P or P minus 1. Just solve that equation in x and y. So the P minus 1 comes from no loops. Therefore, the number of edges is equal to the claimed bound. So this finishes the proof in case when n has that special number theoretic form. But we can extend to all values of n like this-- if n doesn't have that form, then I can take a prime P. 
It may not necessarily be exactly satisfying that inequality, but I can always take a prime pretty close to it. So I can always take a prime which is up to a negligible multiplicative error what I want. And then we use-- and such that P squared minus 1 is at most n. And use the above construction and add isolated vertices to finish the job to get exactly n vertices. And the reason that I can always take a prime very close to it is because there's a theorem in number theory that tells us that for n large enough, I can always find the prime which is slightly less than n but no more than a negligible multiplicative factor of n. The best result of this form is-- I'm just telling you something in number theory for cultural reasons-- due to Baker-Harman-Pintz. And so, this is the question regarding how large can gaps between primes be. So you might know the [Bertrand's postulate]] theorem that tells you there is always a prime between n and 2n. So what about between n and n plus root n. Actually, we don't know that. So the best result of the form is that there is always a prime-- so for n sufficiently large there exists a prime between n minus n to the exponent 0.525 and n. In any case, this number here, whatever it is is little n, and that's enough for our purpose. So it suffices to look at n of a special number theoretic form where you're allowed to use primes. So that's the construction there. Let me show you a interpretation of that construction which I think is may be helpful to think about, and that's that you can view it as the incidence graph between points and lines in projective space-- in projective plane. So I start with a projective plane. So I can view a bipartite version of that construction. It can be viewed as the point-line incidence graph of a projective plane over a finite field. And by this, I mean put as one vertex set the points of the projective plane and on the other side the lines. And I put in an edge between a point and a line if and only if the point lies on the line. So you can do this more explicitly in coordinates if you view points and lines as coordinates. And so the equation for getting a point to be on the line is like that. So now why is there no fourth cycle? A fourth cycle would correspond to two points lying on two different lines, which is not possible in this geometry. So that's the reason for that construction up there. So no two points in two lines. Any questions about this polarity constructions? AUDIENCE: Why is it called a polarity construction. YUFEI ZHAO: The question is, why is it called a polarity construction? So it relates points and their polars, which are lines. Yeah. AUDIENCE: Why does this not have-- like, on your two P squared vertices, looking at one vertex for every [INAUDIBLE],, one vertex for [INAUDIBLE]? YUFEI ZHAO: OK. So the question has to do with the number of vertices here. It's true. Here, I double the number of vertices, and so I don't get that constant there. But what that graph up there-- that's not a bipartite graph. It is identifying the points and the lines and overlaying the two parts into one. But if you don't care about the constants, this graph here may be conceptually easier to think about. Yes. AUDIENCE: [INAUDIBLE] to generalize the polarities bound to K [INAUDIBLE]. YUFEI ZHAO: Great. The question is, can you generalize this polarity graph to K 3,3 and higher? So that's what we're about to do next. So for K 3,3, what can you do? So the main observation here is that two lines intersecting at most one point. 
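The polarity graph is small enough to build and test directly. Here is a brute-force check for a few primes (my own script, not from the lecture): it constructs the graph, verifies that no two vertices have two common neighbors-- which is exactly the statement that two lines meet in at most one point-- and compares the edge count with (1/2) n^(3/2).

```python
from itertools import combinations

def polarity_graph(p: int):
    """The Erdos-Renyi polarity graph over F_p: vertices are the nonzero pairs (x, y)
    in F_p^2, with (x, y) adjacent to (a, b) iff ax + by = 1 (mod p), and no loops."""
    verts = [(x, y) for x in range(p) for y in range(p) if (x, y) != (0, 0)]
    adj = {v: set() for v in verts}
    for (x, y), (a, b) in combinations(verts, 2):
        if (a * x + b * y) % p == 1:
            adj[(x, y)].add((a, b))
            adj[(a, b)].add((x, y))
    return adj

for p in [5, 7, 11]:
    adj = polarity_graph(p)
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) // 2
    # C4-free means every pair of vertices has at most one common neighbor.
    c4_free = all(len(adj[u] & adj[v]) <= 1 for u, v in combinations(adj, 2))
    print(f"p = {p:2d}: n = {n:3d}, edges = {edges:4d}, "
          f"(1/2) n^(3/2) = {0.5 * n ** 1.5:6.0f}, C4-free: {c4_free}")
```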
But there are other geometric facts of that form. So we're going to use one of them to get a K 3,3-free graph. And this construction is due to Brown, that the extremal number for K 3,3 is also at least a factor 1/2 minus little o of 1 times-- so now, what's the exponent? The exponent predicted by Kovari-Sos-Turan is 2 minus 1/3, and Brown obtains the correct exponent. It turns out this is also the right constant, this 1/2, although it doesn't follow from the Kovari-Sos-Turan theorem I stated. One needs to do a little bit of extra work. But it turns out it is true that this is also the correct constant. And that's actually pretty much all the cases where we know the correct constant. And there are other cases where we know the correct exponent, but these things tend to be hard to come by. So let me show you how to construct this graph. It's based on a similar idea as the polarity graph. It has some more technicalities. So I'm not going to do the full proof and just give you the sketch. As earlier, I'm using the same trick. We can assume that n has a special form. Here, let me assume that n is the cube of a prime p. Let me describe the edges. So first, the vertices of my graph are going to be the points of affine 3-space over Fp-- so Fp cubed. Previously, the edges had to do with lines. And now, let's use spheres. So for the edges, I join two vertices (x, y, z) and (a, b, c) if and only if (x minus a) squared plus (y minus b) squared plus (z minus c) squared equals u-- well, it's not really a distance, but it's something that looks like the equation of a sphere-- where u is some fixed non-zero element of Fp. You may have to be somewhat careful in choosing this u, but let me not worry too much about it. So you fix some so-called distance, even though it's not a distance, and I join two vertices whenever they satisfy that equation with that "distance." What's the intuition here? The intuition is that I want to avoid-- so how do I know that this graph has no K 3,3? Well, first, let's think about what happens in real space. So, intuition in real space: here, I have this graph, let's say the unit distance graph in R3. So the neighborhood of each point is a unit sphere. And what I want to know is, if you have three unit spheres, how many common intersection points can they have? Two spheres intersect in a circle. And that circle cannot lie on the third sphere-- that you should think about. So that circle intersects the third sphere in at most two points. So three unit spheres have at most two common points. And so the unit distance graph in R3 is K 3,3-free. That entire argument, even though I expressed it geometrically, is an algebraic argument. You can write down equations for the intersection of two spheres: subtracting the two sphere equations gives a linear equation, so the intersection lies in a plane. I have a couple of these planes. They give me a line. That line has to intersect the sphere in at most two points. You should actually do this algebra if you want to do the proof, because there are funny things that can happen in finite fields. For example, maybe the sphere contains a line. But you choose your parameters correctly, and these things don't happen. And that's the intuition. And if you actually work this out, you'll find that this graph here is indeed K 3,3-free. So I'm skipping the details. But you should do the algebra if you want to do the proof. On the other hand, it also has lots of edges. And that's basically the same reason as before. I can count the number of edges by fixing some (x, y, z) and looking at how many (a, b, c)'s satisfy that equation.
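The finite-field algebra is left as an exercise above, and the lecture warns that u has to be chosen with some care. For exploration, here is a brute-force script (my own naming and parameter choices) that builds this Brown-type graph for a small prime, reports the degrees, and computes the largest number of common neighbors of any triple of vertices-- the graph is K_{3,3}-free exactly when that number is at most 2. For very small primes the degenerate behavior mentioned above can occur, so treat the output as exploration rather than as a proof.

```python
from itertools import combinations, product

def brown_graph(p: int, u: int):
    """Brown-type graph on F_p^3: vertices are all triples, and two vertices are
    adjacent iff their coordinatewise difference (dx, dy, dz) satisfies
    dx^2 + dy^2 + dz^2 = u (mod p)."""
    verts = list(product(range(p), repeat=3))
    adj = {v: set() for v in verts}
    for a, b in combinations(verts, 2):
        if sum((a[i] - b[i]) ** 2 for i in range(3)) % p == u:
            adj[a].add(b)
            adj[b].add(a)
    return adj

def max_triple_codegree(adj):
    """Largest number of common neighbors over all triples of distinct vertices;
    the graph is K_{3,3}-free iff this is at most 2.  Brute force over all triples,
    so fine for p = 5 but slow beyond that."""
    return max(len(adj[a] & adj[b] & adj[c])
               for a, b, c in combinations(adj, 3))

p = 5
for u in range(1, p):
    adj = brown_graph(p, u)
    degs = sorted({len(nbrs) for nbrs in adj.values()})
    print(f"p = {p}, u = {u}: distinct degrees {degs} (p^2 = {p * p}), "
          f"max codegree of a triple = {max_triple_codegree(adj)}")
```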
That, again, is something that needs to be checked, but the point is that in this graph every vertex has degree close to p squared. So lots of edges, and combining with basically the same idea as before, you get the construction. Any questions? So where can we go from here? By the way, if you construct something K 2,2-free, the same construction works for K 2,3: if it's K 2,2-free, then it's also K 2,3-free and K 2,4-free and so on. Here, likewise, this graph is also K 3,4-free. So now, what about higher K s,t's? And you might think, well, let's take these geometric objects and try to extend them further. But that actually seems kind of difficult. We do not really know how to do it. We do not know how to obtain a construction of this form that works for K 4,4. In fact, there is even some evidence that that might be impossible. As I mentioned, K 4,4 is a major open problem. It is an open problem to determine the order of the extremal number of K 4,4. But in any case, this idea of using algebraic constructions is very enlightening: we should look for ways to get large K s,t-free graphs by coming up with clever algebraic constructions. And next time, I will show you a couple of very nice ideas where you can come up with a different kind of algebraic construction, which has some superficial similarities to what we've seen today but is really of a different nature. So, next time, we will see the following theorem, obtained in a sequence of two papers, due to Kollár, Rónyai, and Szabó and to Alon, Rónyai, and Szabó, that shows that if t is much larger than s-- namely t at least (s minus 1) factorial plus 1-- then the extremal number for K s,t is on the same order as the upper bound in the Kovari-Sos-Turan theorem. Just to be more explicit about what these s and t are: if you plug in the smallest t that this theorem gives for various values of s, you find 2,2 and 3,3, and the next one is 4,7. And then it gets worse from there. So these constructions are based on-- I mean, they are algebraic constructions. So we'll see next time what happens.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
7_Szemerédis_graph_regularity_lemma_II_triangle_removal_lemma.txt
PROFESSOR: I sent out a survey this morning about how the class is going, what you thought of the problem set. And I would appreciate if you provide me some feedback-- so things you like or don't like about the class or about the problem set that was just due last night. So I can try to adjust to make it more interesting and useful for all of you. Last time we talked about Szemerédi's graph regularity lemma. So the regularity lemma, as I mentioned, is an extremely powerful tool in modern combinatorics. And last time we saw the statement and the proof of this regularity lemma. Today, I want to show you how to apply the lemma for extremal applications. In particular, we'll see how to prove Roth's theorem that I mentioned in the very first lecture, about subsets of integers lacking three-term arithmetic progressions. First, let me remind you the regularity lemma. We're always working inside some graph, G. We say that a pair of subsets of vertices is epsilon regular if the following holds-- for all subsets A of X, B of Y, neither too small, we have that the edge density between A and B is very similar to the edge density between the ambient sets X and Y. So we had this picture from last time. You have two sets. Now, they don't have to be disjoint. They could even be the same set. But for illustration purposes, it's easier to visualize what's going on if I draw them as disjoint subsets. So there is some edge density. And I say they're epsilon regular if they behave random-like in the following sense-- that the edges are somehow distributed in a fairly uniform way so that if I look at some smaller subsets A and B, but not too small, then the edge densities between A and B is very similar to the ambient edge densities. So by most, epsilon difference. Now I need that A and B are not too small because if you allow to take, for example, single vertices, you can easily get densities that are either 0 or 1. So then it's very hard to make any useful statement. So that's why these two conditions are needed. And here, the edge density is defined to be the number of edges with one endpoint in A, one endpoint in B, divided by the product of the sizes of A and B. And we say that a partition of the vertex set of the graph is epsilon regular if, by summing over all pairs i, j, such that vi, vj is not epsilon regular if we sum up the product of these part sizes, then this sum is at most an epsilon fraction of the total number of pairs of vertices. And the way to think of this is that there are not too many irregular parts. At least in the case when all the parts are equitable. So we should really think about all of them having more or less the same size than saying that at most an epsilon fraction of them are irregular. And the main theorem from last time was Szemerédi's regularity lemma. And the statement is that for every epsilon, there exists some M-- so M depends only on epsilon and not on the graph that we're about to see-- such that every graph has an epsilon regular partition into at most M parts. In particular, the number of parts does not depend on the graph. For every epsilon, there is some M. And no matter how large the graph, there exists a bounded size partition that is epsilon regular. So the proof last time gave us a bound M that is quite large as a function of epsilon. So the last time we saw that this M was a tower of twos of height essentially polynomial in 1 over epsilon. And I mentioned that you basically cannot improve this bound. 
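To make the quantifiers in these definitions concrete, here is a minimal sketch (my own illustration, not something from the lecture) of the edge density d(X, Y) and a brute-force epsilon-regularity check for a pair of disjoint vertex sets. The exhaustive search over subsets is exponential, so it is only usable on very small examples.

```python
# Edge density and a brute-force epsilon-regularity check (tiny examples only).
from itertools import combinations
from math import ceil

def density(G, X, Y):
    """d(X, Y) = e(X, Y) / (|X||Y|); G is a set of frozenset edges, X and Y disjoint."""
    e = sum(1 for x in X for y in Y if frozenset((x, y)) in G)
    return e / (len(X) * len(Y))

def is_eps_regular(G, X, Y, eps):
    """True if |d(A, B) - d(X, Y)| <= eps for all A within X, B within Y
    with |A| >= eps|X| and |B| >= eps|Y|."""
    d0 = density(G, X, Y)
    a_min, b_min = max(1, ceil(eps * len(X))), max(1, ceil(eps * len(Y)))
    for a in range(a_min, len(X) + 1):
        for b in range(b_min, len(Y) + 1):
            for A in combinations(X, a):
                for B in combinations(Y, b):
                    if abs(density(G, A, B) - d0) > eps:
                        return False
    return True

X, Y = [0, 1, 2, 3], [4, 5, 6, 7]
G = {frozenset((x, y)) for x in X for y in Y}   # complete bipartite: density 1
print(is_eps_regular(G, X, Y, 0.25))            # True: every sub-density is also 1
```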
So this bound is more or less the best possible up to maybe changing the 5. And so in some sense, the proof that we gave last time for Szemerédi's graph regularity lemma was the right proof. So that was the sequence of steps that were the right things to do. Even though they give a terrible bound, it's somehow the bound that should come out. What I want to talk about today is, what's a regularity partition good for? So we did all this work to get a regularity partition, and it has all of these nice definitions. But they are useful for something. So what is it useful for? And here is the intuition. Remember at the beginning of last lecture I mentioned this informal statement of regularity lemma-- namely that there exists a partition of the graph so that most pairs look random-like. So what does random-like mean? So random-like, there is a specific definition. But the intuition is that in many aspects, especially when it comes to counting small patterns, the graph in the random-like setting looks very similar to what happens in a random graph-- in a genuine random graph. In particular, if you have three subsets-- x, y, and z-- and suppose that the three pairs are all epsilon regular, then you might be interested in the number of triangles with one vertex in each set. Now, if this were a genuine random tripartite graph with specified edge densities, then the number of triangles in such a random graph is pretty easy to calculate. You would expect that it is around the product of the sizes of these vertex sets multiplied by their edge densities. And what we will see is that in this case, in that of an epsilon regular setting, this is also a true statement. It's a true, deterministic statement. That's one of the consequences of epsilon regularity. Yes, question? AUDIENCE: Why are we only multiplying the sizes [INAUDIBLE]? PROFESSOR: Asking, why are we only multiplying the sizes of x, y, and z? So you're asking-- OK. So we're trying to find out how many triangles are there with one vertex in x, one vertex in y, and one vertex in z. So if I put these vertices in there, one by one, then if this were a random graph, I expect that pair to be an edge with probability dxy and so on. So if all the edge densities were one half, then I expect one eighth of these triples to be actual triangles. And what we're saying is that in an epsilon regular setting, that is approximately a true statement. So let me formalize this intuition into an actual statement. And this type of statements are known as counting lemmas in literature. And in particular, let's look at the triangle counting lemma. In the triangle counting lemma-- so we're using the same picture over there-- I have three vertex subsets of some given graph. Again, they don't have to be disjoint. They could overlap, but it's fine to think about that picture over there. And suppose that these three pairs of subsets-- so these three subsets-- they are mutually epsilon regular. Then, for abbreviation, let me write the sub xy to be the edge density between x and y, and so on for the other two pairs. The conclusion is that the number of triangles-- where I'm looking at triangles and only counting triangles with one specified vertex in x, one in y, and one in z-- is at least some quantity. So there is a small potential error loss but otherwise the product, as I mentioned earlier. So it is at least this quantity I mentioned earlier up to a potential small error because we're looking at epsilon regularity. So there could be some fluctuations in both directions. 
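Before the proof, here is a quick numerical illustration (a toy setup of my own, not from the lecture) of the benchmark the lemma is comparing against: in a genuinely random tripartite graph with the given edge densities, the number of triangles with one vertex in each part concentrates around d_xy d_xz d_yz |X||Y||Z|. The counting lemma is the deterministic statement that epsilon-regular triples are forced to behave the same way, up to the stated error.

```python
# Triangle count in a random tripartite graph vs. the product-of-densities heuristic.
import random

def random_tripartite(nx, ny, nz, dxy, dxz, dyz, seed=0):
    rng = random.Random(seed)
    X = list(range(nx))
    Y = list(range(nx, nx + ny))
    Z = list(range(nx + ny, nx + ny + nz))
    E = set()
    for P, Q, d in ((X, Y, dxy), (X, Z, dxz), (Y, Z, dyz)):
        for p in P:
            for q in Q:
                if rng.random() < d:
                    E.add(frozenset((p, q)))
    return X, Y, Z, E

X, Y, Z, E = random_tripartite(30, 30, 30, 0.5, 0.4, 0.6)
triangles = sum(1 for x in X for y in Y for z in Z
                if frozenset((x, y)) in E and frozenset((x, z)) in E
                and frozenset((y, z)) in E)
print(triangles, "triangles; heuristic prediction", 0.5 * 0.4 * 0.6 * 30 ** 3)
```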
A similar statement is also true as an upper bound. But the lower bound will be more useful, so I will show you the proof of the lower bound. But you can figure out how to do the upper bound. And later on we'll see a general proof what happens, instead of triangles, if you have other subgraphs that you wish to count. So here's the intuition. So you have a random-like setting, and we'll formalize it in the setting of epsilon regular pairs. Yeah? AUDIENCE: Where does the 1 minus 2 epsilon come from? PROFESSOR: OK. The question is, where does 1 minus 2 epsilon come from? You'll see in the proof. But you should think of this as essentially a negligible factor. Any more questions? All right. So here's how this proof is going to go. Let's look at x and think about its relationship to y. It's epsilon regular. And they claim, as a result of them being epsilon regular, fewer than epsilon fraction of x. So fewer than this many vertices in x have very small number of neighbors in y. Because if this were not the case, then you can violate the condition of absolute regularity. So if not, then let's look at this subset, which has size at least epsilon x. And all of them have fewer than that number of neighbors to y. So these two sets-- so this set, x prime and y, would witness non-epsilon regularity. So you cannot have too many vertices with small degrees going to x-- going to y. OK. Great. And likewise, fewer than epsilon x vertices have a small number of neighbors to z. So what does the picture now look like? So you have this x and then these two other sets, y and z where I'm going to throw out a small proportion of x less than 2 epsilon fraction of x that have the wrong kinds of degrees. And everything else in here have lots of neighbors in both y and in z. And in particular, for all x up here it has lots of neighbors to y, lots of neighbors to z. How many? Well, we have at least d sub xy minus epsilon y neighbors to y and at least d sub xz minus epsilon times z neighbors to z. OK. So now I realize I'm missing a hypothesis in the counting lemma. Let me assume that none of these edge densities are too small. They're all at least 2 epsilon. So now these guys are at least epsilon fractions of y and z. So I can apply the definition of epsilon regularity to the pair yz to deduce that there are lots of edges between these two sets. So over here, the number of edges is at least the products of the sizes multiplied by the edge density between them. And by the definition of epsilon regularity, the edge density between these two small or these two red sets is at least d of yz minus epsilon. So putting everything now together, we find that the total number of triangles, looking at all the possible places where x can go-- so at least 1 minus 2 epsilon times the size of x. And then multiply by this factor over here. And so we find the statement up there. So this calculation formalizes the intuition that if you have epsilon regular pairs, then they behave like random settings when it comes to counting small patterns-- namely that of a triangle. So what can we use this for? The next statement I want to show you is called a triangle removal lemma. So this is a somewhat innocuous looking statement that is surprisingly tricky to prove. And part of the development of this regularity lemma was to prove Szemerédi's-- to the triangle removal lemma. This was one of the early applications of the regularity lemma. So it's due to Ruzsa and Szemerédi back in the '70s. Here's the statement. 
For every epsilon there exists a delta, such that every graph of n vertices with a small number of triangles-- so a small number of triangles means a negligible fraction of all the possible triples of vertices are actual triangles. So fewer than delta n cubed triangles. So if you have a graph with a small number of triangles, the question is, can you make it triangle free by getting rid of a small number of edges? So actually, there was already a problem on the first homework set that is in that spirit. So if you compare what I'm doing here to the homework set, you'll see that there are different scales. So fewer than delta n cubed triangles can be made triangle free by removing epsilon n squared edges. So if you have a small number of triangles, you can get rid of all the triangles by removing a small number of edges. If I put it that way, it actually sounds kind of trivial. You just get rid of all the triangles. But if you look at the scales it's not trivial at all, because there are only a subcubic number of triangles. So if you take out one edge from each triangle, maybe you got rid of a lot of edges. So this is a very innocent looking statement, but it's actually incredibly deep and tricky. Before jumping to the proof, let me first show you an equivalent reformulation of the statement that also helps you to think about what this statement is trying to say. So the triangle removal lemma can be equivalently stated as saying that every n vertex graph with a subcubic number of triangles-- so little o of n cubed triangles-- can be made triangle free by removing a subquadratic-- namely, little o of n squared-- number of edges. So this is an equivalent statement to what I wrote above, although it actually takes some thought to figure out what this is even saying because everybody loves using asymptotic notation, but there is also ambiguity with, what do you mean by asymptotic notation, especially if it appears in the hypothesis of a claim? So what do you think this statement means? Can you write out more of a full form? I think of this as a lazy version of trying to say something. So what do you mean by having little o of n cubed triangles? Yes. AUDIENCE: The sequence of the graph. [INAUDIBLE] AUDIENCE: [INAUDIBLE] function has n and only n. [INAUDIBLE] PROFESSOR: OK. Great. So I have a sequence of graphs. And also, we can put some functions in. So I'll write down the statement here, but that's kind of the idea. We're looking at not just a single graph, but we're looking at a sequence. Another way to say this is that for every function fn, that is subcubic. So for example, if f of n is n cubed divided about log n, there exists some function g, which is subquadratic, such that if you replace the first one by f of n and the second thing by g of n, then this is a true statement. And I'll leave it to you as an exercise in quantified elimination, let's say, to explain why these two statements are equivalent to each other. I want to explain a recipe for applying Szemerédi's regularity lemma. How does one use the regularity lemma to prove, well, statements in graph theory? The most standard applications of regularity lemma generally have the following steps. Let me call this a recipe. And we'll see it a few times. The first step is we apply Szemerédi's regularity lemma to obtain a partition. So let me call the first step partition. In the second step, we look at the partition that we obtained, and we clean it up. So in the partition, you have some irregular pairs that are undesirable to work with. 
And there are some other pairs that we'll see. So in particular, if your pair involves edges that are fairly sparse or subsets of vertices that are fairly small, then maybe we don't want to touch them because they're kind of not so good to deal with. So we're going to clean the graph by removing edges in irregular pairs and low density pairs. And unless you're using the version of regularity lemma that allows you to have equitable parts, you also want to get rid of edges where one of the parts is too small. And the third step, I'll call this count. Once you've cleaned up the regularity partition, say, well, let's try to find some patterns. If you find one pattern in the cleaned graph-- and we can use the counting lemma to find lots of patterns. Here, for the purpose of triangle removal lemma and what we've been doing so far, pattern just means a triangle. So we're going to use the triangle counting lemma to find us lots of triangles. So we'll see the details in a bit. But if we run through the strategy-- you give me a graph. Let's say, starting from the triangle removal lemma, it has a small number of triangles. You apply the partition, clean it up, and I claim this cleaning removes a small number of edges. And it should result in a triangle free graph because if it did not result in a triangle free graph, then there's some triangle. And from that triangle I can apply the triangle counting lemma to get lots of triangles. And that would violate the hypothesis of the triangle removal lemma. So that's how the proof is going to go. So I want to take a very quick break. And then when we come back, we'll see the details of how to apply the irregularity lemma. Are there any questions so far? Yeah? AUDIENCE: So when we're removing edges in one of the [INAUDIBLE],, is that too small? Can we do that for every vertex, or is it too small? PROFESSOR: So you're asking about what happens when we remove vertexes that are too small. You will see in the details of the proof. So hold on to that question for a bit. More questions. OK. So let's see the proof of the triangle removal lemma. So the first step is to apply Szemerédi's regularity lemma and find a partition. So we'll find a partition that's epsilon over 4 regular. So here, epsilon is the same epsilon in the statement-- in the top statement-- of the triangle removal lemma. In the second step, let's clean the graph by removing all edges in-- so we are going to get rid of edges between-- OK. So let me do it this way. So all edges between the vi and the vj whenever vi and vj is not epsilon regular. Get rid of the edges between irregular parts. AUDIENCE: Epsilon over 4 regular. PROFESSOR: Sorry? AUDIENCE: Epsilon over 4 regular. PROFESSOR: Epsilon over 4 regular. Thank you. Also, between parts where the edge density is too small-- if the edge density is less than epsilon over 2, get rid of those edges. And if one of the two vertex sets has size too small-- and here, too small means epsilon over 4M times the size of n. So here-- OK. So let me use big M for the number of parts. So that's the M that comes out of Szemerédi's regularity lemma. If you like, some of the vertex sets can be empty. It doesn't change the proof. And n is the number of vertices in the graph. And this step, you don't really need the step if your regular partition is equitable. So let's see how many vertices-- how many edges have we gotten rid of. We want to show that we're not deleting too many edges. In the first step-- so the number of deleted edges. 
In the first step, you see that the number of edges deleted is at most the sum of product of vi vj when you sum over ij such that this pair is not epsilon regular or epsilon 4 regular. Epsilon over 4 regular. By the definition of an epsilon regular partition, the sum here is at most epsilon over 4 times n squared. In the second step, I'm getting rid of low density pairs. By the virtue of them being low density, I'm not removing so many edges. So at most epsilon over 2 times n squared edges I'm getting rid of. In the third part, you see every time I take a very small piece, every vertex here is adjacent to at most n vertices. So the number of such things, such edges I'm getting rid of in the last step is at most this number times a number of parts M then times n. So it's at most epsilon over 4 times n squared. So here I'm telling you how many edges I've deleted in each step. And in total, putting them together, we see that we get rid of at most epsilon n squared edges from this graph. So that's the cleaning step. So we cleaned up the graph by getting rid of low density pairs, getting rid of irregular pairs, and small vertex s. Now suppose, after this cleaning, some triangles still remains. So we're now onto the third step. So suppose some triangle remains. So where could this triangle sit? Has to be between three parts-- vi, vj, and vk. I, j, and k, they don't have to be distinct. So the argument will be OK if some of them are the same, but it's easier to draw if they're all different. So I have some triangle, like that. Because these edges have not yet been deleted in the cleaning step, I know that the vertex sets are not too small, the edge densities are not too small, and they are all regular with each other. So here, each pair in vi, vj, vk is epsilon over 4 regular and have edge density at least epsilon over 2. And now we apply the triangle counting lemma, and we find that the number of triangles with one vertex in vi, one vertex in vj, one vertex in the vk is at least this quantity here. So that's a correction factor. So 1 over this 2 epsilon. And then a bunch of densities-- so densities are not too small. So I have at least epsilon over 4 n cubed multiplied by the sizes of the vertex sets. Now I know that-- use the fact that these part sizes are not too small. So I have that. Just in case, if i, j, and k happen to be the same, or two of them happen to be the same, I might overcount the number of triangles a little bit. But at most, you overcount by a factor of 6. So that's OK. So if you're worried about that, put the 1 over 6 factor in, just in case i, j, k not distinct. Or if you like, in the cleaning step, you can-- if you apply the equitable version of the regularity lemma, you can also get rid of edges inside the parts. But there are many ways to do this. It's not an important step. Now, this quantity, let me set it to be delta. You see, delta is a function of epsilon because M is a function of epsilon. So now, looking back at the statement, you see for every epsilon there exists a delta, such that if your graph has fewer than delta n cubed triangles, then let me get rid of all those edges. I've gotten rid of fewer than epsilon n squared edges, and the remaining graph should be triangle free. Because if it were not triangle free, then I can find some triangle. And that will lead to a lot more triangles. So for example, if you set this as delta over 2, then this will give you 2 delta n cubed triangles. Therefore, it would contradict the hypothesis. 
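For reference, here is one admissible way to make the constants explicit, assembled from the quantities in the proof above (a sketch rather than a quotation of the lecture's choice). The cleaning step removes at most
\[
\Bigl(\tfrac{\varepsilon}{4} + \tfrac{\varepsilon}{2} + \tfrac{\varepsilon}{4}\Bigr) n^2 \;=\; \varepsilon n^2
\]
edges, and if a triangle survives between parts $V_i, V_j, V_k$, the triangle counting lemma gives at least
\[
\tfrac{1}{6}\Bigl(1 - \tfrac{\varepsilon}{2}\Bigr)\Bigl(\tfrac{\varepsilon}{4}\Bigr)^{3}\Bigl(\tfrac{\varepsilon}{4M}\Bigr)^{3} n^{3}
\]
triangles, so one may take $\delta = \tfrac{1}{6}\bigl(1 - \tfrac{\varepsilon}{2}\bigr)\bigl(\tfrac{\varepsilon}{4}\bigr)^{3}\bigl(\tfrac{\varepsilon}{4M}\bigr)^{3}$, where $M = M(\varepsilon/4)$ is the bound from the regularity lemma. The only feature that matters is that $\delta$ depends on $\varepsilon$ alone.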
And that finishes the proof of the triangle removal lemma, saying that thus the resulting graph is triangle free. So that's the proof of the triangle removal lemma. So let me recap. We start with a graph, apply Szemerédi's regularity lemma, and clean up the regularity partition by getting rid of low density pairs, getting rid of irregular pairs, and getting rid of edges touching a very small vertex set. And I claim that the resulting graph, after cleaning up, should be triangle free. Because if it were not triangle free and I find some triangle, then I should be able to use that triple of vertex sets, combined with a triangle counting lemma, to produce a lot more triangles and. That would violate the hypothesis of the theorem. Any questions? Yeah. AUDIENCE: Where are you using that there exists a triangle? PROFESSOR: Ah, great. So question is, where am I using there exists a triangle? If there were no triangles, then we're done. So the purpose of the triangle-- the claim in the triangle removal lemma is that you can get rid of all triangles by removing at most epsilon n squared edges. AUDIENCE: So say we did that, and now-- why does this not prove that we still have triangles? PROFESSOR: Can you say your question again? AUDIENCE: So say we've removed everything by our cleaning step, and we've removed epsilon n squared edges, why does this logic not prove that we still have delta n cubed triangles. PROFESSOR: OK. So let me try to answer your question. So why does this proof show that you still have delta n cubed triangles? So I only set delta at the end. But of course, you can also set delta in the beginning of this proof. So I'm saying that you do the step. You get rid of epsilon n squared edges. And now I claim, after the step-- so I claim the remaining graph is triangle free. If it were not triangle free, then, well, it has some triangle. Then the triangle counting lemma would tell me there are lots of triangles. And that would contradict the hypothesis where we assume that this graph G has a small number of triangles. AUDIENCE: So if there is no triangle, then we've removed edges between vi, vj, or vi, vk, or vj, vk for any three i, j, k. PROFESSOR: That's correct. So we're saying, if you do not have any triangles-- well, after the cleaning step, we have gotten rid of all the edges between the bad pairs. And I'm claiming that there is no configuration like this left. And this is the proof because if you have some configuration where you did not delete the edges between these three parts, then you should be able to get a lot more triangles from the triangle counting lemma. Yeah. AUDIENCE: What if there were lots of triangles inside each individual vi, vj, vk? PROFESSOR: You asked me, what happens if there were a lot of triangles inside each vi, vj, vk? So that is fine. If you find some triangle-- so this picture, i, j, or k, they do not have to be distinct. So the same proof works if i, j, and k, some of them are equal to each other. Yep. AUDIENCE: [INAUDIBLE] but, I don't really understand why-- isn't delta over 2 there? PROFESSOR: So you're asking, why did I put the delta over 2? Just because I put less than or equal to delta. If I put strictly less than delta, then I don't need a delta over 2. AUDIENCE: [INAUDIBLE] delta over 2 or 2 delta. PROFESSOR: OK. Don't worry about it. Yes. AUDIENCE: Is there a way to generalize the triangle counting lemma to a general graph? PROFESSOR: OK. You're asking, is there a way to generalize the triangle counting lemma to a general graph? So yes. 
We will see that not today but I think next time. Any more questions? Great. So why do people care about the triangle removal lemma? So it's a nice, maybe somewhat unintuitive statement. But there was a very good reason why the statement was formulated, and it's because you can use it to prove Roth's theorem. So that's what I want to explain, how to connect this graph theoretic statement to a statement about three-term AP-- three AP-free subsets of the integers. This goes back to the very connection between graph theory and additive combinatorics that I highlighted in the first lecture. First, let me state a corollary of the triangle removal lemma-- namely, that if you have an n vertex graph G, where-- so if G is n vertex, and every edge is in exactly one triangle, then the number of edges of G is little o of n squared. These are actually kind of strange graphs. Every edge is in exactly one triangle. OK. Well, the number of triangles in G-- ever edge is in exactly one triangle. So the number of triangles in G is the number of edges divided by 3. The number of edges is at most n squared. So this quantity is at most quadratic order, which in particular is little o of n cubed. And thus the triangle removal lemma tells us that G can be made triangle free by removing little o of n squared edges. On the other hand, since every edge is in exactly one triangle, well, how many edges do you need to remove to get rid of all the triangles? Well, I need to remove at least a third of the edges. I need to remove at least a third of edges to make G triangle free. Putting these two claims together, we see that the number of edges of G must be little o of n squared. Any questions? AUDIENCE: Are there not more elementary ways to prove this? PROFESSOR: Great. Question is, are there not more elementary ways to prove this? Let me make some comments about that. So the short answer is, yes but not really. And really, the answer is no. [LAUGHTER] So you can ask, what about quantitative bounds? Because what is more elementary, what is less elementary is kind of subjective. But quantitative bounds, something that is very concrete. It's hard to argue. So if you look at the triangle removal lemma, you can ask, how is the dependence of delta on epsilon? So what does the proof give you? Where's the bottleneck? The bottleneck is always in the application of Szemerédi's regularity lemma-- namely in this M. So none of the other epsilons really matter. It's this M that kills you in terms of quantitative bounds. So in triangle removal lemma, this proof gives 1 over delta. So you can take 1 over delta being a tower of twos of height at most polynomial in 1 over epsilon. So that is your different proof. Well, the best known bound due to Fox is that you can replace this height by a different height that is at most essentially logarithmic in 1 over epsilon. Still a tower of twos. So we've changed some really big number to another, but slightly smaller, really big number. So this is still an astronomical number for any reasonable epsilon. And in terms of that corollary, basically the only known proof goes through the triangle removal lemma. Currently, we do not know any other approach to this problem. And you'll see later on that, well, what's the best thing that we can hope for? So it is quite possible that there are other proofs that are yet to be found. So that's actually-- people believe this, that this is not the right proof, that maybe there's some other way to do this. 
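Going back to the corollary at the start of this passage, here is the argument collected into one chain (these are just the steps above, written out). Since every edge lies in exactly one triangle, distinct triangles are edge-disjoint, so
\[
\#\{\text{triangles in } G\} \;=\; \frac{e(G)}{3} \;\le\; \frac{n^2}{6} \;=\; o(n^3).
\]
The removal lemma then supplies a set of $o(n^2)$ edges meeting every triangle, while any such set must take at least one edge from each of the $e(G)/3$ edge-disjoint triangles, so
\[
\frac{e(G)}{3} \;\le\; o(n^2), \qquad \text{hence} \qquad e(G) \;=\; o(n^2).
\]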
And the best lower bound, which we'll see either later today or next time, shows that we cannot do better than 1 over epsilon being essentially just a little bit more than polynomial in epsilon. So epsilon raised to something that is logarithmic in 1 over epsilon. So you can think of this as very-- it's a little bit bigger than polynomial in 1 over epsilon but not that much bigger than polynomial in 1 over epsilon. So there is a very big gap in our knowledge on what is the right dependence between epsilon and delta in the triangle removal lemma. And that's one of the-- it's a major open problem in extremal combinatorics to close this gap. Other questions? All right. So let's prove Roth's theorem. So let me remind you that Roth's theorem, which we saw in the very first lecture, says that if you have a subset of 1 through n that is free of three-term arithmetic progressions, then the size of the set must be sublinear. So what does this have to do with a triangle removal lemma? So if you remember the first lecture, maybe the connection shouldn't be so surprising. What we will do is we will set up a graph, starting from some arithmetic sets such that the graph encodes some arithmetic information-- in particular, the three-term APs in your graph, in the set, correspond to the triangles in the graph. So let's set up this graph. It will be helpful to view A not as a subset of the integers. It'll just be more convenient to view it as a subset of a cyclic group. Because I don't have to worry about edge cases so much when you're working a cyclic group. Here I take M to be 2N plus 1. So having it odd makes my life a bit simpler. Then if A is three AP free subset of 1 through n, then I claim that A now sitting inside this cyclic group is also three AP free. So it's a subset of Z mod n. And what we will do is that we will set up a certain graph. So we will set up a tripartite graph, x, y, z. And here, x, y, and z are going to be M elements whose vertices are represented by elements of Z mod n. And I need to tell you what are the edges of this graph. So here are the edges. I'm putting an edge between vertex x and y if and only if y minus x is an element of A. So it's a rule for how to put in the edges. And this is basically a Cayley graph, a bipartite variant of a Cayley graph. Likewise, I put an edge between x and z. So let me put x down here and y up there. So let me put in the edge between y and z if and only if z minus y is an element of A. And for the very last pair, it's similar but slightly different. I'm putting that edge if and only if z minus x divided by 2 is an element of A. Because we're in an odd cyclic group I can divide by 2. So this is a graph. So starting with a set A I give you this rule for constructing this tripartite graph. And the question now is, what are the triangles in this graph? If the vertices x, y, z is a triangle, then these three numbers by definition, because of the edges-- because they're all edges in this graph, these three numbers, they all lie in A. But now notice that these three numbers, they form a three-term arithmetic progression because the middle element is the average of the two others. But we said that A is a set that is three AP free. Has no three-term arithmetic progression. So what must be the case? So A is 3 AP free. But you can still have three APs using the same element three times. So all the three-term arithmetic progressions must be of that form. So these three numbers must then equal to each other. 
And in particular, you see that if you select x and y, it determines z. This equality here is the same as saying that x, y, and z they themselves form a three AP in Z mod nz. So this is precisely the description of all the triangles in the graph. So all the triangles in the graph G are precisely x, y, z, where x, y, and z form a three-term arithmetic progression. And in particular, every edge of G lies in exactly one triangle. You give me an edge-- for example, xy-- I complete it two a three AP, x, y, z. And that's the triangle. And that's the unique triangle that the edge sits in. And likewise, if you give me xz or yz, I can produce for you a unique triangle. So we have this graph. It has this property that every edge lies in exactly one triangle, so we can apply the corollary up there to deduce a bound on the total number of edges. Well, how many edges are there? On one hand, we see that because it's a Cayley graph, each of the three parts-- there are three parts here. Each of the three parts, if I start with any vertex, I have A edges coming out of that vertex to the next part by the construction. On the other hand, by the corollary up there, the number of edges has to be little o of M squared. And because M is essentially twice n, we obtain that the size of A is little o of M. And that proves Roth's theorem. Yeah? AUDIENCE: Could you explain one more time why every edge is in exactly one triangle? PROFESSOR: OK. So the question is, why is every edge in exactly one triangle? So you know what all the edges are. So this is a description of what all the edges are. And what are all the triangles. Well, x, y, z is a triangle precisely when these three expressions all lie in A. But note that these three expressions, they form a three AP because the middle term is the average of the two others. So x, y, z is the triangle if and only if this equation is true. And this equation is true if and only if x, y, z form a three AP in Z mod n. So if you just read out this equation, I give you x and y. So what is z? So all the triangles in x, y, z are precisely given by three APs, where one of the differences y minus x is in A. OK. So I give you an edge. For example, xy, such that y minus z is in A. And I claim there's a unique z that completes this edge to a triangle. Well, it tells you what that z is. z has to be the element in Z mod m that completes x and y to a three AP. Namely, z is the solution to this equation. No other z can work. And you can check that z indeed works and that all the remaining pairs are edges. So it's something you can check. Any more questions? So starting with the set A that is three AP free, we set up this graph with a property that every edge lies in exactly one triangle. And the one triangle basically corresponds to the fact that you always have these trivial three APs repeating the same element three times. And then, by applying this corollary of the triangle removal lemma, we deduce that the number of edges in the graph must be subquadratic. So then the size of A must be sublinear. And that proves Roth's theorem. So we did quite a bit of work in proving this theorem-- Szemerédi's regularity lemma, counting lemma, removal lemma, and then we set up this graph. So it's not an easy theorem. Later in the course, we'll see a different proof of Roth's theorem that goes through Fourier analysis. That will look somewhat different, but it will have similar themes. 
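Before moving on, here is a small computational sketch of the construction just described (my own illustration; the set A below is a hypothetical small 3-AP-free example, and the code assumes exactly the three edge rules above). For a tiny N it verifies that every X-Y edge lies in exactly one triangle and that the number of X-Y edges equals M times |A|.

```python
# Tripartite graph from a 3-AP-free set A in Z_M, M = 2N + 1, with edge rules:
# x ~ y iff y - x in A,  y ~ z iff z - y in A,  x ~ z iff (z - x)/2 in A.
def roth_graph_check(N, A):
    M = 2 * N + 1
    A = {a % M for a in A}
    inv2 = pow(2, -1, M)                              # 2 is invertible since M is odd
    exy = lambda x, y: (y - x) % M in A
    eyz = lambda y, z: (z - y) % M in A
    exz = lambda x, z: ((z - x) * inv2) % M in A
    edges = 0
    for x in range(M):
        for y in range(M):
            if exy(x, y):
                edges += 1
                completions = sum(1 for z in range(M) if eyz(y, z) and exz(x, z))
                assert completions == 1               # exactly one triangle per edge
    print(f"M = {M}: {edges} X-Y edges, M * |A| = {M * len(A)}; "
          "every edge lies in exactly one triangle")

roth_graph_check(N=9, A={1, 2, 4, 9})                 # {1, 2, 4, 9} has no 3-term AP
```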
So we'll also have this theme comparing structure and pseudorandomness, which comes up in the proof-- in the statement and proof of Szemerédi's graph regularity lemma. So there, it's really about understanding what is the structure of the graph in terms of decomposition into parts that look pseudorandom. Yeah. AUDIENCE: You called the graph the Cayley graph. Why? PROFESSOR: OK. So question is, why do I call this graph the Cayley graph? So usually the Cayley graph refers to a graph where I give you a group, and I give you a subset of the group, and I connect two elements if, let's say, their difference lies in my subset. This basically has that form. So it's not exactly what people mean by a Cayley graph, but it has that spirit. Any more questions? OK. So earlier I talked about bounds for triangle removal lemma. So what about bounds for Roth's theorem? We do know somewhat better bounds for Roth's theorem compared to this proof. Somehow it's a nice proof, it's a nice graph, theoretic proof, but it doesn't give you very good bounds. It gives you bounds that decay very poorly as a function of n. Actually, what does it give you as a function of n? If you were to replace this little o by a function of n according to this proof, what would you get? I'm basically asking, what is the inverse of the function where you input some number and it gives you a tower of exponentials of height with that input? It's called a log star. So the log star-- so this is essentially N over the log star of N. So the log star basically is the number of times you have to take the logarithm to get you below 1. So that's the log star. And there's a saying that the log star, we know that it grows to infinity, but it has never been observed to do so. It's extremely slowly growing function. Any more questions? So I want to-- so next time I want to show you a construction that gives you a-- so next time I will show you a construction that gives you a subset A of n that is fairly large. So you might ask, OK, so you have this upper bound, but what should the truth be? And here's more or less the state of knowledge. So best bounds of Roth's theorem. Basically, the best bounds have the form N divided by basically log N raised to power 1 plus little o1. The precise bounds are of the form N over log N, and then there's some extra log-log factors. But let's not worry about that. The best lower bounds-- so we'll see this next time. So there exists subsets of 1 through N such that the size of A is at least e to the-- so N times-- so first, let me say it's pretty close to-- the exponent is as close to 1 as you wish. So there exists as A such that the size of A is N to the 1 minus little o1. And already, this fact is an indication of the difficulty of the problem because if you could prove Roth's theorem through some fairly elementary techniques, like using a Cauchy-Schwarz a bunch of times for instance, then experience tells us that you probably expect some bound that's power saving, replacing the 1 by some smaller number. But that's not the case. And the fact that that's not the case is already indication of the difficulty of this upper bound of Roth's theorem, even getting a little o. So you don't expect there to be simple proofs getting the little o. The bound that we'll see next time-- so we'll see a construction which gives you a bound that is of this form. So it's maybe a little bit hard to think about how quickly this function grows, but I'll let you think about it. Now, how does this-- so let's look at this corollary here. 
Can you see a way to construct a graph which has lots of edges, such that every edge lies in exactly one triangle? So we did this connection showing how to use this corollary to prove Roth's theorem. But you can run the same connection. So starting from this three AP free A, we can use that construction to build a graph n, such that a graph of n vertices with essentially order of n times the size of A number of edges, such that every edge lies in exactly one triangle. So you run the same construction. And this is actually more or less the only way that we know how to construct such graphs that are fairly dense. So on one hand-- basically what I said earlier. On one hand, you have this upper bound, which is given by the proof of using Szemerédi's regularity lemma that gives you a tower in the upper bound of 1 over delta. And if you use this construction here of three AP free set to construct the graph, you get this lower bound on delta, which is quasipolynomial. And that's more or less that we know. And there's a major open problem to close these two gaps. Any more questions? So I want to give you a plan on what's coming up ahead. So today we saw one application of Szemerédi's regularity lemma-- namely, the triangle removal lemma, which has this application to Roth's theorem. So we've seen our first proof of Roth's theorem. And next lecture, and the next couple lectures, I want to show you a few extensions and applications of Szemerédi's regularity lemma. So one of the questions today was, we knew how to count the triangles, but what about other graphs? And as you can imagine, if you can count triangles, then the other graphs should also be doable using the same ideas. And we'll do that. So we'll see how to count other graphs. And we'll give you a-- well, I'll give you a proof of the Erdos-Stone-Simonovits theorem that we did not prove in the first part of this course. So it gives you an upper bound on the extremal number of a graph H that depends only on the chromatic number of H. So we'll do that. And then I'll also mention, although not prove, some extensions of the regularity lemma to other settings, such as to hypergraphs. And what that's useful for is that it will allow us to deduce generalizations of Roth's theorem to longer arithmetic progressions. Proving Szemerédi's theorem. So one way to deduce Szemerédi's theorem is to use a hypergraph removal lemma-- the hypergraph extension of the graph removal lemma, the triangle removal lemma that we saw today. It would also let us derive higher dimensional generalizations of these theorems. So it's a very powerful tool. And actually, the hypergraph removal lemma, as mentioned in the very first lecture, it's a very difficult extension of the graph removal lemma. And the hypergraph regularity lemma, which can be used to prove the hypergraph removal lemma, is a difficult extension of the graph regularity lemma. So we'll see that in the next few lectures.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
11_Pseudorandom_graphs_I_quasirandomness.txt
PROFESSOR: So we spent the last few lectures discussing Szemerédi's regularity lemma. So we saw that this is an important tool with important applications, allowing you to do things like a proof of Roth's theorem via graph theory. One of the concepts that came up when we were discussing the statement of Szemerédi's regularity lemma is that of pseudorandomness. So the statement of Szemerédi's graph regularity lemma is that you can partition an arbitrary graph into a bounded number of pieces so that the graph looks random-like, as we called it, between most pairs of parts. So what does random-like mean? So that's something that I want to discuss for the next couple of lectures. And this is the idea of pseudorandomness, which is a concept that is really prevalent in combinatorics, in theoretical computer science, and in many different areas. And what pseudorandomness tries to capture is, in what ways can a non-random object look random? So before diving into some specific mathematics, I want to offer some philosophical remarks. So you might know that, on a computer, you want to generate a random number. Well, you type in a "rand," and it gives you a random number. But of course, that's not necessarily true randomness. It came from some pseudorandom generator. Probably there's some seed and some complex-looking function and outputs something that you couldn't distinguish from random. But it might not actually be random but just something that looks, in many different ways, like random. So there is this concept of random. You can think about a random graph, right, generate this Erdos-Renyi random graph. Every edge occurs independently with some probability. But I can also show you some graph, some specific graph, which I say, well, it's, for all intents and purposes, just as good as a random graph. So in what ways can we capture that concept? So that's what I want to discuss. And that's the topic of pseudorandomness. And of course, well, this idea extends to many areas, number theory and whatnot, but we'll stick with graph theory. In particular, I want to explore today just one specific notion of pseudorandomness. And this comes from an important paper called "Quasi-random graphs." And this concept is due to Chung, Graham, and Wilson back in the late '80s. So they defined various notions of pseudorandomness, and I want to state them. And what it turns out-- and the surprising part is that these notions, these definitions, although they look superficially different, they are actually all equivalent to each other. So let's see what the theorem says. So the set-up of this theorem is that you have some fixed real p between 0 and 1. And this is going to be your graph edge density. So for any sequence of graphs, Gn-- so from now, I'm going to drop the subscript n, so G will just be Gn-- such that the number of vertices-- so G is n vertex with edge density basically p. So this is your sequence of graphs. And the claim is that we're going to state some set of properties. And these properties are all going to be equivalent to each other. So all of these properties capture some notion of pseudorandomness, so in what ways this is graph G or really a sequence of graphs. Or you can talk about a specific graph and have some error parameters and error balance. They're all roughly the same ideas. So in what ways can we talk about this graph G being random-like? Well, we already saw one notion when we discussed Szemerédi's regularity lemma. And let's see that here. So this notion is known as discrepancy. 
And it says that if I restrict my graph to looking only at edges between some pair of vertex sets, then the number of edges should be roughly what you would expect based on density alone. So this is basically the notion that came up in epsilon regularity. This is essentially the same as saying that G is epsilon regular with itself where this epsilon now is hidden in this little o parameter. So that's one notion of pseudorandomness. So here's another notion which is very similar. So it's almost just a semantic difference, but, OK, so I have to do a little bit of work. So let me call this DISC prime. So it says that if you look at only edges within this set-- so instead of taking two sets, I only look at one set-- and then look at how many edges are in there versus how many you should expect based on density alone, these two numbers are also very similar to each other. So let's get to something that looks dramatically different. The next one, I'm going to call count. So count says that for every graph H, the number of labeled copies of H in G-- OK, so labeled copies, I mean that the vertices of H are labeled. So for every triangle, there are six labeled triangles that correspond to that triangle in the graph. The number of labeled copies of H is-- so what should you expect if this graph were truly random? You would expect p raised to the number of edges of H plus small error times n raised to number of vertices of H. And just as a remark, this little o term, little o 1 term, may depend on H. So this condition, count, says for every graph H, this is true. And by that, I mean for every H, there is some sequence of decaying errors. But that sequence of decaying errors may depend on your graph H. OK. The next one is almost a special case of count. It's called C4. And it says that the number of labeled copies of C4, so the fourth cycle, is at most p raised to power of 4-- so again, what you should expect in a random setting just for cycle count alone. I see, already, some of you are surprised. So we'll discuss that this is an important constraint. It turns out that alone implies everything, just having the correct C4 count. The next one, we will call codegree. And the codegree condition says that if you look at a pair of vertices and look at their number of common neighbors-- in other words, their codegree-- then what should you expect this quantity to be? So there are n vertices that possibly could be common neighbors, and each one of them, if this were a random graph with edge probability p, then you expect the number of common neighbors to be around p squared n. So the codegree condition is that this sum is small. So most pairs of vertices have roughly the correct number of common neighbors. So codegree is number of common neighbors. Next, and the last one, certainly not the least, is eigenvalue condition. So here, we are going to denote by lambda 1 through lambda G the eigenvalues of the adjacency matrix of G. So we saw this object in the last lecture. So I include multiplicities. If some eigenvalue occurs with multiple times, I include it multiple times. So the eigenvalue condition says that the top eigenvalue is around pn and that, more importantly, the other eigenvalues are all quite small. Now, for d regular graph, the top eigenvalue-- and it's fine to think about d regular graphs if you want to get some intuition out of this theorem. For d regular graph, the top eigenvalue is equal to d, because the top eigenvector is d. It's the all-one vector. 
So top eigenvector is all-one vector, which has eigenvalue d. And what the eigenvalue condition says is that all the other eigenvalues are much smaller. So here, I'm thinking of d as on the same order as n. OK, so this is the theorem. So that's what we'll do today. We'll prove that all of these properties are equivalent to each other. And all of these properties, you should think of as characterizations of pseudorandomness. And of course, this theorem guarantees us that it doesn't matter which one you use. They're all equivalent to each other. And our proofs are actually going to be-- I mean, I'm going to try to do everything fairly slowly. But none of these proofs are difficult. We're not going to use any fancy tools like Szemerédi's regularity lemma. In particular, all of these quantitative errors are reasonably dependent on each other. So I've stated this theorem so far in this form where there is a little 1 error. But equivalently, so I can equivalently state theorem as-- for example, have DISC with an epsilon error, which is that some inequality is true with at most epsilon error instead of little o. And you have a different epsilon for each one of them. And the theorem, it turns out that-- OK, so the proof of this theorem will be that these conditions are true, so all equivalent, up to at most a polynomial change in the epsilons. In other words, so property one is true for epsilon implies that property two is true for some epsilon raised to a constant. So the changes in parameters are quite reasonable. And we'll see this from the proof, but I won't say it again explicitly. Any questions so far about the statement of this theorem? So as I mentioned just now, the most surprising part of this theorem and the one that I want you to pay the most attention to is the C4 condition. This seems, at least at face value, the weakest condition among all of them. It just says the correct C4 count. But it turns out to be equivalent to everything else. And there's something special about C4, right? If I replace C4 by C3, by just triangles, then it is not true. So I want you to think about, where does C4 play this important role? How does it play this important role? OK. So let's get started with a proof. But before that, let me-- so in this proof, one recurring theme is that we're going to be using the Cauchy-Schwarz inequality many times. And I want to just begin with an exercise that gives you some familiarity with applying the Cauchy-Schwarz inequality. And this is a simple tool, but it's extremely powerful. And it's worthwhile to master how to use a Cauchy-Schwarz inequality. So let's get some practice. And let me prove a claim which is not directly related to the proof of the theorem, but it's indirect in that it explains somewhat the C4 condition and why we have less than or equal to over there. So the lemma is that if you have a graph on n vertices such that the number of edges is at least pn squared over 2, so edge density basically p, then the number of labeled copies of C4 is at least p to the 4 minus little o 1 n to the 4th. So if you have a graph with each density p-- p's your constant-- then the number of C4s is at least roughly what you would expect in a random graph. So let's see how to do this. And I want to show this inequality as a-- well, I'll show you how to prove this inequality. But I also want to draw a sequence of pictures, at least, to explain how I think about applications of the Cauchy-Schwarz inequality. OK. So the first thing is that we are counting labeled copies of C4. 
And this is basically but not exactly the same as number of homomorphic copies of C4 and G. So by this guy here, I really just mean you are mapping vertices of C4 to G so that the edges all map to edges. But we are allowing not necessarily injective maps, C4 to G. But that's OK. So the number of non-injective maps is at most cubic. So we're not really affecting our count. So it's enough to think about homomorphic copies. OK. So what's going on here? So let me draw a sequence of pictures illustrating this calculation. So first, we are thinking about counting C4s. So that's a C4. I can rewrite the C4 count as a sum over pairs of vertices of G as the squared codegree. And what happens here-- so this is true. I mean, it's not hard to see why this is true. But I want to draw this in pictures, because when you have larger and bigger graphs, it may be more difficult to think about the algebra unless you have some visualization. So what happens here is that I notice that the C4 has a certain reflection. Namely, it has a reflection along this horizontal line. And so if I put these two vertices as u and v, then this reflection tells you that you can write this number of homomorphic copies as the sum of squares. But once you have this reflection-- and reflections are super useful, because they allow us to get something into a square and then, right after, apply the Cauchy-Schwarz inequality. So we apply Cauchy-Schwarz here. And we obtain that this sum is at most where I can pull the square out. And I need to think about what is the correct factor to put out here. And that should be-- so what's the correct factor that I should put out there? AUDIENCE: 1 over n squared. PROFESSOR: OK, so 1 over n squared. So I don't actually like doing these kind of calculations with sums, because then you have to keep track of these normalizing factors. One of the upcoming chapters, when we discuss graph limits-- or in fact, you can even do this. Instead of taking sums, if you take an average, if you take an expectation, then it turns out you never have to worry about these normalizing factors. So normalizing factors should never bother you if you do it correctly. But just to make sure things are correct, please keep me in check. All right. So what happened in this step? In this step, we pulled out that square. And pictorially, what happens is that we got rid of half of this picture. So we used Cauchy-Schwarz, and we wiped out half of the picture. And now what we can do is, well, we're counting these guys, this path of length 2. But I can reprioritize this picture so that it looks like that. And now I notice that there is one more reflection. So there's one more reflection. And that's the reflection around the vertical axis. So let me call this top vertex x. And I can rewrite the sum like that. OK. So once more, we do Cauchy-Schwarz, which allows us to get rid of half of the picture. And now I'm going to draw the picture first, because then you see that what we should be left with is just a single edge. And then you write down the correct sum, making sure that all the parentheses and normalizations are correct. But somehow, that doesn't worry me so much, because I know this will definitely work out. But whatever it is, you're just summing the number of edges. So that's just the number of edges. And so we put everything in. And we find that the final quantity is at least p raised to 4 n to 4. So I did this quite slowly. 
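For reference, here is the whole chain written out as formulas (this is just the calculation above, assembled in one place):
\[
\operatorname{hom}(C_4, G)
\;=\; \sum_{u, v \in V(G)} \operatorname{codeg}(u, v)^2
\;\ge\; \frac{1}{n^2}\Bigl(\sum_{u, v} \operatorname{codeg}(u, v)\Bigr)^{2}
\;=\; \frac{1}{n^2}\Bigl(\sum_{x} \deg(x)^2\Bigr)^{2}
\;\ge\; \frac{1}{n^2}\Bigl(\frac{1}{n}\bigl(2 e(G)\bigr)^{2}\Bigr)^{2}
\;=\; \frac{(2 e(G))^4}{n^4}
\;\ge\; p^4 n^4,
\]
and the number of labeled copies of $C_4$ differs from $\operatorname{hom}(C_4, G)$ by only $O(n^3)$, which gives the stated $(p^4 - o(1)) n^4$.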
But I'm also emphasizing the sequence of pictures, partly to tell how I think about these inequalities. Because for other similar looking inequalities-- in fact, there is something called Sidorenko's conjecture, which I may discuss more in a future lecture, that says that this kind of inequality should be true whenever you replace C4 by any bipartite graph. And that's a major open problem in combinatorics. It's kind of hard to keep track of these calculations unless you have a visual anchor. And this is my visual anchor, which I'm trying to explain. Of course, it's down to earth. It's just the sequence of inequalities. And this is also some practice with Cauchy-Schwarz. All right. Any questions? But one thing that this calculation told us is that if you have edge density p, then you necessarily have C4 density at least p to the 4th. So that partly explains why you have at most, then, here. So you always know that it's at least this quantity. So the C4 quasi randomness condition is really the equivalent to replacing this less than or equal to by an equal sign. So let's get started with proving the Chung-Graham-Wilson theorem. So the first place that we'll look at is the two versions of DISC. So DISC stands for discrepancy. So first, the fact that DISC implies DISC prime, I mean, this is pretty easy. You take y to equal to x. Be slightly careful about the definitions, but you're OK. So not much to do there. The other direction, where you only have discrepancies for a single set and you want to produce discrepancies for pairs of sets-- so this is actually a fairly common technique in algebra that allows you to go from bilinear forms to quadratic forms and vice versa. It's that kind of calculation. So let me do it here concretely in this setting. So here, what you should think of is that you have two sets, x and y, and they might overlap. And what they correspond to in the-- when you think about the corresponding Venn diagram, where I'm looking at ways that a pair of vertices can fall in x and/or y-- so if you have x and y. And so it's useful to keep track of which vertices are in which set. But what the thing finally comes down to is that the number of edges with one vertex in x and one vertex in y, I can write this bilinear form-type quantity as an appropriate sum of just number of edges in single sets. And so there are several ways to check that this is true. One way is to just tally, keep track of how many edges are you counting in each step. So if you are trying to count the number of edges in-- yeah, so let's say if you're trying to count the number of edges in-- with one vertex in x, one vertex in y. Then what this corresponds to is that count. But let me do a reflection. And then you see that you can write this sum as an alternating sum of principal squares, so this one big square plus the middle square and minus the two sides squares, which is what that sum comes to. All right. So if we assume DISC prime, then I know that all of these individual sets have roughly the correct number of edges up to a little o of n squared error. And again, I don't have to do this calculation again, because it's the same calculation. So the final thing should be p times the sizes of x and y together plus this same error. So that shows you DISC prime implies DISC. So the self version of discrepancy implies the pair version of discrepancy. So let's move on to count. To show that DISC implies count-- actually, we already did this. So this is the counting lemma. 
So the counting lemma tells us how to count labeled copies if you have these epsilon regularity conditions, which is exactly what DISC is. So count is good. Another easy implication is count implies C4. Well, this is actually just tautological. The C4 condition is a special case of the count hypothesis. All right. So let's move on to some additional implications that require a bit more work. So what about C4 implies codegree? So this is where we need to do this kind of Cauchy-Schwarz exercise. So let's start with C4. So assume the C4 condition. And suppose you have this-- so I want to deduce that the codegree condition is true. But first, let's think about just what is the sum of these codegrees as I vary u and v over all pairs of vertices. So this is that picture. So that is equal to the sum of degrees squared, which now, by Cauchy-Schwarz, you can deduce to be at least 1 over n times 2 times the number of edges-- namely, the sum of the degrees-- that thing squared. So now we assume the C4 condition-- actually, no, we assume that G has the density as written up there. So this quantity is p squared plus little o of 1, times n cubed, which is what you should expect in a random graph Gnp. But that's not quite what we're looking for. So this is just the sum of the codegrees. What we actually want is the deviation of the codegrees from their expectations, so to speak. Now, here's an important technique from probabilistic combinatorics, which is that if you want to control the deviation of a random variable, one thing you should look at is the variance. So if you can control the variance, then you can control the deviation. And this is a method known as the second moment method. And that's what we're going to do here. So what we'll try to show is that the second moment of these codegrees-- namely, the sum of their squares-- is also what you should expect as in the random setting. And then you can put them together to show what you want. So this quantity here, well, what is this? We just saw-- see, up there, it's also codegree squared. So this quantity is also the number of labeled copies of C4-- not quite, because you might have two of the vertices being the same vertex. So I incorporate a small error. So it's a cubic error, but it's certainly sub n to the 4. And we assume that the number of labeled copies of C4, by the C4 condition, is no more than basically p to the 4 times n raised to power 4. OK. So now you have a first moment-- you have some average-- and you have some control on the second moment. I can put them together to bound the deviation using this idea of controlling variance. So the codegree deviation is upper bounded by-- so here, using Cauchy-Schwarz, it's upper bounded by basically the same sum, except I want to square the summand. This also gets rid of the pesky absolute value sign, which is not nicely, algebraically behaved. OK. So now I have the square, and I can expand the square. So I expand the square into these terms. And the final term here is p to the 4 n to the 6. No, n to the 4. All right. But I have controlled the individual terms from the calculations above. So I can upper bound this expression by what I'm writing down now. And basically, you should expect that everything should cancel out, because they do cancel out in the random case. Of course, as a sanity check, it's important to write down this calculation. So if everything works out right, everything should cancel out. And indeed, they do cancel out. And you get that-- so this is a multiplication. This is p squared. Is that OK?
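Written out in one place, the second-moment calculation just described goes as follows (my reconstruction of the blackboard computation; all sums are over ordered pairs of vertices):

```latex
\text{First moment: }\ \sum_{u,v}\operatorname{codeg}(u,v)=\sum_{x}\deg(x)^2
  \ \ge\ \frac{(2e(G))^2}{n}=(p^2+o(1))\,n^3 .

\text{Second moment: }\ \sum_{u,v}\operatorname{codeg}(u,v)^2
  =\bigl(\#\text{labeled }C_4\text{'s}\bigr)+O(n^3)\ \le\ (p^4+o(1))\,n^4 .

\text{Cauchy--Schwarz: }\ \sum_{u,v}\bigl|\operatorname{codeg}(u,v)-p^2 n\bigr|
  \ \le\ n\Bigl(\sum_{u,v}\bigl(\operatorname{codeg}(u,v)-p^2 n\bigr)^2\Bigr)^{1/2},

\sum_{u,v}\bigl(\operatorname{codeg}(u,v)-p^2 n\bigr)^2
  =\sum_{u,v}\operatorname{codeg}(u,v)^2-2p^2 n\sum_{u,v}\operatorname{codeg}(u,v)+p^4 n^4
  \ \le\ (p^4-2p^4+p^4+o(1))\,n^4=o(n^4),
```

so the codegree deviation is at most n times o(n squared), which is o(n cubed)-- exactly the codegree condition.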
So everything should cancel out. And you get a little o of n cubed. To summarize, in this implication from C4 to codegree, what we're doing is we're controlling the variance of codegrees using the C4 condition and the second moment bound, showing that the C4 condition implies the codegree condition. Any questions so far? So I'll let you ponder this calculation. The next one that we'll do is codegree implies DISC. And that will be a calculation in a very similar flavor. But it will be a slightly longer calculation of a similar flavor. So let me do that after the break. All right. So what have we done so far? So let's summarize the chain of implications that we have already proved. So first, we started with showing that the two versions of DISC are equivalent. And then we also noticed that DISC implies count through the counting lemma. So we also observed that count implies C4 tautologically and C4 implies codegree. So the next natural thing to do is to complete this circuit and show that the codegree condition implies the discrepancy condition. So that's what we'll do next. And in some sense, these two steps, you should think of them as going in this natural chain, where the C4 condition is about this picture, the codegree condition is really about that one, and DISC is really about single edges. So you can go from-- so if you have the double, you get much more power. So it's going in the right direction, going downstream, so to speak. So that's what we're doing now, going downstream. And then you go upstream via the counting lemma. All right. Let's do codegree implies DISC. So we want to show the discrepancy condition, which is the one written up there. But before that, let me first show you that the degrees do not vary too much, show that the degrees are fairly well distributed, which is what you should expect in a pseudorandom graph. So you don't expect half the vertices to have degrees twice those of the other half. So that's the first thing I want to establish. If you look at the degrees, this variance, this deviation, is not too big. OK. So like before, we see an absolute value sign. We see a sum. So we'll do Cauchy-Schwarz. Cauchy-Schwarz allows us to bound this quantity, replacing the summand by a sum of squares. I have a square, so I can expand the square. So let me expand the square. And I get that, so just expanding this square inside. And you see this degree squared is that picture, so that's the sum of codegrees. And the sum of the degrees is just twice the number of edges. But we now assume the codegree condition, which in particular implies that the sum of the codegrees is roughly what you would expect. So the sum of the codegrees should be p squared n cubed plus a little o of n cubed error at the end. Likewise, the number of edges is, by assumption, what you would expect in a random graph. And then the final term. And like before-- and of course, it's good to do a sanity check-- everything should cancel out. So what you end up with is little o of n squared, showing that the degrees do not vary too much. And once you have that promise, then we move onto the actual discrepancy condition. So this discrepancy can be rewritten as the sum over vertices little x in big X of the degree from little x to big Y, minus p times the size of Y, so rewriting the sum. And of course, what should we do next? Cauchy-Schwarz. Great. So we'll do a Cauchy-Schwarz. OK, so here's an important step or trick, if you will. So we'll do Cauchy-Schwarz. And something very nice happens when you do Cauchy-Schwarz here. OK.
So you can write down the expression that you obtain when you do Cauchy-Schwarz. So let me do that first. OK. So here's a step which is very easy to gloss over. But I want to pause and emphasize this step, because this is actually really important. What I'm going to do now is to observe that the summand is always non-negative. Therefore, I can enlarge the sum from just little x in big X to the entire vertex set. And this is important, right? So it's important that we had to do Cauchy-Schwarz first to get a non-negative summand. You couldn't do this in the beginning. So you do that. And so I have this sum of squares. I expand. I expand. I write out all these expressions. And now little x ranges over the entire vertex set. All right. So what was the point of all of that? So you see this expression here, the degree from little x to big Y squared, what is that? How can we rewrite this expression? So counting from little x into Y, squared-- AUDIENCE: Sum over u and big Y. PROFESSOR: Yeah. So the sum of the codegrees of two vertices in Y-- so over little y, little y prime in big Y, the codegree of little y, little y prime. And likewise, the next expression can be written as the sum of the degrees of vertices in Y. And the third term, I leave unchanged. So now we've gotten rid of these funny expressions where it's just the degree from a vertex to a set. And we could do this because of this relaxation up here. So that was the point. We had to use this relaxation so that we get these codegree terms. But now, because you have the codegree terms and we assume the codegree hypothesis, we obtain that this sum is roughly what you expect as in a random case, because all the individual deviations do not add up to more than little o of n cubed. That codegree sum is what you expect. And the next term, the sum of degrees, is also, by what we did up there, what you expect. And finally, the third term. And as earlier, if you did everything correctly, everything should cancel. And they do. And so what you get at the end is little o of n squared. This completes this 4-cycle of implications. Any questions so far? So we're missing one more condition, and that's the eigenvalue condition. So far, everything had to do with counting various things. So what does the eigenvalue have to do with anything? So the eigenvalue condition is actually a particularly important one. And we'll see more of this in the next lecture. But let me first show you the equivalent implications. So what we'll show is that the eigenvalue condition is equivalent to the C4 condition. So that's the goal. So I'll show equivalence between EIG and C4. So first, EIG implies the C4 condition, because up to-- so instead of counting C4s, which is a little bit actually not-- it's a bit annoying to count actual C4s. Just like earlier, we want to consider homomorphic copies, which are also labeled walks, so closed walks of length 4. So up to a cubic error, the number of labeled C4s is given by the number of closed walks of length 4, which is equal to the trace of the 4th power of the adjacency matrix of this graph. And the next thing is super important. So the next thing is sometimes called the trace method. One important way that the eigenvalues-- so the spectrum of a graph or matrix-- relate to other combinatorial quantities is via this trace. So we know that the trace of the 4th power is equal to the fourth moment of the eigenvalues. So if you haven't seen a proof of this before, I encourage you to go home and think about it. So this is an important identity, of course. 4 can be replaced by any number up here.
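As a quick numerical sanity check of that identity-- a minimal sketch, assuming numpy and networkx are available; the particular graph and sizes are arbitrary choices:

```python
import numpy as np
import networkx as nx

# Check: tr(A^4) = sum of eigenvalues^4 = number of closed walks of length 4.
G = nx.erdos_renyi_graph(15, 0.4, seed=1)
A = nx.to_numpy_array(G)
n = len(A)

trace_A4 = np.trace(np.linalg.matrix_power(A, 4))
eigenvalue_moment = (np.linalg.eigvalsh(A) ** 4).sum()

# Count closed walks v0 -> v1 -> v2 -> v3 -> v0 directly from the adjacency matrix.
closed_walks = sum(
    A[v0, v1] * A[v1, v2] * A[v2, v3] * A[v3, v0]
    for v0 in range(n) for v1 in range(n)
    for v2 in range(n) for v3 in range(n)
)

print(trace_A4, eigenvalue_moment, closed_walks)  # all three agree up to floating point
```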
And now you have the eigenvalue condition. So I can estimate the sum. There's a principle term-- namely, lambda 1. So that's the big term. Everything else is small. And the smallness is supposed to capture pseudorandomness. But the big term, you have to analyze separately. OK, so let me write it out like that. So the big term, you know that it is p to the 4 n to the 4 plus little o of n to the 4. OK. So next thing is what to do with the little terms. So we want to show that the contribution in total is not too big. So what can we do? Well, let me first try something. So first, well, you see that each one of these guys is not too big. So maybe let's bound each one of them by little o of n raised to 4. But then there are n of them, so you have to multiply by an extra n. And that's too much. That's not good enough. So you cannot individually bound each one of them. And this is a novice mistake. This is something that we actually will see this type of calculation later on in the term when we discuss Roth's theorem. But you're not supposed to bound these terms individually. The better way to do this or the correct way to do this is to pull out just a couple-- some, but not all-- of these factors. So it is upper bounded by-- you take max of-- in this case, you can take out one or two. But you take out, let's say, two factors. And then you leave the remaining sum intact. In fact, I can even put lambda 1 back into the remaining sum. So that is true. So what I've written down is just true as an inequality. And now I apply the hypothesis on the sizes of the other lambdas. So the one I pulled out is little o of n squared. And now what's the second sum? That sum is the trace of a squared, which is just twice the number of edges of the graph. So that's also at most n squared. So combining everything, you have the desired bound on the C4 count. Of course, this gives you an upper bound. But we also did a calculation before the break that shows you that the C4 bound has a lower bound, as well. So really, having the correct eigenvalue-- actually, no, this already shows you that the C4 bound is correct in both directions, because this is the main term. And then everything else is small. OK. The final implication is C4 implies eigenvalue. For this one, I need to explore the following important property of the top eigenvalue. So there's something that we also saw last time, which is the interpretation of the top eigenvalue of a matrix interpreted as-- so this is sometimes called the Courant-Fischer criterion. Or actually, this is a special case of Courant-Fischer. This is a basic linear algebra fact. If you are not familiar with it, I recommend looking it up. The top eigenvalue of a matrix, of a real, symmetric matrix, is characterized by the maximum value of this quadratic form. Let's say if x is a non-zero vector. So in particular, if I set x to be a specific vector, I can lower bound lambda 1. So if we set this boldface 1 to be the all-one vector in R raised to the number of vertices of G, then the lambda 1 of the graph is at least this quantity over here. The numerator and denominators are all easy things to evaluate. The numerator is just twice the number of edges, because you are summing up all the entries of the matrix. And the denominator is just n. So the top eigenvalue is at least roughly pn. So what about the other eigenvalues? Well, the other eigenvalues, I can again refer back to this moment formula relating the trace and closed walks. 
Each of the other eigenvalues, raised to the 4th power, is at most the trace of the 4th power minus the top eigenvalue raised to the 4th power, which is the sum of the other eigenvalues raised to the 4th power. And the 4 here-- we're using that 4 is an even number, right? So you have this over here. So having the C4 hypothesis and also knowing what lambda 1 is allows you to control the other lambdas. See, lambda 1 cannot be much greater than pn. That also comes out of the same calculation. Yep. AUDIENCE: So [INAUDIBLE] number 1 equal to [INAUDIBLE]? PROFESSOR: Yeah, thank you. Yeah, so there's a correction. So lambda 1 is-- so in other words, the little o is always with respect to the constant density. OK, yeah. Question. AUDIENCE: You said in the eigenvalue implies C4, you somewhere also used the lower bound to be proved [INAUDIBLE]. PROFESSOR: OK. So the question is, in eigenvalue implies C4, it says something about the lower bound. So I'm not saying that. So as written over here, this is what we have proved. But when you think about the pseudorandomness condition for C4, it shouldn't be just that the C4 count is at most something. It should be that it equals that, which would be implied by the C4 condition itself, because we know it is always the case that the C4 count is at least what it is in the random case. So just one more thing I said was that lambda 1, you also know that it is at most pn plus little o of n, because-- OK. Yeah. So this finishes the proof of the Chung-Graham-Wilson theorem on quasi-random graphs. We stated all of these hypotheses, and they are all equivalent to each other. And I want to emphasize, again, the most surprising one is that C4 implies everything else, that a fairly seemingly weak condition, this just having the correct number of copies of labeled C4s, is enough to guarantee all of these other much more complicated looking conditions. And in particular, just having the C4 count correct implies that the counts of every other graph H are correct. Now, one thing I want to stress is that the Chung-Graham-Wilson theorem is really about dense graphs. And by dense, here, I mean p constant. Of course, the theorem as stated is true if you let p equal to 0. So there, I said p strictly between 0 and 1. But it is also OK if you let p be equal to 0. You don't get such interesting theorems, but it is still true. But for sparse graphs, what you really want to care about is approximations of the correct order of magnitude. So what I mean is that you can write down some sparse analogs for p going to 0, so p as a function of n going to 0 as n goes to infinity. So let me just write down a couple of examples, but I won't do all of them. You can imagine what they should look like. So DISC should say this quantity over here. And the discrepancy condition is little o of pn squared, because pn squared is the edge count scale overall. So that's the quantity you should compare against and not n squared. If you're comparing against n squared, you're cheating, because n squared is much bigger than the actual number of edges. Likewise, the number of labeled copies of H is-- I want to put the 1 plus little o of 1 factor in front, instead of a plus little o of n to the number of vertices of H at the end. So you understand the difference. So for sparse, this is the correct normalization that you should have, when p is allowed to go to 0 as a function of n. And you can write down all of these conditions, right? I'm not saying there's a theorem. You can write out all these conditions. And you can ask, is there also some notion of equivalence?
So are these corresponding conditions also equivalent to each other? And the answer is emphatically no, absolutely not. So all of these equivalences fail in the sparse setting. Some of them are still true. Some of the easier ones that we did-- for example, the two versions of DISC are equivalent. That's still OK. And some of these calculations involving Cauchy-Schwarz are mostly still OK. But the one that really fails is the counting lemma. And let me explain why with an example. So I want to give you an example of a graph which looks pseudorandom in the sense of DISC but does not have the correct, let's say, C3 count-- it does not have the correct number of triangles. So what's this example? So let p be some number which is little o of 1 over root n, so some decaying quantity with n. And let's consider Gnp. Well, how many triangles do we expect in Gnp? So let's think of p as just slightly below 1 over root n. So the number of triangles in Gnp in expectation is-- so that's the expected number. And you should expect the actual number to be roughly around that. But on the other hand, the number of edges is also expected to be this quantity here. And you expect the actual number of edges to be very close to it. But p is chosen so that the number of triangles is significantly smaller than the number of edges, so asymptotically smaller, fewer copies of triangles than edges. So what we can do now is remove an edge from each copy of a triangle in this Gnp. We removed a tiny fraction of edges, because the number of triangles is much less than the number of edges. We removed a tiny fraction of edges. And as a result, we do not change the discrepancy condition up to a small error. So the discrepancy condition still holds. However, the graph has no more triangles. So you have this pseudorandom graph in one sense-- namely, of having small discrepancy-- but it fails to be pseudorandom in a different sense-- namely, it has no triangles. Yep. AUDIENCE: Do the conditions C4 and codegree also hold here-- so the issue being from DISC to count? PROFESSOR: Question, do the conditions C4 and codegree still hold here? Basically, downstream is OK, but upstream is not. So we can go from C4 to codegree to DISC. But you can't go upward. And understanding how to rectify the situation, perhaps adding additional hypotheses to make this true so that you could have counting lemmas for triangles and other graphs in sparser graphs, that's an important topic. And this is something that I'll discuss at greater length in not the next lecture, but the one after that. And this is, in fact, related to the Green-Tao theorem, which allows you to prove Szemerédi's theorem in the primes-- the primes contain arbitrarily long arithmetic progressions-- because the primes are also a sparse set. So they have density going to 0. Their density decays like 1 over log n, according to the prime number theorem. But you want to use the regularity method. So you have to face these kinds of issues. So we'll discuss that more at length in a couple of lectures. But for now, just a warning that everything here is really about dense graphs. The next thing I want to discuss is an elaboration of what happens to these eigenvalue conditions. So for dense graphs, in some sense, everything's very clear from this theorem. Once you have this theorem, they're all equivalent. You can go back and forth. And you lose a little bit of epsilon here and there, but everything is more or less the same.
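As a quick back-of-the-envelope check on the G(n, p) example above:

```latex
\mathbb{E}\bigl[\#\text{triangles in } G(n,p)\bigr]=\binom{n}{3}p^3\approx\tfrac{1}{6}n^3p^3,
\qquad
\mathbb{E}\bigl[\#\text{edges}\bigr]=\binom{n}{2}p\approx\tfrac{1}{2}n^2p ,
```

so the ratio of triangles to edges is about n p squared over 3, which tends to 0 when p is little o of 1 over root n. Removing one edge per triangle therefore deletes only a vanishing fraction of the edges, leaving the discrepancy condition at scale p essentially intact, while destroying every triangle.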
But if you go to sparser world, then you really need to be much more careful. And we need to think about other tools. And so the remainder of today, I want to just discuss one fairly simple but powerful tool relating eigenvalues on one hand and the discrepancy condition on the other hand. All right. So you can go from eigenvalue to discrepancy by going down this chain. But actually, there's a much quicker route. And this is known as the expander mixing lemma. For simplicity and really will make our life much simpler, we're only going to consider d-regular graphs. So here, d-regular means every vertex is degree d. Same word, but different meaning from epsilon regular. And unfortunately, that's just the way it is. So d regular, and we're going to have n vertices. And the adjacency matrix has eigenvalues lambda 1, lambda 2, and so on, arranged in decreasing order. Let me write lambda as the maximum in absolute value of the eigenvalues except for the top one. In particular, this is either the absolute value of the second one or the last one. As I mentioned earlier, the top eigenvalue is necessarily d, because you have all-ones vector as an eigenvector. So the expander mixing lemma says that if I look at two vertex subsets, the number of edges between them compared to what you would expect in a random case-- so just like in the disc setting, but here, the correct density I should put is d over n-- this quantity is upper bounded by lambda times the root of the product of x and y. So in particular, if this lambda-- so everything except for the top eigenvalue-- is small, then this discrepancy should be small. And you can verify with what we did, that it's consistent, what we just did. All right. So let's prove the expander mixing lemma, which is pretty simple given what we've discussed so far, relating-- so there was this spectral characterization up there of the top eigenvalue. So we can let J be the all-ones matrix. So let J be the all-ones matrix. And we know that the all-ones vector is an eigenvector of the adjacency matrix of G with eigenvalue d. So the eigendecomposition of J is also the all-ones vector and its complement. So we now see that A sub G minus d over nJ has the same eigenvectors as AG. So you can choose the eigenvectors for that. It's the same set of eigenvectors. Of course, we consider this quantity here, because this is exactly the quantity that comes up in this expression once we hit it by characteristic vectors of subsets from left and right. All right. So what are the eigenvalues? So A previously had eigenvalues lambda 1 through lambda n. But now the top one gets chopped down to 0. So you can check this explicitly. So you can check this explicitly by checking that if you take this matrix multiplied by the all-ones vector, you get 0. And if you have a eigenvector-eigenvalue pair, then hitting this by any of the other ones gets you the same as in A, because you have this orthogonality condition. All the other eigenvectors are orthogonal to the all-ones vector. All right. So now we apply the Courant-Fischer criteria, which tells us that the number in this discrepancy quantity, which we can write in terms of this matrix, it is upper bounded by the product of the length of these two vectors, x and y, multiplied by the spectral norm. So I'm not quite using the version up there, but I'm using the spectral norm version, which we discussed last time. It's essentially the one up there, but you allow not just single x but x and y. 
And that corresponds to the largest eigenvalue in absolute value, which, as we noted, is at most lambda. So it's at most lambda times the square root of the size of X times the size of Y. And that finishes the proof of the expander mixing lemma. So the moral here is that, just like what we saw earlier in the dense case but for any parameters-- so here, it's a very clean statement. You can even have bounded-degree graphs-- d could be a constant. If lambda is small compared to d, then you have this discrepancy condition. And the reason why this is called an expander mixing lemma is that there's this notion of expanders, which is not quite the same but very intimately related to pseudorandom graphs. So one property of pseudorandom graphs that is quite useful-- in particular, in computer science-- is that if you take a small subset of vertices, it has lots of neighbors. So the graph is not somehow clustered into a few local pieces. So there's lots of expansion. And that's something that you can guarantee using the expander mixing lemma, that you have lots of-- you take a small subset of vertices. You can expand outward. So graphs with that specific property, that taking a small subset of vertices always gets you lots of neighbors, are called expander graphs. And these graphs play an important role, in particular, in computer science in designing algorithms and proving complexity results and so on, but they also play important roles in graph theory and combinatorics. Well, next time, we'll address a few questions which are along the lines of, one, how small can lambda be as a function of d? So here is this. If lambda's small compared to d, then you have this discrepancy. But if d is, let's say, a million, how small can lambda be? That's one question. Another question is, considering everything that we've said so far, what can we say about, let's say, the relationship between some of these conditions for sparse graphs that are somewhat special-- for example, Cayley graphs or vertex-transitive graphs? And it turns out some of these relations are also equivalent to each other.
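To see the expander mixing lemma in action numerically, here is a minimal sketch, assuming numpy and networkx are available; the degree, graph size, and random vertex subsets are arbitrary choices for illustration:

```python
import numpy as np
import networkx as nx

n, d = 200, 8
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G)

eig = np.sort(np.linalg.eigvalsh(A))   # ascending; eig[-1] = d is the trivial top eigenvalue
lam = max(abs(eig[0]), abs(eig[-2]))   # largest non-trivial eigenvalue in absolute value

rng = np.random.default_rng(0)
for _ in range(5):
    X = rng.choice(n, size=n // 4, replace=False)
    Y = rng.choice(n, size=n // 3, replace=False)
    e_XY = A[np.ix_(X, Y)].sum()       # pairs (x, y) in X x Y with xy an edge
    discrepancy = abs(e_XY - d / n * len(X) * len(Y))
    bound = lam * np.sqrt(len(X) * len(Y))
    print(f"discrepancy = {discrepancy:7.2f}   mixing bound = {bound:7.2f}")
```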
YUFEI ZHAO: Last time, we considered the relationship between pseudo-random graphs and their eigenvalues. And the main message is that the smaller your second largest eigenvalue is, the more pseudo-random a graph is. In particular, we were looking at this class of graphs that are d-regular-- they are somewhat easier to think about. And there is a limit to how small the second largest eigenvalue can be. And that was given by the Alon-Boppana bound. You should think of d here as a fixed number-- so here, d is a fixed constant. Then, as the number of vertices becomes large, the second largest eigenvalue of a d-regular graph cannot be less than this quantity over here. So this is the limit to how small this second largest eigenvalue can be. And last time, we gave a proof of this bound by constructing an appropriate function that witnesses this lambda 2. We also gave a second proof which proves a slightly weaker result, which is that the second largest eigenvalue in absolute value is at least this quantity. So in spirit, it amounts to roughly the same result-- although technically, it's a little bit weaker. And that one we proved by counting walks. And also at the end of last time, I remarked that this number here-- the fundamental significance of this number is that it is the spectral radius of the infinite d-regular tree. So that's why this number is here. Of course, we proved some lower bound. But you can always ask the question, is this the best possible lower bound? Maybe it's possible to prove a somewhat higher bound. And that turns out not to be the case. So that's the first thing that we'll see today is some discussions-- I won't show you any proofs-- but some discussions on why this number is best possible. And this is a very interesting area of graph theory-- goes under the name of Ramanujan graphs. So I'll explain the history in a second, why they're called Ramanujan graphs. Ramanujan did not study these graphs, but they are so called for good reasons. So by definition, a Ramanujan graph is a d-regular graph such that, if you look at the eigenvalues of its adjacency matrix, as above, the second largest eigenvalue in absolute value is, at most, that bound up there-- 2 root d minus 1. So it's the best possible constant you could put here so that there still exist infinitely many d-regular Ramanujan graphs for fixed d-- and the size of the graph going to infinity. And last time, we also introduced some terminology. Let me just repeat that here. So this is, in other words, an nd lambda graph, with lambda at most 2 root d minus 1. Now, it is not hard to obtain a single example of a Ramanujan graph. So I just want some graph such that the top eigenvalue is d, and I want the other ones to be small. So for example, if you take this clique, it's d-regular. Here, the top eigenvalue is d. And it's not too hard to compute that all the other eigenvalues are equal to exactly minus 1. So this is an easy computation. But the point is that I want to construct graphs-- I want to understand whether there are graphs where d is fixed. So this is somehow not a good example. What we really want is fixed d and n going to infinity. A large number of vertices. And the main open conjecture in this area is that for every d, there exist infinitely many Ramanujan d-regular graphs. So let me tell you some partial results and also explain the history of why they're called Ramanujan graphs.
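To spell out the clique example from a moment ago: the complete graph K on d plus 1 vertices has adjacency matrix A = J − I, where J is the all-ones matrix, and the eigenvalues follow immediately.

```latex
J \text{ has eigenvalues } d+1 \ (\text{on the all-ones vector}) \text{ and } 0 \ (\text{with multiplicity } d),
\quad\text{so}\quad
A=J-I \text{ has eigenvalues } \lambda_1=d \ \text{ and } \ \lambda_2=\cdots=\lambda_{d+1}=-1 .
```

Since 1 is at most 2 root d minus 1 for every d at least 2, the clique satisfies the Ramanujan bound, but it only has d plus 1 vertices, which is why it does not address the real question of fixed d and n going to infinity.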
So the first paper where this name appeared-- the paper that coined this name, and I'll explain the reason in a second-- is this important result of Lubotzky, Phillips, and Sarnak. From the late '80s. So their paper was titled Ramanujan graphs. So they proved that this conjecture is true. So the conjecture is true for all d such that d minus 1 is a prime number. I should also remark that the same result was proved independently by Margulis at the same time. Their construction of this graph is a specific Cayley graph. So they gave an explicit construction of a Cayley graph, with the group being the projective special linear group-- PSL 2 q. So, some group-- and this group actually comes up a lot. It's a group with lots of nice pseudo-randomness properties. And to verify that the corresponding graph has the desired eigenvalue properties, they had to invoke some deep results from number theory that were related to Ramanujan conjectures. So that's why they called these graphs Ramanujan graphs. And that name stuck. So these papers, they proved that these graphs exist for some special values of d-- namely, when d minus 1 is a prime. There was a later generalization in the '90s, by Morgenstern, generalizing such constructions, showing that you can also take d minus 1 to be a prime power. And really, that's pretty much it. For all the other values of d, it is open whether there exist infinitely many d-regular Ramanujan graphs. In particular, for d equal to 7, it is still open. Do there exist infinitely many 7-regular Ramanujan graphs? Now, what about a random graph? If I take a random graph, what is the size of its second largest eigenvalue? And there is a difficult theorem of Friedman-- I say difficult, because the paper itself is more than 100 pages long-- that if you take a fixed d, then a random n-vertex d-regular graph-- so what does this mean? The easiest way to explain the random d-regular graph is that you look at the set of all possible d-regular graphs on a fixed number of vertices and you pick one uniformly at random. So a random such graph is almost Ramanujan, in the following sense-- that the second largest eigenvalue in absolute value is, at most, 2 root d minus 1 plus some small error, little o of 1, where the little o of 1 goes to 0 as n goes to infinity. So in other words, this constant cannot be improved, but this result doesn't tell you that any of these graphs are Ramanujan. Experimental evidence suggests that if you take, for a fixed value of d-- let's say d equals 7 or d equals 3-- if you take a random d-regular graph, then a specific percentage of those graphs are Ramanujan. So the second largest eigenvalue has some empirical distribution, at least from computer experiments, where some specific fraction-- I don't remember exactly, but let's say 40% of 3-regular graphs-- is expected in the limit to be Ramanujan. So that appears to be quite difficult. We have no idea how to even approach such conjectures. There were some exciting recent breakthroughs in the past few years concerning a variant, a somewhat weakening of this problem-- a bipartite analogue of Ramanujan graphs. Now, in a bipartite graph-- so all bipartite graphs have the property that their eigenvalues, their spectrum, is symmetric around 0. The smallest eigenvalue is minus d. So if you plot all the eigenvalues, it's symmetric around 0. This is not a hard fact to see-- I encourage you to think about it.
And that's because, if you have an eigenvector-- so it takes some values on the left and some values on the right-- I can form another eigenvector, which is obtained by flipping the signs on one part. If the first eigenvector has eigenvalue lambda, then the second one has eigenvalue minus lambda. So the eigenvalues come in symmetric pairs. So by definition, a bipartite Ramanujan graph is one where it's a bipartite graph and I only require that the second largest eigenvalue is at most 2 root d minus 1. Everything's symmetric around the origin. So this is by definition. If you start with a Ramanujan graph, I can use it to create a bipartite Ramanujan graph, because I can look at this 2-lift-- so there's this construction-- this means if I start with some graph, G-- so for example, if G is this graph here, what I want to do is take two copies of this graph, think about having them on two sheets of paper, one on top of the other. And I draw all the edges criss-crossed. So that's G cross K 2. This is G. You should convince yourself that if G has eigenvalues lambda i, then G cross K 2 has as its eigenvalues the original spectrum as well as its negation-- so it's symmetric. So if G is Ramanujan, then G cross K 2 is a bipartite Ramanujan graph. So it's a weaker concept-- if you have Ramanujan graphs, then you have bipartite Ramanujan graphs-- but not in reverse. But still, the problem of whether there exist d-regular bipartite Ramanujan graphs is still interesting. It's a somewhat weaker problem, but it's still interesting. And there was a major breakthrough a few years ago by Marcus, Spielman, and Srivastava, showing that for all fixed d, there exist infinitely many d-regular bipartite Ramanujan graphs. And unlike the earlier work of Lubotzky, Phillips, and Sarnak-- which, the earlier work was an explicit construction of a Cayley graph-- this construction here is a probabilistic construction. It uses some very nice tools that they called interlacing families. So it showed, probabilistically, using a very clever randomized construction, that these graphs exist. So it's not just taking a usual d-regular random bipartite graph, but there's some clever construction with randomness. And this is more or less the state of knowledge regarding the existence of Ramanujan graphs. Again, the big open problem is that there exist d-regular Ramanujan graphs. For every d, there are infinitely many such Ramanujan graphs. Yeah. AUDIENCE: So in the construction of G cross K 2, lambda 1 is equal to d? Or is equal to original [INAUDIBLE]. YUFEI ZHAO: Right, so the question is, if you start with a d-regular graph and take this construction, the spectrum has a d and it also has a minus d. If your graph is bipartite, its spectrum is symmetric around the origin. So you always have d and minus d. So a bipartite graph can never be Ramanujan in the original sense. But the definition of a bipartite Ramanujan graph is just that I only require that the remaining eigenvalues sit in that interval. I'm OK with having minus d here. So that's by definition of a bipartite Ramanujan graph. Any more questions? All right, so combining the Alon-Boppana bound with both the existence of Ramanujan graphs and also Friedman's difficult result that a random graph is almost Ramanujan, we see that this 2 root d minus 1-- that number there is optimal. So that's the extent to which a d-regular graph can be pseudo-random. Now, for the rest of this lecture, I want to move onto a somewhat different topic, but still concerning sparse pseudo-random graphs.
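Here is the short linear-algebra check behind the G cross K 2 claim above. Its adjacency matrix, in block form with one block per copy of V(G), is the following, and each eigenvector of A gives two eigenvectors of the double cover:

```latex
A_{G\times K_2}=\begin{pmatrix}0 & A\\ A & 0\end{pmatrix},
\qquad
\begin{pmatrix}0 & A\\ A & 0\end{pmatrix}\begin{pmatrix}v\\ v\end{pmatrix}
 =\lambda\begin{pmatrix}v\\ v\end{pmatrix},
\qquad
\begin{pmatrix}0 & A\\ A & 0\end{pmatrix}\begin{pmatrix}v\\ -v\end{pmatrix}
 =-\lambda\begin{pmatrix}v\\ -v\end{pmatrix}
\quad\text{whenever } Av=\lambda v .
```

So the spectrum of G cross K 2 is the spectrum of G together with its negation, and if G is Ramanujan, every eigenvalue of G cross K 2 other than plus or minus d lies between minus 2 root d minus 1 and 2 root d minus 1.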
Basically, I want to tell you what I did for my PhD thesis. So, so far we've been talking about pseudo-random graphs, but let's combine it with the topic in the previous chapter-- namely, Szemerédi's regularity lemma. And we can ask, can we apply the regularity method to sparse graphs? So when we talked about Szemerédi's regularity, I kept emphasizing that it's really about dense graphs, because there are these error terms which are little o of n squared. And if your graph is already sparse, that error term eats up everything. So for sparse graphs, you need to be extra careful. So I want to explore the idea of a sparse regularity. And here, sparse just means not dense. So sparse means edge density little o of 1. So, the opposite of dense. We saw the triangle removal lemma. So, let me remind you of the statement. It says that for every epsilon, there exists some delta, such that if G has a small number of triangles, then G can be made triangle-free by removing a small number of edges. I would like to state a sparse version of this theorem that works for graphs where I'm looking at sub-constant edge densities. So roughly, this is how it's going to go. I'm going to put in these extra p factors. And you should think of p as some quantity that goes to 0 with n. So, think of p as the general scale. So that's the edge density scale we're thinking of. I would like to say that if G has less than that many triangles, then G can be made triangle-free by deleting a small number of edges. But what does small mean here? Small should be relative to the scale of edge densities you're looking at. So in this case, we should add an extra factor of p over here. So that's the kind of statement I would like, but of course, this is too good to be true, because we haven't really modified anything. If you read the statement, it's just completely false. So I would like to add some conditions, some hypotheses, that would make such a statement true. And the hypothesis is going to be roughly along those lines. So I'm going to call this a meta-theorem, because I won't state the hypothesis precisely. But roughly, it will be along the lines that, if gamma is some, say, sufficiently pseudo-random graph on n vertices with edge density p, and G is a subgraph of gamma, then I want to say that G has this triangle removal property, relatively inside gamma. And this is true. Well, it is true if you put in the appropriate, sufficiently pseudo-random condition. So I'm leaving this vague here-- I'll tell you more later what this should be. So this is the kind of statement that I would like. So a sparse extension of the triangle removal lemma says that, if you have a sufficiently pseudo-random host, or you think of this gamma as a host graph, then inside that host, relative to the density of this host, everything should behave nicely, as you would expect in the dense case. The dense case is also a special case of the sparse case, because if we took gamma to be the complete graph-- which is also pseudo-random, it's everything-- it's uniform-- then this is also true. And that's triangle removal in the dense case, but we want this sparse extension. Question. AUDIENCE: Where does the p come into [INAUDIBLE]? YUFEI ZHAO: Where does p come in? So p here is the edge density of gamma. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: Yeah. So again, I'm not really stating this so precisely, but you should think of p as something that could decay with n-- not too quickly, but decay like n to the minus some small constant. Yeah. AUDIENCE: Delta here doesn't depend on gamma?
YUFEI ZHAO: Correct. So the question is, what does delta depend on? So here, delta depends only on epsilon. And in fact, what we would like-- and this will indeed be basically true-- is that delta is more or less the same delta from the original triangle removal lemma. Yeah. AUDIENCE: If G is any graph, what's stopping you from making it a subgraph of some large [INAUDIBLE]? YUFEI ZHAO: So the question is, if G is some arbitrary graph, what's to stop you from making it a subgraph of a large pseudo-random graph? And that's a great question. If I give you a graph, G, can you test whether G satisfies the hypothesis? Because the conclusion doesn't depend on gamma. The conclusion is only about G, but the hypothesis refers to gamma. And so my two answers to that are, one, you cannot always embed it in such a gamma. I guess the easier answer is that the conclusion is false without the hypothesis. So you cannot always embed it in such a gamma. But it is somewhat difficult to test. I don't know a good way to test whether a given G lies in such a gamma. I will motivate this theorem in a second-- why we care about results of this form. Yes. AUDIENCE: Don't all sufficiently large pseudo-random graphs-- say, with respect to the number of vertices of G-- contain copies of every G? YUFEI ZHAO: So the question is, if you start with a sufficiently large pseudo-random gamma, does it contain every copy of G? And the answer is no, because G has the same number of vertices as gamma. A sufficiently pseudo-random-- again, I haven't told you what sufficiently pseudo-random even means yet. But you should think of it as controlling small patterns. But here, G is a much larger graph. It's the same size; it just has maybe, let's say, half of the edges. So what you should think about is starting with gamma being, let's say, a random graph. And I delete adversarially, let's say, half of the edges of gamma. And you get G. So let me go on. And please ask more questions. I won't really prove anything today, but it's really meant to give you an idea of what this line of work is about. And I also want to motivate it by explaining why we care about these kinds of theorems. So the first observation is that it is not true without the hypothesis-- hopefully all of you see this as obviously too good to be true. But we'll also see some specific examples. Here's a specific example. So this is not true without this gamma. So for example, you can have this graph, G. And we already saw this construction that came from Behrend's construction, where you have n vertices and n to the 2 minus little o of 1 edges, where every edge belongs to exactly one triangle. If you plug this graph into this theorem, with all the yellow stuff-- if you add in this p-- you see it's false. You just cannot remove-- anyway. In what context can we expect such a sparse triangle removal lemma to be true? One setting for which it is true-- and this was a result that was proved about 10 years ago-- is if your gamma is a truly random graph. So this is true for a random gamma if p is sufficiently large, and roughly it's at least-- so there's some constant such that if p is at least c over [INAUDIBLE], then it is true. So this is the result of Conlon and Gowers. Yeah. AUDIENCE: Is this random in the Erdos-Rényi sense? YUFEI ZHAO: So this is random in the Erdos-Rényi sense. So, Erdos-Rényi random graph. But this is not the main motivating reason why I would like to talk about this technique. The main motivating example is the Green-Tao theorem.
So I remind you that the Green-Tao theorem says that the primes contain arbitrarily long arithmetic progressions. So the Green-Tao theorem is, in some sense, an extension of Szemerédi's theorem, but a sparse extension. Szemerédi's theorem tells you that if you have a positive density subset of the integers, then it contains long arithmetic progressions. But here, the primes-- we know from the prime number theorem that the density of the primes up to n decays like 1 over log n. So it's a sparse set, but we would like to know that it has all of these patterns. It turns out the primes are, in some sense, pseudo-random. But that's a difficult result to prove. And that was proved after Green-Tao proved their initial theorem-- so, by later works of Green and Tao and also Ziegler. But the original strategy-- and also the later strategy for the stronger result, as well-- the strategy for the Green-Tao theorem is this: you start with the primes and you embed the primes in a somewhat larger set. You start with the primes and you embed it in a somewhat larger set, which we'll call, informally, pseudoprimes. And these are, roughly speaking, numbers with no small prime divisors. Because these numbers are somewhat smoother compared to the primes, they're easier to analyze by analytic number theory methods, especially coming from sieve theory. And it is easier, although still highly nontrivial, to show that these pseudoprimes are, in some sense, pseudo-random. And that's the kind of pseudo-random host that corresponds to the gamma over there. So the Green-Tao strategy is to start with the primes, build a slightly larger set so that the primes sit inside the pseudoprimes in a relatively dense manner. So it has high relative density. And then, if you had this kind of strategy for a sparse triangle removal lemma-- but imagine you also had it for various other extensions, of the sparse hypergraph removal lemma, which allows you to prove Szemerédi's theorem-- now you can use it in that setting. Then you can prove Szemerédi's theorem in the primes. That's the theorem and that's the approach. And that's one of the reasons, at least for me, why something like a sparse triangle removal lemma plays a central role in these kinds of problems. So I want to say more about how you might go about proving this type of result and also what pseudo-random graph means over here. So, remember the strategy for proving the triangle removal lemma. And of course, all of you guys are working on this problem set, and so the method of regularity hopefully should be very familiar to you by the end of this week. But let me remind you that there are three main steps, one being to partition your graph using the regularity lemma. The second one, to clean. And the third one, to count. And I want to explain where the sparse regularity method fails. So you can try to do everything the same and then-- so what happens if you try to do all these things? So first, let's talk about the sparse regularity lemma. So let me remind you-- previously, we said that a pair of vertex sets is epsilon regular if, for every subset U of A and W of B-- neither too small-- so if neither is too small, one has that the number of edges between U and W differs from what you would expect. So the edge density between U and W is close to what you expect, which is the overall edge density between A and B. So they differ by no more than epsilon. So this should be a familiar definition. What we would like is to modify it to work for the sparse setting.
And for that, I'm going to add in an extra p factor. So I'm going to say epsilon, p regular. Well, this condition here-- now all the densities are on the scale of p, which goes to 0 as n goes to infinity. So what should the property compare them against? I should add an extra factor of p to put everything on the right scale. Otherwise, this is too weak. And given this definition here, we can say that a partition of the vertices is epsilon regular if all but at most an epsilon fraction of the pairs of parts are epsilon regular-- so an equitable partition. And I would modify it to the sparse setting by changing the appropriate notion of regular to the sparse version, where I'm looking at scales of p. I still require at most an epsilon fraction-- that stays the same. That's not affected by the density scale. Previously, we had the regularity lemma, which said that for every epsilon, there exists some M such that every graph has an epsilon regular partition into at most M parts. And the sparse version would say that if your graph has edge density at most p-- and here, all of these constants are negotiable. So when I say p, I really could mean 100 times p. You just change these constants. So if it's at most p, then it has an epsilon, p regular partition into at most M parts. Here, M depends only on epsilon. So previously, I wrote down the sparse triangle removal lemma. And I wrote down the statement and it was false-- without the additional hypotheses, it was false. It turns out that this is actually true-- this version of the sparse regularity lemma, which sounds almost too good to be true, initially. We are adding in a whole lot of sparsity, and sparsity seems to be more difficult to deal with. And the reason why I think sparsity is harder to deal with is that, in some sense, there are a lot more sparse graphs than there are dense graphs. So let me pause for a second and explain that. It is not true that, in terms of counting, there are more sparse graphs. Because if you just count-- once you have sparser things, there are fewer of them. But I mean in terms of the actual complexity of the structures that can come up. When you have sparser objects, there's a lot more that can happen. In dense objects, Szemerédi's regularity lemma tells us, in some sense, that the amount of complexity in the graph is bounded. But that's not the case for sparse graphs. In any case, we still have some kind of sparse regularity lemma. And this version here, as written, is literally true if you have the appropriate definitions-- and more or less, we have those definitions up there. But I want to say that it's misleading. This is true, but misleading. And the reason why it is misleading is that, in a sparse graph, you can have lots of intricate structures that are hidden in your irregular parts. It could be that most edges are inside irregular pairs, which would make the regularity lemma a somewhat useless statement, because when you do the cleaning step, you delete all of your edges. And you don't want that. But in any case, it is true-- and I'll comment on the proof in a second. But the way I want you to think about the sparse regularity lemma is that it should work when-- so before jumping to that, a specific example where this happens is, for example, if your graph G is a clique on a sublinear fraction of vertices. Somehow, you might care about that clique. So that's a pretty important object in the graph. But when you do the sparse regularity partition, it could be that the entire clique is hidden inside an irregular part.
And you just don't see it-- that information gets lost. The proper way to think about the sparse regularity lemma is to think about graphs, G, that satisfy some additional hypotheses. So in practice, G is assumed to satisfy some upper regularity condition. And an example of such a hypothesis is something called no dense spots, meaning that it doesn't have a really dense component, like in the case of a clique on a very small number of vertices. So no dense spots-- one definition could be that there exists some eta-- and here, just as in quasi-random graphs, I'm thinking of sequences going to 0. So there exists an eta, a sequence going to 0, and a constant, C, such that for all sets in the graph-- let's say X and Y-- if X and Y have size at least an eta fraction of V, then the density between X and Y is bounded by at most a constant factor compared to the overall density, p, that we're looking at. So in other words, no small piece of the graph has too many edges. And with that notion of no dense spots, we can now prove the sparse regularity lemma under that additional hypothesis. And basically, the proof is the same as the usual Szemerédi regularity lemma proof that we saw a few weeks ago. So here is the proof of sparse regularity with the no-dense-spots hypothesis. OK, so I claim this is the same proof as Szemerédi's regularity lemma. And the reason is that in the energy increment argument, you do everything the same. You do the partitioning if it's not regular. You refine and you keep going. In the energy increment argument, one key property we used was that the energy was bounded between 0 and 1. And every time, you went up by epsilon to the fifth. And now, in the energy increment argument, at each step the energy goes up by something which is like epsilon, let's say, to the fifth, times p squared. The energy is some kind of mean square density, so this p squared should play a role. So if you only knew that, then the number of iterations might depend on p-- it might depend on n. So, not a constant-- and that would be an issue. However, if you have no dense spots-- so, because of no dense spots-- the final energy, I claim, is, at most, something like C squared p squared. Maybe some small error, because of all the epsilons floating around, but that's the final energy. So you still have a bounded number of steps. So the bound only depends on epsilon. So the entire proof runs through just fine. OK, so having the right hypothesis helps. But then I said the more general version without the hypothesis is still true. So how come that is the case? Because if you do this proof, you run into the issue-- you cannot control the number of iterations. So here's a trick introduced by Alex Scott, who came up with that version there. So this is a nice trick, which is that, instead of using x squared-- the function used as the energy-- let's consider a somewhat different function. So the function I want to use is f of x, which is initially quadratic-- so, initially x squared-- but only up to a specific point, let's say 2. And after this point, I make it linear. So that's the function I'm going to take. Now, this function has a couple of nice properties. One is that you also have this boosting, this energy increment step, because for all random variables x-- so x is a non-negative random variable; think of this as edge densities between parts of the refinement-- if the mean of x is, at most, 1, then, if you look at this energy, it increases if x has a large variance. Previously, when we used f equal to the square, this was true.
So this is true with the constant equal to 1-- in fact, that's the definition of variance. But this inequality is also true for this function, f-- so that when you break up irregular pairs, you have some variance in the edge densities, so you would get an energy boost. But the other thing is that we are no longer worried about the final energy being much higher than the individual potential contributions. Because, if you end up having lots of high density pieces, they would contribute a lot. So, in other words, the second property is that the expectation of f of x is upper-bounded by, let's say, 4 times the expectation of x. And so this inequality here would cap the number of steps you would have to do. You would never actually end up having too many iterations. So this is a discussion of the sparse regularity lemma. And the main message here is that the regularity lemma itself is not so difficult-- it's largely the same as Szemerédi's regularity lemma. And so that's actually not the most difficult part of the sparse triangle removal lemma. The difficulty lies in the other step in the regularity method-- namely, the counting step. And we already alluded to this in the past. The point is that there is no counting lemma for sparse regular graphs. And we already saw an example where, if you start with a random graph which has a small number of triangles and I delete a small number of edges corresponding to those triangles-- one, I do not affect its quasi-randomness. But two, there are no triangles anymore, so there's no triangle counting lemma. And that's a serious obstacle, because you need this counting step. So what I would like to explain next is how you can salvage that and use this hypothesis here written in yellow to obtain a counting lemma, so that you can complete this regularity method that would allow you to prove the sparse triangle removal lemma. And a similar kind of technique can allow you to do the Green-Tao theorem. So let's take a quick break. OK, any questions so far? So let's talk about the counting lemma. So, the first case of the counting lemma we considered was the triangle counting lemma. So remember what it says. If you have 3 vertex sets-- V1, V2, V3-- such that each pair of them is epsilon regular. And the edge density-- let's say, for simplicity's sake, they all have the same edge density. Actually, they can be different. So d sub ij-- so possibly different edge densities. But I have this set-up. And then the triangle counting lemma tells us that the number of triangles with one vertex in each part is basically what you would expect in the random case-- namely, multiplying these three edge densities together, plus a small error, and then multiplying the vertex sets' sizes together. So what we would like is a statement that says that if you have epsilon, p regular pairs and edge densities now at scale p, then we would want the same thing to be true. Here, I should add an extra p cubed, because that's the density scale we're working with. And I want some error here-- OK, I can even let you take some other epsilon. But small changes are OK.
So this initial version is false, because if you take a G(n, p), with p somewhat less than 1 over root n, and then remove an edge from each triangle-- or just remove all the triangles-- then you have a graph which is still fairly pseudo-random, but it has no triangles. So you cannot have a counting lemma. So there's another example which, in some sense, is even better than this random example. And it's a somewhat mysterious example due to Alon that gives you a pseudo-random gamma. So it's, in some sense, an optimally pseudo-random gamma: it is d-regular with d on the order of n to the 2/3. And it's an (n, d, lambda)-graph, where lambda is on the order of root d. Here, d is not a constant, but even in this case, roughly speaking, this is as pseudo-random as you can expect. So the second eigenvalue is roughly the square root of the degree. And yet, this graph is triangle free. So you have some graph which, for all the other kinds of pseudo-randomness, is very nice. It has all the nice pseudo-randomness properties, yet it is still triangle free. It's sparse. So the triangle counting lemma is not true without additional hypotheses. So I would like to add in some hypotheses to make it true. And I would like a theorem. So again, I'm going to state it as a meta-theorem, which says that if you assume that G is a subgraph of a sufficiently pseudo-random gamma, and gamma has edge density p, then the conclusion is true. And this is indeed the case. And I would like to tell you what "sufficiently pseudo-random" means-- what does that hypothesis mean? So that at least you have some complete theorem to take away. There are several versions of this theorem, so let me give you one which I really like, because it has a fairly clean hypothesis. And the version is that the pseudo-randomness condition-- so here it is. A sufficient pseudo-randomness hypothesis on gamma is that gamma has the correct number-- "correct" in quotes, because this is somewhat normative; what I'm really saying is that it has what you would expect compared to the random case-- the correct densities of all subgraphs of K 2, 2, 2. Having the correct density of H means having H-density equal to 1 plus little o of 1 times p raised to the number of edges of H, which is what you would expect in the random case. So you should think of there, again, not being just one graph, but a sequence of graphs. You can also equivalently write it down in terms of deltas and epsilons as error parameters. But I like to think of it as having a sequence of graphs, just as in what we did for quasi-random graphs. So suppose your gamma has this pseudo-randomness condition-- and remember, we're in this sparse setting. If you try to compare this to what we did for quasi-random graphs, you might get confused, because there, having the correct C4 count already implies everything. This condition actually does already include having the correct C4 count. So K 2, 2, 2 is this graph over here. And I'm requiring the correct density of H whenever H is a subgraph of K 2, 2, 2. So in particular, it already has the correct C4 count, but I want more. And it turns out this is genuinely more, because in the sparse setting, having the correct C4 count is not equivalent to other notions of pseudo-randomness. So this is the hypothesis. So if I start with a sequence of gammas that have the correct counts of K 2, 2, 2s as well as of subgraphs of K 2, 2, 2s, then I claim that that pseudo-random host is good enough to have a counting lemma-- at least for triangles. Any questions?
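In symbols, and as a hedged paraphrase of the hypothesis just described (thinking of a sequence of graphs $\Gamma = \Gamma_n$ on $n$ vertices with density $p = p_n$, and writing H-density as the homomorphism density, which is one standard normalization):

$$
t(H, \Gamma) \;:=\; \frac{\hom(H, \Gamma)}{n^{v(H)}} \;=\; \big(1 + o(1)\big)\, p^{e(H)} \qquad \text{for every subgraph } H \subseteq K_{2,2,2}.
$$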
Now, you might want to ask for some intuition about where this condition comes from. The proof itself takes a few pages. I won't try to do it here. I might try to give you some intuition for how the proof might go and also what difficulties you might run into when you try to execute this proof. But, at least how I think of it is that this K 2, 2, 2 condition plays a role similar to how previously, in dense quasi-random graphs, we had this somewhat magical looking C4 condition, which can be viewed as a doubled version of an edge. Actually, the technical name is a blow-up: C4 is the 2-blow-up of an edge, whereas the K 2, 2, 2 condition involves the 2-blow-up of a triangle. And this 2-blow-up hypothesis is some kind of graph theoretic analogue of controlling a second moment. Just as knowing the variance of a random variable-- knowing its second moment-- helps you to control the concentration of that random variable, showing that it's fairly concentrated, it turns out that having this graphical second moment also allows you to control the graph's properties, so that you can have nice tools like the counting lemma. So let me explain some of the difficulties. If you try to run the original proof of the triangle counting lemma in the sparse setting, what happens? So remember how that proof went. You start with this set-up and you pick a typical vertex. This typical vertex has lots of neighbors to the left and lots of neighbors to the right. And here, a lot means roughly the edge density times the number of vertices-- and a lot of vertices over here. And then you say that, because these are two fairly large vertex sets, there are lots of edges between them, by the epsilon regularity hypothesis on the bottom two sets. But now, in the sparse setting, we have an additional factor of p. So these two neighborhoods are now quite small. They're much smaller than what you can guarantee from the definition of epsilon, p regularity. So you cannot conclude from epsilon regularity that there are enough edges between these two very small sets. So that strategy breaks down in the sparse setting. In general-- not just for triangles, but for other H's as well-- we also have a counting lemma. So, the sparse counting lemma-- and also the triangle case, which I stated earlier-- this is joint work due to David Conlon, Jacob Fox, and myself. It says that there is a counting lemma. So let me be very informal. There exists a sparse counting lemma for counting H, in this set-up as before, if gamma has the pseudo-random property of containing the correct density of all subgraphs of the 2-blow-up of H. Just as in the triangle case, where the 2-blow-up is K 2, 2, 2. In general, the 2-blow-up takes a graph, H, doubles every vertex, and, for every edge of H, puts in the four edges between the corresponding pairs of doubled vertices. So that's the 2-blow-up of H. If your gamma has pseudo-random properties concerning counting subgraphs of this 2-blow-up, then you can obtain a counting lemma for H itself. Any questions? OK, so let's take this counting lemma for granted for now. How do we proceed to proving the sparse triangle removal lemma? Well, I claim that actually it's the same proof: you run the usual Szemeredi regularity proof of the triangle removal lemma, but now with all of these extra tools and these extra hypotheses, and you then obtain the sparse triangle removal lemma, which I stated earlier.
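As a small illustration of the 2-blow-up operation just defined, here is a generic sketch (my own code, not from the lecture; the function and variable names are mine): each vertex v of H becomes two vertices (v, 0) and (v, 1), and each edge uv of H becomes the four edges between the doubled copies.

```python
from itertools import product

def two_blowup(vertices, edges):
    """Return the 2-blow-up of a graph H = (vertices, edges).

    Each vertex v becomes (v, 0) and (v, 1); each edge {u, v} becomes the
    four edges between {(u, 0), (u, 1)} and {(v, 0), (v, 1)}.
    """
    new_vertices = [(v, i) for v in vertices for i in (0, 1)]
    new_edges = set()
    for u, v in edges:
        for i, j in product((0, 1), repeat=2):
            new_edges.add(frozenset({(u, i), (v, j)}))
    return new_vertices, new_edges

# Example: the 2-blow-up of a triangle is K_{2,2,2}, with 6 vertices and 12 edges.
triangle = ([0, 1, 2], [(0, 1), (0, 2), (1, 2)])
V, E = two_blowup(*triangle)
print(len(V), len(E))  # 6 12
```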
And the hypothesis that I left out-- the sufficiently pseudo-random hypothesis on gamma-- is precisely this hypothesis over here, as required by the counting lemma. And once you have that, then you can proceed to prove a relative version of Roth's theorem-- and also, by extension to hypergraphs, a relative version of Szemeredi's theorem. So, recall what Roth's theorem tells you. Let me first write down Roth's theorem, and then I'll add in the extra relative ingredients in yellow. If I start with A, a subset of Z mod N, such that A has size at least delta N, then Roth's theorem tells us that, for N sufficiently large, A contains at least one three-term arithmetic progression. But actually, you can boost that theorem. You've seen some examples of this in homework, and our proofs also do this exact same thing. If you look at any of the proofs that we've seen so far, it tells us that A not only contains one single 3-AP, but it contains many 3-APs-- at least c times N squared of them, where c is some positive number depending only on delta. You can obtain this from the versions we've seen before, either by going through the proofs, or by using the black box version of Roth's theorem together with a supersaturation argument, which is similar to things you've done in the homework. What we would like is a relative version. And the relative version will say that if you have a set, S, which is sufficiently pseudo-random, and S has density p-- here, [INAUDIBLE]-- and now A is a subset of S, and A has size at least delta times that of S, then A still contains lots of 3-APs. But I need to modify the quantity, because I am now working at density p. So this statement is also true once you put in the appropriate hypothesis for "sufficiently pseudo-random." And what should those hypotheses be? Think about the proof of Roth's theorem-- the one that we've done-- where you set up a graph. So, you set up this graph. One way to do this is that you put in edges between the three parts-- X, Y, and Z-- where the vertex sets are all given by Z mod N. And you put in an edge between x and y if 2x plus y lies in S; an edge between x and z if x minus z lies in S; and a third edge between y and z if minus y minus 2z lies in S. So this is the graph that we constructed in the proof of Roth's theorem. And when you construct this graph, either for S or for A-- as we did before-- then we see that the triangles in this graph correspond precisely to the 3-APs in the set. So, looking at the triangle counting lemma and triangle removal lemma-- the sparse versions-- you can read off from this graph what type of pseudo-randomness conditions you would like on S. So, we would like a condition which says that this graph here-- which we'll call gamma sub S-- has the earlier pseudo-randomness hypotheses. And you can spell this out. Let's actually spell this out. So what does this mean? What I mean is, for S a subset of Z mod N, we say that it satisfies what's called a 3-linear forms condition if the following holds for uniformly chosen random x0, x1, y0, y1, z0, z1, elements of Z mod NZ. Think about this K 2, 2, 2-- so draw a K 2, 2, 2 up there. What are the expressions corresponding to the edges of this K 2, 2, 2? The four edges between the bottom two vertex sets-- the C4 across the bottom-- correspond to the following expressions: minus y0 minus 2z0, minus y1 minus 2z0, minus y0 minus 2z1, minus y1 minus 2z1. But then there are two more columns.
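Here is a small, self-contained sanity check of the construction just described (my own illustrative code, not part of the lecture): build the tripartite graph on three copies of Z/NZ from a set S, list its triangles, and check that each triangle (x, y, z) yields the three numbers 2x+y, x-z, -y-2z, which always form a 3-AP inside S (the "trivial" progressions of common difference 0 correspond to x+y+z = 0).

```python
from itertools import product

def roth_graph_triangles(S, N):
    """Triangles (x, y, z) of the tripartite graph built from S, a subset of Z/NZ.

    Edges: x~y if 2x+y in S, x~z if x-z in S, y~z if -y-2z in S (all mod N).
    """
    S = {s % N for s in S}
    triangles = []
    for x, y, z in product(range(N), repeat=3):
        if (2 * x + y) % N in S and (x - z) % N in S and (-y - 2 * z) % N in S:
            triangles.append((x, y, z))
    return triangles

N = 13
S = {1, 3, 4, 9, 11}          # an arbitrary small example set
for x, y, z in roth_graph_triangles(S, N):
    a, b, c = (2 * x + y) % N, (x - z) % N, (-y - 2 * z) % N
    # The three values always form a 3-AP in S: a + c = 2b (mod N).
    assert (a + c - 2 * b) % N == 0 and {a, b, c} <= S
```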
And I'll just write some examples, but you can fill in the rest. OK, so there are 12 expressions in total. And what we would like is that, for these random choices, the probability that all of these numbers are contained in S is within a 1 plus little o of 1 factor of what the expectation would be if S were a random set-- in other words, in this case, p raised to the 12th, for a random set of density p. And furthermore, the same should hold if any subset of these 12 expressions is erased. Now, I want you to use your imagination and think about what the theorem would look like not for 3-APs, but for 4-APs-- and also for k-APs in general. So there is a relative Szemeredi theorem, which tells you that-- here, we fix k-- if you start with an S that satisfies the k-linear forms condition, and A is a subset of S that is fairly large, then A contains a k-AP. So I'm being slightly sloppy here, but that's the spirit of the theorem-- that you have Szemeredi's theorem inside a sparse pseudo-random set, as long as the pseudo-random set satisfies this k-linear forms condition. And that k-linear forms condition is an extension of this 3-linear forms condition: you take the proof that we saw for Szemeredi's theorem using hypergraphs, write down the corresponding linear forms, expand them out, and then write down this statement. So this is basically what I did for my PhD thesis. So we can ask, well, what did Green and Tao do? They had the original theorem back in 2006. Their theorem, which also was a relative Szemeredi theorem, has some additional, more technical hypotheses known as correlation conditions, which I won't get into. But at the end of the day, they constructed these pseudoprimes. And then they verified that those pseudoprimes satisfied the required pseudo-randomness hypotheses-- that those pseudoprimes satisfied these linear forms conditions, as well as their now-extraneous additional pseudo-randomness hypotheses. And then you combine this combinatorial theorem with that number-theoretic result. You put them together, and you obtain the Green-Tao theorem, which tells you not just that the primes contain arbitrarily long arithmetic progressions, but that any positive density subset of the primes also contains arbitrarily long arithmetic progressions. All of these theorems remain true if you pass down to a relatively dense subset. Any questions?
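For reference, here is the full list of the twelve linear forms in the 3-linear forms condition, read off from the edges of K 2, 2, 2 using the graph construction above (the four forms between Y and Z were written out; the remaining eight follow the same pattern, and this listing is my spelling-out rather than a verbatim blackboard copy):

$$
\begin{aligned}
&2x_0+y_0,\quad 2x_0+y_1,\quad 2x_1+y_0,\quad 2x_1+y_1 &&\text{(edges between } X \text{ and } Y)\\
&x_0-z_0,\quad x_0-z_1,\quad x_1-z_0,\quad x_1-z_1 &&\text{(edges between } X \text{ and } Z)\\
&-y_0-2z_0,\quad -y_0-2z_1,\quad -y_1-2z_0,\quad -y_1-2z_1 &&\text{(edges between } Y \text{ and } Z)
\end{aligned}
$$

The condition asks that, for uniform random $x_0,x_1,y_0,y_1,z_0,z_1 \in \mathbb{Z}/N\mathbb{Z}$, the probability that all twelve values lie in $S$ is $(1+o(1))\,p^{12}$, and similarly after erasing any subset of the twelve forms.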
And that sounds almost too good to be true, but it's worth seeing how it goes. And if you want to learn more about this subject, there's a survey by Conlon, Fox, and myself called "The Green-Tao theorem: an exposition," where you'll find a self-contained, complete proof of the Green-Tao theorem-- except modulo the proof of Szemeredi's theorem, which we use as a black box. But you'll see how the transference method works there. And it involves many of the things that we've discussed so far in this course, including discussions of the regularity method and the counting lemma. And it contains a proof of this sparse triangle counting lemma. OK, good. We stop here.
MIT 18.217 Graph Theory and Additive Combinatorics, Fall 2019
Lecture 6: Szemerédi's graph regularity lemma I: statement and proof
YUFEI ZHAO: We're about to embark on a new chapter in this course where I want to tell you about Szemeredi's graph regularity lemma. Szemeredi's graph regularity lemma is a very powerful tool in modern graph theory, developed back in the '70s. Today I want to show you the statement and the proof of this graph regularity lemma. And next time, we'll see how to apply the lemma for graph theoretic applications. And we'll also use it to give a proof of Roth's theorem. The idea of Szemeredi's regularity lemma is that if you are given a very large graph, G-- and it's a fairly robust theorem, so any large, dense graph, where "dense" means, let's say, positive edge density-- then it is possible to partition the vertex set of this graph G into a bounded number of pieces so that G looks random-like between most pairs of parts. So for instance, I might produce for you a partition of the vertex set into some number of parts. I'll draw five here. So you give me a graph G. I manage to produce for you this vertex partition so that if I look between a typical pair of parts, you see here maybe the edge density is close to 0.2, but otherwise, the bipartite graph looks like a random graph in some precise sense I will describe in a bit. And if you look at what the graph looks like between another pair of parts, maybe now it's a different edge density. Maybe it's around 0.4. And again, it looks like a random graph with that density. So in some sense, Szemeredi's regularity lemma is a universal structural description that allows you to approximate a graph by a bounded amount of information. So that's informally the idea. And you can already sense that this can be a very powerful tool. It doesn't matter what graph you input. You apply this lemma, and you get an approximate structural-- or, as later on we'll see, in some sense also analytic-- description of the graph. So the first part of today's lecture will develop just the statement of this regularity lemma. I'll show you what exactly I mean by "random-like." Well, first let me give some definitions. I denote by the letter e the quantity where I input a pair of vertex sets, X and Y. Here I might, later on, drop the subscript G if it's clear that I'm always talking about some graph G. So this is basically the number of edges between X and Y. And I say "basically" because even though I will draw and depict everything as if X and Y are disjoint sets, and that's the easiest case to think about, I'm also going to allow X and Y to overlap, and also allow X and Y to be the same set, in which case you should read the definition carefully to see what this means. But it's fine to think of them as disjoint sets. So you're looking at a bipartite graph between X and Y. We're also going to look at the edge density between X and Y. And this is simply the number of edges divided by the product of the sizes of the sets, so what fraction of the possible pairs are actual edges. So from now on, I'll refer to this quantity as "edge density." So now, here's the definition of what "random-like" means for the purpose of Szemeredi's regularity lemma. We define a notion of an epsilon regular pair as follows-- throughout, and later on I will omit even saying this, G will be some graph, and we're going to be looking at subsets of vertices of G. And we say that this pair of subsets of vertices is epsilon regular, again, in G-- but later on I will even drop saying "in G" if it's clear which graph we're working with.
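In symbols, the two quantities just introduced are (written out here since the blackboard is not visible in the transcript; when X and Y overlap, one common convention is to count ordered pairs, as below):

$$
e_G(X, Y) \;=\; \#\{(x, y) \in X \times Y : xy \in E(G)\}, \qquad d(X, Y) \;=\; \frac{e_G(X, Y)}{|X|\,|Y|}.
$$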
So we say X and Y is epsilon regular if for all subsets A of X and all subsets B of Y that are not too small-- each at least an epsilon proportion of the set it lives in-- we find that the edge density between A and B differs from the edge density between X and Y by no more than epsilon. Let me draw you a picture. I have sets X and Y in my graph G. And I want to say that the edges between X and Y are epsilon regular, so random-like, if the following holds: whenever I pick a subset A of the left set and a subset B of the right set, neither too small, the edge density between A and B is approximately the same as the overall edge density between X and Y. So in particular, this bipartite graph, for instance, is not really dense in one part and really sparse in another part. Somehow the edges are evenly distributed in this precise manner. So that's the definition of epsilon regular. Yes, question. AUDIENCE: Why is the epsilon for the size of A the same as the epsilon for [INAUDIBLE]? YUFEI ZHAO: The question is, why are we using the same epsilon here, here, and there? And that's a great question. That's mostly out of convenience. You could use different parameters, and they do play somewhat different roles, but in the end, we'll generally be looking at one type of epsilon, so we just make our life easier. You could extend the definition by having an epsilon comma eta, if you like, but it will not be necessary for us; this is mostly for simplification. Any more questions? All right. Now if you have a pair X, Y that is not epsilon regular, I just want to introduce a piece of terminology. You can read off from the definition what it means to be not epsilon regular. And sometimes I will say "epsilon irregular," but to be precise, I'll stick with "not epsilon regular." Then we can exhibit an A and B that witness the irregularity. So if X, Y is not epsilon regular, then their irregularity is, as we say, "witnessed by" some pair A in X and B in Y satisfying-- basically, you read the definition-- A and B not too small, and such that the density between A and B differs quite a bit from the density between X and Y. So when I say "to exhibit" or "to witness irregularity," that's what I mean. Now, there's a bit of unfortunate nomenclature in graph theory, where previously we said "regular graphs" to mean that every vertex has degree d, and now we say "epsilon regular" to mean this. Sorry about that. These are both standard, so usually from context, it's clear which one is meant. So this is what it means for a single pair of vertex sets to be epsilon regular. But now I give you a graph, and I give you a partition of the vertex set. So what does it mean for that partition to be epsilon regular? And here's the second definition. An epsilon regular partition-- we say that a partition-- and generally, I will denote partitions by curly letters such as that, P-- the partition will divide the vertex set into a bunch of subsets. So we say that that partition is epsilon regular if the following is true: if I sum, over all pairs of vertex sets Vi, Vj that are not epsilon regular, the product of their sizes, then what I would like is for that sum to be at most epsilon times the number of pairs of vertices in G. In other words, a small fraction of pairs of vertices-- not necessarily edges, but just pairs of vertices-- lie between pairs of vertex parts that are not epsilon regular.
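Writing the two definitions just given in symbols (a transcription of the blackboard definitions as described in words above):

$$
(X, Y) \text{ is } \epsilon\text{-regular if } \;|d(A, B) - d(X, Y)| \le \epsilon \;\text{ whenever } A \subseteq X,\ B \subseteq Y,\ |A| \ge \epsilon|X|,\ |B| \ge \epsilon|Y|;
$$
$$
\mathcal{P} = \{V_1, \dots, V_k\} \text{ is } \epsilon\text{-regular if } \sum_{(i,j):\ (V_i, V_j)\ \text{not } \epsilon\text{-regular}} |V_i|\,|V_j| \;\le\; \epsilon\, |V(G)|^2.
$$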
So for instance, if all of your pairs are epsilon regular, then the partition is epsilon regular. But I do allow a small number of blemishes, and that will be necessary. Just to clarify a subtle point here: I do allow i equals j in the summation, although in practice it doesn't really matter. You'll see that it's not really going to come up as an issue. And one of the reasons that it's not going to come up as an issue is that usually, when we apply this lemma, we're going to have a lot of parts. In fact, we can make sure that there is a minimum number of parts. And if none of the parts are too big, then having i equal to j contributes very little to that sum anyway. In particular, if all the set sizes in this partition are roughly the same-- so if they're all roughly a 1 over k fraction of the entire vertex set-- then that statement up there, being an epsilon regular partition, is, up to changing this epsilon, basically the same as saying that fewer than an epsilon fraction of the pairs Vi, Vj are not epsilon regular. And here, if k is large enough, I can even let you require i and j to be different. It's not going to affect things after small changes in epsilon. So for people who are seeing Szemeredi's regularity lemma for the first time-- I think that's maybe all of you, or most of you-- I don't want you to focus on the precise statements so much as the spirit of the lemma. Because if you get too nitty gritty with "is that the same as that epsilon," you get very confused very quickly. So I want you to focus on the spirit of this lemma. I will state everything precisely, but the idea is that all but a small fraction of pairs are epsilon regular. And don't worry too much about whether you are allowed to take i equal to j or not. So now we're ready to state Szemeredi's regularity lemma. And it says that for every epsilon, there exists some constant M depending only on epsilon such that every graph has an epsilon regular partition into at most M parts. You give me the epsilon, for example 1%, and there exists some constant such that every graph has a 1% regular partition into a bounded number of parts. In particular-- and this is very important, make sure you understand this part-- the number of parts does not depend on the size of the graph. Now, it's true that for some graphs, maybe you do need very many parts. But the number of parts does not exceed this bound, even if you look at graphs of unbounded size. So it is really a universal theorem in the sense that it's independent of the size of the graph. Any questions about the statement of this theorem? Yes. AUDIENCE: So in the informal statement at the beginning, you said G was a large, dense graph. YUFEI ZHAO: That's right. AUDIENCE: Does the dense condition appear anywhere in here? YUFEI ZHAO: So the question is, why did I say that G is a large, dense graph? And that's a great question. And that's because if G had a subquadratic number of edges, then I claim that-- if you look at the definition of an epsilon regular pair, where your epsilon is a constant-- all of these edge densities are little o of 1. They go to 0. So trivially, you will satisfy the epsilon regular condition. So if your graph is sparse-- sparse in the sense of having a subquadratic number of edges-- then you trivially obtain epsilon regularity. And so the theorem is still true. It's just not meaningful. It's just not useful.
But there are settings-- and we'll come back to this later in the course-- where it's important to explore what happens for sparse graphs. Yeah. AUDIENCE: So that M is independent of G. YUFEI ZHAO: Yes, M is independent of G. M depends only on epsilon. AUDIENCE: M is really large, but there are not enough vertices in the graph. YUFEI ZHAO: OK, the question is, what happens when M is very large, but there are not enough vertices in the graph? Well, if your M is a million, and your graph only has 1,000 vertices, what you can do is have every vertex be its own part. Every vertex is its own part, a singleton partition. And you can check that that partition satisfies the properties. Every pair of singleton parts is trivially epsilon regular. Yeah. AUDIENCE: So in the definition, is it sort of like all or nothing? You can either [INAUDIBLE] epsilon regularity [INAUDIBLE]. Do you get anything where if you, like, say, make this more continuous, so you allow for it to be-- you quantify how irregular it is, and then can you make [INAUDIBLE]? YUFEI ZHAO: OK, so my understanding of what you're asking is: in the definition up there, we put a pair into the sum if it is not epsilon regular, and otherwise we don't put it in. Is there some gradual way to put a measure of irregularity into that sum? And there are versions of the regularity lemma that do that, but they are all, in spirit, morally the same as that one there. Yeah. AUDIENCE: In the informal definition, what does "random-like" mean? YUFEI ZHAO: So in the informal definition, what does "random-like" mean? This is the formal definition of what "random-like" means. So actually, later on in the course, one of the chapters will explore what pseudo-random graphs are. "Pseudo-random graph," in some sense, means a graph that is not random, but behaves in some sense like a random graph. So "random-like" generally just means that in some aspect, in some property, it looks like a random object. And this is one way that something can look random. So a random graph has this property, but random graphs also have many other properties that are not being exhibited in this definition. But this is one way that a graph can look random. So that's a great question. And we'll come back to that topic later in the course. All of these are great questions. So Szemeredi's regularity lemma, the first time you see it, can look somewhat scary. But I want you to try to understand it more conceptually. So please do ask questions. Before diving into the proof, I want to make a few more remarks about the statement. We will prove this version of the regularity lemma. But as I mentioned, it is the spirit of the regularity lemma that I care about more. And it's a very robust statement. You can add on extra requirements that somehow don't change the spirit. And the proof will be more or less the same, but for various applications it will be slightly more useful. So in particular, it is possible to make the partition equitable. An "equitable partition"-- sometimes also called an "equipartition"-- is one where all the parts have sizes differing by at most 1. So basically, all the parts have the same size, up to at most 1 because of divisibility. So let me state a version of the regularity lemma for equitable partitions.
So for every epsilon and every little m0, there exists a big M such that every graph has an epsilon regular equitable partition of the vertex set into k parts, where k is at least little m0-- so I can guarantee a minimum number of parts-- and at most some bounded number. Again, this bound may depend on your inputs epsilon and m0, but it does not depend on the graph itself. And you see, this slightly stronger conclusion is more convenient for many applications. And I will comment on how you may modify the proof that we'll see today into one where you can guarantee equitability. And you see that for little m0 not too small-- for example, if it's somewhat larger than 1 over epsilon-- when you look at the definition of an epsilon regular partition, it suffices to check that at most epsilon k squared of the pairs Vi, Vj with i different from j-- so an epsilon fraction of the pairs-- are not epsilon regular, again up to changing epsilon, let's say, by a factor of 2. So all of these definitions are basically the same up to small changes in the parameters. Next time, we'll see how to apply the regularity lemma. And we will apply it in the first form, but you see, the second form guarantees you a somewhat stronger conclusion and is sometimes more convenient to use. So for example, on the homework problems, if you wish to use the second form, then please go ahead. It makes your life somewhat easier, but the first form essentially captures all the spirit of Szemeredi's regularity lemma. Any questions so far? I want to explain the idea of the proof of the regularity lemma. And this is a very important technique in this area called the "energy increment argument." Here's the idea. We start with some partition-- for example, the trivial partition, and by that I mean you only have one part. All the vertices are in one part. You're not doing anything to the vertex set. It's one gigantic part. Or if you're looking at some other variant, you can easily modify the proof; for example, you can also start with an arbitrary partition into little m0 parts, if you wish to have that as your starting point. All I'm saying is that this proof is fairly robust. And we're going to do some iterations. As long as your partition is not epsilon regular, we will do something to the partition to move forward. And what we will do is look at each pair of parts in your partition that's not epsilon regular. Well, if they're not epsilon regular, then I can find a pair of subsets, which are denoted by the A's, that witnesses this non-regularity-- that witnesses the irregularity. So we start with some partition. Now let us refine the partition into a partition with even more parts by simultaneously refining the partition using all of these Ai,j's that we found in the step above. So you start with some partition. If it is not regular, I can chop up the various parts in some way. So I start with some partition over here. And what we are going to do is, let's say between these two, it's not epsilon regular, so I can find some pair of vertex sets that exhibits the irregularity. I chop it up. And I can keep further chopping up the rest of the parts. If these two parts are not epsilon regular, then I chop them up like that. And I can keep on doing it. And originally, I had three parts; now I have 12 parts. And this is a refined partition. And now I repeat until I am done. I am done when I obtain a partition that is epsilon regular. Now, the basic question when it comes to this strategy is, are you ever going to be done? When are you going to be done?
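Before answering that, here is a brute-force toy sketch of the loop just described (my own code, not from the lecture; it is only feasible for graphs with a handful of vertices, since it checks regularity by exhaustive search, and it simplifies the partition condition by ignoring i = j pairs, which the lecture notes do not matter much):

```python
from itertools import combinations, product
from math import ceil

def density(adj, X, Y):
    """Edge density between vertex sets X and Y; adj maps a vertex to its neighbor set."""
    return sum(1 for x, y in product(X, Y) if y in adj[x]) / (len(X) * len(Y))

def witness(adj, X, Y, eps):
    """Brute-force search for (A, B) witnessing that (X, Y) is not eps-regular."""
    d = density(adj, X, Y)
    for a in range(max(1, ceil(eps * len(X))), len(X) + 1):
        for b in range(max(1, ceil(eps * len(Y))), len(Y) + 1):
            for A in combinations(sorted(X), a):
                for B in combinations(sorted(Y), b):
                    if abs(density(adj, A, B) - d) > eps:
                        return set(A), set(B)
    return None

def refine(P, cuts):
    """Common refinement of the partition P by a list of vertex sets."""
    for S in cuts:
        P = [piece for part in P for piece in (part & S, part - S) if piece]
    return P

def regularity_partition(adj, eps):
    """Energy-increment iteration: refine by all witnessing sets at once, repeat."""
    P = [set(adj)]                                   # trivial partition: one part
    while True:
        cuts, bad_mass = [], 0
        for Vi, Vj in combinations(P, 2):            # simplified: skip i = j pairs
            w = witness(adj, Vi, Vj, eps)
            if w is not None:
                cuts += list(w)
                bad_mass += len(Vi) * len(Vj)
        if bad_mass <= eps * len(adj) ** 2:          # eps-regular partition: done
            return P
        P = refine(P, cuts)
```

The termination of this loop is exactly what the energy increment argument, explained next, guarantees.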
And if this process goes on forever or goes on for a very long time, then you might have a lot of parts. But we want to guarantee that there is a bounded number of parts. So what we will show-- to show that you have a small number of parts, in other words, why does this process even stop, and in particular, we want it to stop after a bounded number of steps-- is the following. We will define some notion called the "energy" of a partition. And this energy will increase. So first of all, the energy is some quantity that we'll define that lies between 0 and 1. It's some real number lying between 0 and 1. And at each step, the energy goes up by some specific quantity. Therefore, because the energy cannot increase past 1, this iteration stops after a bounded number of steps. And once it's done, we end up with an epsilon regular partition. So that's the basic strategy. And what I want to show you is how to execute that strategy. Any questions so far? Yes. AUDIENCE: Just to clarify [INAUDIBLE] a bit, if some Vi is in several non-epsilon-regular pairs, is it possible for Ai,j and Ai,k to overlap somehow, right? Just kind of make those into three partitions? YUFEI ZHAO: So if I understand correctly, you are worried that between different pairs, you might have interactions. So you'll see this in the proof, but I think this is actually a very important and somewhat subtle point, which is that I do not refine at each step as I find a pair of witnessing sets. I find all of these witnessing sets at the same time, and I refine everything all at once. AUDIENCE: OK, so it's like if you do have overlap between two witnessing sets, that's OK? YUFEI ZHAO: That is OK, because this step doesn't care. If you have two witnessing sets that overlap, that is OK. We'll see the proof. Yes. AUDIENCE: Do you just find one pair of witnessing sets for each Vi, Vj, even though there might be more? YUFEI ZHAO: The question is, do we find just one pair of witnessing sets even though there could be more? And the answer is, yes. We just need to find one. There could be lots. So if it's not epsilon regular, it might be very not epsilon regular. And in fact, being a witnessing set is a fairly robust notion. If you just take out a small number of vertices, it's still a witnessing set. Any more questions? Great. So let's take a quick break and then we'll see the proof. Let's get started with the proof of Szemeredi's regularity lemma. And to do the proof, I want to develop this notion of energy which you saw in the proof sketch. So what do I mean by "energy?" First, let me define some quantities. If I have two vertex subsets, U and W, let me define this quantity, q of U and W, which is basically the edge density squared, but normalized somewhat according to how big U and W are. I'm going to use the letter n to denote the number of vertices in G. So that is the quantity q of U and W. And for partitions, if I have a pair of partitions, Pu of U into k parts and Pw of W into l parts, I set this q of Pu and Pw to be the quantity where I sum, over basically all pairs-- one part from U, one part from W-- this q between Ui and Wj. So this is the density squared, and I'm taking some kind of weighted average of the squared density. So here is a weighted average. If you prefer to think about the special case where this partition is an equipartition, then it is really the average of these squared densities. It's a mean square density.
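In symbols, with n the number of vertices of G, the quantities just defined are (writing out the normalization described in words above):

$$
q(U, W) \;=\; \frac{|U|\,|W|}{n^2}\, d(U, W)^2, \qquad q(\mathcal{P}_U, \mathcal{P}_W) \;=\; \sum_{i=1}^{k} \sum_{j=1}^{l} q(U_i, W_j),
$$

and, as defined in the next paragraph, the energy of a partition $\mathcal{P} = \{V_1, \dots, V_m\}$ of $V(G)$ is $q(\mathcal{P}) = \sum_{i,j} q(V_i, V_j)$.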
And finally, for a partition P of the vertex set of G into m parts, we define this q of the partition P to be q of P with itself according to the previous definition. In other words, I do this double sum, i from 1 to m, j from 1 to m, of q of Vi, Vj. And this is the quantity that I will call the "energy" of the partition. It is a mean squared density-- some weighted mean of the squared edge densities between pairs of parts in the partition. You might ask, why is it called an energy? You can see from this formula here that it's some kind of mean square density, so it's some kind of average of squares. So in particular, it's some kind of L2 quantity. And there's a general phenomenon in mathematics, I think borrowed from physical intuitions, that you can pretty much call anything that's an L2 quantity an energy. And so that's, I think, where the name comes from. So this is the important object for our proof. And let's see how to execute the strategy, the energy increment argument outlined on the board over there. We want to show that you can refine a partition that is not epsilon regular in such a way that the energy goes up. And to do that, let me state a few lemmas regarding the energy of a partition under refinement. The point of the next several lemmas is that the energy never decreases under refinement, and it sometimes increases if your partition is not epsilon regular. So the first lemma is that if you look at the energy between a pair of partitions, it is never less than the energy between the two vertex sets. So for instance, if you have U and W like that, and I partition them into Pu and Pw, and I measure the energy-- basically the squared density between U and W versus summing up the individual squared densities after the partition-- the partitioned side is always at least as great as the unpartitioned side. So this is really a fairly simple claim about convexity, but let me set it up in a way that will help some of the later proofs. Let me define a random variable, which I call Z, in the following way. Here's the process that I will use to define this random variable. I will select little x to be a vertex uniformly chosen from U, the left vertex set. And I will select a vertex y uniformly chosen from W. x and y each fall into some part of the corresponding partition. So suppose Ui is the part where x falls, and Wj is the part where y falls. So Ui is a member of this partition of U, and Wj is a member of the other partition, of W. Then I define my random variable Z to be the edge density between Ui and Wj. So that's the definition. Pick x randomly. Pick y randomly. Suppose x falls in Ui. Suppose y falls in Wj. Then Z is the edge density between these two parts. So Z is some random variable. Let's look at properties of this random variable. First, what is its expectation? It's a discrete random variable, and you can easily compute all of these quantities by just summing according to how Z is generated. So I sum over all i and j. What's the probability that x falls in Ui? It is the size of Ui as a fraction of U. What's the probability that y falls in Wj? It's the size of Wj as a fraction of the size of W. And then Z is this density quantity. So this is what I find to be the expectation of Z. But you see, the density multiplied by the product of the vertex set sizes is just the number of edges between Ui and Wj, and summing over all the i, j's gives the number of edges between U and W. So the whole expression is simply the edge density between U and W. So that's the expectation of the random variable Z.
On the other hand, what's the second moment? In other words, what's the expectation of the square of Z? Again, we do the same computation. The first part is the same. The second part now becomes a d squared. And look at how we defined energy. This quantity here is basically the energy q between the partition of U and the partition of W, except for a normalization that's not quite the same as the one we used before. So we will just put in that normalization. So now you compare the expectation of Z versus the expectation of Z squared. And we know by convexity that the expectation of Z squared is at least as large as the square of the expectation of Z. But if you plug in the values you get for these two quantities, you derive the inequality claimed in Lemma 1. You have to cancel some normalization factors, but that's easy to do. So that's the first lemma. So the first one is just about a pair of parts: if I partition each part, what happens to the energy between this pair? And the second one is a direct corollary of the first one. It says that if you have a second partition, P prime, that refines P, then the energy of the second partition, the refinement, is never less than the energy of the first partition. And it is a direct consequence of the first lemma, because we simply apply that lemma to every pair of parts in P. Between every pair of parts, the energy can never go down, so overall, the energy does not go down. And finally-- so far, we've just said that refinements can never make the energy go down, but in order to do this proof, we need to show that the energy sometimes goes up. And that's the point of the third lemma. The third lemma tells us that you can get an energy boost. So this is the Red Bull lemma. You can get an energy boost if you are feeling irregular. So if U, W is not epsilon regular, and this irregularity is witnessed by U1 in U and W1 in W, then I claim the following about the energy obtained by chopping U into U1 and its complement, and W into W1 and its complement. So here again are U and W; I find witnessing sets for their irregularity, and now I partition left and right accordingly-- chop each part into two. Then this energy, between these partitions into two parts on each side, is bigger than the original energy plus something that we gain. And this gain turns out to be at least epsilon raised to the power 4 times the size of U times the size of W, divided by n squared. How do we prove it? Let's define Z the same as in the previous proof, as in the proof of Lemma 1. In Lemma 1, we just used the fact that the L2 norm of Z-- the expectation of the square-- is at least the square of the expectation. But actually, the difference between these two quantities has a name: it's called the "variance." The variance of Z is the difference between these two quantities, and we know that it's always non-negative. So if you look at how we derived the expectation of Z and the expectation of Z squared, you immediately see that we can write the variance as, up to a normalizing factor, the difference between this energy of the pair of partitions on one hand and the energy between U and W on the other. On the other hand, a different way to calculate the variance is that it equals the expectation of the squared deviation from the mean. So let's think about the deviation from the mean. I am choosing a random vertex on the left, in U, and a random vertex on the right, in W.
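In symbols, the energy boost lemma just stated reads (using the normalization for q given earlier):

$$
q\big(\{U_1,\, U \setminus U_1\},\ \{W_1,\, W \setminus W_1\}\big) \;\ge\; q(U, W) \;+\; \epsilon^4\, \frac{|U|\,|W|}{n^2},
$$

whenever $(U, W)$ is not $\epsilon$-regular and $U_1 \subseteq U$, $W_1 \subseteq W$ witness the irregularity, i.e. $|U_1| \ge \epsilon|U|$, $|W_1| \ge \epsilon|W|$, and $|d(U_1, W_1) - d(U, W)| > \epsilon$.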
In the event that they both lie in the sets that witness the irregularity-- so in the event where x falls in U1 and y falls in W1, which occurs with this probability-- this quantity here is equal to the density between U1 and W1 minus the expectation of Z, which is just the density between U and W. So I lower bound the expectation by looking only at what happens when x falls in U1 and y falls in W1, ignoring all the other events, because the quantity is non-negative everywhere. But now, from the definition of epsilon regularity-- or rather, from the witnessing of epsilon irregularity-- you see that this U1 is at least an epsilon fraction of U, W1 is at least an epsilon fraction of W, and this final quantity inside is at least epsilon in absolute value, so its square is at least epsilon squared. So here we're using all the different components of the definition of epsilon regular. Yes. AUDIENCE: What happens if we're dividing with more witnessing sets? YUFEI ZHAO: So you're asking what happens if we divide with more witnessing sets? Hold onto that thought. Right now, I'm just showing what happens if you have one witnessing set. Any more questions? So here we have epsilon to the 4th. And if you put in the normalization, comparing these two interpretations of the variance, you'll find the inequality claimed by the lemma. So now we are ready to show the key part of this iteration. I'll show you precisely how this iteration works, and show that you always get an energy boost in the overall partition. I'll call the next one Lemma 4. And it says that if you have a partition P of the vertex set of G into k parts, and this partition is not epsilon regular, then there exists a refinement Q, where every part V sub i is partitioned further into at most 2 to the k parts, such that the energy of the new partition Q increases substantially from that of the previous partition P. And we'll show that it increases by at least epsilon to the 5th power-- some constant depending on epsilon. So if you look at the strategy up there, if you can do this at every step, then that means that the number of iterations is bounded by 1 over epsilon to the 5th power. So to prove this lemma here, we will use the three lemmas up there and put them together. So for all the pairs i, j such that V sub i, V sub j is not epsilon regular, as outlined in the proof outline up there, we will find this A superscript i, j inside Vi and A superscript j, i inside Vj that witness the irregularity. Do this simultaneously for all pairs i, j where Vi, Vj is not epsilon regular. Now we're going to define Q as the common refinement. So, just as indicated in that picture up there, simultaneously take all of these A's and use them to refine P. Starting with P, the partition you have, simultaneously cut everything up using all of these witnessing sets. Now, we only have witnessing pairs for pairs that are not epsilon regular. If they're epsilon regular, you don't worry about them. This is the Q that we'll end up with, and we'll show that this Q has the claimed property. So one of the claims in the lemma is that every Vi is partitioned into at most 2 to the k parts. And I hope that part is clear, because how are we doing the refinement? We're taking Vi, and it's divided into parts using these A i, j's, one coming from each pair that is irregular with Vi. So I'm cutting up Vi using at most k sets-- one coming from each of the other parts.
Maybe fewer than k-- that's fine-- but at most k sets are used to cut up each Vi. So you have at most 2 to the k parts once you cut everything up. But the tricky part is to show that you get an energy boost. So let's do this. How do we show that you get an energy boost? We're going to put the top three lemmas together. First, we want to analyze the energy of Q. So let's write it out. The energy of Q is the sum, over pairs of parts of P, of the energy between the induced partitions of the Vi's. And by Q sub Vi, Q sub Vj, I mean the partitions of Vi and of Vj given by Q-- so, what happens after you cut up the Vi's. Now let me separate the sum into two cases. The first case sums over i, j such that Vi, Vj is epsilon regular. And by Lemma 1-- so here we're using Lemma 1-- we find that this quantity here cannot be less than the q of Vi, Vj. So take those two parts: before and after the refinement by Q, the energy cannot go down. So I don't worry too much about pairs that are epsilon regular. But now let me look at the pairs up here that are not epsilon regular. So let's look at that picture up there, and let's focus on what I drew in red. Let's focus between parts 1 and 2. Suppose the shaded parts are the witnessing sets. The witnessing sets got cut up further by other witnessing sets. But I don't have to worry about that, because Lemma 2-- or Lemma 1, really-- tells me that I can do an inequality where I go down to just comparing against the energy of the partition into two parts on each side, this single witnessing set and its complement. So in other words, over here, the q of this pair-- I am saying that it is no less than what you would get if you only cut up these two sets using the red lines. Let's go on. Applying Lemma 3, the energy boost lemma, the first part stays the same, and in the second part, because I'm looking at witnessing sets for irregularity, I get this extra boost. So this goes back to one of the questions asked earlier: in Lemma 3, I don't have to worry about what happens if you have further cuts, because I only need to handle the case of a single cut on each side between the epsilon irregular pairs. So putting it together, we see that the previous line is at least the sum of the q's over all the pairs, plus this extra epsilon to the 4th term for all pairs that are not epsilon regular. I'm applying monotonicity of energy for pairs that are epsilon regular, and the energy boost for pairs that are not epsilon regular; and for the latter type, I obtain this boost. Now remember the definition of an "epsilon regular partition." Unfortunately, it's no longer on the board, but it says that this sum over here, if it is an epsilon regular partition, is at most epsilon. So if the partition is not epsilon regular, we can lower bound that sum. And that's indeed what we will do. The first sum here is, by definition, q of the partition P. And the second sum, by the definition of epsilon regular, is at least epsilon to the power 5. So here we're using the definition of an epsilon regular partition-- namely, that at least an epsilon fraction, basically, of pairs of vertex sets, in this weighted sense, are not epsilon regular. And that finishes the proof of Lemma 4 up there. Any questions so far? All right, so now we are ready to finish everything off and prove Szemeredi's regularity lemma.
So let's prove Szemeredi's regularity lemma. Let's start with the trivial partition, meaning just one large part. And we are going to repeatedly apply Lemma 4 whenever the partition at hand is not epsilon regular-- whenever the current partition is not epsilon regular. So let's look at its energy. The energy of this partition-- this is a weighted mean of the squared edge densities, so it always lies between 0 and 1, just from the definition of energy. On the other hand, Lemma 4 tells us that the energy increases by at least epsilon to the 5th power at each iteration. So this process cannot continue forever. It must stop after at most epsilon to the minus 5th power steps. And when we stop, we must have arrived at an epsilon regular partition, because otherwise, you're going to continue applying the lemma and push the energy even further. And that's it. So that proves Szemeredi's graph regularity lemma. Question. AUDIENCE: It's going to be some really big value of M. YUFEI ZHAO: OK, let's talk about bounds. So let's talk about how many parts. So how many parts does this proof produce? We can figure it out. We have some number of steps, and each step increases the number of parts by something. So if P has k parts, then Lemma 4 refines P into at most how many parts? AUDIENCE: 2 to the k. YUFEI ZHAO: Yeah, so k times 2 to the k. And I have many iterations of this. So some of you are already laughing, because it's going to be a very large number. In fact, because it's going to be so large, it makes my calculations slightly more convenient-- and it really doesn't change the answer so much-- if I just bound k times 2 to the k by 2 to the 2 to the k. So the final number of parts is this function iterated on itself epsilon to the minus 5 times. So it's a tower of 2s of height on the order of epsilon to the minus 5. It's a finite number, so it depends only on epsilon and not on the size of your graph. And this is the most important thing: it does not depend on the size of your graph. It is quite large. In fact, even for reasonable values of epsilon, like 1% or even 10%, this number is astronomically large. And you may ask, is it really necessary? Because we did this proof, and it came out fairly elegantly, I would say, how the proof was set up, and you arrived at this finite bound. But maybe there's a better proof. Maybe you can work harder and obtain somewhat better bounds. So you can ask, is it possible that the truth is really somehow much smaller? And the answer turns out to be no. There is a theorem by Tim Gowers which says that there exists some constant-- the precise statement, again, is not so important, but based on what I just said, you cannot improve the bound given by this proof. So for every epsilon small enough, there exists a graph whose epsilon regular partitions require how many parts? The number of parts is at least a tower of 2s of height epsilon to the minus c, for some constant c. So really, the truth is a tower of exponentials of height essentially polynomial in 1 over epsilon. So maybe you can squeeze the 5 down to something less-- actually, we don't even know if that's the case-- but certainly you cannot do substantially better than what the proof gives. So Szemeredi's regularity lemma is an extremely powerful tool. And we'll see applications to results that are otherwise very difficult to prove. And for some of these applications, we don't really know other proofs except ones using Szemeredi's regularity lemma. But on the other hand, it gives terrible quantitative bounds.
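To get a feel for how quickly the per-step bound k -> k * 2^k blows up, here is a tiny illustrative computation (my own; it iterates the map only four times and reports the number of decimal digits, since already the fifth iterate is far too large to write down, let alone the roughly epsilon^(-5) iterations the proof allows):

```python
k = 1
for step in range(1, 5):
    k = k * 2 ** k            # one round of refinement: k parts -> at most k * 2^k parts
    print(step, len(str(k)))  # number of decimal digits of the bound after this round
# Steps 1-3 stay small (2, 8, 2048), but step 4 already has over 600 digits,
# and step 5 would have more digits than there are atoms in the observable universe.
```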
So there is a lot of interest in combinatorics, once you see a proof that requires Szemeredi's regularity lemma, or that is first proved using this technique, in asking: can it be proved using some other technique? In fact, Szemeredi himself has worked a lot in that direction, trying to get rid of the uses of his lemma. Any questions? AUDIENCE: How could you modify it for equipartitions? YUFEI ZHAO: OK, great. The question is, how can we modify it for equipartitions? So let's talk about that. It's a fantastic question. So look at this proof and see what we can do if we really want all the parts to have roughly the same size, let's say differing by at most 1. So how do we make the epsilon regular partition equitable? Any guesses? Any attempts at what we can do? I mean, basically it's going to follow this proof. As I said, the spirit of Szemeredi's regularity lemma is what I've shown you, but the details and executions may vary somewhat depending on the specific purpose you have in mind. Yeah. AUDIENCE: Can we just add-- [INAUDIBLE] add things to the smaller part because we know that-- by the fact that it's not [INAUDIBLE] that parts aren't too small? YUFEI ZHAO: OK, so you're saying we're going to add something, or massage the partition, to make it equitable-- AUDIENCE: Add vertices to the smaller parts of the partition. YUFEI ZHAO: Add vertices to the smaller parts of the partition. Now, when are you going to do that? AUDIENCE: When they're-- like so you do the refinement, then when they're not [INAUDIBLE] YUFEI ZHAO: So you want to do this at every stage of the process. AUDIENCE: Yes. [INAUDIBLE] YUFEI ZHAO: I like that idea. So here's what we're going to do. We still run the same process. So we're going to have this P, which is the current partition. And as before, initially we have either the trivial partition, if you like, or m0 arbitrary equitable parts. Start with something where you don't really care about anything except for the size. And you run basically the same proof, where if your P is not epsilon regular, then do what we've done before-- basically exactly the same thing: we refine P using pairs witnessing irregularity, same as the proof that we just did. And now we need to do something a little bit more to obtain equitability. And what we will do is, right after each step of the iteration-- right after we do this refinement, so after we cut up our partition, where maybe some of the parts are really tiny-- let's massage the partition somewhat to make the parts equitable. And to make our life a little bit easier, we can refine the partition somewhat further, to chop it up into a somewhat smaller resolution. And this part, you can really do either arbitrarily or randomly. Some ways may be slightly easier to execute, but it doesn't really matter how you do it. It's fairly robust. You refine it further. And basically, I want to make it equitable. Sometimes you can do that just by refining, but if you have some really small parts, then you might need to move some vertices around-- so I call that "rebalancing." So move and merge some vertices-- but only a very small number of vertices-- to make it equitable. So you run this loop until you find that your partition is epsilon regular. Then you're done. Whenever you run this loop, because we're doing the second step, your partition is always going to be equitable. But we now need to control the energy again to limit the number of steps.
And the point here is that the first part is still exactly the same as before, where the energy goes up by at least epsilon to the 5th power. But in the second part, the energy might go down, because we're no longer just refining-- we're doing some rebalancing. But you can do it in such a way that the amount of rebalancing you do is really small, so you're not actually changing the energy by very much. So I'll just hand wave here and say that we can do this in such a way that the energy might go down, but only a little bit. You're only changing a very small number of vertices-- a very small fraction of the vertices. And if you change only an epsilon fraction of the vertices, you don't expect the energy-- which is something that comes out of summing over pairs of vertex parts-- to change by all that much. So putting these two together, you see that the energy still goes up by, let's say, at least half of epsilon to the 5th power at each step. And then the rest of the proof runs the same as before. You finish in some bounded number of steps, and you end up with an equitable partition that's epsilon regular. I don't want to belabor the details. There are some things to check here, but it's, I think, fairly routine. It's more of an exercise in technical details. But the thing that actually is somewhat important is that there's a wrong way to do this. I just want to point out what the wrong way to do this is: you apply the regularity lemma, you think you now have something that's epsilon regular, and then you massage it to try to make it equitable at the end. So if I don't look into the proof-- I just look at the statement of Szemeredi's regularity lemma, I get something that's epsilon regular, and I say I'm just going to divide things up a little bit further-- that doesn't work. Because the property of being epsilon regular is actually not preserved under refinement. Look at the definition: you have a partition that's epsilon regular, you refine the partition, and it might fail to be epsilon regular. So you really have to go into the proof to get equitability. So just to repeat, a wrong way to try to get equitability is to apply the regularity lemma and, at the end, try to massage the partition to make it equitable. That doesn't work. Next time, I will show you how to apply Szemeredi's regularity lemma.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
10_Szemerédis_graph_regularity_lemma_V_hypergraph_removal_and_spectral_proof.txt
PROFESSOR: We've been spending quite a few lectures so far discussing Szemeredi's regularity lemma, it's applications in variants of the regularity Lemma. So I want to spend one more lecture, before moving on to a different topic, to tell you about other extensions and other perspectives on Szemeredi's regularity lemma. So hopefully, you all will become experts of the regularity lemma, especially the next homework problem set where there will be plenty of problems for you to practice using the regularity lemma. So one of the things that I would like to discuss today is a hypergraph extension of the triangle removal lemma. So as we saw, the triangle removal lemma was one of the important application of Szemeredi's graph regularity lemma. OK. So that works inside graphs, but now let's go to hypergraphs. In particular, even in the case of three uniform hypergraphs where-- so let's set some terminology. So when r uniform hypergraph, or simply abbreviated as r graph, consists of a vertex set and an edge set where the edge set consists of r couples. So the edges are r element subsets of the vertex set. So r equals the 2 corresponds to the graph case. And you can talk about sub-graphs or density, various counts all analogously to how we did it for graphs. And you can imagine what the hypergraph removal lemma might look like. So let me write down a statement. That for all r graph h and epsilon bigger than zero, there exists a delta such that-- so the last time, there was some complaints about sentences going on too long, so let me try to cut sentences into smaller parts. So if g is an end vertex r graph with small number of copies of each, so h is sub-graphs, then g can be made h free. So far, everything's still the same as in the graph case, but now in the graph case, the number of edges is quadratic, most quadratic. Here, it's a most n to the r, so I want to make this graph, this r graph, h free by removing less than epsilon n to the r edges from g. So that's the statement of the hypergraph removal lemma. So it's an extension of the graph removal lemma. Any questions about the statement? So before discussing the proof, let me show you why you might care about this statement. And we used the triangle removal lemma to deduce Roth's theorem. Remember, there was a graph theoretic set up where start with a 3-AP-free subset. We set up a graph, and then that graph has some nice properties that allows us to use a corollary of the triangle removal lemma, namely the corollary that says that if you have a graph, where every edge sits on exactly one triangle, then it has a sub quadratic number of edges. So we can do a similar type of deduction showing that the hypergraph removal lemma implies Szemeredi's theorem. So that's what we'll do. So let's deduce Szemeredi's theorem from the hypergraph removal lemma. So recall Szemeredi's theorem says that for every fixed positive integer k, if a is a subset of 1 through n, that is kp free, has no k term arithmetic progressions, then the size of a is sub-lineal. Instead of illustrating how to do this proof for general k, I'm just going to do it for the case of k close to 4. And you can look at the proof and it will be clear how to generalize. So we'll just illustrate the proof for 4-APs. Now, before even showing you what this proof looks like, you might wonder, do we really need the hypergraph removal lemma? Could it be that with the graph removal lemma and a more clever choice of a graph, you could prove Szemeredi's theorem using just that. 
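Before turning to that question, it may help to record the removal-lemma statement above compactly (just a restatement in symbols, where v(H) denotes the number of vertices of H): for every r-graph H and every epsilon > 0 there exists delta > 0 such that
\[
\#\{\text{copies of } H \text{ in } G\} \le \delta\, n^{v(H)}
\quad\Longrightarrow\quad
G \text{ can be made } H\text{-free by removing fewer than } \varepsilon n^{r} \text{ edges.}
\]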
So we set up some graph previously where triangles correspond to 3-APs. Maybe you can set up some other graph where some other kinds of structure correspond to 4-APs. And it turns out the answer is emphatically no. So there's a very good reason for this. These are things which we might go into more when we discuss additive combinatorics, but 4-APs is a pattern that's sometimes called complexity two, whereas 3-APs is a pattern which is called complexity one. I won't go into the precise definitions of what this means, but the message is that you cannot prove the 4-AP theorem with just graph machinery. You really have to use something stronger. There is a very real sense in which just the graph removal lemma, or Szemeredi's graph regularity lemma, is not enough. And so we really do have to go to hypergraphs. And this extra layer of complexity, in this word sense of complexity, also introduces additional difficulties. So that makes it significantly harder than the graph removal lemma. So I won't even show you anything that's close to a complete proof, but I will illustrate some of the ideas and highlight some of the difficulties today. But deducing Szemeredi's theorem from the hypergraph removal lemma is actually not so bad. So I will show you how to do that right now. So from the hypergraph removal lemma, even just for three-uniform hypergraphs, even just for the tetrahedron, so instead of the triangle, the tetrahedron, we can have the following corollary analogous to the one we have for triangles. If g is a three-graph such that every edge is contained in a unique tetrahedron, then g has a sub-cubic number of edges. So completely analogous to the one we have for triangles, and the proof is identical. You read the proof and everything works exactly the same way, once you have the removal lemma. By tetrahedron, I mean the complete graph on four vertices. The complete three-graph on four vertices. OK? So you have four vertices and you look at all possible triples of vertices. All right. So now let's prove Szemeredi's theorem, at least the 4-AP case. The general case is completely analogous. Of course, you have to go to higher order. Instead of three-graphs, you have to look at r-graphs. So just as in the proof of Roth's theorem, we're going to set up some particular three-graph. Let's look at a certain modulus m, say 6n plus 1. The exact number is not so important here. I really just want this number to be bigger than 3n and co-prime to 6, since some divisibility by 2 and 3 will come up, and that will be useful. So let's build a 4-partite three-graph g where the four vertex parts X, Y, Z, W all have m vertices each. And I'll show you what the edges are. So vertices, little x in big X and so on. So here are the rules for putting in the edges. I'll just tell you exactly what they are. So I put in an edge xyz if and only if the following expression, 3x plus 2y plus z, lies in a. I put in the edge xyw if and only if 2x plus y minus w lies in a. OK? So xzw if and only if x minus z minus 2w lies in a. And finally, yzw if and only if minus y minus 2z minus 3w lies in a. So these are the rules for putting in edges of this hypergraph. So now you have this hypergraph. And you might be wondering, why did I choose these expressions? So if it's not clear yet, it will soon be very clear why we do this. Just as in the proof of Roth's theorem using the triangle removal lemma, let's examine the tetrahedra in this three-graph. So what are the tetrahedra?
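To keep the four rules in one place, here they are again in display form (a recap of the board, nothing new):
\[
\begin{aligned}
xyz \text{ is an edge} &\iff 3x + 2y + z \in A,\\
xyw \text{ is an edge} &\iff 2x + y - w \in A,\\
xzw \text{ is an edge} &\iff x - z - 2w \in A,\\
yzw \text{ is an edge} &\iff -y - 2z - 3w \in A.
\end{aligned}
\]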
Notice that xyzw is a tetrahedron, so all four triples are present, if and only if all four of these expressions lie in a. Well, just like in the proof of Roth's theorem, these four expressions form a 4-AP. And the common difference is minus x minus y minus z minus w. OK? So they were chosen to satisfy this property. But furthermore, notice that I can't just put any expressions in. I put these expressions in with the very nice property that the i-th linear form does not use the i-th variable. So each expression really corresponds to an edge in this three-graph. All right. But we started with a set a that is 4-AP free. It follows that you don't have this kind of configuration unless the common difference is zero. So the only tetrahedra correspond to these trivial 4-APs. And just as in the proof of Roth's theorem that we saw from the triangle removal lemma, the conclusion is that every edge lies in exactly one tetrahedron. And therefore, by the corollary, the number of edges is equal to little o of m cubed. But on the other hand, how many edges are there? So for each of the four conditions here, so for each of these four parts, the number of edges is, well, you get to choose two of the variables however you want, and the last variable has size-of-a many choices. So the number of edges is 4 times m squared times the size of a, and this implies that the size of a is little o of m, and m is on the same order as n, and that proves the theorem. OK. Any questions so far? Yeah. AUDIENCE: Where do we use the m is co-prime to 6? PROFESSOR: Great. Question's where do we use the condition that m is co-prime to 6? Anyone know the answer? AUDIENCE: To solve for that last variable, divide by 2 or 3. PROFESSOR: Great. So to solve for the last variable, we might need to divide by 2 or 3. And to do that, you need to have co-primality with 6. Yeah, so I'm hiding a bit of details here. OK? But it's a great question. So if you work out the details of the statements here, every edge lying in exactly one tetrahedron and also counting the number of edges, but especially the first statement, you need to actually do just a tiny bit of work. Any more questions? So this deduction is the same deduction as the one that we did for triangles, but you have to come up with a slightly different set of linear forms. And usually, if you're given a specific pattern, you can play by hand and try to come up with a set of linear forms. You can think about, also, how to do this in general. And more generally, this hypergraph removal lemma, using this type of ideas, allows you to deduce the multi-dimensional Szemeredi theorem. So if you give me some pattern, then a subset of the integer lattice in a fixed dimension avoiding that pattern must have density going to zero. So we stated this in the very first lecture. And I won't spell out all the details, but you can follow this kind of framework and it gives you that theorem. And I will post a problem where you are asked to do this for a specific pattern, namely that of a geometric square, an axis-aligned square. So if you have that pattern in Z2, it's worth thinking about how you would run this argument for that pattern in Z2. Yes. AUDIENCE: How close can the small o m be? PROFESSOR: OK. The question's how close can the-- so you're asking what is the rate this little o of m gives? Let me address-- OK, so hold onto this question. I will address it once I discuss what is known about the hypergraph removal lemma. And that's a great question and there's a lot of mystery surrounding what happens there. OK. Questions? Any others? Great.
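As a worked check of the 4-AP claim above, the consecutive differences of the four linear forms are all equal:
\[
(2x+y-w)-(3x+2y+z) \;=\; (x-z-2w)-(2x+y-w) \;=\; (-y-2z-3w)-(x-z-2w) \;=\; -(x+y+z+w),
\]
so the four values do form an arithmetic progression with common difference minus x minus y minus z minus w, and this is the trivial progression exactly when x plus y plus z plus w is congruent to 0 mod m.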
So let's discuss this hypergraph removal lemma. And as I had warned you already at the beginning of lecture, this one is very difficult. So I mentioned in the very first lecture that the development of Szemeredi's regularity lemma was a stroke of ingenuity. But this one here-- but we saw that the proof and we did the proof of Szemeredi's graph regularity lemma in one lecture. And once you understand it, it's not so bad. You do the energy increment, conceptually it's not so bad. But that when there is actually incredibly difficult. It's incredibly difficult both conceptually and technically. But I want to at least illustrate some of the ideas and give you some sense of the difficulty, like why is this difficult. So as we imagine, we have graph regularity. And to prove hypergraph removal, one would develop some kind of a hypergraph regularity method. And the basic idea in hypergraph regularity or just regularity, in general, is that I give you some arbitrary graph or hypergraph, and you want to find some kind of partitioning, some kind of regularization into some bounded number of pieces, some bound amount of data so that that's a good approximation for the actual graph. Just like in the graph regularity case. So let's try to do this. So what does this partition even look like? So here's an attempt. And of course, I call it an attempt because eventually, it will not work. But it's a very natural first thing to try. And maybe, I shouldn't call it naive because it's actually not a bad idea to begin with. OK, so let's see. So suppose you were given a three graph g. I'm going to just to help you remember what's the uniformity of each graph. I will denote in parentheses, in the superscript, so just help you remember that this is the three graph. So in geometry, they also do this with manifolds. So if you put an n on top, it's an n manifold. But this 3 is for three graph. Suppose we partition the vertex set of this three graph similar to proof of Szemeredi's regularity lemma. Think about how the proof of Szemeredi's regularity lemma goes. So you have this partition, but in the proof, there is this iterative refinement. And each step you say, well, I have this notion of regularity. If it doesn't satisfy regularity, I can keep cutting things up further. So what's the notion of regularity that might get you some kind of vertex partition? Can anyone think of a notion of a regularity for three uniform hypergraphs? Yep. AUDIENCE: So the same sort of thing, [INAUDIBLE] variations like [INAUDIBLE]. PROFESSOR: So let me try to rephrase what you're saying. So if we have a notion of regularity, let's say I have three vertex sets, V1, V2, V3. And I want that the density between these three, they do not differ from-- if I restrict these vertex sets to subsets that are not too small. Does this make sense? So here, this d is the fraction of triples that are edges of the hypergraph. So this is the natural extension, natural generalization of the notion of regularity that we saw earlier for graphs. And indeed, it's a very natural notion. it is a nice notion. And actually, if you use this notion, if you use more or less precisely what I've written, you can run through the entire proof of Szemeredi's graph regularity lemma and produce a regularity theorem that tells you given an arbitrary three uniform hypergraph, you can decompose the vertex set in such a way that most triples of vertex sets have this property. 
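In symbols, one reasonable way to write the notion of regularity just proposed: call the triple (V1, V2, V3) epsilon-regular if
\[
\big| d(A_1, A_2, A_3) - d(V_1, V_2, V_3) \big| \le \varepsilon
\quad \text{whenever } A_i \subseteq V_i \text{ and } |A_i| \ge \varepsilon |V_i| \text{ for each } i,
\]
where d(A1, A2, A3) denotes the fraction of triples in A1 x A2 x A3 that are edges of the 3-graph.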
So the same proof as Szemeredi's regularity lemma implies-- so I won't write down the entire statement, but you get the idea. So for every epsilon, there exists m such that we can partition into the vertex set into at most m parts, even equitable, if you like, so that at most an epsilon fraction of triples or parts are not epsilon regular in the sense that I just said. So it's literally is the same proof. So you really look at the proof and then you get that. OK? So far everything seems pretty good, pretty easy. OK. So why did I say, initially, that actually, hypergraph regularity is incredibly difficult? So what not good about this one? Also remember, in our application of the regularity method, there were three steps. What are they? Partition, clean, and count. OK. So partition, OK, you do partition. Clean, well, you do some kind of cleaning. But counting, that's a big thing. And that's something that wasn't so hard. We had to do a little bit of work, but it wasn't so hard. And you can ask, is there a counting lemma associated to this regularity lemma? And the answer is emphatically no. And I want to convince you that for this notion of regularity, there's no counting lemma. OK. Yes. AUDIENCE: Is this version true, though? PROFESSOR: It is true. So you ask, is this version true? So this statement is true with this definition. And you can prove it by literally rerunning the entire proof of Szemeredi's graph regularity lemma. So the regularity statement I've written down is true, but it is not useful. For example, it cannot be used to prove the tetrahedron removal lemma because if you try to run the same regularity proof of the removal lemma, you run to the issue that you do not have a counting lemma. OK. So why is it that you do not have a counting lemma? So let me show you an example. And keep in mind that the notions of regularity, they are supposed to model the idea of pseudo randomness, which is the topic we'll explore in further length in the next chapter. But the idea of pseudo randomness is that I want some graph which is not random, but in some aspects look random. So this is an important concept in mathematics and computer science, and it's a very important idea. But of course, you can generate a pseudo random object by just taking a random object, and it should hopefully satisfy some properties of pseudo randomness. So let's see what this notion of regularity, how it works even for random hypergraph. What's a random hypergraph? So there are different ways to generate a random hypergraph. One way is to have a bunch of triples all appearing uniformly at random, independently at random. So I have a bunch of possible triples that I can make as edges, each one I flip a coin. But there's a different way, and let me show you a different way to generate a random three graph. Let me give you two parameters, p and q. They are constants between 0 and 1. So let's build first a graph, so a random two graph which I'll call g2. So this is just the Erdos-Renyi random graph, gnp. The usual one where you flip a coin for each edge so that each edge appears with probability p independently. And now, I make the actual three graph that want, g3, by including every triple, so every triangle of g has an edge. So here, an edge means a triple-- edge is three vertices in the hypergraph-- with probability q. So it's a two-step process. I first generate a random graph, and then I look at the triangles on top of that graph. And each triangle I include as a triple with a probability q. 
If you like, q be even one. So I do a random graph and my hypergraph is a set of triangles. And let's compare this construction to the more naive version of a random hypergraph where we look at this hypergraph of graph where I put in each triple appearing independently with probability p cubed q. So these are two different constructions of random hypergraph. And you can check that they have basically the same edge density. So how many edges appear in the first one? While the density of triangles in g2 is p cubed, and each of those triangles appears an edge further with probability q. So they have similar edge densities. And furthermore, you can check that this condition here is true for both graphs. So both graphs satisfy this notion of epsilon regularity as justifying with high probability. OK. Great. So if you have the counting lemma, it should give you some prediction as to the number of tetrahedra that come directly from the densities, in particular, they should be the same for these two constructions. But are they the same? So the density of tetrahedra in the first case, actually, let's do the second case first. In b, so what's the density of tetrahedra? So if I have four vertices, so each of those three edges appear uniformly at random, independently, so the density of tetrahedra is just the edge density raised to the power of 4. What about the first one? AUDIENCE: 6. PROFESSOR: So in the first one, to get a tetrahedra, the underlying graph needs to have a k4. So p raised to 6, and then on top of that, I want 4q's. So p raised to 6, q raise to 4. And when p is different, these numbers are different. So this is an example showing why there is no counting lemma because you have two different graphs that have the same type of regularity and densities, but have vastly different densities of tetrahedra. Any questions about this example? It shows you, at least, why this naive attempt does not work, at least if you follow our regularity recipe. But in any case, it's good for something. So you do not have a counting lemma for tetrahedra, but you can still salvage something. So it turns out there is a counting lemma if your graph h-- if this r graph h is linear. So linear means every pair of edges intersecting at most one vertex. So for example, if you look at-- so hypergraph or each line is an edge of triples. So that's a linear hypergraph because each pair intersecting at most one vertex. Tetrahedron is not linear. Two faces of a tetrahedron can intersect in two vertices. OK. So we can try to prove that this is true. And actually, the proof is basically the same as the counting lemma that we saw for graphs. Yes. AUDIENCE: How many edges can a linear graph have? PROFESSOR: The question, how many edges can a linear hypergraph have? You mean, given the bounded number of vertices. OK. I'll leave you to think about it. Any more questions? But for the graph that we really care about, namely tetrahedra which relates to Szemeredi's theorem, this method does not work. So what should we do instead? Let's come up with a different notion of regularity. And that's somewhat inspired by that example up there where we need to look at not just triple densities between three vertex sets, but also what happens to triples that sit on top of a graph. So we should come up with some notion of an edge density on top of two graphs. 
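Returning briefly to the two random constructions, here is a small Monte Carlo sketch (my own illustration, not from the lecture) that makes the gap concrete for p = 1/2 and q = 1: both models have triple density around p cubed times q, but the tetrahedron densities come out near p to the 6 times q to the 4 in the first model and near (p cubed q) to the 4 in the second.

# Monte Carlo comparison of the two random 3-graph models discussed above.
# Model (a): sample G(n, p), keep each triangle as a triple with probability q.
# Model (b): keep each triple independently with probability p^3 * q.
import itertools
import random

def tetra_density(triples, n):
    # fraction of 4-sets all of whose four sub-triples are edges of the 3-graph
    hits = total = 0
    for quad in itertools.combinations(range(n), 4):
        total += 1
        if all(t in triples for t in itertools.combinations(quad, 3)):
            hits += 1
    return hits / total

def model_a(n, p, q):
    edges = {e for e in itertools.combinations(range(n), 2) if random.random() < p}
    return {t for t in itertools.combinations(range(n), 3)
            if all(pair in edges for pair in itertools.combinations(t, 2))
            and random.random() < q}

def model_b(n, p, q):
    return {t for t in itertools.combinations(range(n), 3) if random.random() < p ** 3 * q}

random.seed(0)
n, p, q = 40, 0.5, 1.0
print(tetra_density(model_a(n, p, q), n))  # roughly p^6 q^4 = 1/64, about 0.016
print(tetra_density(model_b(n, p, q), n))  # roughly (p^3 q)^4 = 1/4096, about 0.0002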
So given a, b, and c being edge sets of a complete graphs, so these are graphs a, b, and c-- you should think of them as graphs-- and a three graph g, we can define this quantity b of abc-- so there's always a hidden g which I'll usually omit-- to be the fraction of triples xyz where xyz are such that they sit on top of abc. So yz lie in a. xz lie in b. xy lie in c. So diffraction off such triples that are actually triples of g. In the case when abc are the same, it's asking what fraction of triangles are edges of g. But you are allowed to use three different sets. So think of abc as red, green, blue masking for fractions of red, green, blue triangles that are actually triples of the hypergraph. All right. So now, we can try to come up with some notion of regularity. And as you might expect, at this point, it's not sufficient to partition the vertex set. Instead, we'll go further. We'll partition the edge set of the complete graph. So we'll partition the set of pairs of vertices. Let's partition each set of the complete graph as a union of graphs such that we would like a similar type of regularity condition but for those types of densities. Such that for most ijk such that we have lots of triangles on top of these three graphs in the partition. So for most ijk, such that this is the case, this partition, so this triple is regular in the sense that for all subgroups with not too few copies, so not too few triangles, on top of these a's. One has that the density, the triple density, among the g's is similar to the triple density on top of the a's. So I'm doing some kind of partition where g1 is like that. g2 like that, and g3 that. And what I'm saying is that if you take subset of g1, g2, g3 so that there are still lots of triangles, and that's analogous to this condition of a i's now being too small, then counting the number of triples to the fraction of those triangles that are edges of g. That fraction is roughly the same when you pass down to sub-graphs. Don't worry about the specific details, and I'm not going to try to give you the specific details. But think about the analogy to instead of partitioning the vertex set, we are partitioning the edge set of a complete graph. But actually, hypergraph regularity involves one more step, namely that we need to further regularize these g's via partitioning the vertex set. Similar to what happens in Szemeredi's graph regularity lemma, but actually more similar to the strong regularity lemma that we discussed last time. So the data of hypergraph regularity is not simply a partition of the vertex set, but it's twofold. One is a partition of the edge set of the complete graph, so partition of the vertex pairs, into pseudo random graphs, so in into graphs, so that the hypergraph g sits pseudo randomly on top. And furthermore, there's also a partition of the vertex set of g so that the graphs in part one are extremely pseudo random with respect to this partition. And this idea of extremely random we saw in the last lecture, you have some the sequence of epsilons that depend on how many parts you have in the first step of the regularity. OK. Any questions? Yes. AUDIENCE: What happens with triples g's that don't have a lot of triangles? PROFESSOR: The question is, what happens to triples of g's that do not have lots of triangles? So they are similar to in graphs, you have these small sets of vertices. So you have to deal with them somehow, but I'm, again, leaving out all these technical details. 
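In symbols, the density just introduced (a recap, with the normalization spelled out) is
\[
d(A, B, C) \;=\; \frac{\#\{(x, y, z) : yz \in A,\ xz \in B,\ xy \in C,\ xyz \in E(G^{(3)})\}}{\#\{(x, y, z) : yz \in A,\ xz \in B,\ xy \in C\}},
\]
that is, the fraction of triangles with one side in each of A, B, C that are actually triples of the 3-graph.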
And in fact, I am writing down a very sketchy version of hypergraph regularity. You could write down a more precise version. You can find it in the literature. In fact, you can find more than one version of the statement of hypergraph regularity in the literature. And they're not all obviously equivalent. It actually takes a lot of work even to show that different versions of the statement are equivalent to each other. And it's still somewhat mysterious as to what is the right, the most natural formulation of hypergraph regularity. That's something that I think we still do not yet have a satisfactory answer to. There was a question earlier about bounds. OK. So what kind of bounds do you get for hypergraph regularity? So let me address that issue now. So what kind of bounds do you get? Well, for Szemeredi's graph regularity lemma, the bound is a tower function, because we have to iterate the exponential which comes out of the partitioning. And in hypergraph regularity, because of this extremely pseudo random-- so you are doing some kind of partitioning in the first stage, and then you are iterating that on top for the second stage, similar to how we did strong regularity in the last lecture. So the bound for hypergraph regularity is an iterated tower, which we saw last time, and this is known as a Wowzer. So it's even worse than graph regularity. And just like in the case of graph regularity, this Wowzer-type bound is necessary, at least for most statements, any of these useful statements of hypergraph regularity. What about the applications? So applications to multi-dimensional Szemeredi's theorem-- OK, so first of all to Szemeredi's theorem, well, you can prove Szemeredi's theorem this way and you would get inverse Wowzer-type bounds, which is not so great. But there are better proofs. So there are more efficient proofs quantitatively. So for Szemeredi's theorem, the best result for general k is due to Gowers, which tells you that a must be, at most, n over something that's log log n raised to the power of some constant c depending on k. For k equals 3 and 4, you can do somewhat better, but for general k, this is the best bound so far. But for multi-dimensions, for multi-dimensional patterns, it turns out that-- well, historically, the first proof of the multi-dimensional Szemeredi's theorem was done using ergodic theory, which has even worse bounds compared to this approach, in that the ergodic theoretic proof has to use compactness arguments, so it actually gives no quantitative bounds. And one of the motivations for this hypergraph regularity method, the removal lemma, is to produce a quantitative proof of the multi-dimensional Szemeredi's theorem. So in general, still the best bounds come from this removal lemma, so the hypergraph removal lemma. Although in special cases, and really not that many special cases, but really just the case of corners as we saw earlier, you have somewhat better bounds. So for corners, you have bounds which give density like poly log log. But even for a geometric square, we do not know any Fourier analytic methods; we do not know other methods. And this is basically the best bound coming out of hypergraph regularity. And there are serious obstructions for trying to use Fourier methods to do other patterns such as the geometric square. OK? Any questions? Yes. AUDIENCE: So what do the bounds look like for higher degree uniformity? Are they still just Wowzers? PROFESSOR: OK. So what are the bounds like for higher degree uniformity?
So this is Wowzer for three-uniform, and for four-uniform, you iterate Wowzer. So you go up in the Ackermann hierarchy. You iterate Wowzer, you get a four-uniform hypergraph regularity lemma, and so on. Let's take a short break. So the second topic I want to discuss today is a different approach to proving Szemeredi's graph regularity lemma. And this is a good segue into our next topic, the next lecture, which is about pseudorandom graphs, where the idea of the spectrum, eigenvalues in particular, plays a central role. So I want to consider a spectral approach giving an alternative way to prove the Szemeredi regularity lemma. And if you're already sick of the regularity lemma at this point, this will be the last topic on the regularity lemma for now, although it will come up again later in this course when we discuss graph limits. But for now, this is the last thing I want to say. And just like the discussion about hypergraph regularity, it will be somewhat sketchy. So this idea has appeared in the literature in the past, but it was popularized, like many good things in life, by Terry Tao's blog. So it's a good place to look up a discussion of what I'm about to say. OK. So we saw the proof of the regularity lemma via this iterated partitioning and keeping track of our progress through the use of an energy. But here's a different perspective, namely if we start with a graph g, I can look at the adjacency matrix a sub g. So this is the n by n matrix, where n is the number of vertices, whose ij-th entry is zero if i is not adjacent to j, and 1 if i is adjacent to j. So this is a pretty standard thing to look at, associating to a graph this matrix. So this graph here would be like that and so on. It's a real symmetric matrix and that's always pretty nice. Symmetric matrices have lots of great properties that will be convenient to use. In fact, if you're like myself, if you're too used to working with symmetric matrices, you forget that some of these properties actually do not apply in general to non-symmetric matrices. But it is symmetric, so we're happy. So for symmetric matrices, we have a set of real eigenvalues. We have real eigenvalues and eigenvectors. And for now, let me enumerate the eigenvalues by lambda 1 through lambda n, so multiplicity included, and I sort them according to the size of their absolute value. So the spectral theorem tells us a decomposition. So here again, we're using that a is real symmetric. So it tells us that this matrix a can be written as the sum coming from the eigenvalues and eigenvectors, where the u i's are the eigenvectors, but I can choose them so that they form an orthonormal basis, so they're all unit vectors. So when I say the spectrum, I mean this data also, specifically, this set of eigenvalues. All right. So let's go through some basic properties of the spectrum. So first, how big can the lambdas be? So I claim that-- so first of all, the sum of the squares of these lambdas is-- let me not even call this a lemma, so it's just an observation. So the sum of the squares is this one. So this is the trace of a squared, which is also the sum of the squares of the entries. So here, I'm always using that a is real symmetric-- sum of squares of entries of a. And in the case when you have a being an adjacency matrix, this is simply twice the number of edges, which is at most n squared. So that's always a good thing to remember. OK? So as a result, the i-th eigenvalue cannot be bigger than what? So you have i eigenvalues. So they're sorted in decreasing order.
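In symbols, the observation so far is
\[
\sum_{i=1}^{n} \lambda_i^2 \;=\; \operatorname{tr}\!\big(A_G^2\big) \;=\; \sum_{x, y} (A_G)_{xy}^2 \;=\; 2\,e(G) \;\le\; n^2.
\]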
So the i-th eigenvalue cannot be too large, particular it cannot be larger than n over root i. Because otherwise, the sum of the first i eigenvalue squared would exceed n squared. So these things, they do decay. So second observation is that if you have some epsilon and an arbitrary function, so this is known as a growth function. That's just a name, don't worry about it. So which we'll call f. So its function from the positive integers, the positive integers, and for convenience, I'm going to assume that f of j is always at least j. For every j there exists some c which depends only on your epsilon and this growth function. So this growth function plays the same role as the sequence of decaying epsilons in these strong regularity lemma. So there exists some constant bound such that for every graph g and ag as above, so associated with the lambdas and u's, there exists a j less than c such that if I sum up the eigenvalues squared for eigenvalues i index between j and c of j, the sum is fairly small. It's at most epsilon n squared. I'll let you ponder that for a second. So choose your favorite growth function. It can be as quickly growing as you can. It can be exponential, power, or whatever. And it's saying that I can look up to a bounded point so that this stretch of spectrum squared is, at most, epsilon n squared. Question. AUDIENCE: What is c of j? PROFESSOR: F of j. Thank you. Well, the statement hopefully will become clearer once I show you the proof. OK. So here's how you would prove it. So you first let j1 equal to 1, and I obtained the subsequent j's by applying f to j. So I claim that one cannot have this inequality violated for too many of these j i's. So one cannot have the sum going between jk and jk plus 1 for all k from 1 to 1 over epsilon. Let's change this to zero. But you cannot have this because if you had this then you sum up all of these inequalities you would get that the total sum would exceed and squared, which would violate the inequality about sum of the squares of the spectrum. And so therefore, so thus to the claimed inequality, so this is true star holds for sum j equal to ji so jk where k is less than 1 over epsilon. And this j, in particular, is less than-- well, whatever it is, it's bounded. So it's less than f applied to itself at most 1 over epsilon times-- OK. So this should look somewhat familiar. And I'll ask you to think about, later on, how this proof of spectral proof of Szemeredi's graph regularity lemma compare to the proof that we saw earlier. And you should see where the analogous step is here. This is that density. This is the energy increment step. All right. OK. So what's the regularity decomposition? So I give you this graph. I give you this adjacency matrix. And I want be able to find a partition, but there's a different way to view a partition. So this is, I think, a important idea which, again, is popularized by Terry Tao, that instead of looking at things as a regularity partition, we can view these ideas as regularity de-compositions. Namely, pick j as in the lemma and I now write my adjacency matrix a sub g as a sum of three matrices, which we'll call a structured plus a small plus a pseudo random. Where a structured equals to the sum for basically that sum, this sum here, so this spectral de-competition but only for the first j minus 1 eigenvalues. So those of you coming from or who have taken classes in something like Statistics might recognize this as a principal component analysis. So this has many names. It's a very powerful idea. 
You look at the top spectral data, and that should describe you most of the information that you care about about a graph or a matrix, in general. The small piece is the sum but only for i between j and f of j. And the pseudo random piece is for i at least f of j. OK. So we decompose this adjacency matrix into these three pieces. And the question now is, what does this have to do with Szemeredi's graph regularity lemma? So what do the individual components correspond to in the version of the regularity lemma that you've seen and are now familiar with? So here is what's going on. So I want to show you that this structured piece roughly corresponds to the partition. So this is the bounded partition. And the small piece roughly corresponds to the small fraction of irregular pairs. And the pseudo random piece roughly corresponds to the idea of pseudo randomness between pairs. First, to understand what the spectral data have anything to do with partitions, let me remind you a basic fact about how the spectrum, how the eigenvalues of a real symmetric matrix relate to other properties of this matrix. And namely, this notion of a spectral radius or sometimes called spectral norm. So far I'm only going to discuss real symmetric matrices. So many of the things I will say are not true for if you're not in a real symmetric case. So the spectral radius spectral norm of a is the largest eigenvalue of a in absolute value. And this quantity turns out to be equal to the operator norm which is the norm of this a as a linear operator, namely it is the max or super, it turns out to be a max, of av over-- length of av divided by length of v. So if you hit it with a unit vector, how far can you go? So it's also equal to this bi-linear form. If you hit it from left and right by unit vectors, how big can you get? So for real symmetric matrices, these quantities are equal to each other. And that will be an essential fact for relating the spectral data with combinatorial quantities. All right. So if you give me this de-composition, how can I produce for you a partition? Basically, you can look at a structure which has its state in its data a bounded number of eigenvectors. And by rounding, we can basically round these guys so when you round the individual values by rounding the coordinate values-- So let's pretend that they take only a bounded, let's see, a small number of values. So just to simplify things in your mind, pretend for a second-- well, of course, this is far from the truth-- pretend for a second that these guys are 0 comma 1 valued. Of course, that's not going to be the case but 0 comma 1 or plus/minus 1, if you like. OK. So this is definitely not true. But for the purpose of exposition, let's pretend this is the case. And you can more or less achieve it by rounding the individual values to their nearby closest multiple of something. Then the level sets of these top eigenvectors, they partition the vertex set into a bounded number of parts. So if you, for example, in the simplified version where you're only have plus minus 1 values for this eigenvectors, then you have, at most, 2 to the j parts. But you may get some more because some epsilons, but for the purpose of illustration, let's not worry about that. And this is basically the regularity partition. I want to show that this set here has very nice properties that they basically behave like the regularity partition we've gotten previously. 
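As a rough numerical sketch of the rounding just described (my own illustration; the grid step delta and the choice of j are illustrative parameters, not from the lecture), one can take the top-j eigenvectors of the adjacency matrix, round their coordinates to a coarse grid, and group vertices by their rounded spectral signature.

import numpy as np
from collections import defaultdict

def spectral_partition(A, j, delta=0.25):
    # Partition the vertices by the level sets of the rounded top-j eigenvectors.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    evals, evecs = np.linalg.eigh(A)          # A is real symmetric, so eigh applies
    top = np.argsort(-np.abs(evals))[:j]      # indices of the j largest |eigenvalues|
    groups = defaultdict(list)
    for v in range(n):
        # scale by sqrt(n) so typical coordinates are of order 1 before rounding
        key = tuple(np.round(evecs[v, top] * np.sqrt(n) / delta).astype(int))
        groups[key].append(v)
    return list(groups.values())              # the level sets are the parts

# tiny usage example: two disjoint 4-cliques get separated already with j = 2
A = np.kron(np.eye(2), np.ones((4, 4))) - np.eye(8)
print(spectral_partition(A, j=2))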
So what I would like to show is that the other two parts, they do not contribute very much in the sense of our regularity partition. So for example, if you look at the pseudo random piece, if I hit it left and right with indicator vectors of vertex sets, how big can this number get? So this number here is, at most, while the norm of u times the norm of this w which is just-- so let me write down-- so the norm of indicator of u, norm indicator of w multiplied by the operator norm of the pseudo random part of a. But these two guys here, so they're, at most, root n each. So this number here is, at most, n but we know from our hypothesis on the pseudo random part of a that the spectral norm is no more than this quantity over here. And by choosing f appropriately large, I can make sure that this number is extremely small. f to be large compared to the number of parts in b partition so this quantity is small. And this is basically the notion of epsilon regularity that you saw in the usual version or version of Szemeredi's regularity lemma I presented in the first version, in the very first lecture that we discussed regularity. This quantity here is something which measures the difference between-- so for now, if for a second ignore the middle piece. If you ignore the small piece, then this is precisely the difference between the actual densities between u and w and the predicted density between u and w. AUDIENCE: Why is there a square root here? PROFESSOR: The question is, why is there a square root here? There should not be a square root here. Good. It then becomes a square root of the-- yeah, so there is no square root, but the length of this vector is the square root of the size of u which is, at most, n. Yeah. AUDIENCE: Did you say to be small in general or just small like you compare it to f squared? Because I guess, isn't it like going to be this constant function that you choose before [INAUDIBLE]? PROFESSOR: OK. Question is do, how small do we want this f of j to be? So I want this quantity to be quite a bit smaller than, let's say-- so basically, I want it to be less than f of n squared, but f of n over the number of parts square. Because this quantity is, let's say, the sizes of each part. So let me just be not precise and say much less than. So this quantity here is the size of each part. And I want to think about the case when u and w they lie inside each part. In which case, I want the difference to be much less than epsilon times the size of the part squared. Yeah. AUDIENCE: [INAUDIBLE] j is different based on the graph? PROFESSOR: Question is, is the j different based on the graph? Yes. And that's also the case for Szemeredi's regularity lemma. In Szemeredi's regularity lemma, you don't know when you stop. But you know that you stop before a certain point. OK. And finally, what's happening with a small part of a. So in a small, the sum of the squares of entries-- so this also has a convenient name. It's called the Hilbert-Schmidt norm. So the sum of the squares of the entries. We basically saw this calculation earlier, it's the sum of the squares of the eigenvalues in which case we've truncated all the other eigenvalues. So the only eigenvalues left are between j and index between j and f of j. And we chose j so that this number is small. So a small as in a bunch of noise, but no adversarial noise, if you will, into your graph, but only a very small amount, at most, epsilon amount. So it might destroy the epsilon regularity for, let's say, around epsilon fraction of pairs. 
But that's all it could do. So all but an epsilon fraction of your pairs will still be epsilon regular. And that is the consequence of Szemeredi's graph regularity lemma that we saw earlier. Yeah. AUDIENCE: Doesn't large F have to be special? PROFESSOR: OK. So question, does a large F have to be special? The F should be chosen-- if you want to achieve Szemeredi's graph regularity lemma, you should find this F so that basically this inequality is true. So f should be quite a bit larger than the number of parts. But if you choose even bigger values of f, you can achieve more regularity. And this is akin to what happened with strong regularity. So there's this idea that if you iterate one version of regularity, you can get a strong version of regularity. And there's some iteration happening over there. So if you choose your f to be a much bigger function of j, you can achieve a much stronger notion of regularity, which is similar, and perhaps even equivalent, to the strong regularity that we discussed last time. So you get to choose what f you want to put in here. Yeah. AUDIENCE: How do you make it equitable? PROFESSOR: The question is, how do you make it equitable? OK. So let me now discuss that. So in this case, you can also do very similar things to what we've done before, but you have to massage the partitions. It's not entirely clear from this formulation. But the message here is that there's this equivalence between operator norm on one hand and combinatorial discrepancy on the other hand. And we'll explore this notion further in the next several lectures.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
2_Forbidding_a_subgraph_I_Mantels_theorem_and_Turáns_theorem.txt
YUFEI ZHAO: So the first topic that I want to discuss in this course is extremal graph theory. And in particular, there is a whole class of problems which have to do with what happens if you forbid a specific subgraph. Forbid a specific subgraph. And I ask you, what's the maximum number of edges that can appear in your graph? In particular, and this is the question that we saw at the end of last lecture, which now we're going to present as a theorem. Mantel's theorem essentially asks, if you know that your graph has no triangles, what's the maximum number of edges it can have? And Mantel's theorem tells us that the extremal example is when your graph consists of putting half the vertices on one side, half the vertices on the other side, and putting in all the edges between the two sides. So this is a complete bipartite graph. For these partitions, we denote complete bipartite graphs like that. And Mantel's theorem tells us that this graph, among triangle-free graphs, has the most edges. A triangle-free graph on n vertices has at most-- so the number of edges there is n squared divided by 4, rounded down-- that many edges. And from this example, this bound is tight. So Mantel's theorem then gives us a completely satisfactory answer to the question of, what's the maximum number of edges in a graph without triangles? And I want to begin by showing you a few different proofs of Mantel's theorem, to illustrate some different techniques in graph theory. So we'll see quite a few proofs in today's lecture. The first one begins-- well, here's the setup. I have G, an n-vertex graph. And let me denote the vertices and edges of G by V and E. If I have an edge in G between vertices x and y, then note that they cannot have any common neighbors. Because if they did, I would see a triangle. And I assume G is triangle-free. So what can we say about the degrees of the two endpoints of this edge? Well, they cannot add up to? AUDIENCE: They can't have more than n. YUFEI ZHAO: They cannot add up to more than n. So in particular-- exactly. So the degrees of these two endpoints sum to at most n whenever xy is an edge. So here I'm using d to denote degree. Well, OK. So now let me consider the quantity which is the sum of the squared degrees. On one hand, I claim that the sum is equal to this quantity here, where I sum over all edges. And the reason is, well, look at the sum. Imagine writing out all the summands. How many times does each dx come up? So each dx appears once for each edge x is in, so it appears exactly dx times. But we saw from up here that each summand is at most n. So this sum here is at most mn. On the other hand, let's consider the quantity which is just the sum of the degrees. And you may know it as the handshaking lemma, that the sum of the degrees is just twice the number of edges. Each edge is counted twice in the sum. Well, now we apply the Cauchy-Schwarz inequality, which, as you'll see many times in this course, although you might think of it as a fairly simple inequality, is extremely powerful. And it will come up pretty much throughout this course. By the Cauchy-Schwarz inequality, you compare these two quantities. We find that we have this inequality over here relating the square of the sum of the degrees and the sum of the squares of the degrees. But we saw, on one hand, the left-hand side is 4m squared. And we also saw that the right-hand side is at most mn squared. Putting them together, we see that m is at most n squared over 4.
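In one display, the chain of estimates in this first proof reads
\[
4m^2 \;=\; \Big(\sum_{x \in V} d(x)\Big)^{\!2} \;\le\; n \sum_{x \in V} d(x)^2 \;=\; n \sum_{xy \in E} \big(d(x) + d(y)\big) \;\le\; n \cdot mn,
\]
and dividing by 4m gives m at most n squared over 4.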
And of course, because it's an integer, it's at most the floor of this number. And that's a proof of Mantel's theorem. What can you tell me about the equality case in this proof? So I'll let you think about that. But let me show you some other proofs. In other words, are there graphs with the same number of edges as the graph shown up there that is also triangle-free? Is that a unique example? So let me show you a different proof of Mantel's theorem. In this proof, so we begin with a step that seems a little tricky. Let's let A be a subset of vertices such that A is a largest independent set of G. So remember, independent set is a subset of vertices with no edges inside. It may have many independent sets all having the same maximum size. Take one of them. So why should you do this step? Well, you know, sometimes magic happens. Let's consider some vertex. So consider some vertex, let's say x, and look at its neighborhood. The neighborhood must be an independent set. Otherwise, I get a triangle. So every neighborhood is an independent set. And as a result, the degree of every vertex is at most the size of the largest independent set. So now, let B be the complement of A. So I have this A and B. Every edge has to intersect B. The edge cannot be entirely containing A, because it has no edges. So the number of edges of G is at most-- well, I count over the vertices in B, the degree. So maybe I overcount. So the edges containing B, I count twice, but that's OK. So this is an upper bound on the number of edges. But I also know that every vertex in the graph has a degree at most the size of A. So each sum n is at most the size of A. And I have B terms. So I have that. Now, by the AMGM inequality, you'll have that. And the sizes of A and B add up to the entire vertex set, n squared over 4. So that gives you another proof of Mantel's theorem. What does this proof tell us about the equality case? Now, something I always want you to keep in mind when we do proofs is, especially if we have a tight example like that-- and later on in the course, we'll almost never have good examples like that. So this is still early on in the course. And we're still very clean in the examples-- is to keep the extremal example in mind. And every step in your proof, it should be tight for that example. Otherwise, something went wrong with your proof. Let's think about the inequalities. At equality, we must have that-- so looking at this inequality. So there are no edges in B. We also have that-- so by that, so every vertex in B is complete to A. And finally, A and B should have the same size. So it is exactly the configuration shown up there. Now, when n is odd, we're rounding down by 1. So you can lose a little bit. But you can check that actually what we described up there is also the unique example. So that graph, the complete bipartite graph with two equal parts, is the unique maximal example, number of edges in a triangle-free graph. Great. So once we know what the answer is for triangle-free, of course, we should ask further questions. Instead of forbidding a triangle, what if we forbid other graphs? And what are some natural next steps to take? Well, say, instead of triangles, we can ask, what about if we forbid a K4, a clique of four vertices? Or in general, what if we forbid a clique on the fixed number of vertices? So what is the maximum number of edges in a Kr plus 1-free-- there's a good reason why index is r plus 1-- graph on n vertices. 
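Before turning to that question, here is the second proof's estimate summarized in one display (just a recap):
\[
e(G) \;\le\; \sum_{v \in B} d(v) \;\le\; |A|\,|B| \;\le\; \Big(\frac{|A| + |B|}{2}\Big)^{\!2} \;=\; \frac{n^2}{4}.
\]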
So for example, if we're interested in K4-free-- so what might be a good candidate for a graph with lots of edges that has no K4's? I mean, certainly, that example we saw, it does not have K4's, because it doesn't have any triangles. But we can do even better. Instead of taking two parts, you can take more parts. For K4, if I take three parts, each with n/3 number of vertices, of course, if n is not divisible by 3, round up and run down. And putting all the edges between different parts, you can see this graph here has no K4. So that's an example of a K4-free graph. Well, does it have the maximum possible number of edges? So is this the best that we can do? So it turns out the answer is yes. And that's the next theorem that we'll see. But just to give it a name, so we're going to call graphs like these Turán graphs. So the next theorem is proved by Turán. It's Turán's theorem. So a Turán graph, so we'll denote T sub nr. It is a complete r-partite graph such that there are n vertices whose part sizes are all nearly the same, up to at most 1 difference. So this is an example here. But in general, maybe you have r parts. And I put in all the edges between different parts. So it's not too hard to calculate the number of edges in such a graph. And Turán's theorem tells us that that is the extremal example. You cannot do better in terms of getting more edges. So if G is an n vertex, try a K sub r plus 1-free graph. Then it has at most the number of edges of the Turán graph. It's a generalization of Mantel's theorem. And well, you can think about if the proofs that we did for Mantel's generalizes to Turán's theorem. And it's actually not entirely clear how to do it. So let me present for you three different proofs of Turán's theorem. So some of them, you can think about, are they related to the proofs of Mantel's theorem that we did? And they all are going to look somewhat different, but maybe superficially. The first proof, we will use induction on the number of vertices. So actually, this is one of the very few times in this course where we will see induction. So of course, induction is a powerful technique in combinatorics. But for almost the rest of the course, we're not going to have clean examples. And when we do have clean examples to work with, somehow increasing n by 1 doesn't buy you all that much. Here, there are very clean examples, very clean answers, and induction works out quite well. When n is small, of course, you should always address that. You can come up with many funny proofs if you don't address when n is small. So when n is small, this problem is basically trivial. n is almost r. You could have the complete graph on r vertices. And then we're good. So let's assume that we're not in this case. And also, by induction hypothesis, let's assume that it is true for all graphs fewer than n vertices. And let G be a graph that is K sub r plus 1-free on n vertices. And also, let's assume a maximum example to begin with. So let's assume that the G that you chose has already the maximum possible number of edges. There are only finite of any such examples. So pick one that already has the maximum number of edges. And let's think about what properties this graph G has. I claim that G must already contain a clique on r vertices. So think about that. If G does not contain a clique on r vertices, then I can add in more edges. And I can still maintain the property of being K sub r plus 1-free. So I can assume that G must contain a K sub r. So let's look at one of these K sub r's. 
So let A be the vertex set of some r-clique in G. So we have some A. And the complement of that, let me call that B. Look at a vertex in B. How many neighbors can it have in A? It cannot be complete to A, otherwise I would have an r plus 1 clique. So every vertex in B has at most r minus 1 neighbors in A. So let's count all the edges. The number of edges in G is at most-- well, first, we should account for the edges inside A. And there are r choose 2 of them. And then the edges between A and B: for every vertex in B, there are at most r minus 1 edges going to A. And finally, the edges inside B. Well, we can say something more about these quantities. We know that the size of B is exactly n minus r. But what can we say about the number of edges in B? We can use the induction hypothesis. So B is also r plus 1 clique-free. So the number of edges is at most the number of edges in a corresponding Turán graph. Now, at this point, you can do a calculation. Well, I mean, you should expect that the answer we're looking for is-- I mean, this should be equal to the number of edges in the Turán graph. So you can either do a calculation to figure this out, or remember what I said earlier, that you should keep the tight example in mind, and everything should check out for the tight example. So in particular, if you are in the situation of a complete multipartite graph with equal-size or nearly equal-size parts, what is A? So A is one vertex from each part. So you take out one vertex from each part. And read off this calculation for this graph over here. And then you see that that is indeed equality. So it should check out. You don't need to do any actual calculations. Any questions about the proofs we've done so far? All right. Let me show you another proof of Turán's theorem. So this proof has a name. It's called Zykov's symmetrization. So Zykov has the unfortunate honor of having a name that's hard to beat in terms of alphabetical order. I think, if he and I were to write a paper, I wouldn't be the last author. So what's this about? So let G be the graph. Again, as before, we're going to take a maximal example. So G is the n-vertex graph that is free of cliques on r plus 1 vertices and already has the maximum number of edges. So here's the property I want to prove about this graph. I claim that if you look at the complement of the graph, of this extremal example, it must be an equivalence relation. So more precisely, I claim that if xy is a non-edge, and yz is a non-edge, then xz must also be a non-edge. So in other words, non-edges form an equivalence relation. And again, you should always think about the extremal example. And it is true for the extremal example. Because the complement is a bunch of cliques. So it is an equivalence relation. So let's prove this claim. So let's assume that the conclusion is not true. So suppose, for the contrary, that we have xy and yz being non-edges, but xz is an edge. So let's think about what happens to the degrees of non-adjacent vertices. So I claim that if the degree of y were smaller than the degree of x, then I can do something to the graph that violates the maximality of the number of edges. Namely, I can replace y by a clone of x. So I chop off y from the graph. And I look at x, and I clone it. So cloning means taking some new vertex x prime and joining x prime to all the same neighbors as x. So I clone x into some other vertex, x prime. Now, when you do this, I claim that you also get a graph that is free of K sub r plus 1.
So we also obtain a K sub r plus 1-free graph. So why is that? If you were to have an r plus 1 clique, well, it cannot contain both x and its clone, because there's no edge between them. So it contains at most one of x and the clone of x. But then that would have been a clique in the original graph. However, what about the number of edges that we obtain after this transformation? If x had a higher degree than y, then cloning x to replace y increases the number of edges. We obtain a graph that is also K sub r plus 1-free and has more edges than G, which is not possible, because we assumed that G started with the maximum number of edges. So therefore, the degree of y is at least the degree of x-- and this holds basically for every non-edge xy. Likewise, there is no edge between y and z, so the degree of y is also at least the degree of z. Now, y has a lot of outgoing edges. So now let's replace both x and z by clones of y. As before, we obtain some graph that we'll call G prime. And G prime, for the same reason as before, is K sub r plus 1-free. If you had a K sub r plus 1, then the same clique would have shown up in G. So G prime does not have r plus 1 cliques either. But what about the number of edges in G prime? Well, we started with G. We deleted x and z, so we deleted d of x plus d of z minus 1 edges-- minus 1 because the edge between x and z would otherwise be counted twice. Here, we're crucially using the fact that there is an edge between x and z. But we also added back in a bunch of edges coming from the clones of y. And there are 2 times d of y edges added back in. Now, you see, because y has degree at least that of both x and z, we obtain that this number of edges is strictly bigger than that of G, which contradicts the maximality of G. So at this point, we know that the complement of G must be an equivalence relation. So the complement is a bunch of cliques. And that's a lot of information-- that's almost the best structural information we can hope for. We're not quite done yet. And why is that? Well, we're almost there. But we're not quite done yet. AUDIENCE: [INAUDIBLE] you can figure out the sizes [INAUDIBLE]. YUFEI ZHAO: OK. So we need to figure out the sizes of the individual parts. And also, note that you cannot have too many parts. So at this point, after the claim, we know that G is a complete multipartite graph. It has at most r parts. If it had r plus 1 parts, I would see a clique on r plus 1 vertices. So then, finally, I need to show-- the rest is fairly routine-- what the part sizes have to be to maximize the number of edges. And basically, consider exactly r parts, where some parts could be empty, and suppose some two parts have numbers of vertices differing by more than 1. Then what I can do is-- if you had one part much bigger than another part, you can imagine moving one vertex from the bigger part to the smaller part. And you should convince yourself that this operation strictly increases the number of edges of G. And putting this all together, we see that the extremal example necessarily has to be the Turán graph, namely a complete r-partite graph where all the parts have the same size, up to at most 1 difference. Great. Any questions? Yep? AUDIENCE: Can the Zykov symmetrization technique be used for any other type of problem?
YUFEI ZHAO: So the question is, can the Zykov symmetrization technique be used for any other types of problems? I do have something else in mind. But I don't want to discuss it now. Any more questions? Great. And you see that in both proofs-- if you look at this proof as well-- the Turán graph is the unique extremizer. I want to give you a third proof that has a somewhat different flavor. These two proofs are both somewhat combinatorial in flavor. I'm doing some arguments, either looking at, in this case, a clique and arguing what happens outside of it, or, over there, looking at a maximal example. The third proof I want to show is more of a probabilistic proof. So this highlights an important method in combinatorics, namely the probabilistic method, where we start with a problem that comes with no randomness, but we introduce some randomness to make the problem amenable. It's a very pretty idea. So we start, again, with G being an n-vertex K sub r plus 1-free graph with m edges. And what I want to do is to randomly sort the vertex set. Consider a random order of the vertices. So I put all the vertices on a line, but in a random order. And you see some of the edges like that. Let me show you how to find a clique. And essentially, we do it in a not-so-smart way. Namely, I pick vertices in a greedy-like manner: I include a vertex v in my set if v is adjacent to all the earlier vertices in this order-- earlier meaning to the left, going from left to right. So in this case up here, I look at the first vertex. Well, the first vertex should always be in your set. So this vertex here is OK. And I'm also going to include this vertex here, because both of the earlier vertices are adjacent to the third vertex. And I think that's it. Now, for this set X, I claim two things. One, X has to be a clique. The claim is that X has to be a clique, because every vertex in X is adjacent to all of the earlier vertices-- in particular, all the vertices in X are joined to each other. On the other hand, how big is X, at least in expectation? So for every vertex v, I want to understand the probability that this v is included in X. Well, v has a bunch of non-neighbors. And the property of v being included in X is that all of its non-neighbors appear after v. So the probability that v, little v, is in the set X is equal to the probability that v appears before all its non-neighbors. Now, v and its non-neighbors are sorted uniformly at random. So the probability that this occurs is exactly 1 over 1 plus the number of non-neighbors, which is 1 over n minus the degree of v. Well, we know that this graph G has no cliques of size r plus 1. So if we consider the expected size of X, on one hand, it is at most r, because G is K sub r plus 1-free. On the other hand, by linearity of expectation, each vertex is included with some probability. So the size of X in expectation is just the sum of all of these individual inclusion probabilities, which individually we've computed above like that (a short numerical sanity check of this identity is sketched below).
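A quick numerical sanity check of the identity just derived-- an editorial sketch, not part of the lecture. The test graph, the Turán graph T(9, 3) with r = 3, is chosen only for convenience; any small K sub r plus 1-free graph would do. The script samples random vertex orders, forms the greedy set X of vertices adjacent to everything earlier, and compares the empirical average of |X| with the exact formula, the sum over v of 1/(n - d(v)); both should be at most r, with equality for the Turán graph itself.

```python
import itertools
import random

# Illustrative example: the Turan graph T(9, 3), which is K_4-free (so r = 3).
n, r = 9, 3
parts = [range(0, 3), range(3, 6), range(6, 9)]
adj = {v: set() for v in range(n)}
for P, Q in itertools.combinations(parts, 2):
    for u, v in itertools.product(P, Q):
        adj[u].add(v)
        adj[v].add(u)

def greedy_set_size():
    # X = vertices adjacent to every vertex that appears earlier in a uniform random order.
    order = random.sample(range(n), n)
    X = [v for i, v in enumerate(order) if all(u in adj[v] for u in order[:i])]
    return len(X)

trials = 20000
empirical = sum(greedy_set_size() for _ in range(trials)) / trials
exact = sum(1 / (n - len(adj[v])) for v in range(n))  # E|X| = sum_v 1 / (n - d(v))
print(empirical, exact, r)  # empirical ~ exact = 3.0, and both are at most r = 3
```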
Now, by convexity, I can conclude that this quantity, this sum here, is at least the quantity that would have been obtained if all the degrees were equal to each other. And if you rearrange this inequality, we obtain that m is at most 1 minus 1 over r, times n squared over 2. And if you compare this number to the number of edges in the Turán graph, you see that this is basically that number of edges, and this gives you a proof if n is divisible by r. And in fact, the number of edges in the Turán graph is exactly this number here if this divisibility condition is true. If it's not, you need to do a little bit more work, because we lost a little bit in this step. You should see that this quantity here is minimized when all the degrees are roughly equal to each other, and you make them as close to each other as possible. So with a little bit more work, you can get the exact version of Turán's theorem up there. But at least what we've shown is that the number of edges is at most this quantity, which, for most purposes, is basically as good as Turán's theorem. Any questions about this proof here? So it's a probabilistic method of proof. We're introducing some randomness into a problem that originally had no randomness. Great. So let's take a very quick 2-minute break. And then when we come back, I want to show you that even though we've seen so many different proofs of Turán's theorem and Mantel's theorem-- you might think, OK, this is a pretty simple thing-- even if I tweak the problem just a little bit, there are so many things that we do not understand, and many important open problems in combinatorics that are variants of Turán's theorem. So let's take a quick break. So far we've been talking about Turán's theorem, or generally the problem of: if you forbid a certain structure, forbid a certain subgraph, what is the maximum number of edges? We're going to spend the next few lectures discussing more problems of that form. And it turns out the answers we've just seen are deceptively simple. For almost any other situation, we don't really know the exact answer. And for many questions, we don't know anything close to the truth. In particular, I want to show you one variant of this problem, namely what happens, instead of looking at graphs, if you look at hypergraphs. And there, it's a major open problem in combinatorics what the truth should be. So here is an open problem, which is: what happens to Turán's theorem for 3-uniform hypergraphs? Don't be scared by the word hypergraph. Whereas you think of graphs as having edges consisting of pairs of vertices, a hypergraph is simply a structure where, for a 3-uniform hypergraph, the edges are triples of vertices. So the question is, what is the maximum number of triples in a 3-uniform hypergraph without-- well, for Mantel's theorem, we asked what happens without a triangle. For 3-uniform hypergraphs, the basic question you can ask is, what about forbidding a tetrahedron? That is, you forbid four vertices such that every triple inside the four vertices is an edge. What's the maximum number of edges you can have in an n-vertex 3-uniform hypergraph? Already, it's not so easy to come up with good examples. For graphs, Turán's theorem says take a complete bipartite graph. Well, here, it's not so easy to come up with examples, but there are some examples. And in fact, Turán suggested the following construction.
Namely, you divide the set of vertices, as before, into three roughly equal-sized parts. And let me take all triples that look like one of the following forms: either three vertices, one in each part, or a triple looking like that-- two vertices in one part and one vertex in the next part, going around cyclically, like that. So I include all triples of one of these forms. And you should check that this construction has no tetrahedron. If it had a tetrahedron, you would have at least two vertices in one part; then think about where the other two vertices can be-- if you check the cases, it cannot happen. So then, how many edges does it have? The exact number is not so important. But what's important is that the edge density-- of all the possible triples, the fraction contained in this construction-- is 5/9. And it is conjectured that this is optimal. However, we're quite far from proving this number. The best upper bound that is currently available was found quite recently using a fairly new method in graph theory called flag algebras, essentially a computerized way to try to prove such inequalities. And the best upper bound is something like 0.562. And it's a major open problem to either prove or disprove that this construction here is the optimal one. So you see, even though I presented so many proofs of Mantel's theorem and Turán's theorem-- at this point, hopefully, they all seem quite simple in retrospect-- it's deceptive. Even if I change and tweak the problem just a little bit, going to 3-uniform hypergraphs instead of graphs, we really have no idea what's going on. Yes? Question? AUDIENCE: So basically, why do you say that this is really bad? The two numbers don't seem like that big a difference. YUFEI ZHAO: OK. Great. So the question is, why do I say it's a pretty big gap. So we know the answer has to be some proportion of the total number of triples. Well, you have two numbers. And, I mean, to me, they seem pretty far apart. It's not some lower-order gap-- this is a first-order gap. Any more questions? But it's true that later in the course, we'll see gaps that are much bigger. We'll see gaps where there's a polynomial on one side and a power of exponentials on the other side. And there, I agree, that's much worse. The gap is much bigger. Here, it's just two numbers. But a priori, those two numbers could be anywhere. Any more questions? So throughout this course, I will try to bring out some open problems. And there are lots. I will tell you what we do understand. But most things, we do not understand. And hopefully, some of you will go out and try to understand them better, so the next time I teach the course, I could have something new to present. So now that we've addressed the question of what's the maximum number of edges if you forbid a triangle or a clique, the next natural thing to ask is, what about if you forbid a general H? So give me a graph H. What's the maximum number of edges if you forbid that H? It will be helpful to set up some notation. The extremal number, which we'll denote by ex-- this is also sometimes called the Turán number, because of Turán's theorem, though I'll probably call it the extremal number more frequently-- is defined to be the maximum number of edges in an n-vertex graph containing no copy of H as a subgraph. I want to just clarify a piece of notation. So when I say a subgraph, what do I mean?
So there are several notions of different kinds of subgraphs. And a couple come up somewhat frequently: one is subgraph, and then there's something called induced subgraph. It's probably easiest if I just show you an example. So suppose my H is the four-cycle. In the example of H being a subgraph, suppose I have this graph here. So H is a subgraph of this graph here in many different ways, but in particular, like that. But you see there are some more edges among those vertices. But that's OK. A subgraph only requires you to have a subset of the vertices and a subset of the edges, whereas for an induced subgraph-- this is not an example of an induced subgraph. An induced subgraph means that you take this set of vertices and you look at all the edges of the big graph among your set of vertices. So that's an induced subgraph. So here, the four-cycle is an induced subgraph, but it is not an induced subgraph over here. I just want to make that distinction clear. And for now, in this chapter, we'll only talk about subgraphs-- not necessarily induced. All right. So let's recap what happened for Turán's theorem. Turán's theorem told us that the extremal number of these cliques, well, we know very precisely: it's the number of edges in the Turán graph. And in particular, we saw from the last proof-- but also, similarly, if you just do a calculation-- that it is at most this quantity over here. And it's basically that quantity. The number of edges in this Turán graph is asymptotically that quantity, up to a lower-order error, as you let n go to infinity with r fixed. And the basic question is, what about general H? If I give you some arbitrary graph H, what can you tell me about the maximum number of edges in a graph forbidding this H as a subgraph? And it turns out, for most H's, we have a pretty good understanding-- perhaps quite surprisingly, because you can imagine this problem looks like it might get quite complicated. All the proofs that we've done are very specific to cliques. But it turns out that we already understand a lot. And the critical parameter that governs how this quantity behaves is the chromatic number of H. If you tell me the chromatic number of H, I can already tell you quite a lot. So just to remind you, the chromatic number of a graph is the minimum number of colors you need to properly color this graph. The chromatic number of H, denoted chi of H, is the minimum number of colors needed to color the vertices of H so that no two adjacent vertices have the same color. So for instance, if I give you a clique on r plus 1 vertices, all the vertices must receive different colors, because every pair of vertices is adjacent-- so the chromatic number is r plus 1. The chromatic number of the Turán graph is what? It's r. So the chromatic number of the Turán graph is r. I can color each part of this complete multipartite graph using a different color. And that's the best I can do. Now, if I have one graph being a subgraph of another graph, what can you tell me about the relationship between their chromatic numbers? So if H is a subgraph of G, what can you tell me about the relationship between their chromatic numbers? AUDIENCE: Chi of H is less than the other chi. YUFEI ZHAO: So OK. You're telling me that chi of H is at most chi of G. And why is that? AUDIENCE: Because if G can be colored with a certain number of colors, then that also has [INAUDIBLE]. YUFEI ZHAO: Great.
Whatever coloring you do for G, you use the same coloring, and that's a proper coloring for H as well. So the chromatic number of H might be smaller, but it certainly cannot be bigger than that of G. In particular, if you have a graph H with chromatic number r plus 1, then this Turán graph is always H-free. If H requires four colors, it cannot be embedded into a complete multipartite graph with three parts. So the Turán graph is an example of an H-free graph with lots of edges. And this tells us that the extremal number of H is at least the number of edges of this Turán graph, where r is defined as the chromatic number of H minus 1. So that's some lower-bound construction. And as we saw earlier, we know what the asymptotics of its number of edges look like as n goes to infinity-- namely, 1 minus 1 over r, times n squared over 2. And now the question is, is this the right answer? Is it possible that we completely missed some construction that might produce a lot more edges? And it turns out-- and I think this should be somewhat surprising, because so far, I feel like I haven't told you anything all that surprising yet, and this seems like a fairly mysterious problem-- it turns out that this is more or less the right answer. You cannot do much better than the Turán graph. And there's the theorem of Erdos, Stone, and Simonovits, which says that for every graph H-- so I fix H-- the limit as n goes to infinity of the extremal number divided by the total number of pairs, n choose 2-- so this is the edge density-- is equal to 1 minus 1 over chi of H minus 1. So the chromatic number, in some sense, completely determines how big the extremal number should be. And you see that everything we've proved so far-- Turán's theorem, Mantel's theorem-- agrees with this formula here. But if I give you some H, maybe quite complicated, where those previous proofs don't work, well, you still know the first-order asymptotics. But there's still more to say. First, let me just run through some examples for a sanity check. If H is the triangle, the chromatic number is 3, so this limit is 1/2. And that's indeed the case-- that's what we did with Mantel's theorem. If H is a clique on four vertices, then the chromatic number is 4, and the answer is 2/3. And that also agrees with Turán's theorem. But I can give you some fairly complicated-looking H. For example, H might be the Petersen graph. Every good graph theory course should feature a Petersen graph at least once, somewhere. So that's the Petersen graph. What's the chromatic number of the Petersen graph? Actually, that's kind of a tricky question. The last time I taught this course, I even got the answer wrong. It turns out you can three-color the Petersen graph. It's completely not obvious how to do this. But there are only so many vertices. If you stare at it long enough, you can see what happens. But let me just show you a three-coloring of the Petersen graph. Then the third color is like that. So that's a three-coloring of the Petersen graph. So the chromatic number is 3. It's not 2. Why is it not 2? AUDIENCE: It's not bipartite. YUFEI ZHAO: It's not bipartite. It contains a five-cycle, which cannot be two-colored. So then, since the chromatic number is 3, the limit is 1/2. And here, we can apply this theorem of Erdos-Stone-Simonovits. And I think this should be somewhat surprising.
Because the Petersen graph looks quite complicated. If you try to forbid this graph in some big G, it seems like it's kind of hard to do. But it turns out the chromatic number completely governs the behavior of the extremal numbers. But it turns out that's not the entire story. Because while this is quite a good theorem, and I said, it gives you the first-order asymptotics, actually, that's a lie. It doesn't always give you the first-order asymptotics. And when is Erdos-Stone-Simonovits not effective? Or rather, it's not the complete-- it's not the final answer. Well, you can say, well, what about this little o? So we're still trying to understand, there's a limit. And you can understand what is the-- how quickly does it converge to this number here? So that's certainly a valid question. But more importantly, though, if your graphic is bipartite, if chi of H equals to 2, i.e. bipartite, then all this theorem tells you is that the limit is equal to 0, which somehow is not the most satisfying answer. You want to know the first-order asymptotics. I mean, this still tells you something. So the Erdos-Stone-Simonovits theorem tells you that the extremal number is little o of n squared. But of course, no. You are a curious mathematician. And you want to know, really, what is the asymptotics? Is it like n to the 3/2? Is it like n to the 4/3? You know, is this is not satisfying. And it turns out, from most bipartite graphs H, it is a very difficult problem that we still do not know the answer to, even what is the order of the asymptotics. So the next few lectures, what I want to do is to show you some techniques that will allow you to prove some upper bounds for this extremal number in the case when H is bipartite that shows you that the exponent can be less than 2. And I will also show you some constructions that sometimes, but in the very few cases, matches the upper bound. So there are very few examples of graphs H for which we know the first-order asymptotics. And for most graph H, they are major open problems in combinatorics. And there are some really old ones for which any solution may be quite exciting. I will not show you a proof of Erdos-Stone-Simonovits now. We will see that later in the term, once we have developed some more machinery. Although, later on in the term, we'll see a proof using the so-called Szemerédi's regularity lemma, which I've mentioned a few times in the first lecture. Although, you don't actually need such heavy machinery. So the original proofs of Erdos and Stone, and also by Simonovits-- so Erdos and Stone first proved this result for H being a complete multipartite graph. And Simonovits observed that knowing H for a complete multipartite graph actually implies this result in general. So you will not find a paper with all three of them being authors. But this is what the theorem is called. So later on in the course, once we've developed the machinery of graph regularity lemmas, I will show you how you can deduce Erdos-Stone-Simonovits from Turán's theorem. So somehow you use Turán's theorem and bootstrap it to Erdos-Stone-Simonovits. So that should seem somewhat magical. So on one hand, you have cliques. On the other hand, you have things that somehow don't have cliques in them. They don't really look like cliques. But you can still bootstrap one to the other. And so after we develop some machinery, we'll do that. But there is a combinatorial proof which doesn't use any of this heavy machinery. I won't discuss it in lecture, but you can look it up. 
And it's quite nice. It has some combinatorial techniques and some double-counting arguments. So going forward, I want to show you some questions, mostly questions, but some answers as well-- so questions such as, what is the extremal number of-- well, so what are some of the basic bipartite graphs? One of them is just a complete bipartite graph. So this has a name. It's called the Zarankiewicz problem. And for some values of s and t, we know the answer. But for most values, we have no idea what the answer is. For example for K4,4, we do not know what is the correct exponent on the n. So even fairly small cases, it is very much open. So this is all to show you that these simple proofs that we did today, they are perhaps too deceptively simple. Because even if you change the question a little bit, we don't understand a lot. So questions like these will occupy the next several lectures. And I'll also show you some constructions. And there are some constructions that use nice algebraic ideas and some probabilistic ideas. So we'll see ideas coming from many different sources. I want to close off by just a cultural remark. So in this course, especially the first half, we'll encounter a lot of Hungarian names. So we already saw a couple of them. And I just want to give you a very quick tutorial on how to pronounce Hungarian names, just for cultural purposes now. In the past, when I took this CLASS there were no Hungarian speakers. But I think we do have at least one Hungarian speaker in the room. So I want you-- but you are a native Hungarian speaker. So you should tell us. So can you help us pronounce these names? AUDIENCE: Erdos. YUFEI ZHAO: Erdos. And this one? AUDIENCE: That doesn't seem like a Hungarian name. [LAUGHTER] YUFEI ZHAO: So this, Hungarian, it's Simonovits. So I'll just tell you two things about Hungarian names. One of them is that the S-- AUDIENCE: It's /sh/. YUFEI ZHAO: --is pronounced like /sh/. And another thing that comes up is S-Z, which we saw in Szemerédi. So this is pronounced like /s/. Forget the Z. So Erdos, another thing about Erdos that you should know is that-- what is this accent? It is not a double dot. So in LaTeX you would type it as slash H, and in particular, not like that. So just a few cultural remarks about names. Great. So we'll end today. And then next time, we'll start looking at other extremal numbers for more bipartite graphs.
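A small typesetting footnote on the accent just mentioned-- an editorial addition, not part of the lecture. The mark on Erdős is the Hungarian double acute accent, produced in LaTeX by the \H command rather than the umlaut command, which is what "not like that" refers to:

```latex
Erd\H{o}s   % double acute accent: Erdős (the correct Hungarian accent)
Erd\"{o}s   % umlaut/diaeresis:    Erdös (not the right accent)
```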
YUFEI ZHAO: So today we are going to start a new chapter on graph limits. Graph limits is a relatively new subject in graph theory. As the name suggests, we're looking at some kind of analytic limit of graphs, which sounds kind of like a strange idea, because you think of graphs as fundamentally discrete objects. But let me begin with an example that gives at least a pure mathematical motivation for graph limits. There are several other ways you can motivate graph limits, especially coming from more applied perspectives. But let me stick with the following story. Suppose you lived in ancient Greece and you only knew rational numbers. You didn't know about real numbers. But you understand rational numbers perfectly. And suppose we wish to minimize the following polynomial, x cubed minus x, let's say for x between 0 and 1. So you can do this. And suppose also the Greeks knew calculus, so you take the derivative and all of that. So you find that you have a problem, because we know-- given our advanced state of mathematics-- that the minimizer is at x equal to 1 over root 3. But that number doesn't exist in the rational numbers. So how might a civilization that only knew rational numbers express this answer? They could say the minimum is not attained in Q-- not attained by a single rational number, but approached by a sequence. And this is a sequence that a more advanced civilization would recognize as a sequence converging to 1 over root 3. But I can give you this sequence through some other means. And this is one of the ways of defining the real numbers, for instance. So you can define explicitly a sequence of rational numbers that converges. But of course, this is all quite cumbersome if you have to actually write down a sequence of rational numbers to express this answer. It would be much better if we knew the real numbers. And we do. And the real numbers, in some sense-- in a very rigorous sense-- are a completion of the rational numbers. That's the story that we're all familiar with. But now let's think about graphs, which are some kind of discrete set of objects, akin to the rational numbers. And the story now is: among graphs, suppose I have a fixed p between 0 and 1. The problem now is to minimize the 4-cycle density among graphs with edge density p. So this is some kind of optimization problem. I don't restrict the number of vertices-- you can use as many vertices as you like. And I would like to minimize the 4-cycle density. Now, we saw a few lectures ago, in the lecture on quasirandomness, this inequality that tells us that this density is always at least p to the fourth. And we also saw that this minimum is approached by a sequence of quasirandom graphs. So in some sense, the answer is p to the fourth. And there's not a specific graph-- there's no one graph that minimizes. This 4-cycle density is minimized by a sequence. And just like in the story with the rational numbers and the real numbers, it would be nice if we didn't have to write out the answer in this cumbersome, sequential way, but instead just had a single graph-like object that depicts what the minimizer should be. And graph limits provide a language for us to do this. So one of the goals of graph limits is to give us a single object for this minimizer instead of taking a sequence. So roughly that is the idea: you have a sequence of graphs.
And I would like some analytic object to capture the behavior of the sequence in the limit. And these graph limits can actually be written in a fairly concrete form. So now let me begin with some definitions. The main object that we'll look at is something called a graphon. The name merges the two words graph and function. A graphon is, by definition, a symmetric measurable function, often denoted by the letter W, from the unit square to the 0, 1 interval. And here, being symmetric means that if you exchange the two argument variables, the function remains the same. So that's it. That's the definition of a graphon. And these are the objects that will play the role of limits for sequences of graphs. I will give you lots of examples in a second. So that's the definition. This is the form of the graphons that we'll be looking at mostly. But just to mention a few remarks: the domain can instead be the square of any probability measure space-- so instead of taking the 0, 1 interval, I could use any probability measure space. So it's only slightly more general. There are some general theorems in measure theory that tell us that most probability measure spaces, if they're nice enough, are in some sense equivalent to, or can be captured by, this interval. So I don't want you to worry too much about the measure-theoretic technicalities. I think they are not so important for the discussion of graph limits. But there are some subtle issues like that lurking behind, which I just don't want to really talk about. So for the most part, we'll be looking at graphons of this form. And also, for the values: instead of the 0, 1 interval, you could take a more general space, for example, the real numbers or even the complex numbers. I'm going to reserve the word graphon for when the values are between 0 and 1. And if the values are in R, let me call it just a kernel, although that will not come up so much. So when I say graphon, I just mean the values are between 0 and 1. Although, if you do look up papers in the literature, sometimes they don't use these words so consistently. So be careful what they mean by a graphon. So that's the definition. But now let me give you some examples of how we think of graphons and what they have to do with graphs. If we start with a graph, I want to show you how to turn it into a graphon. So let's start with this graph, which you've seen before. This is the half graph. From this graph, I can label the vertices and form the adjacency matrix of this graph, where I label the rows and columns by the vertices and put in zeros and ones according to whether the vertices are adjacent. So that's the adjacency matrix. And now I want you to view this matrix as a black and white picture. Think of one of these pixelated images, where I turn the ones into black boxes-- of course, on the blackboard, black is white and white is black-- and I leave the zeros as empty white space. So I get this image. And I think of this image as a function. This is a function going from 0, 1 squared to the 0, 1 interval, taking only 0 and 1 values. So that's a function on the square. Now, this is a single graph. For any specific graph, I can turn it into a graphon like this. But now imagine you have a sequence of graphs. And in particular, consider a sequence of half graphs. So here is H3. And Hn is the general half graph.
And you can imagine that, as n gets large, this picture looks like-- instead of the staircase you just have a straight line connecting the two ends. And indeed, this function here, this graphon, is the limit of the sequence of half graphs as n goes to infinity. So one way you can think about graphons is you have a sequence of graphs. You look at their adjacency matrix. You view it as a picture, a pixelated image, black and white according to the zeros and ones in its adjacency matrix. And as you take a sequence, you make your eyes a little bit blurry. And then you think about what the sequence of images converges to. So the resulting limit is the limit of this sequence of graphs. So that's an informal explanation. So I haven't done anything precisely. And in fact, one needs to be somewhat careful with this depiction because let me give you another example. Suppose I have a sequence of random or quasirandom graphs with edge density 1/2. So what does this look like? And I have this picture here. And I have a lot of-- so I have a lot of-- one-half of the pixels are black. And the other half pixels are white. And you can think, from far away, I cannot distinguish necessarily which ones are black and which ones are white. And in the limit, it looks like a grayscale image, with a grayscale being one-half density. And indeed, it converges to the constant function, 1/2. So the limits represented by this problem up here is the constant graphon with the constant value p. But now let me give you a different example. Consider a checkerboard. So here is a checkerboard, where I color the squares according to, in this alternating black and white manner, according to a usual checkerboard. And as the number of squares goes to infinity, what should this converge to? By the story I just told you, you might think that if you zoom out, everything looks density 1/2. And so you might guess that the image, the limit, is the 1/2 constant. But what is this graph? It's a complete bipartite graph. It is a complete bipartite graph between all the even rows. And there's a different way to draw the complete bipartite graph-- namely, that picture, just by permuting the rows and columns. And it's much more reasonable that this is the limit of the sequence of complete bipartite graphs with equal parts. So one needs to be very careful. And so it's not necessarily an intuitive definition. The idea that you just squint your eyes and think about what the image becomes, that works fine for intuition for some examples, but not for others. So we do really need to be careful in giving a precise definition. And here the rearrangement of the rows and columns needs to be taken care of. So let me be more precise. Starting with a graph G, I can-- so let me label the vertices by 1 through n. I can denote by W sub G this function, this graphon, obtained by the following procedure. First, you partition the interval into intervals of length exactly 1 over n. And you set W of x comma y to be basically what happened in the procedure above. If x and y lie in the box I sub I cross I sub J, then I put in 1 if I is adjacent to J and 0 otherwise-- so this picture, where we obtained by taking the adjacency matrix and transforming it into a pixelated image. What are some of the things that we would like to do with graph limits or graphs in general? Yeah? AUDIENCE: Is the range also 0, 1, squared or 0, 1? YUFEI ZHAO: Thank you. The range is 0, 1. So here are some quantities we are interested in when considering graph limits. 
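To make the graph-to-graphon construction W sub G just described concrete, here is a minimal sketch-- an editorial illustration, not from the lecture; the 4-vertex path used as input is just a made-up example. It partitions the 0, 1 interval into n equal pieces and returns the {0, 1}-valued step function read off from the adjacency matrix.

```python
import numpy as np

# Made-up example: the path 1 - 2 - 3 - 4 on four vertices.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]])

def step_graphon(A):
    """W_G(x, y): the step function equal to A[i][j] on the box I_i x I_j."""
    n = len(A)
    def W(x, y):
        i = min(int(x * n), n - 1)  # which interval I_i of length 1/n contains x
        j = min(int(y * n), n - 1)  # (clipped so that x = 1 falls into the last interval)
        return A[i][j]
    return W

W = step_graphon(A)
print(W(0.1, 0.3), W(0.1, 0.9))  # 1 (vertices 1, 2 adjacent) and 0 (vertices 1, 4 not adjacent)
```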
So given two graphs, G and H, we say that a graph homomorphism from H to G is a map between their vertex sets such that the edges are preserved-- so whenever uv is an edge of H, the image vertices are mapped to an edge of G. And we are interested in the number of graph homomorphisms. I often use uppercase Hom to denote the set of homomorphisms from H to G, and lowercase hom to denote their number. So for example, the number of homomorphisms from a single vertex-- a single vertex with no edge-- to a graph G, what is this quantity? It's just the number of vertices of G. What about homomorphisms from an edge to G? AUDIENCE: The number of edges? YUFEI ZHAO: Not quite the number of edges, but twice the number of edges. What about the number of homomorphisms from a triangle to G? AUDIENCE: 6 times the number of triangles. YUFEI ZHAO: Yeah, you got the idea-- 6 times the number of triangles. So now let me ask a slightly more interesting question. What about the number of homomorphisms from H to a triangle? What's a different name for this quantity here? It's the number of proper three-colorings-- the number of proper colorings of H with three labeled colors, red, green, and blue. So think of the three vertices of the triangle as red, green, and blue. And whichever vertex of H maps to the red vertex, color that vertex red. So you see that there is a one-to-one correspondence between such homomorphisms and proper colorings. So many important graph parameters, graph quantities, can be encoded in terms of graph homomorphisms. And these are the ones that we're going to be looking at most of the time. When we're thinking about very large graphs, often it's not the number of homomorphisms that concerns us, but the density of homomorphisms. And the difference between homomorphisms on one hand and subgraphs on the other is that homomorphisms are not quite the same as subgraphs, beyond the obvious multiplicity factor, because you might have non-injective homomorphisms. But these non-injective homomorphisms do not end up contributing very much, because there are only on the order of n to the number of vertices of H minus 1 of them, where I think of n as the number of vertices of G, and n is supposed to be large. So in terms of graph limits, when n gets large, I don't need to distinguish so much between homomorphisms and subgraphs. We define the homomorphism density, denoted by the letter t, from H to G, to be the fraction of all vertex maps that are homomorphisms. Equivalently, it is the probability that a uniform random map from the vertex set of H to the vertex set of G is a homomorphism from H to G-- a graph homomorphism. And this quantity turns out to be quite important. We're going to be seeing this a lot. And because of the remark over here, in the limit as the number of vertices of G goes to infinity with H fixed, the homomorphism densities approach the same limit as the subgraph densities. So you should regard these two quantities as basically the same thing. Any questions so far? So everything is defined so far for graphs-- what happens between graphs and graphs. So what about for graphons? If I give you this limit object, this analytic object, I can still define densities, by integrals now. So suppose I start with a symmetric measurable function-- for example, a graphon.
But I can let my range be even more generous. Starting with such a function, I define the graph homomorphism density from a fixed graph H to this graphon or kernel, more generally, to be the following integral, where I'm-- before writing down the full form, let me first give you an example. I think it will be more helpful. So if I'm looking at a triangle going to W, what I would like is the integral that captures the triangle density. So this quantity here, if I let x, y, and z vary over 0 and 1, 0 through 1, independently and uniformly, then this quantity here captures the triangle density in W. In fact, and I'll state this more precisely in a second-- if you look at the translation from graph to graphon and combine that translation with this definition here, you recover the triangle density. More generally, for H instead of a triangle, the H density in a graphon is defined to be the integral of-- instead of this product here, I take a product corresponding to the graph structure of H with one factor for each edge of H. And the variables go over the vertex set of H. So this is the definition of homomorphism densities, not for graphs, but for symmetric measurable functions, in particular, for graphons. And we define it this way because-- and we use the same symbols because these two definitions agree. If you start with a graph and look at the H density in G, then this quantity here is equal to the H density in the graphon associated to the graph G constructed as we did just now. So make sure you understand why this is true and why we defined the densities this way. Any questions so far? So we've given the definition of graph homomorphism density. And we've defined these objects, these graphons. And I mentioned even something about the idea of a limit. But in what sense can we have a limit of graphs? So here is an important definition on the convergence of graphs. So in what sense can we say that a sequence of graphs converge? So we say that a sequence of graphs G sub n-- graphs or graphons, so these two definitions are interchangeable for what I'm about to say regarding limits for graphons, in which case I'm going to denote them by W sub n. So we say the sequence is convergent if the sequence of subgraph densities-- of course, if you are looking at graphons, then you should look at the graphon, the subgraph density in-- homomorphism density in graphons if this sequence converges as n goes to infinity for every graph H. So that's the definition of what it means for a sequence of graphs to converge, which so far looks actually quite different from what we discussed intuitively. But I will state some theorems towards the end of this lecture explaining what the connections are. So intuitively what I said earlier is that you have a sequence of graphs that are convergent if you have some vague notion of one image morphing into a sequence of images morphing into this final image. Still hold that thought in your mind. But that's not a rigorous definition yet. The definition we will use for convergence is if all the subgraph-- all the homomorphism densities were equivalently subgraph densities, they converge. So this is the definition. It's not required. So this is basically rigorous as stated. Just as a remark, it's not required that the number of vertices goes to infinity, although you really should think that that is the case. So just to put it out there-- so I can have a sequence of constant graphs and they will still be convergent. And that's still OK. 
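As a concrete illustration of this integral definition-- an editorial sketch, not from the lecture-- the snippet below estimates the triangle density t(K3, W) by Monte Carlo for the test graphon W(x, y) = xy, chosen only because its exact density is easy to compute by hand: each variable contributes the integral of t squared, which is 1/3, so the answer is 1/27.

```python
import random

def W(x, y):
    # Test graphon W(x, y) = x * y; any symmetric measurable [0, 1]-valued function works here.
    return x * y

def triangle_density(W, samples=200_000):
    """Monte Carlo estimate of t(K3, W) = triple integral of W(x, y) * W(x, z) * W(y, z)."""
    total = 0.0
    for _ in range(samples):
        x, y, z = random.random(), random.random(), random.random()
        total += W(x, y) * W(x, z) * W(y, z)
    return total / samples

print(triangle_density(W))  # ~ 1/27 ~ 0.037
```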
But you should think of the number of vertices going to infinity. Yeah? AUDIENCE: What is F in the definition? YUFEI ZHAO: F is H. Thank you. Any other questions? So there are some questions that we'd like to discuss. And this will occupy the next few lectures in terms of proving the following statements. One is do you always have graph limits? If you have a convergent sequence of graphs, do they always approach a limit? And just because something is convergent doesn't mean you can represent the limit necessarily. So it turns out the answer is yes. It turns out that-- and this makes it a good theory, a good, useful theory, and an easy theory to use, that there is always a limit object whenever you have convergence. And the other question is while we have described intuitively one notion of convergence and also defined more rigorously another definition of convergence, are these two notions compatible? And what does this even mean, this idea of image becoming closer and closer to a final image? What does that even mean? So these are some of the questions that I would like to address. So in the next few things that I would like to discuss, first, I want to give you a definition of a distance between two graphons or two graphs. If I give you two graphs, how similar or dissimilar are they-- so that we have this metric. And then we can talk about convergence in metric spaces. So let's take a quick break. So given this notion of convergence, I would like to define the notion of distance between graphs so that convergence corresponds to convergence in the metric space sense of distance going to 0. So how can we define distance? First, let me tell you that there's a trivial way. And so there's a way in which you look at that definition and produce a distance out. And here's what you can do. I can convert that definition to a metric by setting the distance between two graphs G and G prime to be the following quantity, obtained by-- what would I like to do? I would like to say the distance goes to 0 if and only if the homomorphism densities, they are all close to each other. And so I can sum up all the homomorphism densities and look at their differences between G and G prime. And I simply enumerate the list of all possible graphs. I want to be just slightly more careful with this definition here because I want something which-- so when I write this, this number might be infinite for all pairs G and G prime. So if I just add a scaling factor here, then-- and this is some distance. So this is some distance. And you see that it matches the definition up there. But it's completely useless. It might as well-- might as well not have said anything because it's tautologically the same as what happened up there. And if I give you two graphs, it doesn't really tell you all that much information except to encapsulate that definition into a single number. Great. So I'm just-- the point of this is just to tell you that there is always a trivial way to define distance. But we want some more interesting ways. So what can we do? So here is an attempt, which is that of an edit distance. So we have seen this before when we discussed removal lemmas. The edit distance is the number of edges you need to change to go from one graph to the other graph. And this seems like a pretty reasonable thing to do. And it is an important quantity for many applications, but turns out not the right one for all application. And here is the reason. 
So this is why the edit distance is-- by edit distance, I mean 1 over the number of vertex squared times the number of edge changes needed. So there's normalization so that the distance is always between 0 and 1. But this is not a very good notion for the following reason. If I take two copies of the Erdos-Reyni random graph G, n, 1/2, what do you think is the edit distance between two such random graphs? How many edges? Yeah? AUDIENCE: Isn't it roughly one-half of the number of edges because there's like a one-half probably that won't be there or not be there [INAUDIBLE]?? YUFEI ZHAO: So yeah, so let me try to rephrase what you are saying. So suppose I have this G and G prime both sitting on top of the vertex set n. So if I'm not allowed to rearrange the vertices, how many edge changes do I need to go from one to the other? I need about 1/2. So one-half the time, I'm going to have a wrong edge there. Now you can make this number just slightly smaller by permuting the vertices. But actually you will not improve that much. It is still going to be roughly that edit distance, which is quite large. This is almost as large as you can possibly get between two arbitrary graphs. So if we want to say that random graphs, they approach a limit, a single limit, then this is not a very good notion because they are quite far apart for every n. So this is the reason why the more obvious suggestion of an edit distance might not be such a great idea. So what should we use instead? So we should take inspiration from what we discussed in quasirandomness. You have a question. AUDIENCE: Is the edit distance only for two graphs of the same vertex set? YUFEI ZHAO: So the question is, is the edit distance only for two graphs with the same vertex set? Let's say yes. So we'll see later on, you can also compare graphs with different number of vertices. So hold onto that thought. So I would like to come up with a notion of distance between graphs that is inspired by our discussion of quasirandomness earlier. So think about the discussion of quasirandomness or quasirandom graphs. In what sense can G be close to a constant, let's say p? And so this was the Chung-Graham-Wilson theorem that we proved a few lectures ago. So in what sense can G be close to p? And one of those definitions was discrepancy. And discrepancy says that if the following quantity is small for all subsets x and y, which are subsets of vertices of G-- so you remember, all of you remember, this part, the discrepancy hypothesis for quasirandomness. And this is a kind of definition that we would like to describe when two graphs are similar to each other, when they are close in this discrepancy sense. So now, instead of a graph and a number, what if now I have two graphs? I'll give you two graphs of G and G prime. And what I would like to say is that, if for now, so if they have the same vertex set, I want to say that there are close if I have that the number of edges between x and y in G is very close to the number of edges between x and y in G prime. And I normalize by the number of vertices squared, so n this number of vertices. And I would like to find out the worst possible scenario, so overall, x and y subsets of the vertex set. If this quantity is small, then I would like to say that G and G prime are close to each other. So this is inspired by this discrepancy notion. Can you see anything wrong with this definition here? Yeah? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: So permutations are vertices. 
So just like in the checkerboard example we saw earlier, you have two graphs. And if they are indeed labeled graphs on the same labeled vertex set, then this is more or less the definition that we use. I will define it more precisely in a second. But if the vertices are unlabeled, we possibly need to optimize over rearrangements of the vertices, which actually turns out to be quite subtle. So I'm going to give precise definitions in a second. For this one here, think about permuting vertices-- though it's actually a bit more subtle than that. So here are some actual definitions. I'm going to define this quantity called a cut norm. This chapter is all going to be somewhat functional-analytic in nature, so get used to the analytic language. The cut norm of W is defined to be the following quantity, denoted by this norm with a box in the subscript: I look at this W, I integrate it over a box, and I would like to maximize this quantity over choices of boxes S cross T, where S and T are measurable subsets of the interval. So over all possible choices of measurable subsets S and T, if I integrate W over S cross T, what is the furthest I can get from 0? So this is the definition of the cut norm. And you can already see that it has some relation to what was discussed up there. But while we're talking about norms, let me just mention a few other norms that might come up later on when we discuss graph limits. There will be a lot of norms throughout. In particular, the Lp norm is going to play a frequent role. The Lp norm is defined by taking the p-th power of the absolute value, integrating, and then raising to the power 1 over p. And the L-infinity norm-- this is almost, but not quite, the same as the supremum. Almost the same as the supremum, but not quite, because I need to ignore subsets of measure 0. I can write down a formal definition in a second. But if I change W on a subset of measure 0, I shouldn't change any of these norms. And so one way to define this essential supremum-- it's called an essential sup-- is that it is the smallest number m such that the set where the function takes values bigger than m has measure 0. So it's the threshold above which the function lives only on a set of measure 0. And the L2 norm will play a particularly special role. For the L2 norm, you're really in a Hilbert space, in which case we are going to have inner products. And we denote inner products using these brackets. Everything is real, so I don't have to worry about complex conjugates. So comparing with the discussion up there, we see that a sequence G sub n of quasirandom graphs has the property that the associated graphons converge to p in the cut norm. For quasirandom graphs, there is no issue having to do with permutations, because the target is invariant under permutations. But if I give you two different graphs, then I need to think about their permutations. And to study permutations of vertices, the right way to do this is to consider measure-preserving transformations. So we say that phi, from the interval to the interval, is measure-preserving if, first of all, it is a measurable map-- and everything I'm going to talk about is measurable, so sometimes I will even omit mentioning it-- and, for all measurable subsets A of this interval, one has that the pullback of A has the same measure as A itself.
Let me give you an example. You have to be slightly careful with this definition-- if you state it in terms of the pushforward, that's false. It has to be the pullback. So, an easy example: the map which sends x to x plus 1/2 mod 1-- think of a circle as your space; here I am just rotating the circle by one-half rotation. So it's obviously measure-preserving. I am not changing any measures. A slightly more interesting example-- quite a bit more interesting-- is the map sending x to 2x, again mod 1. This is also measure-preserving. And you might be puzzled for a second why it's measure-preserving, because it sounds like it's dilating everything by a factor of 2. But look at the definition. If you take, say, a subset A-- for example, if that is my A-- what's the inverse image of A? It's this set. So the measure is preserved under this pullback. If you push forward, then you might dilate by a factor of 2. But when you pull back, the measure gets preserved. So these measure-preserving transformations are going to play the role of permutations of vertices. And it turns out that these things are actually quite subtle technically. I am going to, as much as I can, ignore some of the measure-theoretic technicalities. But they are quite subtle. So now let me give you a definition for the distance between two graphons. Starting with a symmetric measurable function W, I write W superscript phi to denote the function obtained by composing with phi in both coordinates-- so W phi of x comma y is W of phi of x comma phi of y. I think of this as relabeling the vertices of a graph. And now I define this distance. This is going to be called the cut distance between two symmetric measurable functions, U and W: it is the infimum, over all measure-preserving bijections phi, of the cut norm of the difference between U and W superscript phi. So this is the definition of the distance between two graphons-- optimized over the best possible measure-preserving bijection. And note that I am taking an infimum here. I haven't told you yet whether you can achieve it with a single phi. It turns out that's a subtle issue, and in general the infimum is not attained. So this inf is really an inf. It's not always attained. And actually, you can create an example of why the inf is not always attained from the discussion over here. For example, if U is the graphon x times y, and W is U superscript phi, where phi is the map sending x to 2x mod 1, then in your mind, you should think of these two as really the same graphon. You are applying a measure-preserving transformation-- it's like doing a permutation. But because this phi is not bijective, you cannot just plug a phi into the definition to make these two things literally equal. So there are some subtleties. This is really an example just to highlight that there are some subtleties here, which I am going to try to ignore as much as possible. But I will always give you correct definitions. Any questions? Yeah? AUDIENCE: So can we expect the cut distance between these two to be 0 [INAUDIBLE]? YUFEI ZHAO: So the question is, do we expect the cut distance between these two to be 0? And the answer is yes. We do expect them to be at distance 0. And they are 0-- they are equal to 0. And let me just tell you one more statement that is new. And this is one of those statements that has a lot of measure-theoretic technicalities.
For all graphons U and W, it turns out that there exist measure-preserving maps-- so not necessarily bijections, but measure-preserving maps from 0, 1 interval to itself, such that the distance between U and W, the cut distance, is obtained by the cut norm difference between-- the difference between U phi and W psi. So don't worry about it. So far, we have defined this notion of a cut distance between two graphons. But now I'll give you two graphs. So what do you do for two graphs? Or I can-- yeah? AUDIENCE: You can take the graphon associated it. YUFEI ZHAO: Great. So take the graphon associated with these graphs and consider their cut distance. So for graphs G and G prime, and potentially even a different number of vertices, I can define the distance, the cut distance between these two graphs to be the distance between the associated graphons. And similarly, if I have a graph and a graphon, I can also compare their distance. So what does this actually mean? So if I give you two graphs, even with the same number of vertices, it's not quite the same thing as a permutation of vertices. It's a bit more subtle. Now why is it more subtle than just permuting the vertices? So here we are using measure-preserving transformations, which doesn't see your atomic vertices. So we might split up your vertices. So you might take a vertex and chop it in half and send one half somewhere and another half somewhere else because these guys, they don't care about your vertices anymore. So it's not quite the same as permuting vertices. But it's some kind of-- so you allow some kind of splitting and rearrangement and overlays. So you can write out this distance in this format, find out the best way to split and overlay and to rearrange that way. But it's much cleaner to define it in terms of graphons. Yes? AUDIENCE: Is this why we take bijections up there [INAUDIBLE]? YUFEI ZHAO: The question is, is that why we take bijections up there? And no, so up there, if I wrote instead measure-preserving maps, it's still a correct definition and it's the same definition. And the fact that these two are equivalent goes to some measure theory, which I will not-- do not want to indulge yo Great. But the moral of the story is you take two graphons and rearrange the vertices in some way, in the best way, overlay them on top of each other and take the difference and look at the cut norm. And so that's the distance. So I want to finish by stating the main theorems that form graph limit theory. And these address the questions I mentioned right before the break. So do there exist limits? And do these two different notions of one having to do with distance and another having to do with homomorphism densities, how do they relate to each other? Are they consistent? So the first theorem, Theorem 1, has to do with the equivalence of the convergence, namely, that if you have a sequence of graphs or graphons, the sequence is convergent in the sense, up there, if and only if they are convergent in the sense of-- in this metric space. So remember what convergence means in the metric space is that of a Cauchy sequence-- so if and only if it is a Cauchy sequence with respect to this cut distance. So it's just-- maybe for many of you, it's been a while since you took 18-100. So let remind you a Cauchy sequence, in this case, it means that, if I look at the distance between two graphs, if I look far enough out, then I can contain the rest of the sequence in an arbitrarily small ball. 
So, formally, the sup over all positive m of the distance between G sub n and G sub n plus m goes to 0 as n goes to infinity. But because we don't know yet whether the limit exists, I can't talk about the terms getting closer and closer to a limit. But they mutually get closer to each other. So Theorem 1 tells us that these two notions, one having to do with homomorphism densities, is consistent with, and in fact equivalent to, the appropriate notion in the metric space. So let's use a symbol. We say that G sub n converges to W -- or, in the case of a sequence of graphons, we can do that as well. Here we say that G sub n converges to W if, whenever you look at the F density in G sub n, this sequence converges to the corresponding F density in W, for every F, and similarly if you have a sequence of graphons instead of graphs. So the earlier definition was just whether a sequence is convergent; here it converges to this graphon W. And the question is, if you give me a convergent sequence, is there a limit? Does it converge to some limit? And the answer is yes. That's the second theorem, which tells us the existence of the limit object. The statement is that every convergent sequence of graphs or graphons has a limit graphon. So now I want you to imagine this space of graphons. We'll have this space containing all the graphons, and let me denote this space by this curly W0. The 0 is -- don't worry about it; it's more just convention. But let me also put a tilde on top, for the following reason: let this be the space of graphons where we identify graphons with cut distance 0. Then this space, combined with this metric, is a metric space. It is the space of graphons. And the third theorem is the compactness of the space of graphons, namely, that this space is compact. Because we're in a metric space, compactness in the usual sense of every open cover having a finite subcover is equivalent to the slightly more intuitive notion of sequential compactness -- every sequence has a convergent subsequence, converging to some limit in the space. So how should you think of Theorem 3? It's about compactness, a topological notion. But intuitively, you should think of compactness as saying -- the English meaning of the word compact is small -- you should think of this space as being quite small, which is rather counterintuitive, because we're looking at the space of graphons, certainly at least as large as the space of graphs, but really all symmetric measurable functions from the square to the interval. This seems like a pretty large space. But this theorem here says that, in fact, that space is quite small. And where have we seen that philosophy before? In Szemeredi's graph regularity lemma, the underlying philosophy is that, even though the space of possibilities for a graph is quite large, once you apply Szemeredi's regularity lemma, and once you are OK with some epsilon approximations, there is only a small, bounded description of a graph, and you can work with that description. And it's no coincidence that these two philosophies are consistent with each other, because we will use Szemeredi's regularity lemma to prove this compactness. In fact, we will use a slightly weaker version of Szemeredi's regularity lemma to prove compactness. And then you will see that, from the compactness, one can use properties of the compactness to boost to a stronger version of regularity.
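Since the F densities t(F, W) carry the whole notion of convergence here, a small Monte Carlo sketch of how one might estimate them may be useful (names and sample sizes below are made up; this is just the defining integral, estimated by sampling):

```python
import random

def t_density(F_edges, k, W, samples=100_000):
    """Monte Carlo estimate of t(F, W) = E prod_{ij in E(F)} W(x_i, x_j),
    with x_1, ..., x_k i.i.d. uniform on [0, 1].
    F_edges: edge list of F on vertices 0..k-1; W: symmetric function on [0,1]^2."""
    total = 0.0
    for _ in range(samples):
        x = [random.random() for _ in range(k)]
        p = 1.0
        for i, j in F_edges:
            p *= W(x[i], x[j])
        total += p
    return total / samples

# Example: triangle density in the constant graphon 1/2 is (1/2)^3 = 0.125.
triangle = [(0, 1), (1, 2), (0, 2)]
print(t_density(triangle, 3, lambda x, y: 0.5))
```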
But the underlying philosophy here is that this compactness is in some sense a quantity. It's a qualitative reformulation, analytic reformulation of Szemeredi's Graph Regularity Lemma. OK, so-- So this topic, this graph limits, which we'll explore for the next few lecturers, including giving a proof of all three of these main theorems, nicely encapsulates the past couple of topics we have done. So on one hand, Szemeredi's Regularity Lemma, or some version of that, will be used in proving the existence of the limit and also the compactness. And also it's philosophically and in some sense related and very much equivalent in some sense and related to these notions. It is also related to quasirandomness-- in particular, quasirandom graphs that we did a few lectures ago, where in quasirandom graphs, we are really looking at the constant graphon in this language. And now we expand our horizons. And instead of just looking at the constant graphon, we can now consider arbitrary graphons. They are also this model for a very large graph. Any questions? Yeah? AUDIENCE: Can we prove the theorem analytically and then deduce the Regularity Lemma with it? YUFEI ZHAO: The question is, can we prove Theorem 3 analytically and deduce the Regularity Lemma? So you will see once you see the proof. It depends on what you mean. But roughly, the answer is yes. But there's a very important caveat. It's that, because we are using compactness, any argument involving compactness gives no quantitative bounds. So you will have a proof of the Szemeredi Regularity Lemma that tells you there is a bound for each epsilon. But it doesn't tell you what the bound is. Yeah? AUDIENCE: Doesn't Theorem 3 imply Theorem 1 because of the [INAUDIBLE]? YUFEI ZHAO: Does Code Theorem 3 imply Theorem 1? And the answer is no because in Theorem 1, the notion of convergence is about homomorphism densities. So Theorem 1 is about these two different notions of convergence and that they are equivalent to each other. Theorem 3 is just about the metric. It's about the cut metric. And so Theorem 1 is-- the point of Theorem 1 is that you have these two-- you have these two notions of convergence, one having to do with subgraph densities and the other having to do with a cut distance. And in fact, they are equivalent notions. So all great questions-- any others? AUDIENCE: And for that F, is F a graphon because the [INAUDIBLE]? Is F a graphon or a graph? YUFEI ZHAO: The question is, is F a graph or a graphon? F is always a graph. So in t F, W, I do not define this quantity for graphon F. So this quantity here, I have only allowed the second argument to be a graphon. The first argument is not allowed to be a graphon. It doesn't make sense. Yeah? AUDIENCE: Doesn't Theorem 1 and 2 together imply Theorem 3? YUFEI ZHAO: The question is, doesn't Theorem 1 and Theorem 2 together imply Theorem 3? So first of all, Theorem 1 is really-- it's not about compactness. So it's really about the equivalence of two different notions of convergence. It's like you have two different metrics. I am showing that these two metrics are equivalent to each other. Theorem 2 and Theorem 3 are quite intimately related. So Theorem 2 is about-- Theorem 2, so they are quite related. But they're not quite the same. So let me just give you the real line analogy, going back to what we said in the beginning. So Theorem 2 is kind of like saying that the real numbers is complete. Every convergent sequence has a limit, whereas Theorem 3 is more than that. It's also bounded in some sense. 
But here, there is no notion of bounded. It's compact. But the main-- you should think of these two are very much related to each other. But here it's-- but they are not equivalent. Anything else? Great. So that's all for today.
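Pulling together the definitions from this lecture, here is a small end-to-end sketch of comparing two graphs in the cut distance (all function names are made up, and cut_norm_step is the helper from the earlier sketch). Measure-preserving maps may also split blocks, so minimizing over block permutations only gives an upper bound in general, and the brute force is for tiny graphs only.

```python
import itertools
from math import lcm
import numpy as np

def graph_to_graphon(A):
    """Adjacency matrix (symmetric 0/1, zero diagonal) viewed as a step graphon."""
    return np.asarray(A, dtype=float)

def blow_up(W, m):
    """Split every block into an m x m grid of equal blocks; same graphon."""
    return np.kron(W, np.ones((m, m)))

def cut_distance_graphs(A, B):
    """Upper bound on the cut distance between two graphs: bring both step
    graphons to a common number of equal blocks, then minimize the cut norm
    of the difference over block permutations (uses cut_norm_step above)."""
    n, m = len(A), len(B)
    N = lcm(n, m)
    U = blow_up(graph_to_graphon(A), N // n)
    W = blow_up(graph_to_graphon(B), N // m)
    best = float("inf")
    for perm in itertools.permutations(range(N)):
        P = list(perm)
        best = min(best, cut_norm_step(U[np.ix_(P, P)] - W))
    return best

# A triangle versus a single edge plus an isolated vertex, both on 3 vertices.
K3 = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
P2 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
print(cut_distance_graphs(K3, P2))   # about 0.44
```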
MIT 18.217 Graph Theory and Additive Combinatorics, Fall 2019
Lecture 16: Graph Limits III (Compactness and Applications)
YUFEI ZHAO: So we've been discussing graph limits for a couple of lectures now. In the first lecture on graph limits, two lectures ago, I stated a number of main theorems. And today, we will prove these theorems using some of the tools that we developed last time, namely the regularity lemma. We also proved this martingale convergence theorem, which will also come into play. So let me recall the three main theorems that we stated at the end of two lectures ago. One of them was the equivalence of convergence. On one hand, we defined a notion of convergence where we say that Wn approaches W, by definition, if the F densities converge. We can also speak of convergence even without a limit in mind, where we say a sequence converges if all of these F densities converge. So the first main theorem was that the two notions of convergence are equivalent: one notion being convergence in terms of F densities, and the second notion being convergence in the sense of the cut norm, the cut distance. There was a second theorem that tells us that limits always exist: if you have a convergent sequence, then you can represent a limit by a graphon. And the third statement was about compactness of the space of graphons. We're actually going to prove these theorems in reverse order. We're going to start with the compactness and work backwards. This is not the order in which these theorems were originally proved, but it will be helpful for us to first consider the compactness statement. So remember what compactness says. I start with this space W tilde, which is the space of graphons, where I identify graphons that have cut distance 0. So if they have cut distance 0, then I refer to them as the same point. This is now a metric space, and the theorem is that this space is compact. I think it's a really nice theorem. It's a beautiful theorem that encapsulates a lot of what we've been talking about so far -- regularity and so on -- in a qualitatively succinct way, just in the statement that this space of graphons is compact. You may not have much intuition about what the space looks like at the moment, but we'll see the proof, and hopefully that will give you some more intuition. I first learned about this theorem when Laszlo Lovasz, who was one of the pioneers of the subject, came to MIT to give a talk when I was a graduate student. And he said that analysts thought that they pretty much knew all the naturally occurring compact spaces out there. There are lots of spaces that occur in analysis and topology that are compact. The first one you learn in undergraduate analysis is probably that an interval is compact. But there are also many other spaces. This one here doesn't seem to be any of those classical examples, so it's, in some sense, a new compact space. So let's see how the proof goes. Now, because we are working in a metric space, due to the equivalence between compactness in the sense of finite open covers and sequential compactness, it suffices to show sequential compactness: every sequence of graphons has a subsequence converging, with respect to this cut metric, to some limit point. So that's what we'll do. I give you an arbitrary sequence of graphons. I want to construct, by taking subsequences, a convergent subsequence, and I will tell you what that limit is. So here is what we're going to do, given this sequence.
As I hinted before, it has to do with the regularity lemma. So we're going to apply the regularity lemma in the form above, which we proved last time. Apply the weak regularity lemma, which will tell us that for each Wn there exists a partition -- in fact, a sequence of partitions, each one refining the next. What's going to happen is, I'm going to start with Wn and, starting with the trivial partition, apply that lemma and obtain a partition P sub n, 1. Then, starting with that as my P0, I'm going to apply the regularity lemma again and obtain a refinement. So I will have this sequence of partitions, each one refining the next. All of these are going to be partitions of the 0, 1 interval. And as I mentioned last time, everything is going to be measurable; I'm not even going to mention measurability. They satisfy the following conditions. The first one is what I mentioned earlier: you have a sequence of refinements, so each P sub n, k plus 1 refines the previous one, for all n and k. The second condition, as given by the regularity lemma, is that you get to control the number of parts. I will say in the third part what the error of approximation is, but you get to control the number of parts. In particular, I can make sure that this number here, the number of parts in the k'th partition, depends only on k. Now, you might complain somewhat, because the regularity lemma only gives you an upper bound on the number of parts. But that's OK -- I can allow empty parts. So I make sure that the k'th partition has exactly n sub k parts. And the third condition has to do with the error of approximation. Suppose we write W sub n, k for the graphon obtained by applying the stepping operator -- the averaging operator corresponding to the k'th partition. I take that partition, do a stepping, averaging operation on the n'th graphon, and I get W sub n, k. The third condition is that the k'th partition gives a good approximation in the cut norm, up to error 1 over k. So 1 over k is just some arbitrary sequence going to 0 as k goes to infinity. So I obtained a sequence of partitions by applying the regularity lemma to each graphon in the sequence. Now, these graphons each have their own vertex set, and so far they're not related to each other. But to make the visualization easier, and also in order to do the next step in the proof, I am going to apply some measure-preserving bijections. Think of this as permuting the vertex labels. By replacing each Wn by some Wn superscript phi, where phi is a measure-preserving bijection, we can assume that all these partitions are partitions into intervals. Initially, you might have a partition into arbitrary measurable sets. What I can do is push the first set over to the left, and so on -- do a measure-preserving bijection in such a way that all the partitions are visually chopping the interval up into subintervals. Yeah? AUDIENCE: So at some point, do we need just one measure-preserving bijection that works for all k? YUFEI ZHAO: OK, so the question is, it may be the case that, for a given k, I can do this rearrangement, but it's not clear to you at the moment why you can do this uniformly for all k. One way to get around this is, for now, just think of each given k separately, and then you'll see at the end that that's already enough. OK, any more questions? So now assume all of these P sub n, k's are partitions into intervals. So in fact, what you said may be a better way to go.
But to make our life a little bit easier, let's just assume for now that you can do this. OK, and what's going to happen next is some kind of a diagonalization argument. We're going to be picking subsequences. So I'm going to be picking subsequences so that they are going to have very nice convergence properties. And so I'm going to repeatedly throw out a lot of the sequence. So this is a diagonalization argument. And basically what happens is that, by passing two subsequences-- and we're going to do this repeatedly, many times-- we can assume, first, that the end points of P sub n1, they converge as n goes to infinity. So each P sub n1 is some partition of interval into some fixed number of parts. So by passing to a subsequence, I make sure that the division points all converge. And now, by passing one more time, so by passing to subsequence one more time, let's assume that also, W sub n1 converges to some function, some graphon u1, point-wise. So initially, I have these graphons. Each one of them is an m by n block. They have various division points. By passing to a subsequence, I assume that the points of division, they converge. And now by passing to an additional subsequence, I can make sure the individual values, they converge. So as a result, W sub n, 1 converges to W1-- converges to some graphon, u1, point-wise, almost everywhere. And we repeat for W sub nk for each k. So do this sequentially. So we just did it for k equals to 1. Now do it for 2, 3, 4, and so on. So this is a diagonalization argument. We do this countably many times. At the end, what do we get? We pass down to the following subsequence. And just to make my life a bit more convenient, instead of labeling the indices of the subsequence, I'm going to relabel the sequence so that it's still labeled by 1, 2, 3, 4, and so on. So we now pass to a sequence W1, W2, W3, and so on, such that if you look at the first partition, the first weak regularity partition, they produce W1,1, W2,1, W3,1, and so on. And these guys, they converge to u1, point-wise. The second level, W2,1-- sorry, W1,2, W2,2, W3,2, they converge to u2, point-wise, and so on. OK, so far so good? Question? AUDIENCE: Sorry, earlier, why did that converge to u1 point-wise? YUFEI ZHAO: OK, so the question is, why is this true? Why is Wn,1 converge to u1 point-wise? Initially, it might not. But what I'm saying is, you can pass to a subsequence. AUDIENCE: Yes. YUFEI ZHAO: You can pass to subsequence, because there are only n1 parts. So it's an n1 by m1 matrix of real numbers. And so you only have finite bounded many of them. So you can pick a subsequence so that they converge. Yeah? AUDIENCE: So how do you make sure that your subsequence is not empty at the end-- like, could you fix the first k? YUFEI ZHAO: OK, so you're asking, if we do this slightly not so carefully, we might end up with an empty sequence. So this is why I say, you have to do a diagonalization argument. Each step, you keep the first term, the sequence, so that you always maintain some sequence. You have to be slightly careful with diagonalization. Any more questions? So by passing to a subsequence, we obtain this very nice sequence, this nice subsequence, such that each row corresponding to each level of regularization converges point-wise to some u. So what do this u's look like? So they are step graphons. So let's explore the structure of u a bit more. OK, so since we have that each-- OK, so we have the property that each partition refines the previous partition. 
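A minimal sketch of the stepping (averaging) operator used in this construction, together with a numerical check, on made-up values, that averaging a refinement over the coarser partition recovers the coarser step function (the specific matrices below are illustrative, not the blackboard's):

```python
import numpy as np

def step(W, parts):
    """Stepping (averaging) operator: W is an n x n matrix on a uniform grid,
    parts is a list of disjoint nonempty index lists covering range(n).
    Each block S x T of W is replaced by its average value."""
    W = np.asarray(W, dtype=float)
    out = np.empty_like(W)
    for S in parts:
        for T in parts:
            out[np.ix_(S, T)] = W[np.ix_(S, T)].mean()
    return out

# A 4-step graphon u3 refining a 2-step graphon u2 so that block averages
# are preserved; the overall average of u2 is 0.5, the constant graphon u1.
u2 = np.array([[0.6, 0.4],
               [0.4, 0.6]])
u3 = np.array([[0.7, 0.5, 0.5, 0.3],
               [0.5, 0.7, 0.3, 0.5],
               [0.5, 0.3, 0.7, 0.5],
               [0.3, 0.5, 0.5, 0.7]])

# Stepping the finer u3 by the coarser partition recovers u2 (drawn on the
# finer grid), and stepping u2 by the trivial partition recovers the constant 1/2.
print(np.allclose(step(u3, [[0, 1], [2, 3]]), np.kron(u2, np.ones((2, 2)))))  # True
print(step(u2, [[0, 1]]))                                                     # all 0.5
```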
And as a result, if you look at the k plus 1'th stepping, and I step it by the previous partition in the sequence, I should get back, I should go back one in the sequence. So this was this graphon obtained by averaging over the k'th partition. And this is the graphon obtained by averaging over the k'th plus 1st partition. So if I go back one more, I should go back in the sequence. And since the u's are the point-wise limit of these W's, the same relationships should also hold for the u's, namely that u sub k should equal to u sub k plus 1 if I step it with Pk, where Pk is the-- so if you look at, all these endpoints converge. And these partitions, they converge to P1. So if you look at the partitions that correspond to P1,1, P2,1, and so on, I want these partitions to converge to P1. I want these partitions to converge to P2. So all these partitions, they are partitions into intervals. So I'm just saying, if you look at where the intervals, where the divisions of intervals go, they converge. And then I'm calling the limit partition P sub k. And here we're using that P sub k plus 1 refines P sub k, because the same is true for each end. So in the limit, the same must be true as well. So you have this column of u's. So let me draw you a picture of what these u's could look like. So here is an illustration that may be helpful. So what could these u's look like? Each one of them is represented by values on the unit square. And I write this in matrix notation so that inversion is in the top left corner. Well, maybe P1 is just the trivial partition, in which case u1 is going to be a constant graphon. Let's say it has value 0.5. u2 came from u1 by some partitioning. And suppose just for the sake of illustration, there was only a partitioning into two parts. And OK, so it doesn't have to be at the origin. It doesn't have to be at midpoint. But just for illustration, suppose the division were at the midpoint. Because u1 needs to have-- so this 0.5 value should be the average value in all of these four squares. So for instance, the points may be like 0.6, 0.6, 0.4, 0.4, so for example. And in u3, the partition, the P3 partition-- so here are the partition is P1. The partition is P2. There's two parts. And suppose P3 three now has four parts. And again, for illustration's sake, suppose it is equally dividing the interval into four intervals. It could be that now each of these parts is split up into four different values in a way that so you can obtain the original numbers by averaging. So that's one possible example. Likewise, you can have something like that. Sorry, 4, 7-- so and I should maintain symmetry in this matrix. And the last one, I'm just going to be lazy and say that it's still 0.4 throughout. OK, so this is what the sequence of u's are going to look like. Each one of them splits up a box in the previous u in such way that the local averages, the step averages are preserved. Any questions so far? All right, so now we get to Martingales. So I claim that this is basically a Martingale. So and suppose you let x, y be a uniform point in the unit square. And consider this sequence. So this is now a random sequence, because x, y are random. I evaluate these u's on this uniform random point, x, y. So this is a random sequence. And the main observation is that this is a Martingale. So remember the definition of a Martingale from last time. Martingale is one where, if you look at the value of u sub k conditioned on the previous values, the expectation is just the previous term. 
And I claim this is true for the sequence, because of the way we constructed it: it splits up each box in an average-preserving way. A different way to see this, for those of you who know the definition of a random variable in the sense of probability theory, is that you should view this 0, 1 squared as the probability space, in which case each u sub k is a random variable, and this partitioning gives you a filtration of the space -- a sequence of sigma algebras dividing up the space into finer and finer pieces. So this is really what a martingale is. So we have a martingale. It's bounded, because the values lie in 0, 1. So by the martingale convergence theorem, which we proved last time, we find that this sequence must converge to some limit. So this sequence, this martingale, converges, which means -- if you think about the interpretation up there -- that there exists a u, which is a graphon, such that u sub k converges to u point-wise almost everywhere as k goes to infinity. That's the limit. And we're going to show that it is indeed the limit. But you see, this is a construction of the limit, where we took regularity, got all these nice pieces, found convergent subsequences, and then applied the martingale convergence theorem to produce for us this candidate for the limit, this u. So now let us show that it is indeed the limit that we're looking for, along the subsequence. Again, I've tossed out all the terms which we removed in passing to subsequences. So for the remaining subsequence, I want to show that the Wn's indeed converge to u. And this is now a fairly straightforward three-epsilons argument, the standard analysis type of argument. So let's carry it through. For every epsilon bigger than 0, there exists a sufficiently large k, and we make sure k is large enough, such that u differs from u sub k in l1 norm by at most epsilon over 3, because the u sub k's converge to u point-wise almost everywhere. So we find this k, and let's fix it. Then there exists an n0 such that this u sub k is within epsilon over 3 of W sub n, k, again in l1 norm, for all n at least n0, because of what happened up there. So now let's compute the cut norm of the difference between the term in the sequence, W sub n, and u. By the triangle inequality, we have the following. The cut norm is upper bounded by the l1 norm -- look at the definitions. So I'm going to replace the first couple of these cut norms by l1 norms and leave the last one intact. The first term, I claim, is at most epsilon over 3, because of what's up there. The second term is also at most epsilon over 3, because of what's over here. And the third term is also at most epsilon over 3, because, from the regularity approximation, I know that it is at most 1 over k, and I chose k large enough so that that is also at most epsilon over 3. Putting everything together, we find that these two differ by at most epsilon if n is large enough. But now, since epsilon can be arbitrarily small, we find that you indeed have convergence, as claimed. And this finishes the proof of compactness. So there are a few components. One is applying regularity, passing to subsequences, and obtaining this limit from the regularity approximations, these u's. And then we observe that these u's form a martingale.
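Written out, the chain of inequalities just described reads, in symbols (a reconstruction of the board, so the exact labels are assumptions):

\[
\|W_n - u\|_\square \;\le\; \|u - u_k\|_1 \;+\; \|u_k - W_{n,k}\|_1 \;+\; \|W_{n,k} - W_n\|_\square \;\le\; \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{1}{k} \;\le\; \varepsilon,
\]

valid for all n at least n0 and k at least 3 over epsilon, using that the cut norm is bounded above by the l1 norm for the first two terms.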
So we can apply the Martingale convergence theorem to get us a candidate for the limit. And then the rest is fairly straightforward, because all the steps are good approximations. You put them together, you prove the limit. Any questions? All right, so you may ask, well, now we have compactness. What is compactness good for? So it may seem like a somewhat abstract concept. So in the second half of today's lecture, I want to show you how to use this compactness claim combined with the first definition of compactness that you've seen, namely every open cover contains a finite sub cover, and to use that to prove many consequences about the space of graphons. And some things that we had to work a bit hard at, but they turn out to fall from the compactness statement. So let's take a quick break. In the first part of this lecture, we proved that a space of graphons is compact. So now let me show you what we can reap as consequences from the compactness result. So I want to show you how to apply compactness and prove some consequences. As I mentioned earlier, the compactness result is related to regularity. And in fact, many of the results I'm going to state, you can prove maybe with some more work using the regularity lemma. But I also want to show you how to deduce them directly from compactness. In fact, we'll deduce the regularity lemma from compactness. So these two ideas, compactness and regularity, they go hand-in-hand. So first, more so as a warm up, but as also an interesting result, statement, an interesting statement on its own, so let me prove the following. So here is a statement that we can deduce from compactness. So for every epsilon, there exists some number N which depend only on epsilon, such that for every W graphon, there exists a graph G with N vertices, such that the cut distance between G and W is, at most, epsilon. So think about what this says. So for every epsilon, there is some bound N such that every graphon-- so a graphon is some real-value function, so taking values between 0 and 1. You can approximate it in the distance that we care about by a graph with a bounded number of vertices. This is kind of like regularity lemma. If you are allowed edge weights on this graph G, then it immediately follows from the weak regularity lemma that we already proved. And from that weak regularity lemma which allows you to get some G with edge weights, you can think about how you might turn an edge-weighted graph into an unweighted graph. So that can also be done. But I want to show you a completely different way of proving this result that follows from compactness. And so I say it's a warm up, because it's really a warm up for the next thing we're going to do. This is an easier example showing you how to use compactness. So the idea is, I have this compact space. I'm going to cover this space by open sets, by open balls. So the open balls are going to be this B sub epsilon G. So for each graph, G, I'm going to consider the set of graphons that are within epsilon of G. So this is you have some topological space or some metric space. I have a point G. And I look at its open ball. This is the ball. So I claim that these open balls, they form an open cover, of the space. Where is that? So I want to show every point W is covered. So this follows from the claim that every W is the limit of some sequence of graphs. So we didn't technically actually prove this claim. I said that if you take W random graphs, you get this. So we didn't technically prove that. 
But OK, so it turns out to be true. There are easier ways to establish it as well by taking l1 approximations. But the point is that, if you use this claim here, you do not get a bound on the number of vertices. It could be that for very bizarre-looking W's, you might require much more number of vertices. And a priori, you do not know that it is bounded as a function of epsilon. But now we have this open cover. So by compactness of this space of graphons, we can find an open cover using a finite subset of these graphs, so G1 to Gk, so a finite subset to do an open cover. And now we let N to be the least-common multiple of all of these vertex set sizes. So all of these graphs, they are within-- so for each of these graphs, I can replace it by a graph on exactly N vertices. There exists a graph Gi prime of exactly N vertices, such that they represent the same point in the space of graphons. So why is this? Think about the representation of a graphon using from a graph. If I start with G and I blow up each vertex into some k vertices, then it turns out-- I mean, you should think about why this is true. But it's really not hard to see if you draw the picture. So remember, this black and white picture, that actually, they're the same point. They are represented by the same graphon. OK, and that's it. So we found these G's. All of them have exactly N vertices such that their epsilon open balls form an open cover of the space of graphons. So every graphon can be approximated by one of these graphs. So you get that from compactness. The statement says, for every epsilon, their exists an N. So N is a function of epsilon.l What's the function? This proof doesn't tell you anything about that. So this proof gives no information about the dependence of N on epsilon. So in some sense, it's even worse than some of the things we've seen in the earlier discussion on Szemerédi's regularity lemma where there were tower or Wowzer-types. Here there is no information, because it comes from a compactness statement. So you just know there exists a finite open cover, no bounds. OK, any questions about this warm-up application? So it feels a bit magical. So you have compactness. And then you have all of these consequences. So now let me show you how you can deduce the regularity lemma itself from compactness. In fact, in the proof of the existence, in the proof of compactness, we only used weak regularity. And now let me show you how you can use the weak regularity consequence of namely compactness to bootstrap itself to strong regularity. So we saw a version of strong regularity in the earlier chapter when we discussed Szemerédi's regularity lemma. So let me state it in a somewhat different-looking form, but that turns out to be morally equivalent. Suppose I have a vector of epsilons. So all of these are positive real numbers. The claim is that there exists an M which only depends on this vector such that for every graphon W, one can-- so every graphon W can be written as the following, decomposing the following way. We write W as a sum of a structured part, a pseudo-random part, and a small part, where the structured part is a step function with k parts, but k is, at most, M, this claimed bound M. The pseudo-random part has a very small cut norm, so its cut norm, very small, even compared to the number of parts. And finally, the small part has l1 norm bounded by epsilon 1. So that's the claim. You can always-- there exists some bound M in terms of these error parameters so that you have this decomposition. 
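In symbols, the decomposition being claimed reads roughly as follows (a reconstruction; the names of the three pieces are chosen here for readability):

\[
W \;=\; W_{\mathrm{str}} + W_{\mathrm{psd}} + W_{\mathrm{sml}},
\qquad
\|W_{\mathrm{psd}}\|_\square \le \varepsilon_k,
\qquad
\|W_{\mathrm{sml}}\|_1 \le \varepsilon_1,
\]

where \(W_{\mathrm{str}}\) is a step function with k steps and k is at most some bound \(M(\varepsilon_1, \varepsilon_2, \dots)\) depending only on the error parameters.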
So we saw some version of this earlier when we discussed the spectral proof of regularity lemma. And I don't want to go into details of how these two things are related, but just to comment that depending on your choice of the epsilon parameters, it relates to some of the different versions of regularity lemma that we've seen before. So for example, if epsilon k is roughly epsilon, some fixed epsilon over k squared, then this is basically the same as Szemerédi's regularity lemma, whereas if all the k's are the same epsilon, then this is roughly the same as the weak regularity lemma. All right, so how to prove this claim? We're going to use compactness again. So first, there always exists an l1 approximation so that every W has some step function u associated to it such that the l1 distance between W and u is, at most, epsilon 1. So again, this is one of these more measured theoretic technicalities I don't want to get into, but so it's not hard to prove. So roughly speaking, you have some function. You can approximate it using steps. So similar to what we did just now, if you just do that, the number of steps might not be a function of epsilon, so you might need much more steps just doing that if your W looks more pathological. So now what we're going to do is consider the following function, k of W. And I define it to be the minimum k such that there exists a k step graphon u such that u you minus W is, at most, epsilon 1. So among all the step function approximations, pick one that has the minimum number of steps and call the number of steps k of W. So now, as before, we're going to come up with an open cover of the space of graphons. So the open cover is going to be consisting of the cut norm balls of-- actually, what notation did I use over there? So this is a ball centered around W with radius epsilon sub kW. This is an open cover of the space of graphons as W ranges over all graphons. So I'm literally looking at every point in the space and putting an open ball around it. So obviously, this is an open cover. And because of compactness, there exists a finite sub cover. So there exists a finite set, we write curly s, of graphons such that these balls, as I range over W and curly s, they cover the space of graphons. Now the goal is, given the W, I want to approximate it in some way. So having a finite set of things to work with allows us to do some kind of approximations. So thus, for every W graphon, there exists a W prime in s whose ball in that collection covers the point W, such that W is contained in this ball. And OK, so given this W prime, because of this definition over here, so there exists a u which is a k step graphon with k, at most, the maximum over all such possible number of steps, such that W and W prime, they are close in cut norm because you have this open cover. And furthermore, W prime is close to a graphon with a small number of steps. So suppose we now write W as u plus W minus W prime and then plus W prime minus u. We find that this is the decomposition that we are looking for because the u-- so this is the structural component-- has k steps, where k is less than this quantity here. And that quantity there is just some function of epsilons. So it's, at most, some function of the epsilons. It doesn't depend on the specific choice of W. The second term, this is this pseudo-random piece, because it's cut norm is small, so what we have here. Yeah, so this entire thing should be subscript. 
And finally, the third term here is the small term, because it's l1 norm is small. So putting them together, we get the regularity lemma. So again, the proof gives you no information whatsoever about the bound M as a function of the input parameters, the epsilons. So it turns out you can use a different method to get the bounds. Namely, we actually more or less did this proof when we discussed regularity lemma, the strong regularity lemma. So we did a different proof where we iterated an energy increment argument. And that gave you some concrete bounds, some bounds which iterates on these epsilons. But here is a different proof. It gives you less information, but it elegantly uses this compactness feature of the space of graphons. Any questions? OK, so we proved compactness. So now let's go on to the other two claims, namely the existence of the limit and that equivalences of convergence. The existence of the limit more or less is a consequence of compactness. So you have this sequence of graphons, W1, W2, and so on. And the claim is that, if this sequence of F densities converges for each F, then there exists some limit W such that all of these sequences of F densities converge to the limit density. So that was the claim, so nothing about cut norms in at least as far as the statement goes. Well OK, from compactness, we know that you can produce always a subsequential limit. So by compactness or sequential compactness, there exists some limit point which we call W. And this W has the property that, for some subsequence, the cut distance from the subsequence converges to W. So for some subsequence n0 as ni going to infinity. But now, by the counting lemma, the sequence of F densities-- so the counting Lemma tells you, if you have cut distance going to 0, then the F density should also go to 0. So indeed, that's what we have here. So this is so far just for the subsequence. But we assumed already that the entire sequence converges in respect to every F densities. So it must be the same limit. And that finishes the proof of convergence, so proof of the existence of the limit. So we obtain this limit from compactness. Next, let's prove the equivalence of convergence. And this one is somewhat trickier. So what happens here is that we would like to show that these two notions of convergence, one having to do with F densities and another having to do with cut distance, that these two notions are equivalent to each other. So the goal here is to show that this F density convergence is equivalent to the statement that W sub n is Cauchy with respect to the cut distance. All right, claim one of the directions is easy. Which direction is that? So which direction is the easy direction? So which way, left, going left, going right? OK, so going left? So I claim that this is easy, because it follows from counting lemma. Counting lemma, remember the spirit of the counting lemma, at least qualitatively, is that if you have two graphons that are close in cut distance, then they are close in F densities. So if you have Cauchy with respect to cut distance, then they are Cauchy, and hence, convergent in F densities. And it's the other direction that will require some work. And this one is actually genuinely tricky. So and it's almost kind of a miraculous statement, that somehow if you only knew the F densities-- so somebody gives you this very large sequence of graphs and only tells you that the triangle densities, the C4 densities, all of these graph densities, they converge. 
Somehow from these small statistics, you conclude that the graphs globally look very similar to each other. That's actually, if you think about it, an amazing statement. OK, so let's see the proof. The proof method here is somewhat representative of these graph-limit-type arguments, so it's worth paying attention to how this one goes. We're going to set it up by contradiction. If the sequence is not Cauchy, then, by compactness, there exist at least two distinct limit points. Call them u and W. Because W is a limit point, this sequence, at least along a subsequence converging to W in cut distance, converges in F densities to the F densities of W. So initially this is true along a subsequence. But the left-hand side is convergent, so this is true along the whole sequence. But u is also a limit point, so the same is true for u. And therefore, the F density in W must equal the F density in u for all F. So we would be done if we can prove that the collection of all these F densities determines the graphon. And that's indeed the case. So this is the next claim, what I will call a moments lemma: if u and W are graphons such that the F densities agree for all F, then the cut distance between u and W is equal to 0. Somehow the local statistics tell you that, globally, these two graphons must agree with each other. Does anyone know why I call it a moments lemma? There is something else which this should remind you of. There are some classical results in probability that tell you that if you have two probability distributions, both assumed to be nice enough, and they have the same k'th moment for every k -- first moment, second moment, third moment, all the moments agree -- then these two probability distributions must agree. And this is some graphical version of that. So instead of looking at a probability distribution, we're looking at graphons, which are two-dimensional objects. And this moments lemma tells you that if the corresponding two-dimensional moments, namely these F moments, agree, then the two graphons must agree. It's the analog of the probability theory statement about moments. The proof is actually somewhat tricky, so I'm only going to give you a sketch. The key here is to consider the W-random graph, which we saw last lecture. So this is the W-random graph with k vertices, sampled using the graphon W. A key observation here is that, for every F, the probability that the sampled W-random graph agrees with F -- and here there is a bit of a technicality: I want them to agree as labeled graphs, so the vertices of F are a priori labeled 1 through k, and this G(k, W) random graph is generated with vertices labeled 1 through k -- they agree with some probability that is completely determined by the F densities. Yes? AUDIENCE: Is k the number of vertices of F? YUFEI ZHAO: Yeah, so k is the number of vertices of F. And the specific formula is not so important; let me just write it down. But the point is that, if you know all the F densities, then you have all the information about the distribution of this W-random graph. And the way you can calculate the actual probability is via an inclusion-exclusion. And the reason we have to do this inclusion-exclusion is just that this is more like counting induced subgraphs, whereas that is counting actual subgraphs. So there is an extra step.
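A minimal sketch of the sampling model just described, the W-random graph G(k, W) (the function name and the seed are made up; the matrix of edge weights computed along the way is the intermediate weighted object that comes up again below):

```python
import numpy as np

rng = np.random.default_rng(0)

def W_random_graph(k, W):
    """Sample the W-random graph G(k, W): pick x_1, ..., x_k i.i.d. uniform
    on [0, 1], then include each edge ij independently with probability
    W(x_i, x_j).  Returns a symmetric 0/1 adjacency matrix, zero diagonal."""
    x = rng.uniform(size=k)
    P = np.array([[W(x[i], x[j]) for j in range(k)] for i in range(k)])  # edge weights
    coins = rng.uniform(size=(k, k))
    upper = np.triu(coins < P, 1)          # one coin per unordered pair
    return (upper | upper.T).astype(int)

# For the constant graphon 1/2 this is just an Erdos-Renyi graph G(k, 1/2).
print(W_random_graph(6, lambda x, y: 0.5))
```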
But the point, again, is that if you know this data, the moments data, then you immediately know the distribution of the W-random graph. OK, so if I have two graphons whose F densities agree, then I can conclude that the corresponding W-random graphs have the same distribution; in particular, the G(k, W) random graph and the G(k, u) random graph have the same distribution. Now I am going to create a variant of the W-random graph, something called an H-random graph. It's kind of like the W-random graph, except that I forget the very last step. So I only keep a weighted graph: think of it as an edge-weighted graph where you sample x1 through xk uniformly between 0 and 1, and I put the edge weight between i and j to be W of xi, xj. The difference between this H version and the G version is that the G version is obtained by turning each weight into an actual edge with that probability; if I don't do that last step, I obtain this intermediate object. So the following are true -- and this is where I'm going to skip the proofs. If I look at this H-random graph and the G-random graph, they are very close in cut distance. You can think of this as analogous to the claim that G(n, p) is very close to the constant graphon p in cut distance. So they are very close in cut distance, as k goes to infinity, with probability 1. I'm not going to do the proof, but it's some kind of a concentration argument. And the second claim is that the H-random graph is actually very close to the original graphon W as well -- this is also little o of 1 in cut distance as k goes to infinity. This one is, again, not so obvious. But it's easier in the case when W itself is a step function, in which case the produced H is almost the same as W, except that the boundaries are slightly shifted, perhaps. So you first approximate W by a step function, prove this up to an epsilon approximation, and then let the number of steps go to infinity. If you have these two claims, then we see that this one here, G(k, W), is identically distributed as G(k, u). So the corresponding statements for u, replacing everything in these inequalities by the u versions, are also true. And because these two have the same distribution, if you follow this chain, you obtain that the cut distance between u and W is equal to 0. I want to close by mentioning the following. Here, you have two graphons that have exactly the same F moments. But what if I give you two graphons which have very similar moments to each other? Can you conclude that the two graphons are close to each other? That would be some kind of an inverse counting lemma. And in fact, it does follow as a corollary. The statement is that, for every epsilon, there exist k and eta such that if the two graphons u and W are such that the F densities do not differ by more than eta for every F on at most k vertices, then the cut distance between u and W is at most epsilon. So the counting lemma tells you that, if the cut distance is small, then all the F moments are close to each other. And the inverse counting lemma tells you the converse: if you have similar F moments, up to a certain point, then the cut distance is small. You can deduce the inverse counting lemma from the moments lemma via a compactness argument similar to the ones that we did in class today. And I want to give you a chance to practice with that argument, so this will be on the next homework. I'll give you some practice with using these compactness arguments.
But you see, just with the other compactness statements, it doesn't tell you anything about the k and the epsilon as a function of-- the k and eta as a function of epsilon. So there are other proofs that gives you concrete bounds, but this proof here is much simpler if you assume the corresponding results about compactness. And finally, I want to mention that in the moments lemma, in order to deduce that u and w have the same-- that they are basically the same graphon, we need to consider F moments for all F's. So you might ask, could it be the case that we only need some finite set of F's to deduce-- to recover the graphon? Is it the case that you can recover W from only a finite number of F moments? And this is, it's actually a very interesting problem for which we already saw one instance. Namely, when we discussed quasi-random graphs, we saw that if you know that the k2 moment is p and also the C4 moment is p to the 4, then we can deduce that the graphon must be the constant graphon, p. OK, so we didn't do it in this language, but that's what the proof does. And likewise, you can use this to deduce a qualitative version where you have an extra slack and an extra slack over here. So you might ask, except for the constant graphons, are there other graphons for which you can similarly deduce-- recover this graphon from just a finite amount of moments data? And such graphons are known as finitely forcible. So finitely forcible graphons W such that a finite number of moments can uniquely recover-- can uniquely identify this graphon, W. And a very interesting question is, what is the set of all finitely forcible graphons? And it turns out, this is not at all obvious. And let me just give you some examples, highly non-trivial, that turned out to be finitely forcible. For example, anything which is a step graphon is finitely forcible. The half graphon which corresponds to the limit of a sequence of half graphs is finitely forcible. I mean, already, I think neither of these two examples are easy at all. And this example here can be generalized where you have any polynomial curve. I think this has to be-- so if it's a polynomial curve, it's also finitely forcible. But turns out finitely forcible graphons can get quite complicated. And there is still rather quite a bit of mystery around them. OK, so next time, I want to discuss some inequalities that come out of-- you can state between different F densities. OK, great. That's all for today.
MIT 18.217 Graph Theory and Additive Combinatorics, Fall 2019
Lecture 5: Forbidding a Subgraph IV (Dependent Random Choice)
YUFEI ZHAO: For the last few lectures, we've been talking about the extremal problem of forbidding a complete bipartite graph. So today I want to move beyond the complete bipartite graph and look at other, sparser bipartite graphs. We'll be looking at what happens to the extremal problem if you forbid a sparse bipartite graph. Recall the Kovari-Sos-Turan theorem, which tells you that the extremal number for K sub s, t is upper bounded by something on the order of n to the 2 minus 1 over s. So if I give you some bipartite H, we know that, because it's a bipartite graph, it is always contained in some K sub s, t for some values of s and t. So you already automatically have an upper bound on the extremal number for this H from the Kovari-Sos-Turan theorem. But as you might expect, we might be very wasteful in this step. And the question is, are there situations where we can do much better than just applying Kovari-Sos-Turan? Today, I want to show you several examples where you can significantly improve the bound given by Kovari-Sos-Turan for various sparser bipartite graphs. The first result is the following theorem, which is originally due to Furedi; later, a different proof was given by Alon, Krivelevich, and Sudakov. The latter proof is the one I want to present, because it features an important and intricate probabilistic technique, which is the main reason for showing you this theorem. But let me tell you the theorem first. Here, H will be a bipartite graph with vertex bipartition A and B, such that every vertex in A has degree at most r. So the bipartition is A and B, and the degree is at most r for every vertex on the left side, in the set A. And I want to understand, is there some upper bound on the extremal number that does better than the Kovari-Sos-Turan theorem? The theorem guarantees such a bound: there exists some constant depending on H such that the extremal number is upper bounded by something on the order of n to the 2 minus 1 over r. Compare with the Kovari-Sos-Turan theorem. On one hand, if your H is the complete bipartite graph K sub s, t, then this is the same bound as the Kovari-Sos-Turan theorem. On the other hand, you might have a lot more vertices in A and a lot more vertices in B. The hypothesis only requires that the degrees in A, the maximum degrees on that side, are at most r. So it could be a much bigger graph, and if you applied Kovari-Sos-Turan, you would get a much worse bound compared to what this theorem guarantees. This 1 over r is optimal. In this given statement, you cannot improve this 1 over r, because we know, from the K sub s, t examples and the lower bounds I showed you last time, that you cannot improve upon this 1 over r. So in this form, this theorem is best possible. I want to show you a probabilistic technique for proving this theorem, an important idea called dependent random choice. Let me first give you an informal interpretation of what's going on. The idea is that if you have a graph G with many edges, then inside G I can find a large subset of vertices U such that all small subsets of vertices in U have many common neighbors. I won't tell you what small and many mean just yet -- we'll see it through the proof of the theorem -- but that's the idea. So I give you a graph. It's not too sparse; it's relatively dense. Then I should be able to find some subset that is fairly large so that, let's say, every pair of vertices in the subset has many common neighbors.
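For reference, the theorem just stated reads, in symbols (for bipartite H with parts A and B in which every vertex of A has degree at most r):

\[
\mathrm{ex}(n, H) \;\le\; C_H \, n^{2 - 1/r}.
\]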
So let me write down -- or at least attempt to write down -- the formal statement of dependent random choice. The statement of the theorem, as I will present it, has a lot of parameters, but I don't want you to be scared off by the parameters, so I won't tell you what they are just yet; let me first tell you what the conclusion is, and then we'll derive what the dependencies on the parameters are. The proof, the technique, is much more important than the statement of the theorem itself. So I'll leave some space here. The conclusion is that every graph with n vertices and at least alpha n squared over 2 edges contains a subset U of vertices, with U not too small -- the size of U is at least little u -- such that every subset S of U with r elements has at least m common neighbors. So what's the idea here? I give you this graph, and I want you to produce the set U with this property. How might you go about finding the set U? Let me give you an analogy. Suppose you have the friendship graph on MIT campus, and I want to select a large set, let's say a hundred students, such that every pair of them, or maybe even just most pairs of them, have many common friends. How might you go about doing that? Well, if you select a hundred students at random, you're unlikely to achieve that outcome. They're going to be pretty dispersed across campus. But if you focus on some commonality -- for example, you go to some specific individual and look at their circle of close friends -- then it seems more likely you'll be able to identify a group of people who are very well connected, in the sense that they pairwise have lots of common friends. And that's the idea here. We're going to make the random choice by picking that core individual -- that's the random choice -- but then make the subsequent dependent choice by looking at the group of friends of that random individual, instead of choosing the hundred people uniformly at random, which is not going to work. So let's execute that strategy on this graph. Let me take T to be the random set, the core set. So let T be -- for convenience, I'm going to choose with repetition -- instead of choosing just one vertex, I'm going to choose t vertices, a list of t vertices chosen uniformly at random from the vertex set. So G is my graph. And what we are going to do is look at the set A, which is the set of common neighbors of T, the vertices that are adjacent to all of T. That's the set I want to think about. So I want to argue that this set A has, more or less, the properties that we desire, maybe with some small number of blemishes, which we will fix by cleaning up. First, I want to guarantee that A has a big size -- that we are actually capturing a lot of vertices. So let's evaluate the size of A in expectation. By linearity of expectation, we need to compute the sum of the probabilities that individual vertices fall in this random A. For each particular vertex v, when is it in A? Well, it is in A if T is contained in the neighborhood of this v -- so all of the chosen vertices in T fall into the neighborhood of v. Otherwise, v is not going to be contained in A. So each individual probability can be computed easily: it is the degree of v divided by the number of vertices, raised to the power t -- t independent choices, chosen with replacement.
And by convexity applied to the final expression, we find that it is at least this quantity here where essentially we're taking the averages of the degrees. So this is by convexity. And finally, the graph has at least that many edges, so the final quantity there is at least n alpha to the t. Yes, question? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: The question is, in our original list, are we allowed to have repeated vertices? Yes. So it's a list, and we're choosing every element independently at random, allowing repetition. So it's sometimes called choosing with replacement. You choose a vertex. You throw it back. Choose another one. The property that we're looking for is that for every S subset, for every r-element subset of U, it has many common neighbors. So let's look at such an S. So for each S which is an r-element subset of the vertex set-- so an r-element subset of V-- what is the probability that this set S is contained in A? I give you this set S. It is contained in A if-- well, let's think about how A is chosen. You want this to happen. A is chosen as the common neighborhood of this T-- so S is contained in A if and only if T is contained in the common neighborhood of S. So S is fixed for now. T is random. So we draw the elements of T independently, uniformly, at random. Therefore, this probability is equal to the number of common neighbors of S as a fraction of the total number of vertices. This fraction raised to the power t. We want all S subsets of A-- all r-element subsets of this A-- to have at least m common neighbors. But maybe we cannot get that. So let's figure out how many bad S's are there. How many S's do not satisfy this condition here? So let's call such a set S bad if it has fewer than m common neighbors. And from this equation, we see that for each fixed S that is an r-element subset of vertices, it is bad with probability-- it is bad if it has few common neighbors. Because if you have few common neighbors, then this probability is small. So it is bad with probability strictly less than m over n raised to the power t. We chose this A in this dependent random way. And basically, we want all subsets. All r-element subsets have many common neighbors, but maybe we cannot get that at the first try. But we only have a small number of blemishes, so we can fix those blemishes by getting rid of the possible bad r subsets. And that's fine to do, as long as there are not too many of them. And indeed, because S is bad with small probability, the expected number of bad r-element subsets of A is, at most-- well, I look over all possible r-element subsets of vertices. Each one of them is bad with this probability here. So we have that bound. And the point now is that if this number is significantly smaller than the expected size of A, then I can clean up all the bad subsets by plucking out one vertex from each bad subset. So indeed, that is the case, because the expectation of the size of A minus-- so let me call this quantity here star-- so the size of A minus star by what we have shown is at least n alpha to the t minus the quantity just now. And I want this number to be somewhat large. And now let me put that as a hypothesis of the theorem. So dependent random choice-- let u, n, r, m, t be positive integers, alpha a positive real, and suppose n alpha to the t minus n choose r times m over n raised to the power t is at least u. That's where that inequality comes from. So if this is at least u, then what we can do is delete one vertex from each bad subset S.
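In symbols, the two expectations just computed, and the hypothesis they suggest, come out roughly as follows (a sketch, with the parameters as above):

\[ \mathbb{E}|A| = \sum_{v} \Big(\frac{\deg v}{n}\Big)^{t} \ \ge\ n\Big(\frac{1}{n^2}\sum_{v}\deg v\Big)^{t} \ \ge\ n\alpha^{t}, \qquad \mathbb{E}\,\#\{\text{bad } r\text{-subsets of } A\} \ \le\ \binom{n}{r}\Big(\frac{m}{n}\Big)^{t}, \]

so if \( n\alpha^{t} - \binom{n}{r}(m/n)^{t} \ge u \), then in expectation A still has at least u vertices left after removing one vertex from each bad subset.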
And after deleting, A then becomes some smaller set A prime with at least u elements in expectation, but then there exists some new elements-- Let me put it this way. We know that this is true in expectation, thus there exists some T, such that that inequality is true without the expectation, such that there exists some T, such that-- so this quantity is at least that. Now, to delete a vertex from each bad subset, we obtain this A prime with at least u elements. And we have gotten rid of all the possible bad subsets-- so no bad r-element subsets. And that finishes off the proof of dependent random choice. Just to recap, the idea is that if you have a dense enough graph, then the conclusion is that you can find a fairly large subset of vertices, so every pair, every r-element, have many common neighbors. And the way you do this, instead of choosing your set at random, which is not going to work, you choose a small set of anchors. T, you think of that as the anchors. You choose a bunch of anchors. And then you look at their common neighborhoods and use that as a starting point. That will almost work. It might not work perfectly, but then you fix things up by removing the blemishes. So this is a very tricky probabilistic idea. It's also a very important one. It will allow us to prove the theorem over there about the extremal numbers of standard degree graphs. Question? AUDIENCE: Is the definition of bad set for any subset of V or is it only for subsets of A? YUFEI ZHAO: The question is, in the definition of bad, do I use this definition for all subsets of V, all r-element subsets, or just subsets of A? So I use it to mean all subsets of V, because A is random. A is random, so the definition of bad does not depend on the randomness. It only depends on the original graph. AUDIENCE: How is the badness dependent on any probability? YUFEI ZHAO: The badness does not-- so the question us, how does the badness depend on any probability? The badness does not depend on the probability, but A is random. So the number of bad r-element subsets of A is a random variable. So each individual-- so you start with a graph. Some r-element subsets of graphs. Some are not. And now I choose this random A in this dependent random manner. And A might contain some bad subsets. I'm trying to calculate how many bad subsets does A have. Question? AUDIENCE: [INAUDIBLE],, because an S is neither bad or not bad. You said it's bad with probability. YUFEI ZHAO: So your concern is each S is bad with probability-- ah. Sorry. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: Fine. Yeah. Thank you. So each bad subset-- OK, so for each fixed bad subset, it is contained in A with probability. Thank you. I hope this makes it clear. So the probability of the-- the property of being bad is not random, but it containing A is random. Question? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: So the question is, why is this true? So why are these two events the same? And it's kind of hard to steer the definition a bit. So T is chosen as the common neighborhoods-- sorry. You choose T at random. And you choose A to be the common neighborhood of T. And so how do you characterize subsets of A if every element is connected to all of T? You have to think about it. Any more questions? Yes? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: We pick T at random. T is uniform. So the question is, how are we picking T? T is uniform at random. Oh, great. So the question is, how do we pick the little t in the theorem statement? It depends on the application. 
And that's a little bit weird, because in the statement of the theorem, the little t shows up in the inequality, but not in the conclusion. So you think of t as an auxiliary parameter. So the little t comes up in the proof, but not really in the conclusion. Any more questions? It's a tricky lemma. It's a tricky idea. So now let me use it to prove the statement over there. And here, it's not so hard. So it's mostly an application of this dependent random choice lemma. So for all H as in the theorem-- so let me prove this lemma-- so for all H as in the theorem, there exists a constant C, such that every graph with at least C n to the 2 minus 1 over r edges contains a vertex subset U with the size of U at least the size of B-- so B comes from the vertex bipartition of H. It's a constant-- such that every r-element subset of U has lots of common neighbors. That's at least another constant, which is the number of vertices in H-- that many common neighbors. So you see, it's a direct corollary of the dependent random choice lemma by setting it up with the right parameters. And indeed, we apply the dependent random choice lemma where we choose this t-- this auxiliary variable t in the dependent random choice lemma-- equal to r. So it suffices to check that there exists a C such that-- plug it into that expression, that inequality, up there. So n times 2C n to the minus 1 over r, raised to the power r, minus n choose r times the number of vertices of H over n, raised to the power r. And I want this to be at least the size of B. So I'm just plugging in the various graph parameters into the dependent random choice statement. And I want to show that you can find a constant C such that this inequality is true. And indeed, these exponents, they cancel out, so the first term is simply 2C raised to the r. And the second one, because, again, you notice that the exponents work out just fine, so it is, at most, a constant. So you can choose C big enough so that this is true. So it's a direct verification of the hypothesis of the lemma. And now we're ready to prove the theorem over there. Yes, question? AUDIENCE: How do you have size A plus B common neighbors? Isn't that the entire graph? Or can I just [INAUDIBLE]? YUFEI ZHAO: The question is, how do you have size A plus B common neighbors? So H is fixed. And A and B are constants, so A plus B is the number of vertices of H. That's a constant. AUDIENCE: Oh, sorry. Oh, OK. YUFEI ZHAO: Yeah, I'm talking about common neighbors, not in H, but in the big graph G, which has n vertices. Now, I like questions. It's tricky. This is a tricky argument, so please do ask questions if you're confused. And there are times when I may not have explained it very well, so please do ask questions. So let's prove the theorem. And now we're almost there. The idea is that we embed the vertices of B into the big graph G one by one. Sorry. First, embed B into the vertices of G using U from the lemma-- so the lemma that we just stated. And I claim that once you have done that-- so the vertices of B-- and now I need to embed the remaining vertices of A. And I can do this one by one, because any vertex of A that I need to embed has at most r neighbors in B. But I have embedded B in such a way that any r of them have a lot of common neighbors inside G. So I can always do it. So I can always embed the vertices of A one at a time in such a way that I even avoid collisions. I don't allow vertices to be embedded into the same place. This is all using that the embedded copy of B, which sits inside U, has many common neighbors in G. So once you put that in.
And the rest, you just make one choice at a time. And you need to embed a second vertex, or you can find somewhere in their common neighborhood that allows you to do it. So you embed the vertices one at a time, and then you finish embedding the whole graph. Any questions? It's a tricky argument. So let's take a break, and I want you to think about it. Any questions? Yes. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: So the question is, how do you embed A without having any collisions? So I put in the vertices of B so that every r of them have many neighbors. And now, I want to try to embed the vertices of A one by one in any order. Think about-- you pick the first vertex. Where can it go? It has-- let's say it's adjacent to the first three vertices of B. So, in the embedding, it has to go in the common neighborhood of those three vertices, which we know is large. So I put them anywhere. And I do the same for the second vertex, do the same for the fourth vertex. So I just keep on going. Because the common neighborhood is large, it may be that some of the potential vertices I might embed is already used by the previous steps in the process. But because I always have at least A plus B common neighbors, I always have some possibilities that remain. Yes, question. AUDIENCE: So [INAUDIBLE],, like the last line of the proof to [INAUDIBLE] delete a vertex from each bad subset, and then to make that [INAUDIBLE]. YUFEI ZHAO: Ah. The question is, how does this bad subset deletion work? So you have this A which is fairly large. And you know that there exists some instance-- there's some incidence of this randomness that produces for you a situation where A has very few bad r subsets. So then I take A and I delete from A one vertex in each bad subset. I haven't changed the size of A very much. A is still quite large after this deletion, but now A has no bad subsets remaining, because I've gotten rid of one vertex from each one, from each bad subset. So they're very similar to what we've seen before, the random process for creating an H-free graph. You generate a random H-free graph which has very few copies of H relative to the number of edges, and then you get rid of them by removing one edge from each copy of H. So with the theorem that we saw in the first part of the lecture, we saw how to improve on the bound of Kovari-Sos-Turan in some circumstances, namely one where the graph that you're forbidding, this H, essentially has bounded degree. We stated something a bit stronger, namely has bounded degree from one side. And that's a pretty general result. And now I want to look at some more specific situations where you might be able to improve further. So what are some nice bipartite graphs? One that comes up is kind of even cycles. So if you have C4, C6, and so on. And you see C4 is the same as K2,2, which we already saw before. But even C6, the techniques so far allow us to obtain, with the theorem that we just saw-- gives us a bound on C6 that's more or less the same as that of C4, namely n to the 3/2. So what's the truth for C6? Well, it turns out that you can do much better. So this is the theorem of Bondy and Simonovits that, for all integers k at least 2, there exists some constant C such that the extremal number-- well, I'm going to use C too many times. So I'll just say that the extremal number of C2 sub k is, at most, on the order of n to the 1 plus 1 over k. So, in particular, for 6 cycles, the upper bound is 4/3 in the exponent. It's better than the 3/2. 
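In symbols, the statement just made, together with the weaker variant that will be proved below, can be written as follows (a sketch, with C_k a constant depending only on k):

\[ \operatorname{ex}(n, C_{2k}) \le C_k\, n^{1 + 1/k}, \qquad \operatorname{ex}\big(n, \{C_4, C_6, \dots, C_{2k}\}\big) = O\big(n^{1 + 1/k}\big), \]

so for 6-cycles the exponent is 1 + 1/3 = 4/3, better than the exponent 3/2 that the bounded-degree theorem alone gives.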
So there's another class of graphs where there are some nice upper bounds. So you can ask, well, just like for the Kovari-Sos-Turan theorem for complete bipartite graphs, do we know matching lower bound constructions? And what is known is that it is tight only for a small number of cases. And the others, we do not know whether they are tight. So this Bondy-Simonovits theorem, it is tight for k being 2, 3, or 5, and open for others. So there are constructions for C4, C6, and C10, but not for C8. That's an open problem. The proof of the Bondy-Simonovits theorem is slightly involved, but I want to show you a weaker result that already contains a lot of interesting ideas. So a weaker result is this. That for every integer k at least 2, there exists a constant C such that every n-vertex graph G with at least C n to the 1 plus 1 over k edges-- so the same order of number of edges-- so we know from Bondy-Simonovits that it contains an even cycle of length exactly 2k. We'll show something slightly weaker, that it contains an even cycle of length, at most, 2k. Which, in other words, says that the extremal number, if you-- so we haven't introduced this notation, but, hopefully, you can guess what it means. Then, if you forbid all of these cycles, then it is this quantity there. So I'll show this weaker result. All right. Let's do it. First, I want to show you a couple of easy preparatory lemmas. The first-- so every graph G contains a subgraph with min degree at least half of the average degree of G. So you have a graph G. It has large average degree. It has lots of edges. But maybe there are some small degree vertices. And it will be useful to know that the minimum degree is actually quite large as well. So it turns out, by passing to a subgraph, you can guarantee that. How do you think we might prove this? I give you a graph. I know it has lots of edges. But maybe some vertices have small degree. Yes. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: So you suggest dependent random choice. That's a heavy hammer for such a simple statement. And so it turns out we can do something even simpler. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: So we'll just throw out the small degree vertices. So if the average degree is 2t, then the number of edges is the number of vertices times t. And you see that removing a vertex of degree at most t cannot decrease the average degree. So I have this process where I remove vertices whose degree is at most half of the average degree. And the average degree never goes down, so I keep on doing this. Average degree stays the same. Well, it can go up, but it never goes down. And when I stop, I don't have any more small degree vertices to get rid of. Why does the process even terminate? Maybe we end up with the empty graph, which is not so useful. Yes. AUDIENCE: The average degree is [INAUDIBLE]. YUFEI ZHAO: He said the average degree is non-decreasing. But how do I know the graph has at least some number of vertices when I stop? Yes. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: So if you-- you must terminate. You must terminate because every graph, if you have way too small a number of vertices-- because, for example, if-- we'll just notice that every graph with, at most, 2t vertices has average degree less than 2t. So you'll never get below 2t vertices. You'll run out of room if you go too far. So that's the first preparation lemma. The second one is that every graph G has a bipartite subgraph with at least half of the number of edges as the original graph.
This is a very nice and quick exercise in the probabilistic method. So we can color every vertex black and white uniformly at random. And the expected number of black to white edges is exactly half of the original number of edges by linearity of expectation. So there's some instance with at least that many black-white edges, and that's a bipartite subgraph. So now we can prove the theorem about even cycles, the weaker theorem at least. I start with a graph which has a lot of edges. But by these two lemmas, and changing constant somewhat, I can obtain a subgraph with-- that's bipartite and has min degree quite large. So I lose, at most, a constant factor, and I get basically within a factor of 2 of-- well, factor of 4 of the average degree. So let me call this quantity delta, the min degree in this subgraph, this bipartite subgraph. Let's think about what happens to this graph if I start with an arbitrary vertex. So now I have a min degree condition. It's, really, all vertices are now kind of the same to me. So pick an arbitrary vertex, and look at its neighborhood. It has at least delta edges coming out. So let me call the first vertex level 0, and the second set level 1. It's bipartite, so there are no edges within level 1. Let's expand out even further. Can there be some collisions where two of these edges go to the same vertex? Well, if there were, then I find a C4. So if I assume-- so let's assume that there's no C4, C6, and so on, C2k, for contradiction. So because there's no C4, all the endpoints of these paths of length 2 are distinct in level 2. And you keep going, or you keep expanding further. And so on. And all the way to level k. And all of these guys must be distinct as well. Because, otherwise, you'll find a cycle of length, at most, 2k. So when you do this expansion, each step, you get distinct vertices. And you also have no edges inside each part because you are looking in the bipartite setting. So how many vertices do you get at the end? Well, I have a min degree condition. The min degree condition tells me that I expand by a factor of at least delta minus 1 each time. So the number of vertices in here is at least delta minus 1 raised to the power k. Here's k. And expand all the way to the end. But, you see, that number there is quite large. So, in particular, if the constant C is large enough, then delta is large enough that this is bigger than n. And that will be a contradiction, because you only had n vertices in the graph to begin with. Therefore, the assumption that there are no cycles, even cycles of length, at most, 2k, is incorrect. And that finishes the proof. Any questions? Yes. AUDIENCE: Are the ideas for the full Bondy-Simonovits theorem similar? YUFEI ZHAO: So the question has to do with, in the original, in the full Bondy-Simonovits theorem, what do we need to do? Are they similar? So, certainly, you want to do something like this. But, then, you also want to think about, if you do have short cycles, how can you bypass them? So there's a more careful analysis of what happens with shorter cycles. And we will not get into that. Any more questions? All right. So the first thing that we did in today's lecture has to deal with what-- AUDIENCE: Quick question. YUFEI ZHAO: Yes. AUDIENCE: So why is-- so why are all vertices distinct for level k? Or are they just defined that way? YUFEI ZHAO: So question is, why are the vertices distinct for level k? So at level k, if you have some collapse, then it came from two different paths, therefore forming a cycle of length at most 2k. AUDIENCE: OK.
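In symbols, the expansion bound just used is roughly the following (a sketch; the exact constants depend on how much the two preparation lemmas lose):

\[ |L_k| \ \ge\ (\delta - 1)^{k} \ >\ n \quad \text{once } \delta > n^{1/k} + 1, \]

and since delta is within a constant factor of the average degree, which is at least of order C n to the 1 over k, taking C large enough forces this, contradicting the fact that the levels are disjoint sets of vertices in an n-vertex graph.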
YUFEI ZHAO: So now I want to revisit the first theorem that happened in today's lecture, namely if you have this H, bipartite A and B. So we saw the hypothesis was that if every vertex in A had bounded degree, degree, at most, r, then we got the upper bound on the extremal number. That was n to the 2 minus 1 over r. And, in particular, for r equals to 2, suppose I have that situation there. Suppose that the degree for every vertex in A is, at most, 2. Then the first theorem guaranteed us that the extremal number is on the order, at most, n the 3/2, just like the extremal number for 4 cycles, for K2,2's. And, of course, this statement is tight, in the sense that if I change this 3/2 to any smaller number, then, well, just taking H to be K2,2 violates it. So I cannot replace in this generality 3/2 by any smaller number, because we know that the extremal number for K2,2 is this order. But is that the only obstruction? So if H is not K2,2, well, you can make some sillier examples too by taking K2,2 and add some more edges. So lets forbid H from having a K2,2 subgraph. Can you do better now? So in this case, can you improve for the specific H that exponent? And we already saw one case where you can do this, namely in Bondy-Simonovits theorem for cycles. Or if you only applied this theorem here, you get 3/2, but Bondy-Simonovits tells you a much better exponent. So let's explore the situation. And it turns out, in a very recent theorem that is only proved the last couple of years by David Conlon and Joonkyung Lee, they showed that for every H as above, there exists constants little c and big C such that the extremal number of H is upper bounded by something where I can decrease 3/2 to some even smaller number. So, somehow, this 3/2, now we understand is really because of the presence of K2,2. The graph has-- if H has no K2,2, then some smaller number suffices. And I want to use the rest of today to explain how to prove that theorem there. Yes, question. AUDIENCE: The C is not independent of H [INAUDIBLE].. YUFEI ZHAO: That's right. So the question is, is C independent of H? So C depends on H. So C, they are dependent on H. Questions? Let me put this question in a slightly different formulation that is also equivalent. So in graph theory, there is a notion of a subdivision. And, in particular, a one subdivision of graph H is this operation where you start with a graph-- let's say this graph here-- and you add a vertex to the middle of every edge of this graph. So, initially, it's 4 vertices. Now you add a new vertex to every edge. So you subdivide every edge into a path of two edges. That's called a subdivision. So for today's lecture, let me denote subdivisions by a prime. And, in particular, if this is K4, then I will denote this graph here K4 prime. So, for example, K3 prime, that's a triangle subdivided-- well, that's a C6, [INAUDIBLE]. So observe that every H that comes up in this theorem here is a subgraph of some subdivision, some one subdivision of a clique. Because the vertices on the left in A are degree 2, you think of them as midpoints of edges. But because you are K2,2 free, you-- if you collapse those path of lengths 2 to single edges, you do not end up with parallel edges. So it is the subgraph of this one subdivision of some graph, which, then, you can complete to a clique. 
So this theorem here is equivalent, at least qualitatively, to the statement that for every t, there exists some constants, again depending on t, such that the extremal number of the one subdivision of a clique is bounded by something where we can improve upon the exponent in the first theorem in today's lecture. So that theorem there. So these two theorems are equivalent because of this remark. Any questions so far about the statements? Yes. AUDIENCE: In the remark, how do you deal with, like, [INAUDIBLE]. YUFEI ZHAO: Question, in the remark, how do you deal with vertices with degree less than 2? Complete it to a vertex degree of-- vertex of degree 2. Add another edge. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: Add another edge to a new vertex. AUDIENCE: OK, sure. YUFEI ZHAO: Any more questions? All right. So the proof I want to show you is due to Oliver Janzer, and for this clique subdivision theorem. And this proof produces that C sub t equals to 1 over 4t minus 6. So if you plug in t equals to 3, you find that the exponent here is actually right for the 6 cycle. So it actually agrees with what we know. So I want to show you some of the main ideas from this proof. Just like the proof of-- that we saw for the cycles, even cycles theorem, it will be helpful to start with some preparation. You start with a graph that, even though it has a lot of edges, may have lots of vertices with high degree, lots of vertices with low degree. It's nice to clean it up somewhat. And so let me state a preparatory lemma, which we will not prove, but it's of a similar nature to this very easy lemma that we saw earlier but with a bit more work. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: Yes. Originally, I put a C over here, but now it's OK. So there exists a Ct. So the preparation is that we're going to pass through a large, almost regular subgraph. The lemma-- so don't worry too much about the details, and I'll tell you what the idea is. So for every alpha, there exists constants beta and K such that for every C and n sufficiently large, every n-vertex graph G with lots of edges-- so C n to the 1 plus alpha edges-- has a subgraph G prime such that-- so I want some properties. First, G prime has lots of vertices. So n to the beta, so it's still lots of vertices. You do some polynomial in n. And, 2, it has still lots of edges relative to the number of vertices it has. So, basically, changing the constants, if I start with n to the 1 plus alpha, I still have, roughly, number of vertices to the 1 plus alpha, number of edges. It is almost regular, in the sense that the max degree of G does not differ from its minimum degree by more than a constant factor. So you do have vertices that are too small degree. You don't have vertices that are too large degree. And, finally, G prime is bipartite, and the two parts of this bipartition have sizes differing by a factor, at most, 2. So if you like, think of G as a regular bipartite graph. But this is the preparation lemma. We'll just make our life a bit easier. So, from now on, let's treat G as a constant in our asymptotic notation to simplify the notation. So you have this graph G. It's a bipartite graph. And for a pair of vertices on one side A-- so there are no edges in A. But for a pair of vertices, I say that this u, v is light. So it's not an edge, but I talk about these pairs. I say that it is light if the number of common neighbors between-- of u and v is at least 1 and less than t choose 2. So it has some common neighbors, but not too many. 
And then we say that this pair is heavy if the number of common neighbors is at least t choose 2. So if a pair u, v has some common neighbors, then it's either light or heavy. I claim that if G is a Kt prime-- so this is the one subdivision of Kt-- free bipartite graph with vertex bipartition A union B-- so U. Not A, but U union B. U will eventually be a subset of A. Such that all the vertices on the left of U have to be at least delta, and U is not too small. So it's at least 4Bt over delta. Don't worry about it for now. So think of delta-- this is a min degree. So it is somewhat smaller than the average-- I mean, it's basically the average degree of your graph. And B, think of as n. It's more or less the whole set of vertices. Then the conclusion is that there exists a u in a lot of light pairs in the set U. There exists a vertex in many light pairs. It's important that we assume that this graph G if Kt prime free. Because, otherwise, you could imagine a situation where, essentially, you have a complete bipartite graph and every pair of vertices is heavy. So you don't have any light pairs at all. So having Kt prime free somehow allows us to find light pairs. So let's see the proof of this lemma. So you combine some nice ideas that we've seen earlier in the course, namely double counting, and also uses Turan's theorem. So, first, let's do a double counting argument similar to the proof of the Kovari-Sos-Turan theorem, where let's count the number of K1,2's like that. So the number of K1,2's like that between U and B. I claim one way to count this is to look through all the vertices on the right side, look at how many neighbors it has, and sum up the degrees choose 2. So skipping some-- I mean, I can tell you what comes out. So, by convexity, we find that it is at least this quantity here. And then, assuming the minimum degree condition, we find that this quantity is quite large. So this is a calculation very similar to what we did for the proof of Kovari-Sos-Turan theorem. The low degree vertices in B do not contribute much to the sum. So this sum is large, and it sums over all vertices in B, but the low degree vertices in B, they contribute very little. Because if we sum over all the vertices of B with degree less than 2t, then, for each summand, it's, at most, 2 t squared, which, again, by the assumption of delta, is less than half of the total sum. So the low degree vertices of B do not contribute very much to the sum. So let's look at the higher degree vertices. For the higher degree vertices, they contribute a substantial chunk. And the most important thing here is that, among these vertices, there are no t mutually heavy-- so not among these vertices, but in U. If you look at U, there are no t mutually heavy vertices in U. If you have t mutually heavy vertices in U, then what happens? So if you have t mutually heavy vertices. If you had, let's say, three vertices in U, they're mutually heavy, each one of them, because they're heavy, I can find many common neighbors. So I can build this path of length 2. I can build another path of length 2. And I don't run out of vertices because they're all heavy. They all have at least t choose 2 common neighbors. So I can build the subdivision of Kt. So there are no mutually heavy vertices in U. So where have we seen this before? So you have-- think about the neighborhood of a vertex in B. Because it's inside a neighborhood, all the pairs are either heavy or light, and there are no t mutually heavy vertices. 
So, then, Turan's theorem tells us that there must be many light pairs. That there are many light pairs in this neighborhood. So the number of light pairs in the neighborhood of this v, if it has at least t neighbors-- or else you run out of room-- is at least-- so if you think about what Turan's theorem says, the number of pairs that are not heavy is at least this quantity here, which is at least a constant times the degree of v squared, minus the degree of v. So Turan's theorem tells us that there cannot be so many heavy pairs inside a neighborhood of a vertex in B, so there must be many light pairs. And now we sum over all vertices in B. We obtain that U has a lot of light pairs. We might have overcounted a little bit, but each light pair is overcounted only by a bounded number of times because it's light. So it's overcounted by less than t choose 2 times. So that's just a constant factor, and we're OK with that. So that's the conclusion for now. This lemma tells us that you have lots of light pairs in U. And what we're going to do is to keep on shrinking this U. So U is going to be a subset of A. Initially, let's let U be the entire set A. The lemma tells us that there's one vertex in A with lots of light neighbors. Take that vertex, choose its light neighborhood. Apply the lemma again. Find another vertex with lots of internal light neighbors. Keep on going. And then we build a large light clique. So that's the idea. So we'll find that-- so we'll see that if this delta is bigger than, basically, the quantity corresponding to the bound claimed in the theorem-- so n to the power t minus 2 over 2t minus 3-- and C is sufficiently large, then there exists a sequence U1, U2, U3, and so on, all the way to Ut, and a sequence of vertices v1, v2, up to vt, such that, initially, U1 is all of A. And the idea is, initially, I take v1 to be whatever comes out of that lemma. And I want the property, 1, that all of the pairs vi, vj are light. And 2 is that no three of these v's have a common neighbor. And once you have these two properties, then you can find your clique subdivision. You find these t light vertices. So if you have v1, v2, v3, v4, you have these light vertices, and I can build a clique subdivision from these light vertices. Because they're light, they have at least one common neighbor for each pair. I just keep building them. So I build these common neighbors. Well, you should be somewhat worried that I end up using the same vertex twice. But, of course, that should not be a worry if I guarantee that no three of them have a common neighbor. They cannot collapse. And you cannot have two of these midpoint vertices end up being the same. Otherwise, I would violate property 2. So these two properties alone allow you to build a Kt subdivision. But how do we find this sequence? Well, so we build it iteratively using that lemma. You start with one vertex guaranteed by that lemma. You look at its light neighborhood. You pick another vertex guaranteed by the lemma. You look at its light neighborhood. And so on. And you build up this light clique. Yes. AUDIENCE: [INAUDIBLE]. YUFEI ZHAO: Ah. The light neighborhood, yes. So we're not in the graph any more. So everything's inside A. Everything's inside A. So let me finish off the list of properties, and we're almost there. The third property I want is that, when I do this operation, I do not reduce my space of possibilities by too much. Namely, that the size of U does not go down too substantially. And that's guaranteed by the lemma. And, 4 is that-- basically this picture over here-- vi is light to all of U i plus 1.
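Before checking that this sequence exists, it may help to record the two counting steps in the light-pair lemma symbolically (a rough sketch; here d_U(v) denotes the number of neighbors of v in U, and constants are not optimized):

\[ \sum_{v \in B} \binom{d_U(v)}{2} \ \ge\ |B| \binom{\delta |U| / |B|}{2} \quad \text{(convexity, using } e(U, B) \ge \delta |U|\text{)}, \]
\[ \#\{\text{light pairs in } N(v) \cap U\} \ \ge\ \frac{d_U(v)^2}{2(t-1)} - \frac{d_U(v)}{2} \quad \text{(Turan: the heavy pairs form a } K_t\text{-free graph)}, \]

and summing the second line over v in B, then dividing by the overcount factor (each light pair has fewer than t choose 2 common neighbors), gives many light pairs within U, hence some vertex of U lying in many light pairs.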
So I claim that you can find the sequence satisfying these properties. And the reason is that you are repeatedly applying the lemma. So repeatedly apply the lemma. The lemma doesn't address this part about triple vertices having common neighbors. But I claim that's actually not so hard to deal with. Because if you think about how many vertices, how many possibilities, this restriction eliminates, b only eliminates at each step t choose 2, because this is coming from the light restriction, times, at most-- so another t choose 2. And this comes-- the first one comes from pairs of v1 through vt. So eliminates, at most, this many-- times the max degree.
[MIT 18.217 Graph Theory and Additive Combinatorics, Fall 2019. Lecture 19: Roth's theorem II, Fourier-analytic proof in the integers]
[SQUEAKING] [PAPER RUSTLING] [CLICKING] YUFEI ZHAO: Last time we started talking about Roth's theorem, and we showed a Fourier analytic proof of Roth's theorem in the finite field model. So Roth's theorem in F3 to the N. And I want to today show you how to modify that proof to work in integers. And this will be basically Roth's original proof of his theorem. OK. So what we'll prove today is the statement that the size of the largest 3AP-free subset of 1 through N is at most on the order of N divided by log log N. OK, so we'll prove a bound of this form. The strategy of this proof will be very similar to the one that we had from last time. So let me review for you what is the strategy. So from last time, the proof had three main steps. In the first step, we observed that if you are a 3AP-free set, then there exists a large Fourier coefficient. From this Fourier coefficient, we were able to extract a large subspace where there is a density increment. I want to modify that strategy so that we can work in the integers. Unlike in F3 to the N, where things were fairly nice and clean, because you have subspaces, you can take a Fourier coefficient, pass it down to a subspace. There are no subspaces, right? There are no subspaces in the integers. So we have to do something slightly different, but in the same spirit. So you'll find a large Fourier coefficient. And we will find that there is density increment when you restrict not to subspaces, but what could play the role of subspaces when it comes to the integers? So I want something which looks like a smaller version of the original space. So instead of it being integers, if we restrict to a subprogression, so to a smaller arithmetic progression. I will show that you can restrict to a subprogression where you can obtain density increment. So we'll restrict integers to something smaller. And then, same as last time, we can iterate this increment to obtain the conclusion that you have an upper bound on the size of this 3AP-free set. OK, so that's the strategy. So you see the same strategy as the one we did last time, and many of the ingredients will have parallels, but the execution will be slightly different, especially in the second step where, because we no longer have subspaces, which are nice and clean, so that's why we started with the finite field model, just to show how things work in a slightly easier setting. And today, we'll see how to do the same kind of strategy here, where there is going to be a bit more work. Not too much more, but a bit more work. OK, so before we start, any questions? All right. So last time I used the proof of Roth's theorem as an excuse to introduce Fourier analysis. And we're going to see basically the same kind of Fourier analysis, but it's going to take on a slightly different form, because we're not working in F3 to the N. We're working inside the integers. And there's a general theory of Fourier analysis on abelian groups. I don't want to go into that theory, because that's-- I want to focus on the specific case, but the point is that given an abelian group, you always have a dual group of characters. And they play the role of the Fourier transform. Specifically in the case of Z, we have the following Fourier transform. So the dual group of Z turns out to be the torus. So real numbers mod one. And the Fourier transform is defined as follows, starting with a function on the integers, OK. If you'd like, let's say it's finitely supported, just to make our lives a bit easier. Don't have to deal with technicalities.
But in general, the following formula holds. We have this Fourier transform defined by setting f hat of theta to be the following sum. OK, where this e is actually somewhat standard notation, additive combinatorics. It's e to the 2 pi i t, all right? So it goes a fraction, t, around the complex unit circle. OK. So that's the Fourier transform on the integers. OK, so you might have seen this before under a different name. This is usually called Fourier series. All right. You know, the notation may be slightly different. OK, so that's what we'll see today. And this Fourier transform plays the same role as the Fourier transform from last time, which was on the group F3 to the N. And just us in-- so last time, we had a number of important identities, and we'll have the same kinds of identities here. So let me remind you what they are. And the proofs are all basically the same, so I won't show you the proofs. f hat of 0 is simply the sum of f over the domain. We have this Plancherel Parseval identity, which tells us that if you look at the inner product by linear form in the physical space, it equals to the inner product in the Fourier space. OK. So in the physical space now, you sum. In the frequency space, you take integral over the torus, or the circle, in this case. It's a one-dimensional torus. There is also the Fourier inversion formula, which now says that f of x is equal to f hat of theta. E of x theta, you integrate theta from 0 to 1. Again, on the torus, on the circle. And third-- and finally, there was this identity last time that related three-term arithmetic progressions to the Fourier transform, OK? So this last one was slightly not as-- I mean, it's not as standard as the first several, which are standard Fourier identities. But this one will be useful to us. So the identity relating the Fourier transform, the 3AP, now has the following form. OK, so if we define lambda of f, g, and h to be the following sum, which sums over all 3APs in the integers, then one can write this expression in terms of the Fourier transform as follows. OK. All right. So comparing this formula to the one that we saw from last time, it's the same formula, where different domains, where you're summing or integrating, but it's the same formula. And the proof is the same. So go look at the proof. It's the same proof. OK. So these are the key Fourier things that we'll use. And then we'll try to follow on those with-- the same as the proof as last time, and see where we can get. So let me introduce one more notation. So I'll write lambda sub 3 of f to be lambda of f, f, f, three times. OK. So at this point, if you understood the lecture from last time, none of anything I've said so far should be surprising. We are working integers, so we should look at the corresponding Fourier transform in integers. And if you follow your notes, this is all the things that we're going to use. OK, so what was one of the first things we mentioned regarding the Fourier transform from last time after this point? OK. AUDIENCE: The counting lemma. YUFEI ZHAO: OK. So let's do a counting lemma. So what should the counting lemma say? Well, the spirit of the counting lemma is that if you have two functions that are close to each other-- and now, "close" means close in Fourier-- then their corresponding number of 3APs should be similar, OK? So that's what we want to say. And indeed, right, so the counting lemma for us will say that if f and g are functions on Z, and-- such that their L2 norms are both bounded by this M. 
OK, so the sums of the squared absolute values of the entries are both bounded by M. Then the difference of their 3AP counts should not be so different from each other if f and g are close in Fourier, OK? And that means that if all the Fourier coefficients of f minus g are small, then lambda 3 of f, which counts 3APs weighted by f, is close to that of g. OK. Same kind of 3AP counting lemma from last time. OK, so let's prove it, OK? As with the counting lemma proofs you've seen several times already in this course, we will prove it by first writing this difference as a telescoping sum. The first term being lambda of f minus g, f, f, then lambda of g, f minus g, f, and then lambda of g, g, f minus g. OK, and we would like to show that each of these terms is small if f minus g has small Fourier coefficients. OK. So let's bound the first term. OK, so let me bound this first term using the 3AP identity, relating 3AP to Fourier coefficients, we can write this lambda as the following integral over Fourier coefficients. And now, let me-- OK, so what was the trick last time? So we said let's pull out one of these guys and then use triangle inequality on the remaining factors. OK, so we'll do that. So far, so good. And now you see this integral. Apply Cauchy-Schwarz. OK, so apply Cauchy-Schwarz to the first factor, you got this l2 sum, this l2 integral. And then you apply Cauchy-Schwarz to the second factor. You get that integral. OK, now what do we do? Yep? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: OK, so yeah. So you see an l2 of Fourier, the first automatic reaction should be to use a Plancherel or Parseval, OK? So apply the Plancherel identity to each of these factors. We find that each of those factors is equal to this l2 sum in the physical space. OK, so this square root, the same thing. Square root again. OK. And then we find that because there was a hypothesis-- in the hypothesis, there was a bound M on this sum of squares. You have that down there. And similarly, with the other two terms. OK. So that proves the counting lemma. Question? AUDIENCE: Last time, the term on the right-hand side was the maximum over non-zero frequency? YUFEI ZHAO: OK. OK. So the question is, last time we had a counting lemma that looked slightly different. But I claimed they're all really the same counting lemma. They're all the same proofs. If you run this proof, it won't work. If you take what we did last time, it's the same kind of proofs. So last time we had a counting lemma where we had the same f, f, f essentially. We now have-- I allow you to essentially take three different things, and-- OK, so both-- in both cases, you're running through this calculation, but they look slightly different. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: So, yeah. So I agree. It doesn't look exactly the same, but if you think about what's involved in the proof, they're the same proofs. OK. Any more questions? All right. So now, we have this counting lemma, so let's start our proof of Roth's theorem in the integers. As with last time, there will be three steps, as mentioned up there, OK? In the first step, let us show that if you are 3AP-free, then we can obtain a density-- a large Fourier coefficient. Yeah, so in this course, this counting lemma, we actually saw basically this kind of proof for the first time when we discussed the graph counting lemma, back in the chapter on Szemerédi's regularity lemma. And sure, they all look literally-- not exactly the same, but they're all really the same kind of proofs, right?
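To have the formulas in one place, here is roughly what the boardwork amounts to (a sketch, using the sign convention \(\widehat f(\theta) = \sum_x f(x) e(-x\theta)\); the identities hold with the obvious adjustments for the opposite convention):

\[ \sum_x f(x)\overline{g(x)} = \int_0^1 \widehat f(\theta)\,\overline{\widehat g(\theta)}\,d\theta, \qquad \Lambda(f, g, h) = \int_0^1 \widehat f(\theta)\,\widehat g(-2\theta)\,\widehat h(\theta)\,d\theta, \]

and the counting lemma reads: if \(\|f\|_2^2, \|g\|_2^2 \le M\), then

\[ |\Lambda_3(f) - \Lambda_3(g)| \ \le\ 3M \sup_{\theta}\big|\widehat{(f-g)}(\theta)\big|, \]

since, for instance,

\[ |\Lambda(f-g, f, f)| \ \le\ \sup_\theta\big|\widehat{(f-g)}(\theta)\big| \int_0^1 \big|\widehat f(-2\theta)\big|\,\big|\widehat f(\theta)\big|\,d\theta \ \le\ \sup_\theta\big|\widehat{(f-g)}(\theta)\big|\,\|f\|_2^2. \]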
So I want-- I'm showing you the same thing in many different guises. But they're all the same proofs. So if you are a set that is 3AP-free-- and as with last time, I'm going to call alpha the density of A now inside this progression, this length N progression. And suppose N is large enough. OK, so the conclusion now is that there exists some theta such that if you look at this sum over here as a sum over both integers-- actually, let me do the sum only from 1 to uppercase N. Claim that-- OK, so-- so it's saying what this title says. If you are 3AP-free and this N is large enough relative to the density, you think of this density alpha is a constant, then I can find a large Fourier coefficient. Now, there's a small difference, and this is related to what you were asking earlier, between how we set things up now versus what happened last time. So last time, we just looked for a Fourier coefficient corresponding to a non-zero r. Now, I'm not restricting non-zero, but I don't start with an indicator function. I start with the demeaned indicator function. I take out the mean so that the zeroth coefficient, so to speak, which corresponds to the mean, is already 0. So you don't get to use that for your coefficient. So if you didn't do this, if you just tried to do this last time, I mean, you can also do exactly the same setup. But if you don't demean it, then-- if you don't have this term, then this statement is trivially true, because I can take theta equal to 0, OK? But I don't want. I want an actual significant Fourier improvement. So I take-- I do this demean, and then I consider its Fourier coefficient. OK. Any questions about the statement? Yeah, so this demeaning is really important, right? So that's something that's a very common technique whenever you do these kind of analysis. So make sure you're-- so that you're-- yeah, so you're looking at functions with mean 0. Let's see the proof. We have the following information about all 3AP counts in A. Because A is 3AP-free, OK, so what is the value of lambda sub 3 of the indicator of A? Lambda of 3, if you look at the expression, it basically sums over all 3APs, but A has no 3APs, except for the trivial ones. So we'll only consider the trivial 3APs, which has size exactly the size of A, which is alpha N from trivial 3APs. On the other hand, what do we know about lambda 3 of this interval from 1 to N? OK, so how many 3APs are there? OK, so roughly, it's going to be about N squared over 2. And in fact, it will be at least N squared over 2, because to generate a 3AP, I just have to pick a first term and a third term, and I'm OK as long as they're the same parity. And then you have a 3AP. So the same parity cuts you down by half, so you have at least N squared over 2 3APs from 1 through N. So now, let's look at how to apply the counting lemma all to the setting. So we have the counting lemma up there, where I now want to apply it-- so apply counting to, on one hand, the indicator function of A so we get the count 3APs in A, but also compared to the normalized indicator on the interval. OK, so maybe this is a good point for me to pause and remind you that the spirit of this whole proof is understanding structure versus pseudorandomness, OK? So as was the case last time. So we want to understand, in what ways is A pseudorandom? And here, "pseudorandom," just as with last time, means having small Fourier coefficients, being Fourier uniform. If A is pseudorandom, which here, means f and g are close to each other. 
That's what being pseudorandom means, then the counting lemma will tell us that f and g should have similar AP counts. But A has basically no AP count, so they should not be close to each other. So that's the strategy, to show that A is not pseudorandom in this sense, and thereby extracting a large Fourier coefficient. So we apply counting to these two functions, and we obtain that. OK. So this quantity, which corresponds to lambda 3 of g, minus alpha N. So these were lambda 3 of g, lambda 3 of F. So it is upper-bounded. The difference is up rebounded by the-- using the counting lemma, we find that their difference is upper-bounded by the following quantity. Namely, you look at the difference between f and g and evaluate its maximum Fourier coefficient. OK. So if A is pseudorandom, meaning that Fourier uniform-- this l infinity norm is small, then I should expect lots and lots of 3APs in A, but because that is not the case, we should be able to conclude that there is some large Fourier coefficient. All right, so thus-- so rearranging the equation above, we have that-- so this should be a square. OK. So we have this expression here. And now we are-- OK, so let me simplify this expression slightly. And now we're using that N is sufficiently large, OK? So we're using N is sufficiently large. So this quantity is at least a tenth of alpha squared N. OK, and that's the conclusion, all right? So that's the conclusion of this step here. What does this mean? This means there exists some theta so that the Fourier coefficient at theta is at least the claimed quantity. Any questions? All right. So that finishes step 1. So now let me go on step 2. In step 2, we wish to show that if you have a large Fourier coefficient, then one can obtain a density increment. So last time, we were working in a finite field vector space. A Fourier coefficient, OK, so which is a dual vector, corresponds to some hyperplane. And having a large Fourier coefficient then implies that the density of A on the co-sets of those hyperplanes must be not all close to each other. All right, so one of the hyperplanes must have significantly higher density than the rest. OK, so we want to do something similar here, except we run into this technical difficulty where there are no subspaces anymore. So the Fourier character, namely corresponding to this theta, is just a real number. It doesn't divide up your space. It doesn't divide up your 1 through N very nicely into sub chunks. But we still want to use this theta to chop up 1 through N into smaller spaces so that we can iterate and do density increment. All right. So let's see what we can do. So given this theta, what we would like to do is to partition this 1 through N into subprogressions. OK, so chop up 1 through N into sub APs such that if you evaluate for-- so this theta is fixed. So on each sub AP, this function here is roughly constant on each of your parts. Last time, we had this Fourier character, and then we chopped it up using these three hyperplanes. And each hyperplane, the Fourier character is literally constant, OK? So you have-- and so that's what we work with. And now, you cannot get them to be exactly constant, but the next best thing we can hope for is to get this Fourier character to be roughly constant. OK, so we're going to do some positioning that allows us to achieve this characteristic. And let me give you some intuition about why this is true. And this is not exactly a surprising fact. 
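Before that intuition, it may help to record the numbers from step 1 in one place (a sketch, with f = 1_A and g = alpha times the indicator of 1 through N; the sign in the exponent does not matter for the absolute value):

\[ \Lambda_3(f) = |A| = \alpha N, \qquad \Lambda_3(g) = \alpha^3 \Lambda_3(1_{[N]}) \ge \tfrac{1}{2}\alpha^3 N^2, \qquad \|f\|_2^2 = \alpha N, \quad \|g\|_2^2 = \alpha^2 N \le \alpha N, \]

so the counting lemma with M = alpha N gives

\[ \tfrac{1}{2}\alpha^3 N^2 - \alpha N \ \le\ 3\alpha N \sup_\theta \Big|\sum_{n=1}^{N}\big(1_A(n) - \alpha\big)e(n\theta)\Big|, \]

and hence, once N is large compared with alpha to the minus 2, some theta satisfies

\[ \Big|\sum_{n=1}^{N}\big(1_A(n) - \alpha\big)e(n\theta)\Big| \ \ge\ \frac{\alpha^2 N}{10}. \]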
The intuition is just that if you look at what this function behaves like-- all right, so what's going on here? You are on the unit circle, and you are jumping by theta. OK, so you just keep jumping by theta and so on. And I want to show that I can sharp up my progression into a bunch of almost periodic pieces, where in each part, I'm staying inside a small arc. So in the extreme case of this where it is very easy to see is if x is some rational number, a over b, with b fairly small, then we can-- so then, this character is actually constant on APs with common difference b. Yep? AUDIENCE: Is theta supposed to be [INAUDIBLE]?? YUFEI ZHAO: Ah, so theta, yes. so theta-- thank you. So theta 2 pi. AUDIENCE: Like, is x equal to your theta? YUFEI ZHAO: Yeah. Thank you. So theta equals-- yeah. So if theta is some rational with some small denominator-- so then you are literally jumping in periodic steps on the unit circle. So if you partition N according to the exact same periods, you have that this character is exactly constant in each of your progressions. Now, in general, the theta you get out of that proof might not have this very nice form, but we can at least approximately achieve the desired effect. OK. Any questions? OK. So to achieve approximately the desired effect, what we'll do is to find something so that b times theta is not quite an integer, but very close to an integer. OK, so this, probably many of you have seen before. It's a classic pigeonhole-type result. It's usually attributed to Dirichlet. So if you have theta, a real number, and a delta, kind of a tolerance, then there exists a positive integer d at most 1 over delta such that d times theta is very close to an integer. OK, so this norm here is distance to the closest integer. All right, so the proof is by pigeonhole principle. So if we let N be 1 over delta rounded down and consider the numbers 0, theta, 2 theta, 3 theta, and so on, to N theta-- so by pigeonhole, there exists i theta and j theta, so two different terms of the sequence such that they differ by less than-- at most delta in their fractional parts. OK, so now take d to be difference between i and j. OK, and that works. OK. So even though you don't have exactly rational, you have approximately rational. So this is a-- it's a simple rational approximation statement. And using this rational approximation, we can now try to do the intuition here, pretending that we're working with rational numbers, indeed. OK, so if we take eta between 0 and 1 and theta irrational and suppose N is large enough-- OK, so here, C means there exists some sufficiently large-- some constant C such that the statement is true, OK? So suppose you think a million here. That should be fine. So then there exists-- so then one can partition 1 through N into sub-APs, which we'll call P i. And each having length between cube root of N and twice the cube root of N such that this character that we want to stay roughly constant indeed does not change very much. If you look at two terms in the same AP, in the sub-AP, then the value of this character on each P sub i is roughly the same. So they don't vary by more than eta on each P i. So here, we're partitioning this 1 through N into a sub-A piece so that this guy here stays roughly constant. OK. Any questions? All right. So think about how you might prove this. Let's take a quick break. 
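The rational approximation step is concrete enough to spell out as a small computation; here is a sketch in code (the function name is just for illustration, and the bound on d is stated up to rounding, matching the pigeonhole argument above):

    import math

    def dirichlet_approx(theta, delta):
        # Pigeonhole: among the fractional parts of 0, theta, 2*theta, ...,
        # n*theta (with n = ceil(1/delta)), two must land in the same interval
        # of width 1/n <= delta, so their index difference d satisfies
        # ||d*theta|| < delta, with 1 <= d <= ceil(1/delta).
        n = math.ceil(1 / delta)
        buckets = {}
        for i in range(n + 1):
            frac = (i * theta) % 1.0
            b = min(int(frac * n), n - 1)
            if b in buckets:
                return abs(i - buckets[b])
            buckets[b] = i
        raise AssertionError("unreachable: n+1 values in n buckets")

    # Illustration: d*theta should be within delta of an integer.
    theta, delta = math.sqrt(2), 0.05
    d = dirichlet_approx(theta, delta)
    print(d, abs(d * theta - round(d * theta)))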
So you see, we are basically following the same strategy as the proof from last time, but this second step, which we're on right now, needs to be somewhat modified because you cannot cut this space up into pieces where your character is constant. Well, if they're roughly constant then we're go to go, so that's what we're doing now. So let's prove the statement up there. All right. So let's prove this statement over here. So using Dirichlet's lemma, we find that there exists some d. OK, so I'll write down some number for now. Don't worry about it. It will come up shortly why I write this specific quantity. So there exists some d which is not too big, such that d theta is very close to an integer. So now, I'm literally applying Dirichlet's lemma. OK. So given such d-- so how big is this d? You see that because I assumed that N is sufficiently large, if we choose that C large enough, d is at most root N. So given such d, which is at most root N, you can partition 1 through N into subprogressions with common difference d. Essentially, look at-- let's do classes mod d. So they're all going to have length by basically N over d. And I chop them up a little bit further to get-- so a piece of length between cube root of N and twice cube root of N. OK. So I'm going to make sure that all of my APs are roughly the same length. And now, inside each subprogression-- let me call this subprogression P prime, subprogression P, let's look at how much this character value can vary inside this progression. All right. OK, so how much can this vary? Well, because theta is such that d times theta is very close to an integer and the length of each progression is not too large-- so here's-- I want some control on the length. So we find that the maximum variation is, at most, the size of P, the length of P, times-- so this-- that difference over there. So all of these are exponential, so I can shift them. Well, the length of P is at most twice cube root of N. And-- OK, so what is this quantity? So the point is that if this fractional part here is very close to an integer, then e to that, e to the 2 pi times that i times some number should be very close to 1, because what is happening here? This is the distance between those two points on the circle, which is at most bounded by the length of the arc. OK, so cord length, at most of the arc. So now, you put everything here together, and apply the bound that we got on d theta. So this is the reason for choosing that weird number up there. We find that the variation within each progression is at most eta, right? So the variation of this character within each progression is not very large, OK? And that's the claim. Any questions? All right, so this is the analogous claim to the one that we had-- the one that we used last time, where we said that the character is constant on each coset of the hyperplane. They're not exactly constant, but almost good enough. All right. So the goal of step 2 is to show an energy-- show a density increment, that if you have a large Fourier coefficient, then we want to claim that the density goes up significantly on some subprogression. And the next part, the next lemma, will get us to that goal. And this part is very similar to the one that we saw from last time, but with this new partition in mind, like I said. If you have A that is 3AP-free with density alpha, and N is large enough, then there exists some subprogression P. 
So by subprogression, I just mean that I'm starting with the original progression 1 through N, and I'm zooming into some subprogression, with the length of P fairly long, so the length of P is at least cube root of N, and such that A, when restricted to this subprogression, has a density increment. OK, so originally, the density of A is alpha, so we're zooming into some subprogression P, which is a pretty long subprogression, where the density goes up significantly, from alpha to roughly alpha plus alpha squared. OK. So we start with A, a 3AP-free set. So from step 1, there exists some theta with large-- so that corresponds to a large Fourier coefficient. So this sum here is large. OK, and now we use-- OK, so-- so step 1 obtains us, you know, this consequence. And from this theta, now we apply the lemma up there-- so we apply the lemma with, let's say, eta being alpha squared over 30. OK, so the exact constants are not so important. But we apply the lemma to partition 1 through N into a bunch of subprogressions, which we'll call P1 through Pk. And each of these progressions has length between cube root of N and twice cube root of N. And I want to understand what happens to the density of A when restricted to these progressions. So starting with this inequality over here, which suggests to us that there must be some deviation. OK, so starting with what we saw. And now, inside each progression this e x theta is roughly constant. So if you pretend they are actually constant, I can break up the sum, depending on where the x's lie. So i from 1 to k. And let me sum inside each progression. So by triangle inequality, I can upper bound the first sum by one where I now cut the sum into progression by progression. And on each progression, this character is roughly constant. So let me take out the maximum possible deviations from them being constant. So upper bound-- again, you'll find that we can essentially pretend-- all right, so if each exponential is constant on each subprogression, then I might as well just have this sum here. But I lose a little bit, because it's not exactly constant. It's almost constant. So I lose a little bit. And that little bit is this eta. So you lose that little bit of eta. And so on each progression, P i, you lose at most something that's essentially of alpha squared times the length of P i. OK. Now, you see, I've chosen the error parameter so that everything I've lost is not so much more than the initial bound I began with. So in particular, we see that even if we had pretended that the characters were constant, on each progression we would have still obtained some lower bound on the total deviation. OK. And what is this quantity over here? Oh, you see, I'm restricting each sum to each subprogression, but the sum here, even though it's a sum, is really counting how many elements of A are in that progression. So this sum over here is the same thing. OK, so let me write it on a new board. Oh, we don't need step 1 anymore. All right. So what we have-- OK, so the left-hand side over there is this quantity here, all right? We see that the right-hand side, even though you have that sum, it is really just counting how many elements of A are in each progression versus how many you should expect based on the overall density of A. OK, so that should look similar to what we got last time. And the intuition should be that, well, if the average deviation is large, then one of them, one of these terms, should have the density increment.
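Schematically, the chain of inequalities being set up here is the following, with c standing for an unimportant absolute constant (the lecture's specific constants, like the 30 above, are not reproduced here).

```latex
% Step 1 supplies the first inequality; the partition lemma and the triangle
% inequality give the rest. Constants are schematic.
c\,\alpha^{2} N
  \;\le\; \Bigl| \sum_{x=1}^{N} \bigl( 1_A(x) - \alpha \bigr)\, e(x\theta) \Bigr|
  \;\le\; \sum_{i=1}^{k} \Bigl| \sum_{x \in P_i} \bigl( 1_A(x) - \alpha \bigr)\, e(x\theta) \Bigr|
  \;\le\; \sum_{i=1}^{k} \Bigl( \bigl|\, |A \cap P_i| - \alpha |P_i| \,\bigr| + \eta\, |P_i| \Bigr) .
```

Since eta was chosen to be a small multiple of alpha squared, the eta times |P_i| terms add up to at most a fraction of the left-hand side, leaving the sum of | |A intersect P_i| minus alpha |P_i| | at least a constant times alpha squared N.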
If you try to do the next step somewhat naively, you run into an issue, because it could be-- now, here you have k terms. It could be that you have all the densities except for one going up only slightly, and one density dropping dramatically, in which case you may not have a significant density increment, all right? So we want to show that on some progression the density increases significantly. So far, from this inequality, we just know that there is some subprogression where the density changes significantly. But of course, the overall density, the average density, should remain constant. So if some goes up, others must go down. But if you just try to do an averaging argument, you have to be careful, OK? So there was a trick last time, which we didn't really need last time, but now, it's much more useful, where I want to show that if this holds, then some P i sees a large density increment. And to do that, let me rewrite the sum as the following, so I keep the same expression, and I add a term which is the same thing, but without the absolute value. OK, so you see these guys, they total to 0, so adding that term doesn't change my expression. But now, the summand is always non-negative. So it's either 0 or twice this number, depending on the sign of that number. OK. So comparing left-hand side and right-hand side, we see that there must be some i-- so hence, there exists some i such that the i-th term on the left-hand side is less than or equal to the i-th term on the right-hand side. And in particular, that term should be positive, so it implies-- OK, so how can you get this inequality? It implies simply that the restriction of A to this P i is at least alpha plus alpha squared over 40 times the size of P i. So this claim here just says that on the i-th progression, there's a significant density increment. If it were a decrement, that term would have been 0. So remember that. OK. So this achieves what we were looking for in step 2, namely to find that there is a density increment on some long subprogression. OK, so now we can go to step 3, which is basically the same as what we saw last time, where now we want to iterate this density increment. OK, so it's basically the same argument as last time, but you start with density alpha, and each step in the iteration, the density goes up quite a bit. And we want to control the total number of steps, knowing that the final density is at most 1, always. OK. So how many steps can you take? Right, so this was the same argument that we saw last time. We see that starting with alpha, alpha 0 being alpha, it doubles after a certain number of steps, right? So we double after-- OK, so how many steps do you need? Well, I want to get from alpha to 2 alpha. So I need at most alpha over 40 steps. OK, so last time I was slightly sloppy. And so there's basically a rounding up or down situation. But I should add a plus 1. Yeah. AUDIENCE: Shouldn't it be 40 over alpha? YUFEI ZHAO: 40-- thank you. 40 over alpha, yeah. So you double after at most that many steps. And now, you're at density at least 2 alpha. So we double again after at most 20 over alpha steps, and so on. And we double at most-- well, basically, log sub 2 of 1 over alpha times. OK, so anyway, putting everything together, we see that the total number of steps is at most on the order of 1 over alpha. When you stop, you can only stop for one reason, because in the-- yeah. So, yeah.
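The absolute-value trick from this step, isolated: write d_i for |A intersect P_i| minus alpha |P_i|, so that the d_i sum to 0, and (constants again schematic)

```latex
% Adding the signed sum (which is zero) makes every summand nonnegative.
\sum_{i=1}^{k} \bigl( |d_i| + d_i \bigr) \;=\; \sum_{i=1}^{k} |d_i| \;\ge\; c\,\alpha^{2} N,
\qquad |d_i| + d_i \in \{\, 0,\ 2 d_i \,\} .
```

So some i has |d_i| + d_i at least c alpha squared N over k, which forces d_i to be positive and of order alpha squared times |P_i| -- exactly the density increment on P_i.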
So in step 1, remember, the iteration-- the process can always go on, and it terminates only if the length-- so you're now at step i, so let N i be the length of the progression at step i-- is at most C times alpha i to the minus 12th. So we have a-- right, so provided that N is large enough, you can always pass to a subprogression. And here, when you pass to a subprogression, of course, you can re-label that subprogression. And it's now, you know, 1 through N i. Right, so all the progressions are basically the same as a prefix of the positive integers. So when we stop at step i, you must have N sub i being at most this quantity over here, which is at most C times the initial density raised to this minus 12th. So therefore, the initial length N of the space is bounded, because each time we went down by at most a cube root. Right, so if you stop at step i, then the initial length is at most N sub i raised to the power 3 to the i, since each time you're taking a cube root. OK, so you put everything together. There are at most that many iterations, and when you stop, the length is at most this. So you put them together, and then you find that N must be at most double exponential in 1 over the density. In other words, the density is at most on the order of 1 over log log N, which is what we claimed in Roth's theorem, so what we claimed up there. OK. So that finishes the proof. Any questions? So the message here is that it's the same proof as last time, but we need to do a bit more work. And none of this work is difficult, but it is more technical. And that's often the theme that you see in additive combinatorics. This is part of the reason why the finite field model is a really nice playground, because there, things tend to be often cleaner, but the ideas are often similar, or the same ideas. Not always. Next lecture, we'll see one technique where there's a dramatic difference between the finite field vector space and over the integers. But for many things in additive combinatorics, the finite field vector space is just a nicer place to be in to try all your ideas and techniques. Let me comment on some analogies between these two approaches and compare the bounds. So on one hand-- OK, so we saw last time this proof in F3 to the n, and now in the integers inside this interval of length N. So let me write uppercase N in both cases to be the size of the overall ambient space. OK, so what kind of bounds do we get in both situations? So last time, in F3 to the n, we got a bound which is of the order N over log N, whereas today, the bound is somewhat worse. It's a little bit worse. Now we lose an extra log. So where do we lose an extra log in this argument? So where do these two arguments differ, quantitatively? Yep? AUDIENCE: When you're dividing by 3 versus [INAUDIBLE]?? YUFEI ZHAO: OK, so you're dividing by-- so here, in each iteration, over here, the size of the space goes down by a factor of 3, whereas over here, it could go down by a cube root. And that's precisely right. So that accounts for this extra log in the bound. So while this is a great analogy, it's not a perfect analogy. You see there is this divergence here between the two situations. And so then you might ask, is there some way to avoid the loss, this extra log factor loss over here?
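For reference, the bookkeeping of the iteration just completed, in one line; the exponent 12 is as stated on the board, and the remaining constants are schematic.

```latex
% m = number of iterations; N_i = length of the progression at step i; alpha_i >= alpha.
m = O\!\Bigl(\tfrac{1}{\alpha}\Bigr), \qquad
N \;\le\; N_m^{\,3^{m}}, \qquad
N_m \;\le\; C\,\alpha_m^{-12} \;\le\; C\,\alpha^{-12}
\;\;\Longrightarrow\;\;
\log N \;\le\; 3^{\,O(1/\alpha)} \cdot O\!\bigl(\log \tfrac{1}{\alpha}\bigr)
\;\;\Longrightarrow\;\;
\alpha \;=\; O\!\Bigl(\tfrac{1}{\log\log N}\Bigr).
```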
Is there some way to carry out the strategy that we did last time in a way that is much more faithful to that strategy of passing down to subspaces? So here, we pass to progressions. And because we have to do this extra pigeonhole-type argument, something was lost-- we lost a power, which translated into this extra log. So it turns out there is some way to do this. So let me just briefly mention what's the idea that is involved, all right? So last time, the main objects that we were passing to-- we would start with a vector space and pass down to a subspace, which is also a vector space, right? So you can define subspaces in F3 to the n by the following. So I can start with some set of characters U, and I define-- some set of characters S, and I define U sub S to be basically the orthogonal complement of S. OK, so this is a subspace. And these were the kind of subspaces that we saw last time, because the S's or the R's that came out of the proof last time, every time we saw one, we threw it in. We cut down to a smaller subspace, and we repeat. But the progressions, they don't really look like this. So the question is, is there some way to do this argument so that what you end up with looks like that? And it turns out there is a way. And there are these objects, which we'll see more of later in this course, called Bohr sets. OK, so they were used by Bourgain to mimic this Meshulam argument that we saw last time more faithfully in the integers, where we're going to come up with some sets of integers that resemble-- much more closely resemble this notion of subspaces in the finite field setting. And for this, it's much easier to work inside a group. So instead of working in the integers, let's work inside Z mod nZ. So we can do Fourier transforms in Z mod nZ, so discrete Fourier analysis here. So in Z mod nZ, we define-- so given a set S, let's define this Bohr set to be the set of elements of Z mod nZ such that-- and this is what's really supposed to resemble the thing over here-- this quantity is small for all s in S. OK, so then we put that element into this Bohr set. OK, so these sets, they function much more like subspaces. So they are the analog of subspaces inside Z mod nZ, which, when n is prime, has no subgroups. It has no natural subspace structure. But by looking at these Bohr sets, they provide a natural way to set up this argument so that you can-- with many more technicalities-- repeat these kinds of arguments, more similar to last time, but passing not to subspaces but to Bohr sets. And then with quite a bit of extra work, one can obtain bounds of the form N over a poly log N. So the current best bound I mentioned last time is of this type, which is obtained through further refinements of this technique. The last thing I want to mention today is, so far we've been talking about 3APs. So what about four-term arithmetic progressions? OK, do any of the things that we talked about here work for 4APs? And there's an analogy to be made here compared to what we discussed with graphs. So in graphs, we had a triangle counting lemma and a triangle removal lemma. And then we said that to get the 4AP case, we would need the hypergraph version, the simplex removal lemma, hypergraph regularity lemma. And that was much more difficult. And that analogy carries through, and the same kind of difficulties come up. So it can be done, but you need something more.
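One common way to write the Bohr set just defined (conventions differ slightly between sources about strict versus non-strict inequality and how the width parameter is normalized):

```latex
% S is a set of "frequencies" in Z/nZ and epsilon a width parameter;
% ||.|| is again distance to the nearest integer.
\mathrm{Bohr}(S, \varepsilon) \;=\;
  \Bigl\{\, x \in \mathbb{Z}/n\mathbb{Z} \;:\;
    \bigl\lVert \tfrac{x s}{n} \bigr\rVert \le \varepsilon
    \ \text{for all } s \in S \,\Bigr\} ,
```

which plays the role in Z mod nZ that the subspace U sub S = { x : r dot x = 0 for all r in S } plays in F3 to the n.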
And the main message I want you to take away is that for 4APs, while we had a counting lemma that says that the Fourier coefficients, so the Fourier transform, control 3AP counts, it turns out the same is not true for 4APs. So the Fourier transform does not control 4AP counts. Let me give you some-- OK. So in fact, in the homework for this week, there's a specific example of a set which has uniformly small Fourier coefficients but the wrong number of 4APs. So the following is true-- OK, so it is true that you have Szemerédi's theorem in-- let's just talk about the finite field setting, where things are a bit easier to discuss. So it is true that the size of the biggest 4AP-free subset of F5 to the n is a tiny fraction-- a little-o(1) fraction-- of the entire space. OK, I use F5 here, because if I used F3, it doesn't make sense to talk about 4APs. So F5, but it doesn't really matter which specific field. So you can prove this using hypergraph removal, same proof, verbatim, that we saw earlier, if you have hypergraph removal. But if you want to try to prove it using Fourier analysis, well, it doesn't work quite using the same strategy. But in fact, there is a modification that would allow you to make it work. But you need an extension of Fourier analysis. And it is known as higher-order Fourier analysis, which was an important development in modern additive combinatorics that initially arose in Gowers' work, where he gave a new proof of Szemerédi's theorem. So Gowers didn't work in this setting. He worked in the integers. But many of the ideas originated from his paper, and then were subsequently developed by a lot of people in various settings. I just want to give you one specific statement of what this higher-order Fourier analysis looks like. So it's a fancy term, and the statements often get very technical. But I just want to give you one concrete thing to take away. All right, so for 4APs, the relevant higher-order Fourier analysis-- OK, so it also goes by the name quadratic Fourier analysis. OK, so let me give you a very specific instance of the theorem. And this can be sometimes called an inverse theorem for quadratic Fourier analysis. OK, so for every delta, there exists some c such that the following is true. If A is a subset of F5 to the n with density alpha and such that-- OK, so now lambda sub 4, so this is the 4AP density, so similar to 3AP, but now you write four terms-- the 4AP density of A differs from alpha to the fourth by a significant amount. OK, so for 3APs, then we said that A has a large Fourier coefficient, right? So for-- OK. For 4APs, that may not be true, but the following is true, right? So then there exists a non-zero quadratic polynomial f in n variables over F5 such that the indicator function of A correlates with this quadratic exponential phase. So in Fourier analysis, the conclusion that we got from the counting lemma is that you have some linear function f such that this quantity is large, this large Fourier coefficient. OK, so that is not true for 4APs. But what is true is that now you can look at quadratic exponential phases, and then it is true. So that's the content of higher-order Fourier analysis. I mean, that's an example of higher-order Fourier analysis. And you can imagine that with this type of result, and with quite a bit more work, you can try to follow a similar density increment strategy to prove Szemerédi's theorem for 4APs.
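In symbols, the inverse statement quoted here reads roughly as follows; the normalization of the 4AP count and the dependence of c on delta are schematic.

```latex
% Lambda_4(A) = E_{x,y} 1_A(x) 1_A(x+y) 1_A(x+2y) 1_A(x+3y), with omega = e^{2\pi i/5}.
\bigl| \Lambda_4(A) - \alpha^{4} \bigr| \;\ge\; \delta
\;\;\Longrightarrow\;\;
\exists\, f \ \text{a nonzero quadratic polynomial on } \mathbb{F}_5^{n}
\ \text{with}\ \
\Bigl| \mathbb{E}_{x \in \mathbb{F}_5^{n}} 1_A(x)\, \omega^{f(x)} \Bigr| \;\ge\; c(\delta) .
```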
YUFEI ZHAO: OK. So let's get started. So we spent quite a bit of time with graph theory in the first part of this course, and today I want to move beyond that. So we're going to talk about more central topics in additive combinatorics, starting with the Fourier analytic proof of Roth's theorem. We discussed Roth's theorem, and we gave a proof, during the course, using Szemerédi's graph regularity lemma, as well as the triangle removal lemma. Today, I want to show you a different approach to proving Roth's theorem that goes through Fourier analysis. So this is a very important proof, and it's one of the main tools in additive combinatorics. Let me remind you what Roth's theorem says. So Roth proved, in 1953, that if we write r sub 3 of n to be the maximum size of a 3-AP-free subset of 1 through n, then Roth showed that r sub 3 of n is little o of n. So in other words, if you have a positive density subset of the integers, then it must contain a three-term arithmetic progression. So what I said is equivalent to the statement here. So previously, we gave a proof using regularity. Actually, the regularity approach of Szemerédi was only found in the '70s, so Roth's original proof was through Fourier analysis. And we'll see that tomorrow. Today, we'll see a toy version of this proof. But it's not really a toy version. It has the same ideas, but in a slightly easier setting that has fewer technicalities. But before showing you that, let me just discuss a bit of history around Roth's theorem. We will show, next time, in the next lecture, this bound. By regularity, we get some bound, which is little o of n, but because of the use of regularity lemmas, it has a pretty poor dependence. We got something like n over log star n. Next lecture, and basically Roth's original proof, gives you a bound which is n over log-log n. So it's a much more reasonable bound. The current best upper bound known has the form, essentially, n over log n raised to 1 plus little o of 1, roughly n over log n. We do not know, or even have great guesses on, what the answer should be. So the best lower bound, and this is a construction that we saw earlier in the course due to Behrend, is of the form n over e to the c root log n. It seems it may be very difficult to improve this upper bound without some genuine new ideas. On the other hand, there is some evidence that the lower bound might be closer to the truth, in that there are variants of the Roth problem for which we know that the lower bound is basically the truth. What I want to do today is look at a variant of this problem in what's called a finite field model. And that basically just means we're going to be looking at Roth's theorem, not in the integers, but in some finite field vector space, specifically F3 to the n. So we're going to define r sub 3 of this F3 to the n to be the maximum size of a 3-AP-free subset of this finite field vector space. So the finite field model is really useful. We're going to see this again later in the course as well. Many of the ideas and techniques that work for the real problem, so to speak, many of those techniques also work in the finite field model, but they are technically simpler to execute. So this is often-- you can view it as a sandbox, a playground, for testing out many of the ideas. And once you have those ideas, then you can see if you can bring them to the integer setting. And this is a very successful program, and we'll see one aspect of what happens when we do this.
For this specific problem of Roth's theorem in F3 to the n, there are some nice interpretations of what this problem means. So here's a pretty easy fact: in F3 to the n, for three elements x, y, z, the following interpretations of what it means to be a 3AP are equivalent. So x, y, z form a 3AP. So 3AP means that y is x plus d and z is x plus 2d. Equivalently, they satisfy this equation, x minus 2y plus z equals zero. OK. In F3, minus 2 is plus 1. So it's the same as this even nicer looking equation, x plus y plus z equals 0. It also turns out to be the same as saying that x, y, z lie on a line. So a line has three points over F3. And the last condition: for every i, the i-th coordinates of x, y, z are all distinct or all equal. It's easy to check all these things are equivalent to each other. And the last one is a nice interpretation in terms of a game that many of you know, Set. So in the game Set, you have a bunch of cards. They have some number of properties, n properties, like color, the number of symbols, the shape. And you want to form a set, being three cards such that in every property, they're all the same or all different. So that's exactly this model over here. So what can we say about this problem? What's the size of the maximum subset of F3 to the n-th without a 3-AP? If you look at the proof that we did earlier in this course, the one using the triangle removal lemma, you see the proof works verbatim. Previously, we worked over Z mod N. Now, you work over a different group, same proof. So the triangle removal lemma tells you that this r3 is always little o of the size of the space. But we would like to do better. So this gives you something like log star. It's not a very good dependence. So we would like to do better. So what we will show today-- so this theorem is attributed to Meshulam. So in this case, the order of history is somewhat reversed. So we'll see the finite field toy model, but it historically actually came afterwards. But you'll see that the Fourier analytic proof that we'll see today is basically the same proof in the two settings. So for r3 of F3 to the n, we will prove a bound of 3 to the n over n, just like that. So much better than what you get from the regularity method. In terms of-- OK, so let me tell you a bit more about the history of this problem in terms of what we know about upper bounds and lower bounds. So let me say more about F3. So what's the best that you might hope for? So the best lower bound is due to Edel. And it's some construction, some very specific construction, which gives you a bound that's something like 2.21 to the n-th. And the upper bound was of the form 3 minus little o of 1, raised to the n-th. So for a long time, it was open whether the answer should be basically roughly like 3 to the n-th or some constant less than 3 to the n-th. And improvements on the upper bound were very slow-- there were some very difficult works that nudged that down just a little bit. And then a few years ago, there was this incredible breakthrough, where in this paper that was just a couple pages long, they managed to significantly improve the upper bound to basically 2.76 to the n-th. So this was an incredible breakthrough that happened just a few years ago. And we'll talk about this proof in a couple of lectures. It turns out this proof, which uses what's now called the polynomial method-- so not Fourier analytic, but a different method-- unfortunately does not seem to generalize to the original Roth's theorem.
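A tiny self-contained check of the coordinate-wise equivalence just stated -- this is an illustration added here, not something run in lecture.

```python
# In F_3, a + b + c = 0 holds exactly when a, b, c are all equal or all distinct,
# which is the "Set" rule applied in a single coordinate.
from itertools import product

for a, b, c in product(range(3), repeat=3):
    sums_to_zero = (a + b + c) % 3 == 0
    all_same = (a == b == c)
    all_distinct = len({a, b, c}) == 3
    assert sums_to_zero == (all_same or all_distinct)
```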
In fact, you shouldn't expect it to generalize in a straightforward way, because up there you we know that you do not have a power saving, whereas here you have a power saving. So the exponent goes down. OK, so this is roughly the history of this theorem. Any questions? AUDIENCE: Do we have to have [INAUDIBLE]?? YUFEI ZHAO: Yeah. So I'll-- So I can tell you this is known as a Croot-Lev-Pach. So I'll say more about it in a couple lectures. But this for F3 is due to Ellenberg and Gijswijt. So I'll tell you more about it in a couple lectures. What I want to focus on today is the Fourier analytic nature of the proof that gives you this bound up there, 3 to the n over n. And it may seem like a completely different topic compared to what we've been doing so far in the course, which is more about graph theory. But I want you to think about what are the relationships between what we'll see today and what we've seen so far. And there are lots of connections. So even though the proof may superficially look quite different, many of these ideas about quasirandomness versus structure will come up. And I want to present the proof in a way that highlights the similarities between what we did previously and this Fourier analytic proof. So let's talk about the strategy. In the proof of the Szemerédi graph regularity lemma, we had the strategy that we called the energy increment strategy. So you start-- you want to find a good partition. You start doing partitioning. And you keep track of this thing called the energy-- must go up at every step, cannot go up forever, so has a bounded number of steps. This strategy for Roth's theorem is also an important strategy. It's a variant of energy increment, but now density increment. So we start with a set-- A subset of F3 to the n-th, and we would like to understand something about its structure versus pseudorandomness in a way that is similar to when we discussed the similar issue for graphs. In particular, there will be this dichotomy that if A is in some sense pseudorandom-- so earlier, we saw what it means for a graph to be pseudorandom. So now, what does it mean for a subset of F3 to the n-th to be pseudorandom. So we'll address that today. If A pseudorandom-- OK, so the short answer is that it is Fourier uniform-- in other words, all Fourier coefficients small. That's what pseudorandom will refer to. So then there is a counting lemma. And the counting lemma will in particular imply that A has lots of 3-APs. So then you find your 3-AP. If this is not the case, then-- so what's the opposite of Fourier uniform-- is that A now has some large Fourier coefficient. And what we'll do is to use this Fourier coefficient to extract some codimension 1 affine subspace-- it's also called a hyperplane-- where the density of A goes up significantly, if you restrict to that sub hyperplane. And you can repeat this process. Now, restrict to this hyperplane and ask yourself the same question. Is A, when restricted to this hyperplane, pseudorandom? In which case, we find APs. Or is A restricted to this hyperplane, does it have a large Fourier coefficient? In which case, we restrict further. And each time you iterate, you obtain a density increment. And the density increment cannot go on forever, because your total density is at most 1. So the number of steps must be bounded. So that's the strategy. So this should remind you somewhat of the energy increment strategy from Szemerédi's regularity lemma, although there are some fundamental differences. We're not doing partitionings. 
Any questions about this strategy? OK. I want to tell you about Fourier analysis. So probably, all of you have seen some version of Fourier analysis, maybe in your calculus class with Fourier series and whatnot. So you play with formulas, and solve some differential equations. So I want to give you more than just a bunch of ways about handling Fourier coefficients, a way to think about Fourier analysis. So think of this as a crash course about Fourier analysis from the perspective of combinatorics. And Fourier analysis, I think, it's much easier if you work in a finite group, in a finite abelian group, which is what we're doing here. Many of the technicalities go away. So we'll be looking specifically at Fourier analysis in F3 to the n-th, although the 3 can be any prime. So it's really the same. So the main actors in Fourier analysis are the Fourier characters. The Fourier characters are denoted gamma sub r. And they're characters on the group, meaning that they are maps which-- so they turn out, happen to be homomorphisms for the multiplicative group under-- so C under multiplication. And they're indexed by r, which also elements of F3 to the n-th. So I'm going to be fairly concrete here. There are ways to do this more abstractly. But I'll be fairly concrete. So it's defined by gamma sub r evaluated on x equals to omega raised to r dot product x, where here omega is a third root of unity and the dot is a dot product. So-- So that's the definition of the Fourier transform-- sorry, that's the definition of the Fourier characters. And once you have the Fourier characters, you can have this Fourier transform, just defined as follows. If you start with a function-- let's say, a complex-valued function on your space-- then I define the Fourier transform to be another function, like that, defined by the following formula. So that's the formula for the Fourier transform. It is basically the inner product between F and the Fourier character. So let me make the comment here, I think this is actually a pretty important comment, about the normalization. Now, when you first learn Fourier transforms, usually in the reals, there are all these questions about what number to put in the exponent. Is it 2 pi? Is it root 2 pi? Is it some other thing? And somehow, one answer is better than the others. And the same thing is true here in groups. So we'll stick with the following convention-- and I want all of you to stick with this convention, otherwise, we'll confuse ourselves to no end-- is that for a finite group-- actually, let me, start of a board. So the convention is that, in a finite group, the Fourier transform is defined-- and more generally, anything you do in the physical space, we always use the averaging measure. Don't sum, always average in the physical space. And in the frequency space, always use sums, use the counting measure. Keep this in mind. Any of these questions about normalization, if you stick with this convention, things will become much easier. So there won't be any of these questions about when you take the inverse Fourier transform, do I put an extra factor in front or not, if you stick with this correct convention. So with that convention in mind, what the Fourier transform really is is inner product between F and a Fourier character. There are some important properties of the Fourier transform. So let me go through a few of the key properties that we'll need. The first one is pretty easy. What is the meaning of the 0-th Fourier coefficient? 
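Written out with the averaging convention just emphasized; whether the minus sign sits in the character or in the transform is a convention choice, and the formulas below fix one consistent choice.

```latex
% Characters and Fourier transform on F_3^n, with omega = e^{2\pi i/3}.
\gamma_r(x) \;=\; \omega^{\,r \cdot x},
\qquad
\hat f(r) \;=\; \langle f, \gamma_r \rangle
          \;=\; \mathbb{E}_{x \in \mathbb{F}_3^{n}} f(x)\,\omega^{-r \cdot x},
\qquad r \in \mathbb{F}_3^{n} .
```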
You plug it in, and you see that it is just the average of F. So 0-th coefficient is the average of F. The second fact goes under one of two names, and they're often used interchangeably-- Plancherel or Parseval. And it says that if you look at the inner product in the physical space, then this product is preserved, if you take the Fourier transform. But now, of course, you're in the frequency space, so you should sum instead of doing the inner product. So this identity can be proved in a fairly straightforward way by plugging in what the definition is for the Fourier transform. This is a straightforward computation I'm not going to do on the board, but I highly encourage you to actually do at home, just to do it once to make sure you understand how it goes. But there is also a more conceptual way to understand this identity. And that's because-- now, this is also important to understand what the Fourier transform is. It's not just some magical formula somebody wrote down, like this is a very natural operation. It's because the characters, the set of characters, is an orthonormal basis. So the Fourier characters form an orthonormal basis. As a result, what the Fourier transform is is a unitary change of basis. You can check. It's very straightforward to check that the Fourier characters, indeed, form a orthonormal basis, because-- well, you can evaluate the inner product between two Fourier characters. So remember, in the physical space, we're always doing averaging. And so now, I'll just write down first what I mean by the inner product. So that's the inner product. And by the definition of the Fourier character, you have that. So think about what this expectation is-- unless r equals to s, in which case, this expectation is 1. Unless that is the case, you always have some coordinate of x in the exponent. So as you average over all possibilities, they average out to 0. So this calculation shows you that the Fourier characters form a orthonormal basis. And a basic fact you know from linear algebra is that if you do a change of basis, if you do a unitary change of basis, then inner product is preserved. It's like a rotation. It's the same-- so you're not changing the inner product. So the inner product is preserved under this change of basis. And that's why Plancherel is true. Another important thing is what's known as the Fourier inversion formula. The Fourier transform tells you how to go from a function to the Fourier transform. Well, now, if you are given the Fourier transform, how do you go back? There's a formula which tells you that you can go back by the following formula there. So that's the Fourier inversion formula. It allows you to do this inversion. And again, it's one of these formulas where I encourage you to try it out yourself by plugging in the formula and expanding. And it's pretty easy to check. It's much easier in the finite field setting, by the way. So if you use the usual Fourier transform on the real line, there are some technicalities even to prove the Fourier inversion. But in finite groups, it's almost trivial. You expand, and then you'll see. So it's very easy to prove. But you can also see this Fourier inversion formula more conceptually, because you're in a unitary change of basis. So to go back, well, think about what it means in linear algebra to revert a unitary transformation. You simply multiply the coefficients with the coordinates. Orthogonal, orthonormal change of basis. Finally, Fourier transform behaves while under convolution. 
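For the record, the identities just discussed, in the same convention:

```latex
% Plancherel/Parseval, orthonormality of the characters, and Fourier inversion.
\langle f, g \rangle \;=\; \mathbb{E}_{x}\, f(x)\overline{g(x)}
   \;=\; \sum_{r} \hat f(r)\,\overline{\hat g(r)},
\qquad
\langle \gamma_r, \gamma_s \rangle \;=\; \mathbb{E}_{x}\, \omega^{(r-s)\cdot x} \;=\; 1_{\{r = s\}},
\qquad
f(x) \;=\; \sum_{r} \hat f(r)\, \omega^{\,r \cdot x} .
```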
So by convolution, we define the convolution of two functions, f and g, using the following formula. And so then the claim is that the Fourier transform behaves very well under convolution. It's basically multiplicative under convolution. So what this means is, if I put in-- so it's pointwise true everywhere. Again, very easy proof, because I just evaluate the left hand side, see what-- plug in the formula for the Fourier transform. And I find it's that. And now, I plug in the formula for convolution. So now, you can do a change of variables. And then you-- it's not hard to see. You eventually end up at the right hand side. So these are some of the properties. So there are important properties of the Fourier transform. So this is something that, whenever you learn about Fourier transform, you always see these few properties. And so we'll use them. But we'll also need another property that is specific to the analysis of 3-term arithmetic progressions. So what does Fourier transform have to do with 3-APs? So we want to use it to prove Roth's theorem. So we better have some tool that allows us to analyze the number 3-APs. And here is a key identity relating Fourier with 3-APs. And it's that, if you have three functions, then the following quantity, which relates the number of 3-APs-- So this function basically counts the number 3-APs, if your f, g, and h are indicator functions of a set. I want to express this formula in terms of the Fourier transforms of these functions. The formula turns out to be fairly simple, that it is simply that. So it's a single sum over the r's of f hat of r, g hat of minus 2r, and h hat of r. You might wonder why I put a minus 2 here, because minus 2 is r, and it looks certainly much nicer with just r in there. And that is true. This formula as written is true for over any group. And our proof will show it. So it's not really about F3 at all, but any group. So let me prove this for you in a couple of different ways. So the first proof is basically a straightforward no thinking involved proof, as in we apply these formula for using either Fourier inversion or the inverse Fourier transform, and plug it in, and expand, and check. So it's worth doing at least this once. So let's do this together at least once. But this something that is a fairly straightforward computation. The left hand side can be expanded using Fourier inversion. so r1 f hat r1 omega to the minus r1 dot x, and sum over r2 g hat r2 omega to the minus r2 dot x plus y, and then finally sum over r3 h hat of r3 omega to the minus r3 dot x plus 2y. So I'm using Fourier inversion, replace f, g, and h by their Fourier transforms. Oh, sorry, there should be-- yeah, so no minus. So now, we exchange sums and expectations, do a switch in the order of summation, so r1, r2, r3 and f hat of r1 g hat of r2 h hat of r3. And you have this expectation over x and y and omega of x dot r1 plus r2 plus r3. In fact, I can even write the x and y separately, y omega to the y dot r2 plus 2r3. So just rearranging. And now, you see that as you take expectation over x, this expectation is equal to 1, if r1 plus r2 plus r3 is equal to 0, and 0 otherwise. And likewise, the third expectation is either 1 or 0, depending on the sums of r2 and r3. So the only terms that remain, after you take out these 0's, are cases where both these two equations are satisfied. And then you see that the only remaining terms are basically the ones given in the sum on the right hand side. OK? So that's the proof. 
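Here is a short numerical sanity check of the 3-AP identity for small n; it is an added illustration using the averaging convention above, not code from the course.

```python
# Checks numerically that  E_{x,y} f(x) g(x+y) h(x+2y) = sum_r fhat(r) ghat(-2r) hhat(r)
# over F_3^n, with the averaging convention fhat(r) = E_x f(x) omega^{-r.x}.
import cmath
import random
from itertools import product

n = 2
pts = list(product(range(3), repeat=n))              # all of F_3^n
omega = cmath.exp(2j * cmath.pi / 3)

def dot(r, x):
    return sum(a * b for a, b in zip(r, x)) % 3

def shift(x, y, c=1):
    return tuple((a + c * b) % 3 for a, b in zip(x, y))   # x + c*y in F_3^n

def fourier(f):
    # average over the physical space, per the convention stressed in lecture
    return {r: sum(f[x] * omega ** (-dot(r, x)) for x in pts) / len(pts)
            for r in pts}

rng = random.Random(0)
f, g, h = ({x: complex(rng.random(), rng.random()) for x in pts} for _ in range(3))

lhs = sum(f[x] * g[shift(x, y)] * h[shift(x, y, 2)]
          for x in pts for y in pts) / len(pts) ** 2
fh, gh, hh = fourier(f), fourier(g), fourier(h)
minus2 = lambda r: tuple((-2 * a) % 3 for a in r)    # equals r itself in F_3
rhs = sum(fh[r] * gh[minus2(r)] * hh[r] for r in pts)
assert abs(lhs - rhs) < 1e-9
```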
Pretty straightforward, you plug in Fourier inversion. I want to show you a different proof that hopefully will be more familiar and more conceptual. Now, it doesn't involve carrying through this calculation, even though this is not at all hard calculation. But first, let me rewrite the formula up there. So in F3, it will be convenient, and so the formula is actually slightly easier to interpreting F3, in F3, the identity says that, if you look at the quantity-- I need. So let me give you a second proof that works just in F3, but you can modify it to work in other groups. But in F3, it's particularly nice. The left hand side, you see, the left hand side, I can rewrite it as the following form, where I sum over-- well, I take expectation over all triples x, y, z that sum to 0. Because a 3-AP is the same as three elements, three points in the vector space summing to 0. But now, you see that this quantity is the same as the convolution evaluated at 0-- so if you extend the definition of convolution to more than one func-- more than two functions. But now, we apply Fourier inversion. And we find that-- OK. So by Fourier inversion, you have that. But now, by the identity that relates the Fourier transform and inversion, you have that. And that's the proof, because minus 2 r is the same as r. So it's shorter, because we're using some properties here about convolution and-- yeah, so about the convolution. That formula up there is, of course, related to counting 3-APs. Because if f, g, and h are all indicators of some set, then the left hand side is the same as basically the number of triples of elements in A whose sum is equal to 0. And the right hand side is the sum of the third power of the Fourier coefficients. And this formula should look somewhat familiar, because we also used this kind of formula back when we discussed spectral graph theory. And remember, the third moment of the eigenvalues is the trace of the third power, which counts closed walks in Cayley graph. So this is actually the same formula. So in the case if A is symmetric, let's say, then this is the same as the formula that counts closed walks of length three in the Cayley graph. The point of this comment is just to tell you that Fourier transform is somehow it's not this brand new concept that we've never seen before. It is intimately tied to many of the things that we have seen earlier in this course but in disguise. So it is related to the spectral graph theory that we discussed at length earlier in this course. Now that we have the Fourier transform, I want to develop some machinery to prove Roth's theorem following the strategy up there. So let's take a quick break. And then when we come back, we'll prove Roth's theorem. Any questions so far? AUDIENCE: So for this one, you said-- is it like we only used the fact that it's F3 to the n-th at the end of the proof, like in that last step? YUFEI ZHAO: OK. So question is, where do we use that in F3 to the n-th? This formula here holds in every finite abelian group, if you use the correct definition of Fourier transform with the averaging normalization. So in the other formula where you replace minus 2 by 1, that requires F3. But you can-- I mean, you can follow the proof and come up with a similar formula for every equation. 
So there's a general principle here, which I'll discuss more at length in a bit, that for patterns that are governed by a single equation-- in this case, 3-APs, x minus 2y plus z equal to 0-- patterns that can be governed by a single equation can be controlled by a Fourier transform. So let's begin our proof, the Fourier analytic proof of Roth's theorem in F3 to the n-th. AUDIENCE: So at the end, you said it was going to be connected with counting the [INAUDIBLE] graph. Does this mean that the Fourier transform of the indicator of A, those are exactly the eigenvalues of the Cayley graph? Or is it like [INAUDIBLE]? YUFEI ZHAO: OK, so you're asking about the final step, where we're talking about-- so I mentioned that there was this connection between counting walks in graphs and spectral graph theory. So you can check that, if you have a subset A of an abelian group, then the Fourier transforms of A are exactly the eigenvalues of the Cayley graph. AUDIENCE: So then I guess, have we done anything so far that could have been done in a spectral way yet? Well, I guess, where is the Fourier analysis better than the spectral [INAUDIBLE]? YUFEI ZHAO: OK. Question, where is the Fourier analysis better than the spectral posts? Well, let's see the proof first. And then you'll see, yeah. So there's no graphs anymore. So we're going to work inside F3 to the n-th. But just like the proof of regularity in counting, we're going to have a counting lemma. So all of these are analytic. And at this point, they should be very familiar to you. They may come in a different form. They may be dressed in different clothing. But it's still a counting lemma. So let's see. The counting lemma, in this case, says that if you are in the setting of A F3 to the n-th-- and I'm going to throughout write the density of A as alpha, then let me write, let me define this lambda 3 of A to be the function which basically counts 3-APs in A but with the averaging normalization. So this is-- we saw this earlier. So the counting lemma says that this normalized number of 3-APs in A-- so including trivial 3-APs; that's why this is a nice analytic expression-- differs from what you might guess based on density alone. This difference should be small if all the nonzero Fourier coefficients of A are small. So in this strategy, I said that if-- so the counting lemma tells you, if A is Fourier uniform, then it is pseudorandom. And this is where it comes in. If A has all small Fourier coefficients, then you have a counting lemma, which tells you that the counts of A should not be so different from the guess based on density. So the proof is very short. It's based on the identity that we saw earlier. The 3-AP count of A by the identity earlier is simply the third power of the Fourier transforms. And all of these calculations should be reminiscent, because we've done these kind of calculations in some form or another earlier in this course. So we're going to separate out the main term and subsequent terms. So the main term is the one corresponding to r equals to 0. So that's the density. And all the other terms I'm going to lump together into this sum. So we now know that the difference we're trying to bound is upper bounded by the third moment of the absolute values of the Fourier transform. And I want to upper bound this quantity here, assuming that all of these Fourier coefficients are small. We've also done this kind of calculations before. So where have we seen this before? 
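The statement being proved, in symbols (this is just the displayed inequality restated, with alpha the density of A):

```latex
% Counting lemma over F_3^n.
\Lambda_3(A) \;=\; \mathbb{E}_{x, y \in \mathbb{F}_3^{n}}\,
   1_A(x)\,1_A(x+y)\,1_A(x+2y),
\qquad
\bigl| \Lambda_3(A) - \alpha^{3} \bigr|
   \;\le\; \alpha \cdot \max_{r \neq 0} \bigl| \widehat{1_A}(r) \bigr| .
```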
We saw this calculation earlier in the class with the 3 replaced by a 4. So in counting four cycles, in the proof of the equivalences for quasirandomness, we said that, if all the eigenvalues other than the top one are small, then you can count four cycles. It's the same proof. And remember, in that proof, there was an important trick, where you do not uniformly bound each term by the max, because then you lose. You lose by an extra factor of n that you don't want. So you only take out one factor. So you take out one factor. And you keep the rest in there. In fact, I can be more generous and even throw the r equal to 0 term back in. And now, by Plancherel-- so by Plancherel/Parseval, this here is equal to the expectation of the indicator function of A squared. So you take this-- you go back to physical space, and that's simply the density. So then that proves the theorem. So the moral of the counting lemma is the same as the one that we've seen before when we discussed graphs. If you're pseudorandom, then you have good counting. And here, pseudorandom means having small Fourier coefficients, uniformly small Fourier coefficients. So now, let's begin the proof of Roth's theorem. The Roth's theorem proof will have three steps. In the first step, we will observe that if you're a 3-AP-free set, then there exists a large Fourier coefficient. Throughout, I'm going to use uppercase N to denote the size of the ambient group. And specifically, we will prove the following. And also throughout, A is a subset of F3 to the n-th with density alpha. So I'll keep this convention throughout this proof. We will show that if A is 3-AP-free and N is at least 2 times alpha to the minus 2-- so N is at least somewhat large-- then there exists a nonzero r such that the r-th Fourier coefficient is at least alpha squared over 2. If you're 3-AP-free, then provided that you're working in a large enough ambient space, you always have some large Fourier coefficient. So the proof is essentially-- well, this claim is essentially a corollary of the counting lemma. And we use the fact that in a 3-AP-free set-- what is the quantity lambda 3 of A? Up there, you only have the trivial 3-APs present. So this quantity lambda must then be the size of A divided by N squared, or alpha over N, which precisely counts the trivial 3-APs. So by the counting lemma, we then have that the upper bound on the right hand side, which we now write as alpha times the max over nonzero r's, is at least alpha cubed minus the lambda 3 term, which is alpha over N. So provided that N is large enough-- big N is large enough-- the trivial 3-APs should not contribute very much. So I can lower bound the right hand side by, let's say, alpha cubed over 2. So then you deduce the conclusion. I want you to think about how this proof is related to Szemerédi's graph regularity lemma. The analogy will break down at some point. But we've seen this step before as well. From lack of 3-APs, you extract some useful information, and from this we'll extract some structure. And the structure here-- and this is where the proof now diverges from that of regularity-- having a large Fourier coefficient will now imply a density increment on a hyperplane. Specifically, if you have-- so keeping the same convention as before, if the Fourier coefficient of A at r is at least delta for some nonzero r, then A has density at least alpha plus delta over 2 when restricted to a hyperplane.
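Unpacking step 1 in symbols: for a 3-AP-free A, the only solutions of x, x + y, x + 2y all lying in A are the trivial ones with y = 0, so

```latex
% Step 1: a large Fourier coefficient from 3-AP-freeness.
\Lambda_3(A) \;=\; \frac{|A|}{N^{2}} \;=\; \frac{\alpha}{N},
\qquad
\alpha \max_{r \ne 0} \bigl| \widehat{1_A}(r) \bigr|
  \;\ge\; \alpha^{3} - \frac{\alpha}{N}
  \;\ge\; \frac{\alpha^{3}}{2}
  \quad \text{whenever } N \ge 2\alpha^{-2},
```

and dividing by alpha gives the claimed coefficient of size at least alpha squared over 2.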
So if you have a large Fourier coefficient, then I can pass down to a smaller part of the space where the density of A goes up significantly. To see why this is true, let's go back to the definition of the Fourier coefficient, the Fourier transform. So recall that the Fourier transform is given by the following formula, where I'm looking at this expectation over points in F3 to the n-th of the indicator of A multiplied by this Fourier character. And you see that this function here, it is constant on cosets of the hyperplane defined by the orthogonal complement of r. So the value of this dot product is constant on each of the three cosets. So I can rewrite this expectation simply as 1/3 of alpha 0 plus alpha 1 omega plus alpha 2 omega squared, where alpha 0, alpha 1, alpha 2 are the densities of A on the three cosets of r perp. So I group this expectation into these three hyperplanes. So now, you see that if this guy is large, then you should expect that alpha 0, alpha 1, and alpha 2 are not all too close to each other. So if they were all equal to each other, you would get 0. But you should not expect them to be too close to each other. In particular, we would want to say that one of them goes up-- is much bigger than alpha. So one of these must be much bigger than alpha. That's an elementary inequality. This is something that I'm sure, if I give you five minutes, you can figure out. But let me show you a small trick to show this. And the reason for this trick is because in the next lecture, when we look at Roth's theorem over the integers, we'll need this extra trick. And the trick here is this. We now know that, because of the hypothesis, 3 delta is a lower bound on the absolute value of alpha 0 plus alpha 1 omega plus alpha 2 omega squared. OK, so note here that the average of the three alphas is equal to the original alpha, by definition of density. So this inner sum I can rewrite like that. So the sum of the three roots of unity adds up to 0. And now, I apply the triangle inequality to extract the terms. So now, you should already be able to deduce that one of the alpha i's has to be significantly different from alpha-- and since their average is alpha, one of them has to be significantly larger. But let me do this one extra trick, which we'll need next time, which is that let me add an extra term like that, which sums out to 0. But now, you see that each summand is always nonnegative. So one of the-- so there exists some j such that delta lower bounds the j-th summand. And if you look at what that means-- the j-th summand is 0 unless alpha j is bigger than alpha, in which case it's twice the difference alpha j minus alpha-- you get that alpha j is at least alpha plus delta over 2. Good. So we obtained a density increment on this hyperplane. And finally, I want to iterate this density increment. So I want to iterate this density increment-- so the summary so far is that if A is 3-AP-free with density alpha and N at least 2 times alpha to the minus 2, then A has density at least alpha plus alpha squared over 4 on some hyperplane. So combining step one and step two, we obtain this conclusion. Well, I can now repeat this operation. So I can repeat by restricting A to this hyperplane. If A is originally 3-AP-free, and I restrict it to a hyperplane, it's still 3-AP-free. So I can keep going. I can keep going provided that my space is still large enough, because I still need this lower bound on N. So don't forget this one here.
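Unpacking the step 2 computation above, with alpha_0, alpha_1, alpha_2 the densities of A on the three cosets of the hyperplane r-perp and using 1 + omega + omega squared = 0:

```latex
% Step 2: from a large Fourier coefficient to a density increment.
3\delta \;\le\; \bigl| \alpha_0 + \alpha_1\omega + \alpha_2\omega^{2} \bigr|
        \;=\; \Bigl| \sum_{j=0}^{2} (\alpha_j - \alpha)\,\omega^{j} \Bigr|
        \;\le\; \sum_{j=0}^{2} \bigl| \alpha_j - \alpha \bigr|
        \;\le\; \sum_{j=0}^{2} \Bigl( \bigl| \alpha_j - \alpha \bigr| + (\alpha_j - \alpha) \Bigr),
```

so some j has |alpha_j minus alpha| plus (alpha_j minus alpha) at least delta; since that quantity vanishes unless alpha_j exceeds alpha, that j satisfies alpha_j at least alpha plus delta over 2.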
So I can keep going as long as N is-- so I'm using N sub j to denote what happens after the j-th step. I can keep going as long as this is still satisfied. But of course, you cannot keep on going forever, because the density is bounded. So density cannot exceed 1. So these two will give you a bound on the total dimension. So let's work this out. So let alpha i denote the density after step i in this iteration. And we see from over here that you start with density alpha, and each step you go up by an increment, which is basically what-- so you go up by some increment. And you want to know, if you start with alpha, how many steps at most can you take before you exceed 1. So can you give me some bound? So what's the maximum-- at most, how many steps? So we know that the density cannot exceed 1. AUDIENCE: 4 by alpha squared. YUFEI ZHAO: So you see that you have at most 4 over alpha squared steps, because the density is at most 1. And if you plug this in, you get something which is not quite what I stated. It turns out that if you plug this in, you find that the size of A is at most 3 to the n-th over square root n. So let me do a little bit better than simply using that this term here is at least alpha squared over 4. And the point is that when you increment, you increment faster and faster. So I can use that to give a better bound on the number of steps. And here's the way to see it. So let me-- we can do better. So starting at alpha, I now ask, how many steps do you need to take before it doubles? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: It goes up by alpha squared over 4 each step. So it doubles after at most 4 over alpha steps, at which point this new alpha becomes at least twice the original alpha. But now, you keep going. How many steps does it take to double again? At most 2 over alpha, because alpha became twice as much. So it doubles again after at most 2 over alpha steps. And then you keep going. The next iteration is 1 over alpha. So in total-- so you see that we must stop after at most 8 over alpha steps. So the number of steps it takes to double decreases by at least half each time. So now, we know that-- we see that the-- so you keep on going. So you must stop after at most 8 over alpha steps. What is the final density when you have to stop-- because when are you forced to stop? You are forced to stop if you run out of space. So you're forced to stop when you run out of space. So if the process terminates after m steps-- so we're at density alpha m-- then the final subspace has size less than 2 times alpha m to the minus 2, which is-- So now, I use that alpha m is at least alpha. So the initial N is upper bounded by what? So how many steps did you take? You took at most 8 over alpha steps. Each of those steps, you pass down to codimension what? You lose a dimension for each step. And the final subspace has at most that much space. So the final dimension is, basically, log 1 over alpha. So put them together, and we see that the dimension of the space originally is at most on the order of 1 over alpha. Yeah. AUDIENCE: Should this be a lower case n? YUFEI ZHAO: Thank you. Yeah. This should be a lower case n, so the dimension. Good. And OK, so then that's the conclusion, that the density alpha is big O of 1 over n. That proves the main theorem for today, so Roth's theorem over F3 to the n-th. So we went through this Fourier analytic proof. Next lecture, we will see the same proof again, but done in the integers, for an interval.
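The final accounting in one line; the threshold 2 alpha to the minus 2 is the one from step 1's hypothesis, and each iteration drops the dimension by exactly 1.

```latex
% m = number of iterations; alpha_m >= alpha is the final density.
m \;\le\; \frac{4}{\alpha} + \frac{2}{\alpha} + \frac{1}{\alpha} + \cdots \;\le\; \frac{8}{\alpha},
\qquad
3^{\,n - m} \;<\; 2\alpha_m^{-2} \;\le\; 2\alpha^{-2}
\;\;\Longrightarrow\;\;
n \;\le\; \frac{8}{\alpha} + \log_3\!\bigl(2\alpha^{-2}\bigr) \;=\; O\!\Bigl(\frac{1}{\alpha}\Bigr),
```

which rearranges to alpha = O(1/n), i.e. |A| = O(3^n / n).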
And there, there are some difficulties that we don't see over here. Because in the finite field space, in the finite field model, there's this very nice idea of looking at subspaces, so looking at hyperplanes. Each Fourier coefficient gets you down to one dimension less. But when you're working in the integers, there are no subspaces you can use. So we'll be looking at ways to get around the lack of subspaces. And this is why I said in the beginning that the finite field model is often a very good playground for additive combinatorics type techniques, especially Fourier analytic techniques. Because in the additive-- in all of these techniques, they just come out to be much cleaner. If you're working in a finite field setting, you have nice subspaces, you have Fourier transform in a very clean way. The Fourier transform always takes, in this case, one of three values. Everything's very clean. Everything's very simple. And you get to see the idea here. You get to see the sense of the increment argument. But once you understand those ideas and you're willing to do more work, then oftentimes, you can bring those ideas to other settings, to other abelian groups, to the integers, for instance, but with more work in the-- there are some extra ingredients that you need to use. I mentioned that there was a bound-- OK, so initially-- So next time, we'll see that. Next time, we'll see what happens over the integers. Any questions? Yes. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: OK, great. So question is why the process must stop after at most 8 over alpha steps? So you know that the density doubles after this many steps, doubles again after that many steps. So eventually, if it keeps on doubling, it cannot keep on doubling forever. So this process cannot keep on doubling forever. So it must stop-- so cannot double more than log base 2 of 1 over alpha times. And that point, you have to stop. So how many steps have you taken? Well, you sum this geometric series. So this-- and the next thing is that you sum this geometric series. And that geometric series sums to 8 over alpha. Great. So let's finish here. So next time, we'll see Roth's proof of Roth's theorem.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
4_Forbidding_a_subgraph_III_algebraic_constructions.txt
YUFEI ZHAO: Last time, we started discussing the extremal problem for bipartite graphs. In particular, we saw the Kovari-Sos-Turan theorem, which tells us that if you forbid your graph from containing a complete bipartite graph K s,t, then you have this upper bound on the number of edges in your graph. We gave a proof; it was fairly short, used a double counting argument, and it gives you this bound. And the next question is, how tight is this bound? Is there a lower bound that is off by, let's say, at most a constant factor? That's a major open problem. It's a conjecture that this bound is tight up to constant factors, but that conjecture is known for only a very small number of graphs, and we saw a couple of examples last time. Last time we saw a construction that shows that for s equal to 2, this bound is tight. So the extremal number for K 2,2 is on the order of n to the 3/2, where this theta means I'm hiding constant factors. Our construction used this polarity graph, which is essentially the point-line incidence graph of a projective plane, together with a basic algebraic, or geometric, fact, if you will, that two lines intersect in at most one point. We also sketched a construction that showed that for s equal to 3, this bound is also tight. That construction involved using spheres, again in some space over a finite field. So both of these constructions are, in some sense, algebraic-geometric. And you can ask, is there a way to extend these ideas to construct other examples of K s,t-free graphs with the right number of edges, using some ingredients from algebraic geometry? Today, I want to show you two different ways of doing that. The state of the art, which I mentioned last time: let me remind you what is known about constructions that achieve the right exponent up there. In a series of two papers by [INAUDIBLE] it is shown that for constants s and t such that t is large enough compared to s, in particular t bigger than s minus 1 factorial, the extremal number of K s,t is of the same order as given in the upper bound of the Kovari-Sos-Turan theorem. In particular, this range of parameters allows you to do (2, 2) and (3, 3), which we already know how to do. But the next case it gives is (4, 7), and it is still open how to do (4, 6). So I want to show you this construction. I will tell you exactly what the graph is; I'll give you an explicit description of this graph, which is K s,t-free and has lots of edges. And as I mentioned earlier, it is an algebraic construction. As before, we start with a prime p, and we will take n to be p raised to the power s, where we restrict s to be an integer at least 2. And, of course, same as last time, if you have other values of n, take a prime close to the desired value and work from there. To describe the construction, let me remind you of the norm map. If you have a field extension, in this case specifically the field extension F p to the s over F p, I can define a norm map as follows: send x to the product of all the conjugates, the Galois conjugates, of x in this field extension. So explicitly written out it is just that expression, which I can collect and write down like this. So I wrote that the image of this norm map lies in the base field F p. And that is because, well, one of several ways to see it, is that if I denote this norm map by N, then N of x raised to the power p leaves the value unchanged.
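Written out in symbols (this is just the verbal description above), the norm map for the extension F p to the s over F p is

```latex
N(x) \;=\; x\,x^{p}\,x^{p^{2}}\cdots x^{p^{s-1}}
      \;=\; x^{1+p+\cdots+p^{s-1}}
      \;=\; x^{\frac{p^{s}-1}{p-1}},
```

and since x to the p to the s equals x for every x in F p to the s,

```latex
N(x)^{p} \;=\; x^{p}\,x^{p^{2}}\cdots x^{p^{s-1}}\,x^{p^{s}}
          \;=\; x^{p}\,x^{p^{2}}\cdots x^{p^{s-1}}\,x
          \;=\; N(x).
```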
And the base field is exactly the set of elements that are invariant under raising to the power p, so the norm indeed lands in F p. So here's the graph, which I'll call the norm graph with parameters p and s. The norm graph has as its vertices just the elements of this field extension, and the edges are the pairs of distinct vertices such that the norm of their sum equals 1. That's the graph; this is an explicit description of what the vertices and the edges are. So now we need to verify a couple of things. One is that this graph has the desired number of edges, that it has lots of edges. And two is that this graph is K s,t-free. So let's do both of those things. First, let's check that it has the right number of edges. That's a relatively easy task. What we need to do is count, for every a, how many choices of b are there in this field extension such that a plus b has norm exactly 1. And here's a basic algebra fact: the number of elements of this field extension with norm exactly 1 is precisely p to the s minus 1, divided by p minus 1. And this is because we're really looking in the multiplicative group of F p to the s, which is cyclic of order p to the s minus 1, and you're asking how many elements end up at the identity when raised to this power here, the exponent in the norm map. So that's the answer. So that's one ingredient. And as a result, every vertex is adjacent to, well, how many vertices? For every given a, I need to solve for b, and there are basically this many solutions; I just have to be slightly careful because I don't want loops in my graph, so I may need to subtract 1. So every vertex is adjacent to at least that number up there minus 1 other vertices, to account for a possible loop, which is pretty large, on the order of p to the power s minus 1, in other words about n raised to 1 minus 1 over s. And you see that this gives you the right number of edges. So this is a graph with lots of edges. That part wasn't so hard. The next part is much trickier: we want to check that this graph has no K s,t. Previously, in our algebraic construction, we used a geometric fact, that no two lines intersect in more than one point, to show that there's no K 2,2 in the polarity graph. There's going to be something like that here. So the claim is that this norm graph is K s, s factorial plus 1 free. It's not quite the bound I claimed up there; it's a little bit weaker, but it is in the spirit of what I am claiming, namely that for t large enough this graph is K s,t-free; the constant we will get here is s factorial plus 1. And as a result, it follows that the extremal number of K s, s factorial plus 1 is at least 1/2 minus little o of 1 times, well, I won't worry about the constant too much, but it's on the order of n to the 2 minus 1 over s. OK, everyone with me? So we need to verify that this graph here has no K s,t. Yes, question? AUDIENCE: Should that t be an s? YUFEI ZHAO: Yes, that should be an s. Thank you. Any more questions? AUDIENCE: Should that be s minus 1 factorial? YUFEI ZHAO: So we will show later on a better result using s minus 1 factorial, but for now I'll show you the slightly weaker result, which is still in the same spirit. Yep. AUDIENCE: Is the stronger result using the same graph? YUFEI ZHAO: For the stronger result, we will change to a different graph. OK, so now let's show that this graph here is K s,t-free.
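To make the counts concrete, here is a small brute-force check (my own illustration, not from the lecture) of the smallest case p = 3, s = 2, with F_9 realized as F_3[i]/(i^2 + 1). It confirms that exactly (p^s - 1)/(p - 1) = 4 elements have norm 1, that every vertex of the norm graph has degree 3 or 4 (roughly p^(s-1)), and that no two vertices have more than s! = 2 common neighbors, so this small norm graph is K_{2,3}-free.

```python
from itertools import product

p, s = 3, 2                      # smallest illustrative case; F_9 = F_3[i]/(i^2 + 1)

def mul(u, v):
    # (a + b i)(c + d i) with i^2 = -1, coefficients mod p
    a, b = u; c, d = v
    return ((a * c - b * d) % p, (a * d + b * c) % p)

def add(u, v):
    return ((u[0] + v[0]) % p, (u[1] + v[1]) % p)

def norm(u):
    # N(x) = x^{(p^s - 1)/(p - 1)}, here x^4
    r = (1, 0)
    for _ in range((p**s - 1) // (p - 1)):
        r = mul(r, u)
    return r

elts = list(product(range(p), repeat=s))
norm_one = [x for x in elts if norm(x) == (1, 0)]
print("elements of norm 1:", len(norm_one))           # expect (p^s - 1)/(p - 1) = 4

# Norm graph: vertices = F_9, edge ab iff N(a + b) = 1 and a != b
adj = {a: {b for b in elts if b != a and norm(add(a, b)) == (1, 0)} for a in elts}
print("degrees:", sorted(len(adj[a]) for a in elts))  # each degree is 3 or 4 here

# Check K_{2,3}-freeness: any two vertices have at most s! = 2 common neighbors
max_codeg = max(len(adj[a] & adj[b]) for a in elts for b in elts if a < b)
print("max common neighbors of a pair:", max_codeg)   # expect at most 2
```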
And for that claim, we need to invoke an algebraic fact, which let me write down now. Suppose we have a field F; any field will work, not just a finite field. And suppose I have a bunch of elements a sub i,j from the field, such that a sub i,j is different from a sub r,j whenever i is not the same as r, so within each column all the entries are distinct. Then consider the system of equations: (x1 minus a 1,1)(x2 minus a 1,2) dot dot dot (xs minus a 1,s) equals b1, that's the first equation; (x1 minus a 2,1)(x2 minus a 2,2) dot dot dot (xs minus a 2,s) equals b2, so it almost looks like the usual system of linear equations, but I'm taking products; and so on, the last one being (x1 minus a s,1)(x2 minus a s,2) dot dot dot (xs minus a s,s) equals b sub s. This system has at most s factorial solutions, where I'm working inside this field. That's the claim. So let me give you some intuition for this claim. Suppose the right-hand side vector is all zeros. Then I claim the count is easy. What does it say? In each equation, some factor must vanish, so for the i-th equation some coordinate x j must equal a i,j. But all the entries in each column are distinct; that's the hypothesis. So the same coordinate cannot take care of two different equations; for instance, you cannot set x1 to be a 1,1 and also set x1 to be a s,1 at the same time. So each equation gets matched to a distinct coordinate, and the solutions correspond to permutations, of which there are exactly s factorial. So this algebraic fact plays a key role in the proof of the theorem, the lower bound that we're stating up there. If you look at the paper, they give a proof of this result, and it's not a long proof, but it uses some commutative algebra and algebraic geometry. And usually in a class, if the instructor doesn't present a proof, it's for one of several reasons. Maybe the proof is too short and doesn't need to be presented. Maybe it's too long or too difficult. Maybe it's not instructive for the class. And the last reason, which is the case here, is that I don't actually understand the proof, in the sense that I can follow it line by line, but I don't understand why it is true. And if one of you wants to come up with a different proof, or try to explain to me how this seemingly elementary algebraic fact is proved, I would appreciate it. For small values of s, you can check it by hand. For s equal to 2, you're solving a system of two quadratic equations, and that you can check by hand; for three, maybe you can do it with some work; but even for 4 it's not so clear how to do it. One piece of geometric intuition is that if b is 0, then you have exactly s factorial solutions, and the intuition is that if you move b around, then the size of the fiber, the number of solutions x, can only go down; it cannot go up. And this corresponds to some algebraic geometry phenomenon. And that's all I will say about this algebraic fact, which we'll now use as a black box. Great. So now that we have that as our algebraic input, let us show that the norm graph is K s,t-free. It's actually not so hard once you assume that theorem up there. So let's show that the norm graph is K s, s factorial plus 1 free. Well, what does it mean to have a K s,t? It means that if you have s distinct vertices, which correspond to elements y1 through ys of this field, then the common neighbors of these elements correspond to solutions x of this system of equations, where I set all of these norm values equal to 1.
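Before returning to the norm graph, here is a brute-force sanity check of this black-box fact in the smallest case (my own sketch, not from the lecture): s = 2 over F_5, enumerating all coefficient matrices with distinct entries in each column and all right-hand sides, and confirming that no system has more than s! = 2 solutions.

```python
from itertools import product

q, s = 5, 2   # check the claim over F_5 in the smallest case s = 2, so s! = 2

def num_solutions(a, b):
    # a is an s x s matrix, b a length-s vector; count x in F_q^s with
    # prod_j (x_j - a[i][j]) == b[i] for every i
    count = 0
    for x in product(range(q), repeat=s):
        ok = True
        for i in range(s):
            prod = 1
            for j in range(s):
                prod = prod * (x[j] - a[i][j]) % q
            if prod != b[i]:
                ok = False
                break
        if ok:
            count += 1
    return count

cols = [c for c in product(range(q), repeat=s) if c[0] != c[1]]  # distinct entries per column
worst = 0
for col1 in cols:
    for col2 in cols:
        a = [(col1[0], col2[0]), (col1[1], col2[1])]             # rows of the matrix
        for b in product(range(q), repeat=s):
            worst = max(worst, num_solutions(a, b))
print("maximum number of solutions found:", worst)               # expect 2 = s!
```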
But I can write out exactly what these equations are, because I have that representation of the norm map up there, so I can write it out. And now remember the fact that in characteristic p, x plus y raised to the power p is the same as x to the p plus y to the p. So I can expand each remaining factor like that: the i-th equation becomes a product of factors of the form x to the p to the j plus y i to the p to the j, and I want each line to equal 1. So how many solutions in x does this system of equations have? Even if I treat x, x to the p, and so on as separate variables, that theorem up there tells me that there are at most s factorial solutions in x; the system satisfies all the hypotheses of that theorem. Therefore, the graph is K s, s factorial plus 1 free: you do not have more than s factorial different values of x satisfying this system of equations. And that's the proof that this norm graph is K s,t-free. Yes, question? AUDIENCE: Why can't powers of like [INAUDIBLE] YUFEI ZHAO: Sorry, can you repeat the question? AUDIENCE: Why cannot the powers of the y's be the same? YUFEI ZHAO: The question is why the powers of the y's cannot be the same. So you are asking, down the second column, let's say, why are all of these entries different? Because you're working inside a field, and raising to the power p in this field is a bijection; think about the order of the multiplicative group, which is coprime to p. But great question. Anything else? OK, so this gives you a construction of a K s,t-free graph for t bigger than s factorial. Now, let me show you how to improve this construction to do a little bit better, to get s minus 1 factorial. And the idea is to take a variant of this norm graph, which we'll call the projective norm graph. The projective norm graph, which we define for s at least 3, is rather similar, but there's a twist. The vertex set is no longer just the field extension: I now take the field extension one level down, F p to the s minus 1, and I add a second coordinate, which consists of the non-zero elements of F p. The edges are formed by putting an edge between two such vertices if and only if the norm of the sum of the first coordinates equals the product of the second coordinates. So now you can run through a similar calculation for the number of edges. First of all, the number of vertices is p to the s minus 1 times p minus 1, so basically the same as p to the s. Additionally, every vertex has degree exactly p to the s minus 1, minus 1. And the reason is that if I fix a vertex, big X and little x, and choose a value of big Y, which cannot equal minus big X or else you can never form an edge, then these together uniquely determine little y. So for every value of big X and little x, I just run through all the values of big Y other than minus big X. The number of edges then equals 1/2 times the number of vertices times the degree of every vertex, which, as before, is the claimed asymptotic. And the remaining thing to show is that this projective norm graph is K s,t-free; specifically, it's K s, s minus 1 factorial plus 1 free. It's a similar calculation to the one before, but we need to take into account the small variation in the construction. So suppose we fix s vertices, labeled by these pairs of big Y's and little y's. And now we need to solve for uppercase X, lowercase x in this system of equations, so we're asking how many different pairs, big X, little x, can appear as a solution to this system of equations?
Well, first of all, if two of the first coordinates coincide, say big Y i equals big Y j, then having a solution forces little y i to equal little y j, and then those two vertices wouldn't have been distinct to begin with. So that is not possible: all the big Y's are distinct. Well, now, let's divide each of these equations by the final equation. We get that the i-th equation becomes like that, which you can then rewrite by dividing through by the norm of big Y i minus big Y s. This is non-zero, because we just showed that all the big Y i's are distinct. If you divide by this norm here and rearrange appropriately, we find that the equations become like this. So after doing some rearranging, this is the set of new equations that we get. And you see that if you use a new variable, x prime, doing a substitution with this being x prime, then it has basically the same form as the system we just saw, with a different set of constants. And in particular, from what we just saw, we see that you cannot have more than s minus 1 factorial solutions. Now there are s minus 1 equations, and we're working in the degree s minus 1 field extension. So we saved an equation by using this projectivization. And that's it. So this proves the claim, constructing a K s,t-free graph for t bigger than s minus 1 factorial, which has the desired number of edges. Yes, question? AUDIENCE: Why do you consider the case where some capital Y i equals capital Y j? YUFEI ZHAO: OK, so the question is why I say this part. So I'm maybe skipping a sentence. I'm saying, if there is a solution x to this system of equations, if these vertices have a common neighbor, then having two of the big Y's being the same forces the two corresponding little y's to be the same. AUDIENCE: OK. YUFEI ZHAO: Right. And then the vertices wouldn't have been distinct. So for them to have any common neighbors, you had better have these big Y's being distinct. Any more questions? Great. So as I mentioned, it is an open problem to determine the extremal number for K 4,4, K 4,5, K 4,6. And you may ask, well, we have this nice construction; it may be somewhat mysterious, but it is explicit. You can write this graph down, and you can ask, is this graph K 4,6-free? So do we gain one extra value of t for free, maybe because we didn't analyze things properly? It turns out that's not the case. There was a very recent paper, just released last month, showing that this graph here for s equal to 4 actually does contain some K 4,6's. So if you want to prove a corresponding lower bound for K 4,6, you had better come up with a different construction. And that's, I think, an interesting direction to explore. Any questions? Yes. AUDIENCE: Do we know of any similar results about this construction not working for larger s? YUFEI ZHAO: The question is, do we know any similar result about whether this graph contains K s,t for other values of s, and t less than the claimed threshold? It is unclear. The paper that was uploaded doesn't address the issue of s bigger than 4. Yeah. AUDIENCE: Why F p to the power s? YUFEI ZHAO: So the question is, why F p to the power s? So let's go back to the norm graph construction. Where do we use F p to the power s? Well, certainly we needed it to have the right edge count; so that comes up in the edge count. And also in the norm expression, you have the correct number of factors.
So I encourage you to try it: if you use a smaller or bigger power of p, you either don't get something which is K s,t-free, or you have the wrong number of edges. Any more questions? So later, I will show you a different construction of K s,t-free graphs, again for t large compared to s, that will not do as well as this one, but it is a genuinely different construction. And it uses the idea of a randomized algebraic construction, which is something that was only developed a few years ago. It's a very recent development, and it's quite nice. It combines some of the things we've talked about: on one hand, using random graphs to construct H-free graphs, and on the other hand, some of the algebraic ideas. In particular, we're not going to use that theorem up there, but we'll use some other algebraic geometry fact. OK, so let's take a quick break. So what I want to discuss now is a relatively new idea called a randomized algebraic construction, which combines some ideas from both the randomized construction and the algebraic construction that we just saw. This idea is due to Boris Bukh, just a few years ago. And the goal is to give an alternative construction of a K s,t-free graph with lots of edges, provided that, as before, t is much larger compared to s. The bound here will not be as good as the one we just saw; I won't even tell you exactly what it is, but it's some constant: for every s there is some t such that this construction works. As before, we're working inside some finite field geometry. So let's start with q, a prime power. You can think of it as a prime if you like; it doesn't make so much difference. So we're working inside a finite field, and let's assume s is fixed and at least 4. Let me write down some parameters; don't worry about them for now, just think of them as sufficiently large constants. So d is this quantity here; we'll come back to it later when it comes up. OK, so what's the idea? When we looked at the randomized construction, we took a random graph, an Erdos-Renyi graph: every edge appeared independently, and we saw that it has lots of edges, if you choose the edge probability properly, and not too many copies of H. So you can remove all the copies of H to get a graph with lots of edges that is H-free. What we're going to do now is, instead of taking the edges randomly, we're going to take a random polynomial. f will be a random polynomial chosen uniformly among all polynomials in uppercase X and uppercase Y, where X and Y are not single variables; each of them stands for a vector of s variables. In other words, x1 through xs are variables of the polynomial, and so are y1 through ys, so it's a polynomial in 2s variables. And we choose uniformly among all such polynomials with degree at most d, with d being the number up here, in each of the x and y sets of variables: in each monomial, the exponents of the x variables sum to at most d, and likewise the exponents of the y variables sum to at most d. So this is the random object. It's a random polynomial in 2s variables, and the degree is bounded, so you only have a finite number of possibilities, and I choose one of them uniformly at random. And now, what's my graph? We're going to construct a bipartite graph G. The bipartiteness is not so crucial, but it will make our life somewhat easier. So it's a bipartite graph: it has two vertex parts, which I will label left and right, L and R, and they are both the s-dimensional vector space over F q.
And we'll put an edge between two vertices if and only if that polynomial f up there evaluates to 0 on this pair of points. That's the graph. So I give you a random polynomial f, and then you put in edges according to where f vanishes. So if you view the edge set of this bipartite graph as a subset of F q to the s cross F q to the s, then the edge set is exactly the zero set of f. Just like in the construction with random graphs, we'll need to show a couple of things. One is that it has lots of edges, which will not be hard to show. And second, that it will typically have a small number of copies of K s,t. That part will have some ingredients which are similar to the random graphs case we saw before, but it will have some new ideas coming from algebraic geometry. First, let's show that this graph has lots of edges. And that's a simple calculation, because for every pair of points, of vertices, I claim that the probability, where f is the random object, that f evaluates to 0 on this pair is exactly 1 over q, 1 over the size of the field. This is not too hard, and the reason is that the distribution of f is identical to that of f plus an extra random constant chosen uniformly at random. So I took a random polynomial, I shift it by a random constant, and it's still a uniformly random polynomial according to that distribution. But now you see that whatever f evaluates to at a fixed point, if I shift by a uniformly random constant, you end up with a uniform distribution. So the value up there is uniformly distributed at every fixed pair of points u, v, and in particular it hits 0 with probability exactly 1 over q. And as a result, the number of edges of G is, in expectation, exactly n squared over q, where n is, actually, not the number of vertices but the size of each vertex part, namely q to the s. So you see that this gives you the right number of edges, so n to the 2 minus 1 over s. So we have the right number of edges. And now, we want to show that this graph here typically does not have too many copies of K s,t. It might have some copies of K s,t; somehow that is unavoidable. Just as in the random case, you do have some copies of K s,t. But if there are not too many copies, I can remove them and obtain a K s,t-free graph. OK, so what is the intuition? How does it compare to the case when you have a genuine Erdos-Renyi random graph? Well, what is the expected number of common neighbors? So if you fix some set U, let's say on the left side, with exactly s vertices, I want to understand how many common neighbors U has, because if it has too many common neighbors, then that's a K s,t. It is not hard to calculate the expectation of this quantity, both in the random graph case and in this case. And you can calculate: if you pretend every edge occurs independently, the expected number of common neighbors is exactly n times q to the minus s, since there are s elements of U, which is exactly 1. And you know that for a binomial distribution with expectation 1 and a large number of trials, the distribution is approximately Poisson. Ah, but that's in the case when the edges are independently distributed, which is the case for the Erdos-Renyi random graph G(n, p). But it turns out that for the algebraic setting we're working in here, things don't behave independently. It's not that you're doing coin flips for every possible edge. We're doing some randomized algebraic construction.
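Here is a small simulation of the construction described so far (my own sketch; for speed it uses q = 5, s = 2, d = 2 rather than the s at least 4 of the actual argument). Averaged over several random draws of f, the edge density of the resulting bipartite graph comes out close to 1 over q, matching the expectation computed above.

```python
import random
from itertools import product

q, s, d, trials = 5, 2, 2, 50            # small illustrative parameters
exps = [e for e in product(range(d + 1), repeat=s) if sum(e) <= d]
pts = list(product(range(q), repeat=s))
n = len(pts)                             # size of each vertex part, q^s

def random_poly():
    # one coefficient for each pair (x-monomial, y-monomial), degree <= d in each block
    return {(a, b): random.randrange(q) for a in exps for b in exps}

def evaluate(coeff, u, v):
    total = 0
    for (a, b), c in coeff.items():
        m = c
        for uj, aj in zip(u, a):
            m = m * pow(uj, aj, q) % q
        for vj, bj in zip(v, b):
            m = m * pow(vj, bj, q) % q
        total = (total + m) % q
    return total

densities = []
for _ in range(trials):
    coeff = random_poly()
    edges = sum(1 for u in pts for v in pts if evaluate(coeff, u, v) == 0)
    densities.append(edges / n ** 2)

print(f"average edge density over {trials} draws: {sum(densities) / trials:.3f}, "
      f"compare 1/q = {1 / q}")
```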
And for algebraic geometry reasons, you will see that the distribution is very much not Poisson-like. It will turn out that the number of common neighbors is either bounded or very large. And that means that we can show, using Markov's inequality, that the probability that it is very large is quite small. So typically it will not have many common neighbors. And that's the intuition, so let's work out this intuition. Any questions so far? So how do we do this calculation? First, let's start with something that's actually fairly elementary. Suppose you have some parameters r and s; think of them as constants. They have some restrictions, but don't worry too much about them. Suppose I have two subsets U and V of this vector space, where U has size s and V has size r. Then the claim concerns the probability that f vanishes on the entire Cartesian product of U and V. OK, so what do you expect it to be? The product has sr points, and I want f, this random polynomial, to vanish at every one of them. Well, if its values behaved independently at every point, you would expect the probability to be exactly q to the power minus sr. And it turns out that is the case; this is true as an exact statement. OK, so why is this true? This is in some sense a generalization of the claim over here, and you have to do a little bit more work, but it's not too difficult. So let's first consider this lemma in a somewhat simpler case, where all the first coordinates of the points of U are distinct, and all the first coordinates of the points of V are distinct. Suppose U and V have that form; so I write down the list of points of U, and their first coordinates are all distinct. What I want to do is again a random shift, a uniform random shift. And I will shift f by a polynomial g, which is a bivariate polynomial in the first x variable and the first y variable; these are not vectors, they are just single variables. g is the sum of monomials where the degree in the first variable is less than s and the degree in the second variable is less than r, and the coefficients a are chosen uniformly and independently at random from the ground field F q. And as before, we see that f and f plus g have the same probability distribution. So all that remains to show is that whatever f comes out to be, if I tack on this extra random g, it creates a uniform distribution of values on the entire product U cross V. But see, I have exactly sr coefficients to choose, and sr values that I'm trying to control, so really it's a counting problem. And it suffices to show a bijection, namely that for every possible vector of values, there exists a choice of coefficients, as above, such that g evaluates to the prescribed values with the given coefficients. And the uniformity follows just by counting. And the one-dimensional version of this claim, let's think about what that is. If I have, let's say, three points on a line and a degree-2 polynomial, what I am saying is that if you give me the values you want on these three points, I can produce for you a unique polynomial that evaluates to the prescribed values at those three points. And that you should all know as Lagrange interpolation; it tells you exactly how to do that. And it works for several reasons; one of them is that the Vandermonde determinant is non-zero. Here, we have several variables. So let's do Lagrange interpolation twice, once for each variable.
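Before the two-variable interpolation is carried out in detail below, here is a tiny brute-force check of the counting claim (my own illustration, not from the lecture): over F_3, with s = r = 2 and two distinct interpolation nodes in each variable, the map from the q^(sr) coefficient vectors of g to the vector of values of g on the grid is a bijection.

```python
from itertools import product

q, s, r = 3, 2, 2
U, V = [0, 1], [1, 2]          # distinct "first coordinates" in F_3

def g_val(coeffs, x, y):
    # g(x, y) = sum over i < s, j < r of a_{ij} x^i y^j, computed over F_q
    total = 0
    for (i, j), a in coeffs.items():
        total = (total + a * pow(x, i, q) * pow(y, j, q)) % q
    return total

exps = list(product(range(s), range(r)))
seen = set()
for vals in product(range(q), repeat=len(exps)):
    coeffs = dict(zip(exps, vals))
    value_vector = tuple(g_val(coeffs, u, v) for u in U for v in V)
    seen.add(value_vector)

# bijection: q^(s*r) distinct coefficient vectors give q^(s*r) distinct value vectors
print(len(seen), "distinct value vectors out of", q ** (s * r))   # expect 81 of 81
```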
So we'll apply Lagrange interpolation twice. The first time, we see that for each element little u of U, there exists a univariate polynomial in the y variable, with degree at most r minus 1, that takes the correct values for that fixed little u. So we do it one variable at a time: for fixed u, do Lagrange interpolation in the y variable. And now, once we have those, view the g that we want to find as a polynomial in the y variable whose coefficients are themselves polynomials in the x variable. We find, again using Lagrange interpolation, that there exist choices of these coefficient polynomials such that, when you plug the first coordinate of little u into each coefficient, it agrees with the corresponding coefficient of the univariate polynomial we just found; and that should be the case for every little u. So once you find these coefficient polynomials, you have a bona fide polynomial g, and that's the claim above. So we use Lagrange interpolation twice, once for each variable. If you're confused, just think about it; there is nothing deep here. So that finishes the claim in the case when the first coordinates are all distinct; we used that fact crucially in doing this Lagrange interpolation. Now, for general U and V, where we don't have this assumption of distinct first coordinates, well, let's make them have distinct first coordinates by applying a random linear transformation, so using the probabilistic method. It suffices to find invertible linear maps T and S on this vector space such that TU and SV have the above property. So let me show you how to do it for U. I need to find an invertible linear transformation T. Well, it's just the first coordinate that matters, so it suffices to find a linear map to the first coordinate that is injective on U; whatever such map you have, I can extend it to an invertible map using the remaining coordinates. Of course, if that map is zero, then it's not going to be injective on U, so it had better not be zero. OK, well, let's find this map randomly. Pick T1 uniformly at random among all linear maps to the first coordinate. And I want to understand the probability of a collision, the bad event that two elements of U end up getting mapped to the same point. Well, that's not too hard: for every distinct pair of points in F q to the s, the probability that they collide is exactly 1 over q. Think about why this is true: if x and x prime differ in at least one coordinate, then a uniformly random linear map sends them to the same value with probability exactly 1 over q; even just the coefficient on that one coordinate shows this. So this is the case for every pair. So now, by the union bound, the probability that T1 is injective on U is at least 1 minus the size of U choose 2, times 1 over q. And that's why we chose q to be large enough compared to r and s, so that this number here is positive. So such a T exists. And so we can transform U and V into configurations where the first coordinates are all distinct, and then run the argument as before. OK, great. So what we've shown so far is that these complete bipartite structures on U cross V appear with probability exactly what you would expect in the independent random case. But what we really want to understand is the distribution of the number of common neighbors. In particular, we want to upper bound the probability that there are too many common neighbors. We want to understand some kind of tail probabilities. And to do that, one way to do tail probabilities is to consider moments. Yes, question?
AUDIENCE: [INAUDIBLE] YUFEI ZHAO: Sorry, can you repeat the question? AUDIENCE: How do you have the equality right before the lemma? YUFEI ZHAO: The question is, how do I have the equality right before the lemma? So there, I'm actually talking about the Erdos-Renyi random graph case: in the Erdos-Renyi random graph, if every edge has the same edge probability 1 over q, then that's the number of common neighbors you would expect. So that's a heuristic for the Erdos-Renyi random graph case. OK, so now let's try to understand the distribution of the number of common neighbors. So let's fix a subset U of F q to the s with exactly s elements, and I want to understand how many common neighbors it has. So let's consider the number of common neighbors of U, and the d-th moment of this random variable. This is a common way to do upper tail bounds. And one way to analyze such moments is to decompose this count as a sum of indicator random variables. So let me write I of v for the quantity which is 1 if f of (u, v) is 0 for all u in big U. In other words, it's 1 if v is a common neighbor of U, and 0 otherwise. So then the number of common neighbors is simply the sum of this indicator as v ranges over the entire vertex set, and I can expand the d-th power of this sum; all of these are standard things to do when you're trying to calculate moments. OK, so I can bring this expectation inside and try to understand the expectation of the object inside. Well, if all the v's are distinct, then this is exactly the quantity given by the lemma. But the v's might not be distinct, so we need to be a little bit careful; that's not too hard to handle. So let me write M sub r for the number of surjective functions from a d-element set to an r-element set, and M for the sum of these M sub r over r up to d. Then let's group the terms according to how many distinct v values there are. If there are r distinct values, then there are at most n choose r ways to choose them, M sub r ways for the d-tuple to use them, and the exact value of each such expectation is q to the minus rs; that's exactly what we showed, so this comes from the lemma just now. And look, this binomial coefficient and this q to the minus rs multiply to at most 1, since n is q to the s. So we are left with this quantity M, which I think of as a constant. So the d-th moment is bounded by a constant. And one way to get tail bounds once you have the moments is to use Markov's inequality. The probability that U has too many common neighbors, more than lambda common neighbors: I can rewrite this event by raising both sides to the power d, and then, using Markov's inequality, take expectations. All of these are standard techniques for upper tail estimation: if you want to understand the upper tail of a random variable, understand its moments and use Markov on its moments. But now, we know we have some bound for the d-th moment, namely M, as we just showed. So there is this bound here. So far, you could run the same argument in the random graphs case and you wouldn't really do anything different; everything is more or less the same as what I've said so far, although we did have to do a special algebraic calculation to show some kind of near independence. Question? AUDIENCE: Is that less than or equal to, right? The Markov inequality.
YUFEI ZHAO: Ah, thank you. So this is less than or equal to. Thank you. But now is where the algebra, the algebraic geometry nature of this argument, comes in. Previously we said that, at least heuristically, in the random graphs case this quantity behaves like a Poisson random variable, so it's fairly uniform in a Poisson sense. It turns out that because of the algebraic nature of the construction, this random variable behaves nothing like a Poisson. It's highly constrained, for reasons coming from algebraic geometry, and I'll tell you exactly why: the number of common neighbors is either very small or very large. And here is the claim, that for every s and d, there exists some c, such that if I have a bunch of polynomials on F q to the s of degree at most d, then if you look at the number of common zeros of these polynomials, how many common zeros can you have? It turns out it cannot just be some arbitrary number: this set has size either at most c, or at least q minus something very small. And I'll explain just a little bit of why this is the case, although I will not give a proof. So either somehow you are working in a zero-dimensional case, or, if the algebraic variety that comes with it has positive dimension, then you should have a lot more points. And the reason for this dichotomy has to do with the question: how many points are there on an algebraic variety over a finite field? So I will not give a proof, although if you look on the course website for a link to a reference, that does have a proof. But I will tell you what the key algebraic geometric input to that claim up there is. And this is an important and famous theorem called the Lang-Weil bound. The Lang-Weil bound concerns an algebraic variety V. And for now, to state this properly, it is important to work in the algebraic closure, F q bar, the smallest field extension where I can solve all polynomial equations. Suppose the variety V cut out by a set of polynomials is irreducible over F q bar, meaning it cannot be written as a union of a finite number of smaller varieties, and all of these polynomials have bounded degree. Then the question is, how many F q points does it have? So in other words, now I leave the algebraic closure, I come back down to Earth, to the base field, and ask, what is the number of solutions whose coordinates are in F q? OK, so how many points do we expect? Well, the simplest example of an algebraic variety is that of a subspace. If you have a d-dimensional subspace over F q, you have exactly q to the d points. So you expect something like q raised to the dimension of the variety. Now, dimension is actually a somewhat subtle concept which I won't define; there are many definitions in algebraic geometry. It turns out it's not always exactly as nice as in the case of linear subspaces, but the Lang-Weil bound tells us it is not too far off: the deviation is at most on the order of q to the dimension times 1 over root q, where the hidden constant depends on the description of your variety, in terms of the degrees of the polynomials, the dimension, and the number of polynomials. But the point is that the number of points on this variety should be basically the same as in the model case, namely that of a subspace. And that gives us some intuition for why this lemma is true. So you have those polynomials up there.
So there are some subtle points one needs to verify about irreducibility, but the punchline is that either you are in the zero-dimensional case, in which case you have something like [INAUDIBLE] theorem, and that tells you that the number of solutions is bounded; or you're in the positive-dimensional case, in which case the Lang-Weil theorem tells you that you must have lots of solutions. And there is no middle ground. And now, we're ready to finish off Boris Bukh's construction. So, applying that lemma up there, what should my polynomials be? I'm going to use the polynomials f sub u of Y obtained by plugging each fixed u into the random polynomial up there; I'm trying to find common neighbors, that is, common solutions, so these are my polynomials as u ranges over big U. So for q large enough, we find that the probability that the number of common neighbors of U is bigger than c, where c is supplied by that lemma, is equal to the probability that the number of common neighbors of U exceeds q over 2, because q over 2 is smaller than that quantity up there: if the system has more than c solutions, then it automatically has a lot of solutions. And now, we can apply the Markov inequality up there, the tail bound on the moments, to deduce that this probability is at most M divided by q over 2 raised to the power d. And the moral of it is that this should occur with very small probability. So let's call a subset U of the vertex set bad if U has s elements, U is contained entirely in the left side or the right side of the original bipartition, and, most importantly, U has a lot, namely more than c, common neighbors in G. So how many bad sets do we expect? Basically, a very small number. The expected number of bad U's is upper bounded by: for each choice of s elements, the probability that it is bad is this quantity up here. We chose d to be a large enough constant depending on s, and if you look at the choice of d up there, you see that this expected number is quite a bit smaller than the number of vertices. And now, the last step is almost the same as in our randomized construction using Erdos-Renyi random graphs: we remove one vertex from every bad set, and we get some graph G prime. And we just need to check that G prime has lots of edges. We have now gotten rid of all the bad sets, so G prime is K s, c plus 1 free: we got rid of all possibilities for s points having more than c common neighbors. Now, we just need to check that G prime has lots of edges. Well, the expected number of edges in G prime is at least the expected number of edges of G, minus n times the number of bad U's, since we removed one vertex for every bad U, and each removed vertex carries with it at most n edges, because there are only n vertices on the other side of the bipartition. And, well, the number of edges of G has expectation exactly n squared over q, and the expected number of bad U's, as we saw up there, is not very large. So in particular, the second term is dominated by the first term, and so we obtain the claimed number of edges. Also, the graph has at most 2n vertices; we may have gotten rid of some, but actually, the fewer vertices, the better. So it has at most 2n vertices, and it's K s,t-free for t large enough. So this gives you another construction of K s,t-free graphs. And so today, we saw two different constructions of K s,t-free graphs for constants s and t, but in both cases, t is substantially larger than s.
But the most important thing is that they both match the Kovari-Sos-Turan bound. So it gives you some evidence that maybe the Kovari-Sos-Turan bound is tight up to at most a constant factor, although that is a major open problem. It remains a very difficult, it seems, open problem, but one that is of central importance in extremal graph theory, to try to come up with other constructions that can do better. Maybe they will have some algebraic input, but maybe they will have some input from other ideas; we do not know. Question? AUDIENCE: So is this q defined? Because I remember q as a prime power, but it doesn't say there. YUFEI ZHAO: So the question is, is q defined? So just like in the proofs with the polarity graphs and so on: you have some n, and you round it down to the nearest prime power. s is a constant, so n is basically q to the s. So take a large n, round it down to the nearest prime power; q could be a prime, for instance, or it could be a prime power. Think of q as a prime. So I'm saying for every q, there is a construction, and for every n, you can round down to the nearest q to the s and then run this construction. Any more questions? Great. So next time I will begin by telling you a few more things about why people really like this construction, some conjectures that were solved using this idea, and some conjectures that still remain open along the same lines. And we'll also go beyond K s,t, to other bipartite graphs, and show you how to do upper bounds for those bipartite graphs.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
23_Structure_of_set_addition_III_Bogolyubovs_lemma_and_the_geometry_of_numbers.txt
YUFEI ZHAO: OK, we are still on our journey to proving Freiman theorem. Right? So we've been looking at some tools for analyzing sets of small doubling. And last time, we showed the following result, that if a has small doubling, then there exists a prime. It's not too much bigger than a, such that a big subset of a, of at least 1/8 proportion, a big subset of a is Freiman 8-isomorphic to a subset of this small cyclic group. So last time, we developed this tool called of modeling lemma, so Ruzsa's modeling lemma that allows us to pass from a set of small doubling, which could have elements very spread out in the integers, to something that is much more compact, a much tighter set. That's a subset, a positive proportionate subset of a small cyclic group. And remember, the last time we defined this notion of Freiman 8-isomorphic, Freiman isomorphism, in this case, it just means that it preserves partially additive structure. It preserves additive structure when you look at most 8-wise sums. All right. Well, this is where we left off last time. If you start with small doubling, then I can model a big portion of this set by a large fraction of a small cyclic group. All right. So now, we're in the setting where we are looking at some space, a cyclic group, for instance. And now, we have positive proportion, a constant proportion subset of that group. And we would like to extract some additional additive structure from this large set. And that should remind you of things we've discussed when we talked about Roth's theorem. Right? So Roth's theorem also had this form. If you start with Z mod n or in the finite field setting, and you have a constant proportion of the space, then you must find a three-term arithmetic progression. In fact, you must find many three-APs. So we're going to do something very similar, at least in spirit, here. We're starting from this large proportion of some space. We're going to extract a very large additive structure, just from the size alone. So let me begin by motivating it with a question. And we're going to start, as we've done in the past, with a finite field model, where things are much easier to state and to analyze. The question is, suppose you have a set a, which is a subset of f2 to the m. And a is an alpha proportion of the space where you think of alpha as some constant. Question is, OK, suppose this is true. And must it be the case that a plus a the subset-- all right, so a itself, just because it's a large proportion of the space. So just because it's the 1% of the space, doesn't mean that it contains any large structures. It doesn't contain necessarily any large sub-spaces, because it could be a more random subset of f to the m. But there's a general principle in additive combinatorics or even analysis where if you start with a set that is quite large, and it might be a bit rough, a is a bit rough, it's all over the place. If you add a to itself, it smooths out the set. So a plus a is much smoother than a. And the question is, must A plus A contain a large subspace? And here, by "large," I mean the following. Because we're looking at constant proportions of the entire space, by "large," I would also want a constant proportion of the entire space. So does there exist some subspace of bounded codimension? So if alpha is a constant, I want a bounded codimensional subspace that lives inside A plus A. It turns out the answer is no. So there exists sets that, even though there are very large and you add it to itself, it still doesn't have large subspaces. 
So let me give you an example. And this construction is called a niveau set. Let's take A sub n to be the set of points in F2 to the n whose Hamming weight, where the Hamming weight is just the number of 1's, or, in general, the number of non-zero coordinates of x, is less than this quantity here. So visually, what this looks like is that I'm thinking of the Hamming cube drawn so that the all-zero vector is here, the all-ones vector is up there, and it's sorted by Hamming weight; this is called the Boolean lattice. And I'm looking at all the elements of A, which are within a Hamming ball of the 0 vector. So this is the set. It's not too hard to calculate the size of the set, because I'm taking everything with Hamming weight less than this quantity over here: by the central limit theorem, the number of elements in the set is a constant fraction alpha of the entire space, where alpha is some constant if c is a constant. So it has the desired size. But also, A added to itself consists of points in the Boolean cube whose Hamming weight is at most n minus c root n. And I claim that this sumset does not contain any subspace of dimension larger than n minus c root n. So this is the final claim. It's something that's, again, one of these linear algebraic exercises that we've actually seen earlier, when we discussed the polynomial method proof of the cap set problem: if you have a subspace of a given dimension, then you can find a vector in that subspace whose support has size at least the dimension. OK. So you see, in particular, we do not have any bounded codimensional subspaces in this A plus A. So even though the philosophy is roughly right, that if you start with a set A and you add it to itself, it smooths out the set and we expect it to contain some large structure, that's not quite true as stated. But what turns out to be true, and this is the first result that I will show today, is that if you add A to itself a few more times, then indeed you can get large subspaces. And this is an important step in the proof of Freiman's theorem. And this step is known as Bogolyubov's lemma. So Bogolyubov's lemma, in the case of F2 to the n, says that if you have a subset A of F2 to the n which is a fraction alpha of the space, then 2A minus 2A contains a bounded codimensional subspace, so a very large subspace: 2A minus 2A contains a subspace of codimension less than 1 over alpha squared. Here, I write 2A minus 2A even though we're in F2, so this is the same as 4A. But in general, and you'll see later on when we do it in the integers, 2A minus 2A is the right expression to look at: it's the expression that works in every abelian group, and for F2 to the n, it's the same as 4A. So the main philosophy here is that adding is smoothing. You start with a large subset of F2 to the n. It's large; does it contain large structures? Not necessarily. But you add it to itself, and it smooths out the picture: if it has a rough spot, it smooths it out. And if you keep adding A to itself, it smooths it out even further. You add it to itself enough times, and then it will contain a large structure, just from the size of A alone. And there is a very similar idea, which comes up all over the place in analysis: convolutions are smoothing. So you start with some function that might be very rough.
If you convolve it with itself, and if you do it many more times, you get something that is much smoother. And in fact, adding sets and convolving functions are almost the same thing, and I'll explain that in a second. So this is an important idea to take away from all of this. So when we do these Fourier analytic calculations, and there will be some Fourier analytic calculations, the first time you see them, they might just seem like calculations: you push the symbols around, you get some inequalities, you get some answers. But that's no way to learn the subject. You need to figure out what the intuition is behind each step, because when you need to work on it yourself, you're not just guessing the right symbols to put down; you have to understand, intuitively, why each inequality should be expected to hold. And this is an important idea: adding is smoothing, and convolution is smoothing. All right, so let me remind you about convolutions. Recall that in a general abelian group, if I have two functions, f and g, on the group, so complex-valued functions, then the convolution is given by the following formula. So that's the convolution. And it behaves very well with respect to the Fourier transform: the Fourier transform turns convolutions into multiplications. So this means, pointwise, I have that. Convolutions also relate to sumsets, and this is the interpretation of convolutions that I want you to keep in mind for the purposes of additive combinatorics. If you have two sets, A and B, then look at the convolution of their indicators. It has an interpretation: if you read out what this value says, it comes out to 1 divided by the size of the group, times the number of pairs, one element from A and one from B, whose sum is x. So up to normalization, the convolution records the sumset with multiplicities: it tells you how many ways there are to express x as a sum of one element from A and another element from B. And in particular, this function here is supported on the sumset A plus B. So this is the way that convolutions and sumsets are intimately related to each other. So let's prove Bogolyubov's lemma. We're going to be looking at this sumset, which is related to the following convolution: let f be the convolution of the indicators of A, A, minus A, and minus A. Of course, in F2 to the n, you don't need to worry about the minus signs, but I'll keep them there for future reference. Here, by what we said earlier, the support of f is 2A minus 2A. It's not too hard to evaluate the Fourier transform of f, because the Fourier transform plays very well with convolutions: in this case, it is the Fourier transform of the indicator of A, squared, times the Fourier transform of the indicator of minus A, squared. The Fourier transform of the indicator of minus A, if you look at the formula, is the complex conjugate of the Fourier transform of the indicator of A. Again, in F2 to the n, they're actually the same, but in general it's the complex conjugate; we always have this formula here. So by the Fourier inversion formula, we can write f in terms of its Fourier transform; we're in F2, so that's what the inverse Fourier transform looks like. And so we have the following formula for the value of f in terms of the Fourier transform of the original set. OK. We want to show that the support of f, which is the 2A minus 2A we're interested in, contains a large subspace, a subspace of small codimension.
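Written out explicitly, with the hat denoting the Fourier transform and the normalization used in this course (transform as an average, inversion as a sum), the formulas just described are

```latex
f = 1_A * 1_A * 1_{-A} * 1_{-A}, \qquad
\widehat{f}(r) = \widehat{1_A}(r)^{2}\,\overline{\widehat{1_A}(r)}^{\,2}
             = \bigl|\widehat{1_A}(r)\bigr|^{4},
```

and, by Fourier inversion in F2 to the n,

```latex
f(x) \;=\; \sum_{r} \bigl|\widehat{1_A}(r)\bigr|^{4}\,(-1)^{r\cdot x}.
```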
So observe that if f of x is positive, then x lies in the support of f. So we just want to find a large subspace on which f is positive. And we can choose our subspace by looking at Fourier coefficients according to their size. So what we can do is let R be the set of, essentially, Fourier characters whose corresponding Fourier coefficient is large, namely at least alpha to the 3/2; that value will come up later. So let's look at this R. And what we're going to do is look at the orthogonal complement of R and show that f is positive on the orthogonal complement of R. First, R is not too large. The size of R is, I claim, less than 1 over alpha squared. Why is that? This is an important trick that we've seen a few times before: the number of large Fourier coefficients cannot be too large, because of Parseval, which tells us that the sum of the squares of the Fourier coefficients is equal to the average of the square of the original function, which in this case is just the density of A; so that's alpha. So just looking at that, the number of large terms cannot be too many. OK. So we have this small set R of characters at which the Fourier transform is large. Now, let's look at f of x. We want to find out when we can control f of x to make sure it is positive. Well, for the values of little r not in big R and not equal to 0, we would like to upper bound this quantity here, so that it is negligible, a small term. Again, this is a computation that we've seen several times earlier in this course. All of these terms are small, and I want to show that the whole sum is small; but I don't want to bound each term individually and then sum up all the possible contributions, because that would be too big. We've seen this trick before, where we just take out some of the factors. In particular, I'll take out two of the factors, bounding them by alpha cubed, times the sum of the remaining factors, and once again use Parseval on this very last sum, keeping in mind that I'm throwing away some of the r's, including 0, so it will be a strict inequality. OK. So if x lies in the orthogonal complement of uppercase R, then let's evaluate f of x from the Fourier inversion formula. We have this. So I can now split the sum into the 0-th term, the large terms, and, finally, the small terms. Now, you see, for the large terms, because we're in the orthogonal complement of R, I can make sure that they all come with a positive sign. And you see that the main term is alpha to the 4, the large terms are always non-negative, and the error terms, the small terms, are strictly less than alpha to the 4th in magnitude. So as a result, this whole sum is positive. Yeah. AUDIENCE: These 1's are also 1 sub A's, right? YUFEI ZHAO: Thank you. The 1's are 1 sub A's. Yeah. So this is a very similar philosophy to when we proved Roth's theorem. We look at a sum like this, some trigonometric series, some Fourier series, and we decompose it into several terms based on how large the Fourier coefficients are. We can control the small ones, using what essentially amounts to a counting lemma, and show that the small ones cannot ever annihilate the large, dominant terms. So as a result, f of x is positive on the orthogonal complement of R. So thus R lies in the support of f, which is equal to 2A minus 2A.
And furthermore, the codimension of R is at most-- so it could be some linear dependencies-- is at most the size of R, which is strictly less than 1 over alpha squared. And that proves Bogolyubov's lemma. So if you have a large subset of F2 to the n, you add it to itself enough times so that it's a smoothing operation. And then eventually, you must find a large structure. And we only start by assuming the size of it. If it's just large enough, then we can find a large structure within this iterated sumset. Any questions? Yeah? AUDIENCE: Isn't R in support of R [INAUDIBLE]?? YUFEI ZHAO: Sorry, come again? AUDIENCE: You got that the orthogonal complement of R-- YUFEI ZHAO: Sorry. The orthogonal complement of R is in the support. Yeah. So R lives in the character space. OK, great. So this is the proof of Bogolyubov's lemma in the finite field setting, working in F2 to the n, which is fine. It's a useful setting as a playground for us to work in. But ultimately, we want to understand what happens in the integers. So if you look at where we left off last time, we started in the cyclic group, Z mod n. So we would like to know how to formulate a similar result but in the cyclic group where there are no more subspaces. We encountered a similar situation, although we didn't go into it, when we discussed Roth's theorem. In the first proof of Roth's theorem that we showed, in the first Fourier analytic proof in the finite field setting, the proof won by restricting to subspaces, to hyperplanes. And then we keep on iterating by restricting to hyperplanes. So you can stay in subspaces. And the finite field setting has lots of subspaces. And we said that to get that proof to work in the integers, we had to do something different. And we did something by restricting to intervals. But I also mentioned that, somehow, that's not the natural analog of subspaces. The natural analog of subspaces is something called a Bohr set. And so I want to explore this idea further now. So the natural analog of subspaces in Z mod n are these objects called Bohr sets. And they're defined as follows. So suppose you are given some R, a subset of Z mod n. We define a Bohr set, denoted like this, so Bohr of R and epsilon, to be the subset of Z mod n, so including elements x, such that rx is pretty close to a multiple of n. So here, we're looking at the R mod Z norm. So this is the distance to the closest integer such that this fraction is very close to an integer for all little r and big R. You see, this is the analog of subspaces, because in the finite field setting, the finite field vector space, even if I set epsilon to equal to 0 and turn this into an inner product, then Bohr sets are exactly subspaces-- namely, the orthogonal complement of the set R. But now we're in the integers, where you don't have exact 0. But I just want that quantity, that norm, to be small enough. So let me give you some names. So given the Bohr set, which, technically speaking, is more than just the set itself but also includes the information of R and epsilon-- so it's the entire data written on the board-- we call the size of the R the dimension of the Bohr set and epsilon, the width. Bogolyubov's lemma for Z mod n now takes the following form. If you start with a subset A of Z mod n, and all I need to know is that A is a constant fraction of the cyclic group, then the iterated sumset 2A minus 2A contains some Bohr set Bohr R of 1/4 with the size of R less than 1 over alpha squared. 
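Here is a small sketch (assuming numpy; the helper name bohr_set is illustrative) of what a Bohr set looks like computationally in Z mod n.

```python
import numpy as np

def bohr_set(n, R, eps):
    """{x in Z/nZ : ||r*x/n|| <= eps for every r in R}, where ||.|| is the
    distance to the nearest integer."""
    xs = np.arange(n)
    members = np.ones(n, dtype=bool)
    for r in R:
        frac = (r * xs % n) / n
        dist = np.minimum(frac, 1 - frac)        # distance to the nearest integer
        members &= dist <= eps
    return np.flatnonzero(members)

n = 101
B = bohr_set(n, R=[3, 17], eps=0.1)
print(len(B), "elements, e.g.", B[:10])
# Heuristically, for "generic" R the Bohr set has about (2*eps)^{|R|} * n elements.
```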
So earlier, we said that if you have a large subset of F2 to the n, then 2A minus 2A contains a large subspace. And now we say that if A is a large subset of the cyclic group, then 2A minus 2A contains a large Bohr set of small dimension. And so this terminology may be slightly confusing. The dimension corresponds to codimension previously. So if you do this translation, this dimension-- I mean, if R were a set of independent vectors in F2 to the n, then the size of R would be the codimension of the corresponding subspace. But this is the terminology that we're stuck with. OK. Any questions about the statement? You see, even the bounds are exactly the same, 1 over alpha squared. And I mean, the proof is going to be pretty much exactly the same once you make the correct notational modifications. So we're going to do that. So I'm going to write on top of this earlier proof and show you what are the notational modifications so that you can get exactly the same result here but with Bohr sets instead of a subspace. The thing to keep in mind is that we have a somewhat different Fourier transform. So let me now use different colored chalk. So the Fourier transform of a function f from Z mod n, so complex-valued, is a function also on Z mod n defined by f hat of r equal to expectation over x in Z mod n of f of x times omega to the minus rx, where omega is a primitive n-th root of unity. And you also had the Fourier inversion formula. It's what you expect. I won't bother writing it down. So we go back to the proof. And pretty much everything will read exactly the same. So f is still the same f. And the Fourier transform has the same property. So all of these nice properties of the Fourier transform hold. For inversion, it's basically the same except that the formula is slightly different. So instead of minus 1 to the r dot x, what we have now is omega to the rx. So here, we have omega to the rx. OK. Great. The next part is the same, where we define R. So now we define R to consist of elements of Z mod n whose Fourier coefficient is large. I can take out 0. OK. This part is still the same. It's the same calculation. Now, it's the very last part that needs to be just slightly changed. Where does the 1/4 come in? So where does this come in? So observe that if x is in the Bohr set with width 1/4, then rx divided by n is-- OK, so by definition, all of these fractions are within 1/4 of an integer. And if you think about what happens on the unit circle, if you are within 1/4 of an integer, then that means the corresponding place on the unit circle is on the right half of the circle. So in particular, the cosine of 2 pi rx over n is non-negative. So it has non-negative real part. So now we go back to this part of the proof, where we're applying Fourier inversion formula to f of x. So we had the Fourier inversion formula up there. But because f of x is real, it's really the cosine that should come into play. It should be a cosine. And now, for the next step, we have no negative sign here, because this step-- OK, let me just cross out this step over there. All of these terms, the terms that correspond to little r in big R, they have non-negative contribution. Whatever the contributions here, it's non-negative. So I cross out this term. All I'm left with is the main term, corresponding to the density, and the error terms, so to speak, the minor terms, which are less than alpha to the 4th in absolute value. OK. So it's positive. So basically the same proof.
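The Z mod n statement can also be sanity-checked numerically with the discrete Fourier transform. Here is a minimal sketch, assuming numpy (the choice of n, density, and random seed is arbitrary).

```python
import numpy as np

n = 200
rng = np.random.default_rng(2)
A = [int(a) for a in np.flatnonzero(rng.random(n) < 0.25)]
alpha = len(A) / n

ind = np.zeros(n)
ind[A] = 1.0
hat = np.fft.fft(ind) / n                          # hat(1_A)(r) = E_x 1_A(x) w^{-rx}
R = [r for r in range(1, n) if abs(hat[r]) >= alpha ** 1.5]

def bohr(n, R, eps):
    xs = np.arange(n)
    ok = np.ones(n, dtype=bool)
    for r in R:
        frac = (r * xs % n) / n
        ok &= np.minimum(frac, 1 - frac) <= eps
    return {int(x) for x in np.flatnonzero(ok)}

AA = {(a + b) % n for a in A for b in A}
two_A_minus_two_A = {(a - b) % n for a in AA for b in AA}
assert bohr(n, R, 0.25) <= two_A_minus_two_A      # Bohr(R, 1/4) sits inside 2A - 2A
print(f"alpha = {alpha:.2f}, dimension |R| = {len(R)}, |Bohr| = {len(bohr(n, R, 0.25))}")
```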
Once you make the appropriate modifications, it's the same proof in Z mod n. OK, great. So this concludes our discussion of Bogolyubov's lemma. So it says that-- OK, so continuing our previous thread, we start with a subset of z mod n of constant proportion. Then 2A minus 2A necessarily contains a large Bohr set. And the next thing I want to do is to start with this Bohr set. So that's the definition of a Bohr set. But what does it look like? So it's a bit hard to imagine. So what does it look like? In the finite field setting, we know it's a subspace. But in the Z mod n setting, right now, it's just some subset of Z mod n. OK, so in the next step, we want to extract some geometric structure from this Bohr set. So we're going to show that this Bohr set will contain a large, generalized arithmetic progression. So you asked something earlier about-- something seems a bit fishy about the general strategy. Seems like our goal for proving-- we want to prove Freiman's theorem, which says that the conclusion is that A is contained in some GAP, some fairly compact additive structure. And we're already losing quite a bit. So we pass down to 1/8 of A. So it seems like even if you contain the rest, even if you can contain this fraction, this large fraction of A, what are you going to do about the rest of A? That's an unanswered question. A second unanswered question-- so right now, what I've told you, the strategy is we're going to find a large GAP inside 2A minus 2A, which is not quite the thing that we want to do. We want to contain A in a small GAP. But at least it's some progress, right? It's some progress to find some structure. I mean, the name of the game is to try to find additive structure. So in the theme of this whole semester course is trying to understand the dichotomy between structure and pseudorandomness. And when you have structure, let's use that structure. See if you can boost that structure. So there will be an additional argument, which I will show you at the beginning of next lecture at the conclusion of the proof of Freiman's theorem, which will allow you to start with the structure on a small part of A, but not too small-- it's a constant fraction of A-- and pass it up to the whole of A. And we've actually already seen a tool that allows us to do that. So I want to cover all of A. So last time, we did something called the covering lemma, Ruzsa covering lemma, that tells us that if you have some nice control on A and you can cover some part of A very well, then I can cover the entirety of A very well. So those tools will come in hand. I mean, so similar to actually how we proved Freiman's theorem in groups with bounded exponent. And so we're going to use the covering lemma to conclude the theorem. But now I want to get into the issue of the geometry of numbers. OK. I want to tell you some necessary tools that we'll need to find a large GAP inside 2A minus 2A. Now, it will seem like a bit of a digression, but we'll come back into additive combinatorics in a bit. So the geometry of numbers concerns the study of lattices. So it concerns the study of lattices and convex bodies. So this is a really important area of mathematics, especially about a century ago with mathematicians like Minkowski playing foundational roles in the subject. So number theorists were very interested in trying to understand how lattices behave. So I'll tell you some very classical results that we'll use for proving Freiman's theorem. So first, what is a lattice? 
So let me give you the following definition of a lattice in R to the d. It's defined as the integer span of d linearly independent vectors. So I start with v1 through vd vectors that are linearly independent. And I look at their integer span. I think this is best explained with a picture. So if I have a bunch of-- so here, I'm drawing a picture in R2. And this picture extends in all directions. If I start with two vectors, v1 and v2, linearly independent, and look at their integer span, so that's a lattice. So that's what a lattice is. You can come up with all sorts of fancy definitions, like a discrete subgroup of R to the n. But this is what it is. So just to emphasize this definition for a bit-- and also, one more definition that we'll need is the determinant of a lattice. So what's the determinant of a lattice? One way to define it is you look at these v's, and you construct a matrix with the v's as columns. And you evaluate the absolute value of this determinant. More visually, the determinant of a lattice is also equal to the volume of its fundamental parallelepiped, which is a parallelepiped-- well, in the two-dimensional case, it's a parallelogram-- which is spanned by v1 and v2 or these v's, although you have more choices, right? So you could have chosen a different set of generating vectors. For example, you could have chosen these two vectors, and they also generate the same lattice. And that's also a fundamental parallelepiped. And they will have the same volume. You can make some wrong choices, and then they will not have the right volume. So if you had chosen these two, so this is not a fundamental parallelepiped. Great. So let me give you some examples. The simplest lattice is just the integer lattice, Zd, which has determinant 1. If I'm in the complex plane, which is viewed as two-dimensional real plane, then if I take, let's say, the integer span of 1 and omega, where omega is a primitive 3rd root of unity, I have a triangular lattice. And the fundamental parallelepiped of this lattice, that's one example. And you can evaluate its determinant as the area of that parallelogram. If I take vectors that are not linearly independent-- so for example, if I'm in one dimension and I look at the integer span of 1 and root 2, this is not a lattice. Now, the next definition will initially be slightly confusing. But I will explain it through an example or at least try to help you visualize what's going on. So if I give you a centrally symmetric convex body-- "centrally symmetric" means that k equals to minus k. So centrally symmetric convex body, OK. So here, centrally symmetric is x in k if and only if minus x is in k. And I'm in d dimensions. Let me define the i-th successive minimum, lambda sub i-- so the i-th successive minimum of K with respect to the lattice lambda-- to be the infimum of all non-negative lambda such that the span of the intersection of lambda K with the lattice has dimension at least i. OK. So let me explain. I start with a lattice. So I start with some lattice. And I have some convex body. So this is 0, let's say. So I have some convex body, a centrally symmetric convex body like that. It could initially be bigger as well, but let's scale it so that it's quite small initially. And let's consider an animation where I look at lambda K, where lambda goes from 0 to infinity. This is K. So initially, lambda K is very, very small. And I imagine it growing. It gets bigger and bigger and bigger. So it gets bigger and bigger.
And let's think about the first time that this growing body hits a lattice point, a non-zero lattice point. At that point, I freeze the animation. And I record this vector. I record this vector where I've hit a lattice point. And now I continue the animation. It's going to keep on growing and growing and growing until when I hit a vector in a direction I haven't seen before. So it's going to keep growing. And then the next time I hit a vector in a new direction, I stop the animation. And I look at the other vector. So I keep growing this ball until I hit new vectors, keep growing this convex body. So for example, if your initial convex body is very elongated, if that's your k-- so you keep growing, growing-- you might initially hit that vector. And then you keep on growing it. And the next vector you hit might still be in the same direction. But I don't count it. I don't stop the animation here, because I didn't see a new direction yet. I only stop the animation when I see a new direction. So I keep growing until I see a new direction. And I stop the animation there. So think about this growing body, and stop in every place when you see a new direction contained in your lambda k. And the places where you stop the animations, they're the successive minimum of k. Yeah? AUDIENCE: Is this defined if i is greater than d? YUFEI ZHAO: Is this defined when i is greater than d? No. So you only have exactly d successive minimum. Now, sometimes you might see two new directions at the same time. That's OK. But once you exhaust all d directions, then there's no more new directions you can explore. We also consider the vectors that you see. So let me also call these so that we can-- OK, so we can select these lattice vectors bi. I am going to use underscore to denote. So I'm going to use this underline to denote boldface. So it's a vector bi, which is in, basically, this. You should think of bi as the new vector that you see. And it will have the property such that b1 through bd form a basis of Rd. So I keep growing this convex body. When I see a vector in a new direction, I record lambda. And I record the vector bi. I keep on going, keep going, keep going until I exhaust all d directions. I call these b's the directional basis. OK. Any questions? All right. So the result from the geometry of numbers that we're going to need is something called Minkowski's second theorem. So Minkowski's second theorem says that if you have lambda, a lattice, in Rd and k, a centrally symmetric body, also in Rd, such that lambda 1 through lambda d are the successive minima of k with respect to lambda, then one has the inequality lambda 1, lambda 2. So the product of these successive minima times the volume of k is upper bounded by 2 to the d times the determinant of lambda. For example, and here is a very easy case of this Minkowski's second theorem, if your k is an axis-aligned box-- namely, it is a box where the width in the i-th direction is 2 over lambda i-- so then you see that the successive minima of this box are exactly the lambda i's. And you can check that for-- this inequality is actually an equality. OK. So actually, in this case, lambda, the lattice, is the integer lattice. Now, this is a pretty easy case of Minkowski's second theorem. But the general case, which we're not going to prove, is actually quite subtle. I mean, the proof itself is not so long. It's worth looking up and trying to see what the proof is about. But it's actually rather counterintuitive to think about. 
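Minkowski's second theorem can at least be checked numerically in small dimensions. Here is a brute-force sketch in R^2 (assuming numpy; the finite search window for lattice points is a heuristic cutoff, which is enough for this small example). It computes the successive minima of an axis-aligned box with respect to a lattice and verifies the inequality.

```python
import itertools
import numpy as np

def successive_minima_2d(basis, halfwidths, coeff_range=15):
    """Brute-force successive minima of K = [-a1, a1] x [-a2, a2] with respect to
    the lattice spanned by the two basis vectors (finite search window)."""
    pts = []
    for c1, c2 in itertools.product(range(-coeff_range, coeff_range + 1), repeat=2):
        if (c1, c2) == (0, 0):
            continue
        v = c1 * np.array(basis[0], float) + c2 * np.array(basis[1], float)
        gauge = max(abs(v[0]) / halfwidths[0], abs(v[1]) / halfwidths[1])
        pts.append((gauge, v))
    pts.sort(key=lambda t: t[0])
    lam1, b1 = pts[0]
    # lambda_2 is attained at the first lattice point in a genuinely new direction
    lam2 = next(g for g, v in pts if abs(b1[0] * v[1] - b1[1] * v[0]) > 1e-9)
    return lam1, lam2

basis = [(1.0, 0.0), (0.3, 0.2)]
a = (1.0, 1.0)
lam1, lam2 = successive_minima_2d(basis, a)
det = abs(np.linalg.det(np.column_stack(basis)))
vol_K = (2 * a[0]) * (2 * a[1])
print(lam1, lam2, lam1 * lam2 * vol_K, "<=", 2 ** 2 * det)   # Minkowski's second theorem
```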
It's one of those theorems where you sit down for half an hour or an hour. You're trying to prove it. You think you might have come up with a proof. And then on closer examination, it'll be very likely that you made some very subtle error. So it's not so easy to get all the details right. And we're going to skip the proof. But any questions about the statement? OK. We're going to use Minkowski's second theorem to show that a large Bohr set contains a large GAP. And specifically, we will prove that every Bohr set of dimension d and width epsilon-- epsilon is between 0 and 1-- in Z mod nZ contains a proper GAP with dimension at most d and size at least this quantity, which is an epsilon over d, raised to the power d, fraction of the cyclic group. So just to step back a bit and see where we're going, from everything that we've done earlier, we conclude that 2A minus 2A contains a large Bohr set. Here, epsilon is 1/4. So epsilon is a constant. And the size of R is also going to be a constant, depending on the doubling constant. And this proposition will tell us that inside this 2A minus 2A, we will be able to find a very large, proper GAP. So "proper" means that in this generalized arithmetic progression, all the individual terms are distinct, or you don't have collisions. So you're going to find this proper GAP that is constant dimension and at least a constant fraction of the size of the group, so pretty large GAP. To find this GAP, we will set up a lattice and apply Minkowski's second theorem. Suppose the Bohr set is given by R, whose individual elements I'm going to denote by little r1 through little rd. And let uppercase lambda be a lattice explicitly given as follows. It consists of all points in Rd that are congruent mod 1 to some integer multiple of the vector r1 over n, r2 over n, through rd over n, so congruent mod 1. So for example, in two dimensions, which is all I can draw on the board, if r1 and r2 are 1 and 3 and n equals to 5, then basically, what we're going to have is a refinement of the integer lattice, where this box is going to be the integer lattice. And I'm going to tell you some additional lattice vectors. And here, it's going to repeat, or it's going to tile all over. So I start with the vector 1/5, 3/5. And I look at multiples of it, mod 1. So I would end up with these points and then repeat it. And so you would have-- so that's the lattice. So you have this lattice, lambda. What is the volume? What is the determinant of this lattice? So the determinant of the lattice, remember, is the volume of its fundamental parallelepiped. So I claim that the determinant is exactly 1 over n. There are a few ways to see this. So one is that, originally, I had the integer lattice, which has determinant 1. And now I put-- instead of one lattice point, I have n points in each original fundamental parallelepiped. So the determinant has to go down by a factor of n. Or you can construct an explicit fundamental parallelepiped like that. And then you use base times height. OK. We're going to apply Minkowski's second theorem. And I will need to tell you-- I don't need the definition of Bohr set up there. So I want to tell you what to use as the convex body. The convex body that we're going to use is k being this box of width 2 epsilon. So that's the lattice. That's the convex body. And we're going to apply Minkowski's second theorem.
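As a small sanity check of the determinant claim, here is the toy example r = (1, 3), n = 5 in code (assuming numpy; the choice of basis is one of several valid ones).

```python
import numpy as np

n, r = 5, (1, 3)
v = np.array([r[0] / n, r[1] / n])                # the vector (1/5, 3/5)
# One valid basis of the lattice of points congruent mod 1 to a multiple of v:
# v itself together with (0, 1).  The point (1, 0) is already in their integer
# span, since 5*v - 3*(0, 1) = (1, 0), so the whole of Z^2 is included.
basis = np.column_stack([v, (0, 1)])
print(abs(np.linalg.det(basis)))                  # 0.2, i.e. 1/n
```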
So let's let lowercase lambda 1 through lambda d-- there are d of them-- be the successive minima of K with respect to the lattice lambda, and b1 through bd be the directional basis corresponding to those successive minima. I claim that the L-infinity norm of bj is at most lambda j epsilon for each j. And this is basically because of the definition. I mean, if you look at the definition of successive minima and directional basis, this is K. I grow K, grow it by a factor of lambda j. And that's the first point when I see b sub j. So every coordinate of b sub j has to be at most this quantity in absolute value. So now let me denote uppercase L sub j to be 1 over lambda j times d, rounded up. And I claim that if little lj is less than big Lj, then-- so if I dilate the bj vector by the factor little lj, and just look at these two inequalities-- I obtain an upper bound of epsilon over d on the L-infinity norm of lj bj, just looking at this bound here and the size of lj. And if this holds for all j, then summing up all of these individual inequalities, we find that the sum of these lj bj's is at most epsilon in L-infinity norm. So the point here is that we want to find the GAP in this Bohr set. And how does one think of a Bohr set? So it's kind of hard to imagine, because the Bohr set is a subset of Z mod n. But the right way to think about a Bohr set is in a higher dimensional lift, because a Bohr set is defined by looking at these numbers for each of the different values of r in R. So we think of each r as its own coordinate. So we think of there being size-of-R many coordinates. And we want to consider the set of x's so that all the coordinate values are small. So instead of considering a one-dimensional picture, as we do in the Bohr sets, we're considering a higher dimensional or d-dimensional picture and then eventually projecting what happens up there down to this Bohr set. So what does Minkowski's second theorem have to do with anything? Well, once you have this higher dimensional lattice, what we're going to do is find a large lattice parallelepiped, so a large structure inside this higher dimensional lattice, and then project it down onto one-dimensional Z mod n. So this is the process of-- so you already see some aspects of a GAP in here. So these guys, they're essentially the GAP that we're going to eventually wish to find. And right now, they live in this higher dimensional lattice. But we're going to pull them down to Z mod n. All right. Now, where do these b's come from? So each b sub j is congruent to some x sub j times this vector mod 1, where x sub j is an integer between 0 and n. So this inequality up here, star. So the i-th coordinate for star-- "coordinate" meaning this is an L-infinity bound, so the i-th coordinate is upper bounded by epsilon. But the i-th coordinate bound implies that if you look at this sum over here times r sub i divided by n, this quantity, whatever it is, is very close to an integer for each i. So the i-th coordinate bound implies this inequality, and it's true for every i. Thus what we find is that the GAP, which you already see in this formula over here-- so the GAP is given like that. So this GAP is contained in the Bohr set. So we found a large structure in the lattice. But the lattice came from this construction, which was directly motivated by the Bohr set. So we find a large GAP in the Bohr set. Well, we haven't shown yet it is large or that it is proper. So we need to check those two things.
To check that this GAP that we found is large, we're going to apply Minkowski's second theorem. Let's check GAP is large. So by Minkowski's second theorem, we find that the size of the GAP, which is, by definition, the product of these upper case L's-- so if you look at how the uppercase L's is defined, you see that this quantity is at least 1 over the product of the successive minima times denominator d to the d. And now we apply Minkowski's second. And we find that this quantity is at least the volume of k divided by 2 to the d times the determinant of the lattice times d to the d. But we saw what is the determinant of the lattice. It is 1 over N. You have d to the d, 2 to the d. And the volume of k, well, k is just that box. So the volume of k is 2 epsilon raised to d. So putting everything together, we find that the size of this GAP is the claimed quantity. It's a constant fraction of the entire group. The second thing that we need to check is properness. So what does it mean to be proper? So we just want to know that you don't have two different ways of representing the same term in the GAP. So if I have the following congruence, so if this combination of the x's is congruent to a different combination of the x's where these little l's are between 1 and-- OK, so I want to show-- so to check that it's proper-- so we're in Z mod n-- we just need to check that if this holds, then all the corresponding little l's must be the same as their primes. Well, if it is true, then setting-- let's go back to the lattice-- setting the vector b to be a vector originally that corresponds to the difference of these two numbers-- so if we set b to be the difference of these two numbers, we find that, first of all, it lies in Z to the d, because these two numbers are congruent to each other mod n. And furthermore, the L-infinity norm of b is upper bounded by-- I mean, each one of them has small l-infinity norm. And this is some number that is bounded. It's less than uppercase L. So the whole thing, this whole sum, the L-infinity norm, cannot be larger than this quantity over here, where I essentially use the triangle inequality to analyze this b term by term. All of these numbers are very small, because if you look at what we saw up there, so the size of b, we see that this whole thing is at most epsilon. And epsilon is strictly less than 1. So you have some vector b, which is an integer vector, such that all of its coordinates have L-infinity norms strictly less than 1. So that means that b is equal to 0. So b is the 0 vector. So b is a zero vector. Thus this thing here equals to 0. So this sum here equals to 0. And since the bi's form a basis, we find that the li's and l prime i's are equal to each other for all i. And this checks the properness of this GAP. Yeah. So this argument, it's not hard. But you need to check the details. So you need to wrap your mind around changing from working in a higher dimensional lattice setting to going back down to Z mod n. And the main takeaway here is that the right way to think about a Bohr set is to not stay in Z mod n but to think about what happens in d-dimensional space where d is the dimension of the Bohr set. OK. So now we have pretty much all the ingredients that we need to prove Freiman's theorem. And that's what we'll do at the beginning of next lecture. We'll conclude the proof of Freiman's theorem. 
And then I'll tell you also about an important conjecture in additive combinatorics called a polynomial Freiman-Ruzsa conjecture, which many people think is the most important open conjecture in additive combinatorics.
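As a small coda, the properness condition from the proposition above is easy to test computationally. Here is a sketch (the parameters below are arbitrary toy values, not from the lecture).

```python
import itertools

def gap_elements(n, steps, lengths):
    """All elements l1*x1 + ... + ld*xd mod n with 0 <= lj < Lj."""
    return [sum(l * x for l, x in zip(ls, steps)) % n
            for ls in itertools.product(*(range(L) for L in lengths))]

def is_proper(n, steps, lengths):
    elems = gap_elements(n, steps, lengths)
    return len(elems) == len(set(elems))             # proper = no collisions

n = 101
print(is_proper(n, steps=[1, 10], lengths=[5, 7]))   # True: 35 distinct elements
print(is_proper(n, steps=[2, 4], lengths=[5, 7]))    # False: e.g. 2*2 + 0*4 = 0*2 + 1*4
```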
YUFEI ZHAO: All right. Last time we started talking about pseudorandom graphs, and we considered this theorem of Chung, Graham, and Wilson, which, for dense graphs, gave several equivalent notions of quasi-randomness that, at least at face value, do not appear to be all that equivalent. But they are actually-- you can deduce one from the other. There was one condition at the very end which had to do with eigenvalues. And, basically, it said that if your second largest eigenvalue in absolute value is small, then the graph is pseudorandom. So that's something that I want to explore further today to better understand the relationship between eigenvalues of a graph and the pseudorandomness properties. For much of-- pretty much all of today, we're going to look at a special class of graphs known as n, d, lambda graphs. This just means we have n vertices, and we're only going to consider, mostly out of convenience, d regular graphs. So this will make our life somewhat simpler. And the lambda stands for that-- if you look at the adjacency matrix, and if you write down the eigenvalues of the adjacency matrix, then, well, what are these eigenvalues? The top one, because it's d regular, is equal to d. And lambda corresponds to the statement that all the other eigenvalues are, at most, lambda in absolute value. So the top one is equal to d. All the other ones in absolute value-- so it could be basically the maximum of these two-- is bounded above by lambda. And at the end of last time, we showed this expander mixing lemma, which, in this language, says that if G is n, d, lambda, then one has the following discrepancy-type pseudorandomness property, namely that if you look at two vertex sets and look at how many actual edges are between them compared to what you expect if this were a random graph of a similar density, then these two numbers are very similar, and the amount of error is controlled by your lambda. In particular, a smaller lambda gives you a more pseudorandom graph. In the second part of today's class, I want to explore the question of how small this lambda can be. So what's the optimal amount of pseudorandomness? But, first, I want to show you some examples. So, so far, we've been talking about pseudorandom graphs, and the only example, really, I've talked about is that a random graph is pseudorandom. Which is true. A random graph is pseudorandom with high probability, but some of the spirit of pseudorandomness is to come up with non-random examples, come up with deterministic constructions that give you pseudorandom properties. So I want to begin today with an example. A lot of examples, especially for pseudorandomness, come from this class of graphs called Cayley graphs, which are built from a group. So we're going to reserve the letter G for graphs, so I'm going to use gamma for a group. And I have a subset S of gamma, and S is symmetric, in that if you invert the elements of S, they remain in S. Then we define the Cayley graph given by this group and the set S to be the following graph, where V, the set of vertices, is just the set of group elements. And the edges are obtained by taking a group element and multiplying it by an element of S to go to its neighbor. So this is a Cayley graph. And Cayley graphs are-- start with any group, start with any subset of the group, you get a Cayley graph. And this is a very important construction of graphs. They have lots of nice properties. And, in particular, an example of a Cayley graph is a Paley graph. The names sound alike, but they're not related.
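Before turning to the Paley graph, here is a quick numerical check of the expander mixing lemma just recalled, on a small d-regular Cayley graph of Z mod n (a circulant graph). This is only an illustrative sketch assuming numpy; the connection set is chosen at random, and edges between overlapping sets are counted via the quadratic form.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 60
half = [int(s) for s in rng.choice(np.arange(1, n // 2), size=6, replace=False)]
S = set(half) | {n - s for s in half}             # a random symmetric connection set
d = len(S)

A = np.zeros((n, n))
for i in range(n):
    for s in S:
        A[i, (i + s) % n] = 1                     # the Cayley (circulant) graph

eigs = np.sort(np.linalg.eigvalsh(A))
lam = max(abs(eigs[0]), abs(eigs[-2]))            # second largest eigenvalue in abs. value

for _ in range(5):
    X = rng.random(n) < 0.3
    Y = rng.random(n) < 0.4
    eXY = X.astype(float) @ A @ Y.astype(float)   # e(X, Y) via the quadratic form
    expected = d * X.sum() * Y.sum() / n
    allowed = lam * np.sqrt(X.sum() * Y.sum())
    assert abs(eXY - expected) <= allowed + 1e-9
    print(f"e(X,Y) = {eXY:.0f}, expected {expected:.1f}, allowed error {allowed:.1f}")
```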
So a Paley graph is a special case of a Cayley graph obtained by considering the group, the cyclic group mod p, where p is a prime that is 1 mod 4. And I'm looking at S being the set of quadratic residues, mod p. It's actually nonzero quadratic residues. So nonzero elements mod p that are a square. So we will show in a second that this Paley graph has nice pseudorandom properties by showing that it is an n, d, lambda graph with lambda fairly small compared to the degree. Just a historical note-- so Raymond Paley-- so the Paley graph is named after him-- he was from the earlier part of the 20th century. So from 1907 to 1933. So he died very young at the age of 26, and he actually died in an avalanche when he was skiing by himself in Banff. So Banff is a national park in Alberta in Canada. And when I was in Banff earlier this year for a math conference-- so there's also a math conference center there-- so I had a chance to go visit Raymond Paley's tomb. So there's a graveyard there where you can find his tomb. And it's very sad that, in his short mathematical timespan, he managed to make a lot of amazing mathematical discoveries. And there are many important concepts named after him. So things like Paley-Wiener theorem, Paley-Zygmund, Littlewood-Paley, all these important ideas in analysis named after Paley. And the Paley graph is also one of his contributions. So what we'll claim is that this Paley graph has the desired pseudorandom properties, in that if you look at its eigenvalues, then-- except for the top eigenvalue-- all the other eigenvalues are quite small. So keep in mind that the size of S is basically half of the group. So p minus 1 over 2. So especially for larger values of p, these eigenvalues are quite small compared to the degree. So the main way to show that Cayley graphs like that have small eigenvalues is to just compute what the eigenvalues are. And this is actually not so hard to do for Cayley graphs, so let me do this explicitly. So I will tell you very explicitly a set of eigenvectors. And they are-- the first eigenvector is just the all 1's vector. The second eigenvector is the vector coming from 1, omega, omega squared, so omega to the p minus 1, where omega is a primitive p-th root of unity. The next one is 1, omega squared, omega to the fourth, all the way to omega to the 2 times p minus 1. And so on. So I make this list, and I have p of them. So these are my eigenvectors. And let me check that they are actually eigenvectors. And then we can also compute their eigenvalues. So the top eigenvector corresponds to d. So the all 1's vector in a d regular graph is always an eigenvector with eigenvalue d. And the other ones, we'll just do this computation. So instead of getting confused with indices, let me just compute, as an example, the j-th coordinate of the adjacency matrix times V2. So the j-th coordinate, so what it comes to, is the following sum. If I sum over s in S, I get omega raised to j plus s. So S is symmetric, so I don't have to worry so much about plus or minus. So I say j plus s. So if you think about how this Cayley graph is defined, if you hit this vector with that matrix, the j-th coordinate is that sum there. But I can rewrite the sum by taking out this common factor omega to j. And you see that this is the j-th coordinate of V2. And this is true for all j. So this number here is lambda 2.
And, more generally, lambda k is the following sum, for k being 1 through p. So when you plug in k equals 1, you just get d. And the others are sums of these exponential sums. Now, this is a pretty straightforward computation. And, in fact, we're not using anything about quadratic residues. This is a generic fact about Cayley graphs of Z mod p. So this is true for all Cayley graphs of Z mod p, for any S, not necessarily the quadratic residues. And the basic reason is that, here, you have this set of eigenvectors, and they do not depend on S. So you might know this concept from other places, such as circulant matrices and whatnot, but here it's just this simple computation. So now we have the values of lambda explicitly. I can now compute their sizes. I want to know how big this lambda is. Well, the first one, when k equals 1, it's exactly d, the degree, which is p minus 1 over 2. But what about the other ones? So, for the other ones, we can do a computation as follows. So note that I can rewrite lambda k by noting that if I take twice it and plus 1, then I obtain the following sum. Because here I am using that S is the set of quadratic residues. So if I consider this sum here, every quadratic residue gets counted twice, except for 0, which gets counted once. And now I would like to evaluate the size of this sum, this exponential sum. And this is something that's known as a Gauss sum. So, basically, a Gauss sum is what happens when you have something that's like a quadratic, an exponential sum with a quadratic dependence in the exponent. And the trick here is to consider the square of the sum. So the magnitude squared. Now if I expand the square-- so squaring is a common feature of many of the things we do in this course. It really simplifies your life. You do the square, you expand the sum. You can re-parameterize one of the summands like that. So do two steps at once. I'm re-parameterizing and I'm expanding. But now you see, if I expand the exponent, we find-- so that's just algebra. And now you notice that this sum here, the sum over a-- when b is nonzero, I claim that this sum is 0, because then I'm summing over all the p-th roots of unity. So here I'm assuming that k is not 0. Then when b is not 0, the sum over a is 0. And when b is 0, it equals p. So the whole sum here equals p. And, therefore, 2 lambda sub k plus 1 is equal to plus or minus root p. So lambda sub k is equal to minus 1 plus or minus root p, all over 2, for every eigenvalue other than the top one. So, really, except for the top eigenvalue, which is just the degree, all the other ones are one of these two values, and they're all quite small. So this is an explicit computation showing you that this Paley graph is indeed a pseudorandom graph. It's an example of a quasi-random graph. Yes. AUDIENCE: Do we know what the sign is? YUFEI ZHAO: The question is, do we know what the sign is? So we actually-- so here I am not telling you what the sign is, but you can look it up. Actually, people have computed exactly what the sign should be. And this is something that you can find in a number theory textbook, like Ireland and Rosen. Any more questions? There is a concept here I just want to bring out, that you might recognize sums like this. So this kind of sum. That's a Fourier coefficient.
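As an aside, this whole computation is easy to verify numerically. A short sketch (assuming numpy; p = 13 is an arbitrary small prime that is 1 mod 4):

```python
import numpy as np

p = 13                                             # a prime that is 1 mod 4
QR = {(x * x) % p for x in range(1, p)}            # nonzero quadratic residues
A = np.zeros((p, p))
for i in range(p):
    for s in QR:
        A[i, (i + s) % p] = 1                      # the Paley graph, (p-1)/2-regular

eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
print(eigs[0])                                     # the degree (p - 1) / 2 = 6
print(sorted({round(float(e), 6) for e in eigs[1:]}))
print((-1 + np.sqrt(p)) / 2, (-1 - np.sqrt(p)) / 2)  # the two predicted values

# The Gauss sum behind the computation: |sum_a w^(k a^2)| = sqrt(p) for k != 0.
w = np.exp(2j * np.pi / p)
print([abs(sum(w ** (k * a * a) for a in range(p))) for k in (1, 2, 3)], np.sqrt(p))
```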
So if you have some Fourier transform, I mean, this is exactly what Fourier transforms look like. And it is indeed the case that, in general, if you have an Abelian group, then the eigenvalues and the spectral information of the corresponding Cayley graph corresponds to Fourier coefficients. And this is the connection that we'll see also later on in the course when we consider additive combinatorics and giving a Fourier analytic proof of Roth's theorem. And there Fourier analysis will play a central role. But this is actually-- this analogy, as I've written it, is only for Abelian groups. If you try to do the same for non-Abelian groups, you will get something somewhat different. So for non-Abelian groups, you do not have this nice notion of Fourier analysis, at least in the versions that generalize what's above in a straightforward way. But, instead, you have something else, which many of you have seen before but under a different name. And that's representation theory, which, in some sense, is Fourier analysis, except, instead of one-dimensional objects and complex numbers, we're looking at higher-dimensional representations. So I just want to point out this connection, and we'll see more of it later on. Any questions? So let's talk more about Cayley graphs. So, last time, we mentioned these notions of quasi-randomness. And I said at the end of the class that many of these equivalences between quasi-random graphs, they fail for sparse graphs. If your edge density is not a constant-- if it goes to 0-- then the equivalences no longer hold. But what about for Cayley graphs? And, in particular, I would like to consider two specific notions that we discussed last time and try to understand how they relate to each other for Cayley graphs. So for dense Cayley graphs, it's a special case of what we did yesterday. So I'm really interested in sparser Cayley graphs, even down to constant degree. So that's much sparser than the regime we were looking at last time. And the main result I want to tell you is that the DISC condition is, in a very strong sense, actually equivalent to the eigenvalue condition for all Cayley graphs, including non-Abelian Cayley graphs. So before telling you what the statement is, I first want to give an example showing you that this equivalence is definitely not true if you remove the assumption of Cayley graphs. So here is an example showing that this is false for non-Cayley graphs. Because if you take, let's say, a large-- so let's say d regular graph. So let's say a large random d regular graph. d here can be a constant or growing with n, but this is a pretty robust example. And then I add to it an extra disjoint copy of K sub d plus 1 that's much smaller in terms of number of vertices. The big, large random graph, well, by virtue of being a random graph, has the discrepancy property. And because we're only adding in a very small number of vertices, it does not destroy the discrepancy property. The discrepancy property, if you're just adding a small number of vertices, it doesn't change much. So this whole thing has discrepancy. However, what about the eigenvalues? I claim that the top two eigenvalues are in fact both equal to d. And that's because you have two eigenvectors, one which is the all 1's vector on this graph, another which is the all 1's vector on that graph. These two disjoint components each give you an eigenvector with eigenvalue d, so you get d twice. And, in particular, the second eigenvalue is not small.
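This counterexample is easy to see numerically. Here is a sketch (assuming numpy; a small circulant graph stands in for the random d-regular graph, which is enough to make the point about the disjoint union).

```python
import numpy as np

n, d = 40, 4
A1 = np.zeros((n, n))
for i in range(n):
    for s in (1, n - 1, 2, n - 2):                # a circulant graph, 4-regular
        A1[i, (i + s) % n] = 1
A2 = np.ones((d + 1, d + 1)) - np.eye(d + 1)      # K_{d+1}, also d-regular

A = np.block([[A1, np.zeros((n, d + 1))],
              [np.zeros((d + 1, n)), A2]])        # the disjoint union
eigs = np.sort(np.linalg.eigvalsh(A))[::-1]
print(eigs[:3])                                   # the top two eigenvalues are both d = 4
```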
So the implication from DISC to eigenvalue really fails for non-Cayley graphs, for general graphs. The implication in the other direction is actually OK. In fact, that the eigenvalue condition implies DISC is actually the content of the expander mixing lemma. So this follows by expander mixing lemma. And that's because, if you look at the expander mixing lemma-- not just for Cayley graphs-- if you have the eigenvalue condition, then, automatically, you would find that the square root of the size of X times the size of Y is at most n. So if lambda is quite small compared to the degree, then you still have the desired type of quasi-randomness. So I'll make the statements more precise in a second. So the question is, how can we certify, how can we show that, in fact, DISC, which is a seemingly weaker property, implies the stronger eigenvalue property for Cayley graphs. And what is special about Cayley graphs that would allow us to do this, given that the statement is generally false for non-Cayley graphs? So let me define-- so let me first tell you the result. So this is the result due to David Conlon and myself two years ago. So many of you may not have been to many seminar talks-- there's this convention in mathematics talks where you don't write out your own full name, only your initial. It's some kind of false modesty. But, of course, we all love talking about our own results, but somehow we don't like to write our own name for some reason. So here's the theorem. So I start with a finite group, gamma. And let me consider a subset S of gamma that is symmetric. And consider G the Cayley graph. Let me write n for the number of vertices, and d the size of S. So this is a d regular graph. Let me define the following properties. The first property, I'll call DISC with epsilon. So I give you an explicit parameter. The number of edges between X and Y differs from the number of edges that you would expect-- d over n times the size of X times the size of Y, as in the expander mixing lemma-- by at most epsilon d n. So the DISC property is that this quantity is small relative to the total number of edges. The second property, which we'll call the eigenvalue property, EIG, is that G is an n, d, lambda graph, with lambda, at most, epsilon d. So lambda is quite small as a function of d. The conclusion of the theorem is that, up to a small change of parameters, these two properties are equivalent. In particular, EIG of epsilon implies DISC of epsilon. And DISC of epsilon-- and this is the-- the second one is the more interesting direction-- it implies EIG. Well, you lose a little bit, but, at most, a constant factor. EIG of 8 epsilon. Any questions about the statement so far? And so, as I mentioned, this is completely false if you consider non-Cayley graphs. And also, using the expander mixing lemma, using that implication up there, the first direction follows. One of the main reasons I want to show you a proof of this theorem is that it uses this tool which I think is worth knowing. And this is an important inequality known as Grothendieck's inequality. So many of you probably know Grothendieck as this famous French mathematician who reinvented modern algebraic geometry and spent the rest of his life writing tomes and tomes of text that have yet to be translated to English. But he also did some important foundational work in functional analysis before he became an algebraic geometry nerd.
And this is one of the important results in that area that he-- so Grothendieck's inequality tells us that there exists some absolute constant k such that for every matrix A-- so a real-valued matrix-- we have that the-- so we have that, if you-- so here's the idea. Let's consider the supremum-- so let's consider the following quantity. This is a bilinear form. So this is a bilinear form. This is basically a-- so bilinear form, if you hit it by a vector x and y from the two sides. And I'm interested in what is the maximum value of this bilinear form if you are allowed to take x and y to be plus/minus 1-valued real numbers? So this is an important quantity, and it gives you a matrix. And it's basically asking you, you get a sign of plus or minus to each row and column, and I want to maximize this number here. This is an important quantity that we'll see actually much more in the next chapter on graph limits. But, for now, just take my word. This is a very important quantity. And this is actually a quantity that is very difficult to evaluate. If I give you a very large matrix and ask you to compute this number here, there is no good algorithm for it. And it's believed that there is no good algorithm for it. On the other hand, there is a relaxation of this problem, which is the following. It's still a sum, but now, instead of considering the bilinear form there, let's consider the xi's and yi's. Not-- take them not form real numbers, but take vectors. So let's consider the sum where I'm taking a similar-looking sum, except that xi's and yi's come from a unit ball in some vector space with an inner product, where B is the unit ball in some Rm, where here the dimension is actually not so relevant. The dimension is arbitrary. If you like, you can make m n or 2n because you only have that many vectors. So this quantity here, just by very definition, is a relaxation of the right-hand of this quantity here. So it's at least this large. So, in particular, if you have whatever plus/minus, you can always look at the same quantity with m equal to 1, and you obtain this quantity here. But this quantity may be substantially larger. So the x and y's have more room to put themselves in to maximize the sum. And Grothendieck's inequality tells us that the left-hand side actually cannot be too much larger than the right-hand side. It exceeds it by, at most, a constant factor. So, in other words, the left-hand side, which is known as a semi-definite relaxation, you are not losing by more than a constant factor compared to the original problem. And this is important in computer science because the left-hand side turns out to be a Semidefinite Program, an SDP, which does have efficient algorithms to compute. So you can give a constant factor approximation to this difficult compute but important quantity by using semidefinite relaxation. And Grothendieck's inequality promises us that it is a good relaxation. You might ask, what is the value of k? So I said there exists some constant k. So this is actually a mystery. So the current proofs have been improved over time. And Grothendieck himself proved this theorem, but it constantly has been improved over time. And, currently, the best-known result is something along the lines of k roughly 1.78 works. But the optimal value, which is known as Grothendieck's constant, is unknown. So this is Grothendieck's constant. Actually, this, what I've written down is what's called the real Grothendieck's constant. 
Because you can also write a version for complex numbers and complex vectors, and that's the complex Grothendieck's constant. Yes. AUDIENCE: Is there a lower bound that's known [INAUDIBLE] greater than 1? YUFEI ZHAO: Is there a lower bound that is known? Yes. It's known that it's strictly bigger than 1. AUDIENCE: Do we know [INAUDIBLE]?? YUFEI ZHAO: So there are some specific numbers, but I forget what they are. You can look it up. Any more questions? So we'll leave Grothendieck's inequality. We'll use it as a black box. So if you wish to learn the proof, I encourage you to do so. There are some quite nice proofs out there. And we'll use it to prove this theorem here about quasi-random Cayley graphs. So let's suppose DISC holds. So what would we like to-- what do we like to show? We want to show that this eigenvalue condition holds. And we'll use the-- some min-max characterization of eigenvalues. But, first, some preliminaries. Suppose you have vectors x and y which have plus/minus 1 coordinate values. Then, by letting-- so let's consider the following vectors, where I split up x and y according to where they're positive and where they're negative. So, here, these are such that x plus is equal to-- so if I evaluate it on a coordinate g, then it's 1. So if x sub g is plus 1, and 0 otherwise. xg sub minus is 1 if x sub g is minus 1. 0 otherwise. So x splits into x plus minus x minus, and y splits into y plus minus y minus. Let's consider a matrix A where the g comma h entry of A is the following quantity. I have the set S, and I look at whether g inverse h lies in S. And I can consider an indicator of that. So it's 1 or 0. And then subtract d over n so that this value has mean 0. So this is a matrix. And now if I consider the bilinear form, hit A from left and right with x and y, then the bilinear form splits according to the plus and minuses of the x's. And I claim that each one of these terms is controlled because of DISC. So, for example, the first term is, if you expand out what this guy is-- so here's an indicator vector. That's an indicator vector. And if you look at the definition, then this is precisely the number of edges between x plus and y plus minus d over n times the size of x plus times the size of y plus, where x plus is the set of group elements such that x sub g is 1, and so on. All right. So the punchline up there is that this quantity-- so this quantity is, at most, by discrepancy, epsilon dn. So this sum here, by triangle inequality, is, at most, 4 epsilon dn. All right. So, so far, we've reinterpreted the discrepancy property. And what we really want to show is that this graph satisfies eigenvalue condition. So what does that actually mean to satisfy the eigenvalue condition? So by the min-max characterization of eigenvalues, it follows that the maximum of these two eigenvalues, which is the quantity that we would like to control, is equal to the following. It is equal to the supremum of this bilinear form when x and y are unit-length vectors. And this is simply because A is the matrix-- it's not the adjacency matrix. A is not the adjacency matrix. A is the matrix obtained by essentially taking the adjacency matrix and subtracting that constant there. And subtracting that constant gets rid of the top eigenvalue. And what you remained is whatever that's left. And you want to show that whatever you remained has small spectral radius. So we would like to show that this quantity here is quite small. Well, let's do it. So give me a pair of vectors, x and y. 
And let's set the following quantities, where I take a twist on this x vector by rotating the coordinates, setting x super s sub g, the coordinate g, to be x sub sg. So x is a vector indexed by the group elements, and then rotating this indexing of the group elements by s. So that's what I mean by superscript s. And, likewise, y superscript s is defined similarly. So I claim that these twists, these rotations, do not change the norm of these vectors. And that should be pretty clear, because I'm simply relabeling the coordinates in a uniform way. And, likewise, same for y. So I would like to show this quantity up here is small. So let's consider two unit vectors. And consider this bilinear form. If I expand out this bilinear form, it looks like that. I'm just writing it out. But now let me just throw in an extra variable of summation. What we'll do is essentially look at the same sum, but now I add in an extra s, and put this s over here. So convince yourself that this is the same sum. So it's simply re-parameterizing the sum. So this is the same sum. But now, if you look at the definition of A, there's this cancellation. So the two s's cancel out. So let's rewrite the sum. 1 over n, then g, h, s, all group elements. Then-- now, if I bring this summation of s, now I bring it inside, and then you see that what's inside is simply the inner product between the two vectors, x sub g-- between the two vectors. So this is-- so what's inside is simply the product, inner product, between these two. So I may need to redefine. Yes. So when you're looking at-- when you're talking about non-Abelian groups, it's always a question of which side should you multiply things by. And you guys are OK? Or I need to change this s to over here. But anyway, it should work. Yes, question. AUDIENCE: yh [INAUDIBLE]. YUFEI ZHAO: yh. Thank you. Yes, I think-- OK. Question. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: Great. So maybe I need to switch the definition here, but, in any case, some version of this should be OK. Yes. So figure it out later in the notes. But now-- OK. So you have this-- we have this here. And if you look at this quantity here, it is the kind of quantity that comes up in Grothendieck's inequality. So this is basically the left-hand side of Grothendieck's inequality. What about the right-hand side of Grothendieck's inequality? Well, we already controlled that. We already controlled that because we said, whenever you have up there little x and little y-- so the conclusion of this board was that-- let me erase over here. So the conclusion of this board was that this bilinear form is bounded by, at most, 4 epsilon d, for all x and y being plus/minus 1 coordinate valued. So combining them by Grothendieck, we have an upper bound, which is the Grothendieck constant times 4 epsilon-- so 4 epsilon dn. There's a-- sorry. There's an n missing here. And, therefore, because the Grothendieck constant is less than 2, we have a bound of 8 epsilon d. And this shows that this variational problem, which characterizes the largest eigenvalue in absolute value, is, at most, 8 epsilon d, thereby implying the eigenvalue property. So the main takeaway from this proof, two things. One is Grothendieck's inequality is a nice thing to know. So it's a semidefinite relaxation that changes the problem, which is initially somewhat intractable, to a semidefinite problem which is both, from a computer science point of view, algorithmically tractable, but also has nice mathematical properties. 
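To make the first takeaway concrete, here is a small sketch of the plus/minus 1 bilinear optimization and its semidefinite relaxation. It is not from the lecture and assumes the cvxpy package for the SDP; the brute-force part is only feasible for tiny matrices.

```python
import itertools
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(4)
m = 5
A = rng.standard_normal((m, m))

# Exact +/-1 optimum by brute force (2^m * 2^m sign patterns; tiny m only).
signs = [np.array(s) for s in itertools.product([-1, 1], repeat=m)]
best = max(x @ A @ y for x in signs for y in signs)

# Semidefinite relaxation: a PSD matrix with unit diagonal standing in for the Gram
# matrix of the unit vectors x_i and y_j; its top-right block carries the objective.
Z = cp.Variable((2 * m, 2 * m), PSD=True)
prob = cp.Problem(cp.Maximize(cp.sum(cp.multiply(A, Z[:m, m:]))),
                  [cp.diag(Z) == 1])
sdp = prob.solve()

print(best, sdp, sdp / best)   # the ratio lies between 1 and Grothendieck's constant
```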
And for this application here, there's this nice trick in this proof where I'm symmetrizing the coordinates using the group symmetries. And that allows me to obtain this characterization showing that eigenvalue condition and this discrepancy condition are equivalent for Cayley graphs. Let's take a quick break. Any questions so far? So we've been talking about n, d, lambda graphs. So d regular graphs. And the next question I would like to address is, In an n, d, lambda graph, how small can lambda be? So smaller lambda corresponds to a more pseudorandom graph. So how small can this be? And the right kind of setting that I want you to think about is think of d as a constant. So think of d as a constant, and n getting large. So how small can lambda be. And it turns out there is a limit to how small it can be. And it is known as the Alon-Boppana bound, which tells you that if you have a fixed d-- and so G is an n-vertex graph with adjacency matrix eigenvalues lambda 1 through lambda n, sorted in non-increasing order. Then the second largest eigenvalue has to be at least, basically, 2 root d minus 1 minus a small error term, little on-- little o1, where the little o1 goes to 0 as n goes to infinity. So the Alon-Boppana bound tells you that the lambda cannot be below this quantity here. And I want to explain what is the significance of this quantity, and you will see it in the proof. And this quantity is the best possible. And it also says what do we know about the existence of graphs which have lambda 2 close to this number. So this is the optimal number you can put here. Question. AUDIENCE: Does it say anything about how negative lambda n can be? YUFEI ZHAO: Question-- does it say how negative lambda n can be? So I'll address that in a second, but, essentially, if you have a bipartite graph and lambda n equals to minus lambda 1. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: More questions? So I want to show you a proof and, time permitting, a couple of proofs of Alon-Boppana bound. And they're all quite simple to execute, but the-- I think it's a good way to understand how these special techniques work. So, first, as with all of the proofs that we did concerning-- or most of them-- concerning eigenvalues, we're looking at the Courant-Fischer characterization of eigenvalues. It suffices to show, to exhibit some vector z-- so a nonzero vector-- such that z is orthogonal to the all 1's vector and this quotient is at least the claimed bound. So by the Courant-Fischer characterization of the second eigenvalue, if you vary over all such d that are orthogonal to the unit vector, then the maximum value this quantity attains is equal to lambda 2. So to show the lambda 2 is large, it suffices to exhibit such a z. So let me construct such a z for you. So let r be a positive integer. And let's pick an arbitrary vertex v. So v is a vertex in the graph. And let V sub i denote vertices at distance exactly i from V. From-- yes, from V. So, in particular, V0 is equal to V-- and I can just draw you a picture. So you have V0, and then the neighbors of V0, and each of them have more neighbors. Like that. So I'm calling V0 this stuff, big V0. And then big V1, V sub 2, and so on. So I'm going to define a vector, which I'll eventually make into z, by telling you what is the value of this vector on each of these vertices. I will do this by setting very explicitly-- so set x to be a vector with value x sub u to be wi, where wi is d minus 1 raised to power minus i over 2 whenever u lies in set big V sub i. 
So u is distance exactly i from V. I set it to this number. So notice that they decrease as you get further away from V. And I do this for all distances less than r. So this is my x vector. And I set all the other coordinates to be 0 if the distance between u and V is at least r. So that gives you this vector. And I would like to compute that quotient over there for this vector. And I claim that this quotient here is at least the following quantity. But this is a computation, so let's just do it. So why is this true? Well, if you compute the norm of x-- so I'm just taking the sum of the squares of these coordinates. Well, that comes from adding up these values. So for each element in the i-th neighborhood, I have wi squared. And if I look at that quantity up there, so what is this? A is the adjacency matrix. So over here, A is the adjacency matrix. So this quantity, I can write it as a sum over all vertices u. And I look at x sub u, and now I sum again over all neighbors of u, and consider x sub u prime. It's that sum there. But this sum, I have some control over, because it is-- so what's happening here? I claim it is at least the following quantity. Consider where u is. So u could be-- I mean, it's only nonzero if u lies in one of the neighborhoods up to the r minus 1st. So in the i-th neighborhood, I have V sub i possible choices for the vertex u. For that choice, this x sub u is w sub i. But what about all its neighbors? So it could have neighbors, well, in the same set going left. But there's-- so there's one neighbor going left, and all the other neighbors are-- maybe it's in the same set, maybe it's in the next set. But, in any case, I have the following inequality. There's one neighbor in the same-- in the left, if you look at that picture just now. And then all the remaining neighbors have x sub u primes at least w sub i plus 1, because these weights are decreasing. So the worst case, so to speak, is if all the neighbors point to the next set. So I had that inequality there. There's an issue. Because if you go to the very last set, if you go to the very last set and think about what happens, when u is in that very last set, I'm overcounting neighbors that no longer have weights. So I need to take them out. So I should subtract d minus 1 times-- and so this is the maximum possible weight sum I could have-- maximum possible overcount. So each vertex here has at most d minus 1 such neighbors. All right. So this is-- should be pretty straightforward if you do the counting correctly. But now let's plug in what these weights are. And you'll find that this sum here, this quantity, is equal to-- so the key point here is that this thing simplifies very nicely if you consider what this is. So what ends up happening is that you get this extra factor of 2 root d minus 1. And then the sum minus 1/2 of V sub [INAUDIBLE]. It's a pretty straightforward computation using the specific weights that we have. And one more thing is that notice that this-- so notice that the sizes of each neighborhood cannot expand by more than a factor of d minus 1, because, well, you only have d minus 1 outward edges going forward at each step. And, as a result, I can bound this guy. And so what you find is that this whole thing here is at least 2 times root d minus 1. The main term is the sum. And this here is less than each individual summand. So I can do 1 minus 1 over 2r. Putting these two together, you find the claim. All right. So I've exhibited this vector x, which has that quotient property.
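This computation is easy to try out numerically. The following is a minimal sketch, assuming numpy and networkx are available; the degree d, the graph size, the root vertex, and the cutoff radius r are arbitrary illustrative choices, not values from the lecture. It puts weight (d-1)^(-i/2) on the distance-i shell around the root and evaluates the quotient, which should come out close to 2 times the square root of d-1.

import numpy as np
import networkx as nx

# Illustrative parameters (not from the lecture): a random d-regular graph,
# an arbitrary root vertex, and a cutoff radius r for the weight vector.
d, n, r = 4, 2000, 6
G = nx.random_regular_graph(d, n, seed=0)
A = nx.to_numpy_array(G)

root = 0
dist = nx.single_source_shortest_path_length(G, root)

# x_u = (d - 1)^(-i/2) if u is at distance i < r from the root, else 0.
x = np.zeros(n)
for u, i in dist.items():
    if i < r:
        x[u] = (d - 1) ** (-i / 2)

print("quotient x.A.x / x.x      :", x @ A @ x / (x @ x))
print("2*sqrt(d-1)               :", 2 * np.sqrt(d - 1))
print("rough bound from the proof:", 2 * np.sqrt(d - 1) * (1 - 1 / (2 * r)))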
But that's not quite enough, because we need a vector-- so it's called z up here-- that is orthogonal to the all 1's vector. And that you can do, because if the number of vertices is quite a bit larger than-- compared to the degree, then I claim that there exists u and v vectors-- vertices that are at distance at least 2r. So if I let-- this is the size of this tree. So if you have-- everything is within distance r-- distance 2r from a vertex, then they all lie on this tree edge. If you count the number of vertices in that tree, it's what I have-- the sum I've written here. So if I consider these two vectors-- so be-- so x be the vector obtained above, which is, in some sense-- and I'm being somewhat informal here-- centered at v. And if I let y be the vector but I center it now at the vector-- at u, then I claim that, essentially, x and y are supported on disjoint vertex sets that have no edges even between them. So, in particular, this inner product-- this bilinear form-- not inner product but this bilinear form-- is equal to 0, since no edge between the supports of x and y. So now I have two vectors that do not interact, but both have this nice property above. And now I can take a linear combination. Let me choose a constant c-- so it's a real constant-- such that this z equal to x minus cy has-- and I can choose this constant. So x and y are both non-negative entries. They're both nonzero, and I can choose this constant c so that it is-- this z is orthogonal to the all 1's vector. And I now I have this extra property I want. But what about the inner products? Well, these two vectors, x and y, they do not interact at all. So their inner products split just fine, and the bilinear form splits just fine. So you have this inequality here, as desired. And r, notice that I can take r going to infinity as n going to infinity, because d is fixed. So if n goes to infinity, then r can go to infinity, roughly a logarithmic n. And that proves the Alon-Boppana bound. And just to recap, to prove this bound, we needed to exhibit by the Courant-Fischer some vector with a nice-- this quotient such that this quotient is large. And we exhibit this quotient by constructing the vector explicitly around the vertex and finding two such vertices that are far away from constructing these two vectors, taking the appropriate linear combination so that the final vector is orthogonal to the unit vector, to the all 1's vector, and then showing that the corresponding bilinear form has-- is large enough. Any questions? I want to show you a different proof which gives you a slightly worse result, but the proof is conceptually nice. So let me give you a second proof which is slightly weakening. And just that we'll show-- so we'll show that-- so the earlier proof showed that lambda 2 is quite large. But, next, we'll show that the max of lambda 2 and the lambda n is large. So not that the second largest eigenvalue is large, but the second largest eigenvalue in absolute value is large. So it's slightly weaker, but, for all intents and purposes, it's the same spirit. So I'll show this one here. And this is a nice illustration of what's called a trace method, sometimes also a moment method. Here's the idea. As we saw in the proof relating the quasi-randomness of C4 and eigenvalues, well, C4's are-- eigenvalues are related to counting closed walks in a graph. And so we'll use that counting closed walks in a graph. 
And, specifically, the 2k-th moment of the spectrum is equal to the trace of the 2k-th power, which counts the number of closed walks of length exactly 2k. So to lower bound the left-hand side, we want to lower-bound the right-hand side. So let's consider closed walks starting at a fixed vertex. So the number of closed walks of length exactly 2k starting at a fixed vertex v. Here we're in a d regular graph. So here we are in a d regular graph. I claim, whatever this number is-- it may be different for each vertex-- it is at least the same quantity if I do this walk in an infinite d regular tree. So what is an infinite d regular tree? This is an infinite d regular tree. We just start with the vertex, and go out d regular. So why is this true? So think about how you walk. So let me just explain. This is, I think, pretty easy once you see things the right way. So start with a vertex v. Think about how you walk. And whatever way you can walk, well, you can walk the same way on the infinite d regular tree. Well, I mean, sorry. Whatever walk you can do on an infinite d regular tree, if you label the first vertex, the first edge, second edge, if you do a corresponding labeling on your original graph, you can do that walk on your original graph. Although the original graph may have some additional walks, namely things that involve cycles, that are not available on your tree. But, certainly, every walk, you can do. Every closed walk you can do on a tree, you can do the same walk on your graph. So you can make this more formal. So you can write down a bijection or injection to make this more formal, but it should be fairly convincing that this inequality is true. But this is just a number. So this is the number of closed walks of length 2k in a d regular tree starting at the vertex. And this number has been well studied, and we don't need to know the precise number. We just need to know some good lower bound. And here is one lower bound, which is that there's at least a Catalan number, the k-th Catalan number, times d minus 1 to the k, where C sub k is the k-th Catalan number, which is equal to 2k choose k divided by k plus 1. So let me remind you what this is. It's a wonderful number that has many combinatorial interpretations, and it's a fun exercise to do bijections between them. But, in particular, C3 is equal to 5, which counts the number of up-and-down walks of length 6 that never dip below the horizontal line where you start. So, then, this corresponds to going away from the root versus coming back to the root. So you have at least that many ways. And when you are moving away from the root, you have d minus 1 choices on which branch to go to. OK, good. Given that, the right-hand side is at least, then, n, the number of vertices, times the quantity above related to Catalan numbers. On the other hand, the left-hand side is at most-- here we're using that 2k is an even number-- is at most d to the 2k plus the contribution of all the other eigenvalues, which are at most lambda in absolute value. So let me call this quantity lambda. Rearranging this inequality, we find that lambda to the 2k is at least this number here. Just here, I'm changing n minus 1 to n. So we have that. And now what can we do? We let n go to infinity and k go to infinity slowly enough. So if k goes to infinity and n goes to-- so k goes to infinity with n, but not too quickly. But k is little o of log n. And we find that this quantity here is essentially 2 to the k-- 2 to the 2k. And this guy here is little o1. So lambda is at least 2 root d minus 1 minus little o1.
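The two counts in this argument are easy to check with a small dynamic program over the distance from the root. The sketch below (with d and the range of k chosen only for illustration) counts closed walks of length 2k in the infinite d-regular tree and compares them with the Catalan lower bound C_k times (d-1)^k, whose 2k-th root approaches 2 root d-1.

from math import comb, sqrt

def closed_walks_in_tree(d, length):
    # Closed walks of the given length in the infinite d-regular tree, starting
    # and ending at the root; dynamic program over the distance from the root
    # (the tree is homogeneous, so the depth is all that matters).
    counts = {0: 1}                       # the length-0 walk: stay at the root
    for _ in range(length):
        new = {}
        for depth, c in counts.items():
            if depth == 0:                # d ways to step away from the root
                new[1] = new.get(1, 0) + d * c
            else:                         # 1 step back, d - 1 steps outward
                new[depth - 1] = new.get(depth - 1, 0) + c
                new[depth + 1] = new.get(depth + 1, 0) + (d - 1) * c
        counts = new
    return counts.get(0, 0)

d = 4
for k in range(1, 9):
    catalan = comb(2 * k, k) // (k + 1)
    lower = catalan * (d - 1) ** k
    walks = closed_walks_in_tree(d, 2 * k)
    print(k, walks, ">=", lower,
          " (lower)^(1/2k) =", round(lower ** (1 / (2 * k)), 3),
          " 2*sqrt(d-1) =", round(2 * sqrt(d - 1), 3))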
That proves, essentially, the Alon-Boppana bound, although a small weakening because we are-- this big eigenvalue, you might find might actually be very negative instead of very positive. But that's OK. For applications, this is not such a big deal. These are two different proofs. And now, we think about, are they really the same proof? Are they different proofs? Are they related to each other? So it's worth thinking about. They look very different, but how are they related to each other? And one final remark. You already saw two different proofs as to-- I mean, that shows you this number, and you see where this number comes from. And let me just offer one final remark on where that number really comes from. And it really comes from this infinite d regular tree. So it turns out that 2 root d minus 1 exactly is the spectral radius of the infinite d regular tree. And that is the reason, in some sense, that this is the correct number occurring Alon-Boppana bound. This is-- if you've seen things like algebraic topology or topology, this is a universal cover for d regular graphs. So I won't talk more about it, but just some general remarks, and you already saw two different proofs. So beginning of next time, I want to wrap this up and to show you-- to explain some-- what we know about are there graphs for which this bound is tight? And the answer is yes, and there are lots of major open problems as well related to what happens there. And then, after that, I would like to start talking about graph limits. So that's the next chapter of this course. OK, good.
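As a small numerical preview of that tightness question, one can compute the second-largest adjacency eigenvalue of a few random d-regular graphs and compare it with 2 root d-1. The sketch below assumes numpy and networkx are available and uses arbitrary sizes chosen only for illustration.

import numpy as np
import networkx as nx

d = 4
bound = 2 * np.sqrt(d - 1)
for n in [100, 500, 2000]:
    G = nx.random_regular_graph(d, n, seed=1)
    eigs = np.sort(np.linalg.eigvalsh(nx.to_numpy_array(G)))[::-1]
    # eigs[0] is d (the all-ones eigenvector); eigs[1] is lambda_2.
    print(f"n = {n:5d}   lambda_2 = {eigs[1]:.4f}   2*sqrt(d-1) = {bound:.4f}")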
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
26_Sumproduct_problem_and_incidence_geometry.txt
YUFEI ZHAO: Today we want to look at the sum product problem. So for the past few lectures, we've been discussing the structure of sets under the addition operation. Today we're going to throw in one extra operation, so multiplication, and understand how sets behave under both addition and multiplication. And the basic problem here is, can it be the case that A plus A, A times A, which is, analogously, the set of all pairwise products of elements from A-- can these two sets be simultaneously small, that is, for the same single set A? Can we have it so that A plus A and A times A are simultaneously small? For example, it's easy to make one of them small. We've seen examples where if you take A to be an arithmetic progression, then A plus A is more or less as small as it gets. But for such an example, you see A times A is pretty large. It's actually not so clear how to prove how large it is. And there are some very nice proofs. And this problem has actually been more or less pinned down. But the short version is that A times A has size close to its maximum possible. So it turns out the size of A times A is almost quadratic. So this number is actually now known fairly precisely. So this problem of determining the size of A times A for the interval 1 through N is known as the Erdos multiplication table problem. So if you take an N by N multiplication table, how many numbers do you see in the table? So that turns out to be sub-quadratic, but not too sub-quadratic. So this problem has been more or less solved by Kevin Ford. And we now know a fairly precise expression, but I don't want to focus on that. That's not the topic of today's lecture. This is just an example. Alternatively, you can take A times A to be quite small by taking A to be a geometric progression. Then it's not too hard to convince yourself that A plus A must be fairly large in that case. And the geometric progression doesn't have so much additive structure, so A plus A will be large. So can you make A plus A and A times A simultaneously small? So there's this conjecture that the answer is no. And this is a famous conjecture in this area, known as the Erdos-Szemeredi conjecture on the sum product problem, which states that for all finite sets of real numbers, either A plus A or A times A has to be close to quadratic size. So that's the conjecture. It's still very much open. Today I want to show you some progress towards this conjecture via some partial results. And it will use a nice combination of tools from graph theory and incidence geometry, so it nicely ties in together many of the things that we've seen in this course so far. So Erdos and Szemeredi proved some bound, where the exponent is like 1 plus c for some constant c. Today we'll show some bounds for somewhat better c's. So you'll see. The first tool that I want to introduce is a result from graph theory known as the "crossing number inequality." So you know that planar graphs are graphs that you can draw in the plane so that the edges do not cross. And there are some famous examples of non-planar graphs, like K5 and K 3, 3. But you can ask a more quantitative question. If I give you a graph, how many crossings must you have in every drawing of this graph? And the crossing number inequality provides some estimate for such a quantity. So given a graph G, denote by cr of G, so the crossing number of G, the minimum number of crossings in a planar drawing of G. There is a bit of subtlety here, where by a planar drawing, do I mean using line segments or do I mean using curves?
It's actually not clear how it affects this quantity here. That's a very subtle issue. So for planar graphs, there's a famous result that more or less says if a planar graph can be drawn using continuous curves, then it can be drawn using straight lines. But the minimum number of crossings, the two different ways of drawings, they might end up with different crossing numbers. But for the purpose of today's lecture, we'll use a more general notion, although it doesn't actually matter for today which one we'll use-- so planar drawing using curves. Draw the graph where edges are continuous curves. How many crossings do you get? The crossing is a pair of edges that cross. You can ask-- it's just a cross over point that can-- it doesn't matter. So there are many different subtle ways of defining these things. They won't really come up for today's lecture. The crossing number inequality is a result from the '80s, which give you a lower-bound estimate on the number of crossings. If G is a graph with enough edges-- the number of edges is, let's say, at least four times the number of vertices-- then the number of crossings of every drawing of G is at least the number of edges cubed divided by the number of vertices squared. And there's an extra constant factor, which is some constant. So the constant does not depend on the graph. In particular, if it has a lot of edges, then every drawing of G must have a lot of crossings. So the crossing number inequality was proved by two separate independent works, one by Ajtai, Chvatal, Newborn, Szemeredi and the other by Tom Leighton, our very own Tom Leighton. So let me first give you some consequences of this theorem, just for illustration. So if you have an n-vertex graph with a quadratic number of edges, then how many crossings must you have? You plug in these parameters into the theorem. See that it has necessarily n to the 4th crossings. But if you just draw the graph in some arbitrary way, you have at most n to the 4 crossings, because a crossing involves four points. So when you have a quadratic number of edges, you must get basically the maximum number of crossings. The leading constant term factor is an interesting problem, which we're not going to get into. Let's prove the crossing number inequality. First, the base case of the crossing number inequalities is when you can draw a graph with no crossings. And those are planar graphs. So for every connected planar graph, if it has at least one cycle-- and you'll see why in a second, why I say this-- if with at least one cycle, so that's not a tree, we must have that 3 times the number of faces is at most 2 times the number of edges. So here, we're going to use the key tool being Euler's formula, which we all know as the number of vertices minus the number of edges plus the number of faces equals to 2. We're here for face, because I draw a planar graph, and so I count the faces. Here there are two faces, outer face, inner face, count edges and vertices, so you have Euler's formula up there. And plug in Euler's formula for a planar graph with at least one cycle, so we can obtain this consequence over here, because every face is adjacent to at least three edges. If you go around the face, you see these three edges, and every edge is counted exactly twice, is adjacent to exactly two faces. So you do the double counting, you get that inequality up there. So plugging these two into Euler gets you that inequality up there. 
Plugging these two into Euler, we get that the number of edges is at most 3 times the number of vertices minus 6. So we had that inequality; plug it into Euler, plug this into Euler, and you get this. So we have that the number of edges is at most 3 times the number of vertices for every graph G. So here, we require that the graph is planar and has at least one cycle, but even if we drop the condition that it has at least one cycle but just require that it's planar, every planar graph G satisfies this inequality over here. So in other words, you might have heard before, in a planar graph, the average degree of a vertex is less than 6. So in particular, the crossing number of a graph G is positive if the number of edges exceeds 3 times the number of vertices. It's not planar, so it has at least one crossing in every drawing. And by deleting an edge from each crossing, we get a planar graph. You draw the graph. You have some crossings. You get rid of an edge associated with each crossing. Then you get a planar graph. If you look at this inequality and you account for the number of edges that you deleted, we obtain then the inequality that the number of edges minus the number of crossings is at most 3 times the number of vertices. So we obtain the inequality that lower bounds the number of crossings by the number of edges minus 3 times the number of vertices, this one. So that's some lower bound on the crossing number. It's not quite the bound that we have over there. And in fact, if you take a graph with a quadratic number of edges, this bound here only gives you a quadratic lower bound on the crossing number, some lower bound. But it's not a great lower bound. And we would like to do better. So here's a trick that is a very nice trick, where we're going to use this inequality to upgrade it to a much better inequality, bootstrap it to a much tighter inequality. So this involves the use of the probabilistic method. Let me denote by p some number between 0 and 1, to be decided later. And starting with a graph G, let's let G prime, with vertices and edges being V prime and E prime, be obtained from G by randomly deleting some of the vertices, or rather randomly keeping each vertex with probability p, independently for each of these vertices. So you have some graph G. I keep each vertex with probability p. And I delete the remaining vertices. And I get a smaller graph. I get some induced subgraph. And I would like to know what can we say about the crossing number of the smaller graph in comparison to the crossing number of the original graph? For the smaller graph-- it's not a planar graph, but it's still a graph, so G prime still satisfies this inequality up here. So G prime still satisfies that the number of crossings in every drawing of G prime is at least the number of edges of G prime minus 3 times the number of vertices of G prime. But note that G prime is a random graph. G was fixed, given. G prime is a random graph. So let's evaluate the expectation of both quantities, left-hand side and right-hand side. If this inequality is true for every G prime, the same inequality must be true in expectation. Now what do we know about the expectations of each of these quantities? The number of vertices in expectation-- that's pretty easy. So this one here is p times the original number of vertices. The number of edges is also pretty easy. Each edge is kept if both endpoints are kept.
So this expectation on the number of edges remaining is also pretty easy to determine. The crossing number of the new graph-- that I have to be a little bit more careful of, because when you look at the smaller graph, maybe there's a different way to draw it that's not just deleting some of the vertices from the original graph. So even though the original graph might have a lot of crossings, when you go to a subgraph, maybe there's a better way to draw it. But we just need an inequality in the right direction. So we are still OK. And I claim that the crossing number of G prime is in expectation at most p to the 4th times the crossing number of G. Because if you keep the same drawing, then the expected number of crossings that are kept-- each crossing is kept if all four of its endpoints are kept. So each crossing is kept with probability p to the 4th. So you can draw it in expectation with this many crossings. Maybe it's much less. Maybe there's a better way to draw it, but you have an inequality going in the right direction. Looking at that inequality up there in yellow, we find that the crossing number of G is at least p to the minus 2 times E minus 3 p to the minus 3 times V. And this is true for every value of p between 0 and 1. So now you pick a value of p that works most in your favor. And it turns out you should do this by setting these two quantities to be roughly equal to each other. So setting p between 0 and 1 so that 4 times the-- basically, set these two terms to be roughly equal to each other. And then we get that this quantity here is at least the claimed quantity, which is E cubed over V squared up to some constant factor, which I don't really care about. In order to set p, I have to be a little bit careful that p is between 0 and 1. If you set p to be 1.2, this whole argument doesn't make any sense. So this is OK. So we know p is at most 1 as long as E is at least 4V. I mean, the 4 here is not optimal, but if 4 were 2, then it's not true. So if E is 2V, you can have a planar graph, so you shouldn't have a lower bound on the crossing number. So this is the proof of the crossing number inequality. As I said, if you have lots of edges, then you must have lots of crossings. Any questions? So let's use the crossing number inequality to prove a fundamental result in incidence geometry. Incidence geometry is this area of discrete math that concerns fairly basic-sounding questions about incidences between, let's say, points and lines. And here's an example. So what's the maximum number of incidences between n points and n lines, where by "incidence" I mean if P-- so curly P-- is a set of points, and curly L is a set of lines, then I write I of P and L to be the number of pairs, one point, one line, such that the point lies on the line. So I'm counting incidences between points and lines. You can view this in many ways. You can view it as a bipartite graph between points and lines, and we're counting the number of edges in this bipartite graph. So I give you n points, n lines. What's the maximum number of incidences? It's not such an obvious question. So let's see how we can approach this question. But first, let me give you some easy bounds. So here's a trivial bound-- so here, I want to know if I give you some number of points, some number of lines, what's the maximum number of incidences. So a trivial bound is that the number of incidences is at most the product between the number of points and the number of lines. One point, one line, at most one incidence.
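For small configurations, the incidence count itself is easy to compute by brute force. Here is a minimal sketch; the points and lines are arbitrary illustrative choices, with lines given as slope-intercept pairs (so vertical lines are not handled) and exact rational arithmetic used to avoid floating-point equality issues. It also prints the trivial bound just mentioned.

from fractions import Fraction

def incidences(points, lines):
    # Count pairs (point, line) with the point on the line; a line is given
    # as a (slope, intercept) pair.
    return sum(1 for (x, y) in points for (m, b) in lines if y == m * x + b)

# Arbitrary small example: a 5 x 5 integer grid and a handful of lines.
points = [(Fraction(x), Fraction(y)) for x in range(5) for y in range(5)]
lines = [(Fraction(m), Fraction(b)) for m in range(1, 3) for b in range(4)]
print("incidences   :", incidences(points, lines))
print("trivial bound:", len(points) * len(lines))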
So that's pretty trivial. We can do better. So we can do better because, well, you see, let's use this following fact, that every line-- so every pair of points determine at most one line. I have two points. There's at most one line that contains those two points. Using this fact, we see that the number of-- so let's count the number of triples involving two points and one line such that both points lie on the line. So how big can this set be? So let's try to count it in two different ways. On one hand, this quantity is at most the number of points squared, because if I give you two points, then they determine this line-- so at most the number of points squared. But on the other hand, we see that if I give you a line, I just need to count now the number of-- let me also require that these two points are distinct. So if I give you a line, I now need to count the number of pairs of points on this line. So I can enumerate over lines and count line by line how many pairs of points are on that line. So I get this quantity over here. On each line, I have that contribution. And now, using Cauchy-Schwartz inequality, we find that this squared term is at least the number of incidences divided by the number of lines. And the remaining minus 1 term contributes just to the number of incidences. So the first is by Cauchy-Schwartz. So putting these two inequalities together, we get some upper bound on the number of incidences. If you have to invert this inequality, you will get that the number of incidences between points and lines is upper bounded by the number of points times the number of lines raised to power 1/2 plus the number of lines. So that's what you get from this inequality over here. By considering point-line duality-- so whenever you have this kind of setup involving points and lines, you can take the projected duality and transform the configuration into-- lines into points and points into lines, and the incidences are preserved. So I also have an inequality. By duality-- I also have an inequality where I switch the roles of points and lines. So I is already the numbers. I don't need to put an extra absolute value sign. So the number of points and lines is upper bounded by the number of lines times the square root of a number of points plus an extra term, just in case there are very few lines. So these are the bounds that you have so far. And the only thing that we have used so far is the fact that every two points determine at most one line, and every two lines meet at at most one point. So these are the bounds that we get. And in particular, for end points and end lines, we get the number of incidences is-- they go off n to the 3/2. This should remind you of something we've done before. So in the first part of this course, when we were looking at extremal numbers, where did 3/2 come up? AUDIENCE: [INAUDIBLE] like C4? YUFEI ZHAO: C4, yeah. So if you compare this quantity to the extremal number of C4, it's also n to the 3/2. And in fact, the proof is exactly the same. All we're using here is that the incidence graph is C4-free So in fact, this is an argument about C4-free graphs. So this fact here, every two points determine at most one line, is saying that if you look at the incidence graph, there's no C4. That's all we're using for now. Any questions? So is this the truth? Now, back when we were discussing the extremal number for C4-free graphs, we saw that, in fact, this is the correct order. And what was the construction there? 
So the construction also came from incidences, but incidences of taking all lines and points in the finite field plain, Fq squared. If you look at all the lines and all the points in a finite field plain, then you get the correct lower bound for C4. But now we are actually working in the real plane, so it turns out that the answer is different when you're not working the finite field. We're going to be using the topology of the real plane. And we're going to come up with a different answer. So it turns out that the truth for the number of maximum number of incidences in the plane, for points and lines in the real plane, is not exponent 3/2, but turns out to be 4/3. And this is a consequence of an important result in incidence geometry, a fundamental result, known as the Szemeredi-Trotter theorem. So the Szemeredi-Trotter theorem says that the number of incidences between points and lines is upper bounded by this function where you look at the number of points times the number of lines, and each raised to power 2/3 and plus some additional terms, just in case there are many more lines compared to points or way more points compared to lines. So that's the Szemeredi-Trotter theorem. And as a corollary, you see that n points, n lines give you at most n to the 4/3 incidences, in contrast to the setting of the finite field plain, where you can get n to the 3/2 incidences. So somehow, we have to use the topology of the real plane for this one. And I want to show you a proof-- turns out not the original proof, but it's a proof that uses the crossing number inequality to prove Szemeredi-Trotter theorem. You see, in crossing number inequality, we are using the topology of the real plane. Where? AUDIENCE: Euler's formula. YUFEI ZHAO: Euler's formula, right. So the very beginning, Euler's formula has to do with the topology of the real plane. Now, this bound turns out to be tight. So let me give you an example showing that the 4/3 exponent is tight. And the example is, if you take p to be this rectangular grid of points, and L to be a set of lines-- so I'm going to write the lines by their equation, where the slope is an integer from 1 through k and the y-intercept is an integer from 1 through k squared. And you see here that every line in L contains exactly k points from P. So we got in total k to the 4th incidences, which is on the order of n to the 4/3. So n to the 4/3 third is the right answer. Now let me show you how to prove Szemeredi-Trotter theorem from the crossing number inequality. It turns out to be a very neat application that's almost a direct consequence once you set up the right graph. And the idea is that we are going to draw a graph based on our incidence configuration. So first, just to clean things up a little bit, let's get rid of lines in L with 1 or 0 points in P. So this operation doesn't affect the bounds. So you can check. These lines don't contribute much to the incidence bound, and only contributes to this plus L. So you can get rid of such lines. So let's assume that every line in L contains at least two points from P. And let's draw a graph based on this incidence structure. So if I have-- so suppose these are my points and lines. I'll just draw a graph where I keep the points as the vertices, and I put in an edge. It's a finite edge that connects two adjacent points on the same line. So I get some graph. Let me make this graph a bit more interesting. So I get some graph. And how many crossings, at most, does this graph have? 
So the number of crossings of G is at most the number of lines squared, because a crossing comes from two lines. So here, you have a crossing. A crossing comes from two lines. Number of crossings is at most number of lines squared. On the other hand, we can give a lower bound to the number of crossings from the crossing number inequality. And to do that, I want to estimate the number of edges. And this is the reason why I assume every line contains at least two points from P, because a line with, say, k incidences gives k minus 1 edges. And if k is at least 2, then k minus 1 is at least k over 2, let's say. I don't care about constant factors. So by the crossing number inequality, the number of crossings of G is at least the number of edges cubed over the number of vertices squared, which is at least the number of incidences of this configuration cubed over the number of points squared. Actually, the number of vertices is the number of points. And the number of edges, by this argument here, is on the same order as the number of incidences. Putting these two facts together, we see-- there was one extra hypothesis in the crossing number inequality. Provided that this hypothesis holds, which is that the number of incidences is at least 8 times the number of points, so that the original hypothesis holds. So putting everything together, and rearranging all of these terms, and using upper and lower bounds on the crossing number, we find that the number of incidences is upper bounded by-- the main term you see is just coming from these two, but there are a few other terms that we should put in, just in case this hypothesis is violated, and also to take care of this assumption over here, so adding a couple of linear terms corresponding to the number of points and the number of lines. If this hypothesis is violated, then the inequality is still true. So this proves the Szemeredi-Trotter theorem. Any questions? So we've done these two very neat results. The question is, what do they have to do with the sum product problem? So I want to show you how you can give some lower bound on the sum product problem using the Szemeredi-Trotter theorem. So it turns out that the sum product problem is intimately related to incidence geometry. And the reason-- you'll see in a second precisely why they're related, but roughly speaking, when you have addition and multiplication, they're kind of like taking the slope and y-intercept of an equation of a line. So there are two operations that are involved. So it turns out that many sum product problems can be set up in a way that involves incidence geometry. And a very short and clever lower bound to the sum product problem was proved by Elekes in the late '90s. So he showed the bound that if you have a finite subset of the reals, then the sum set size times the product set size is at least A to the 5/2. As a corollary, one of these two must be fairly large. The max of the sum set size and the product set size is at least A to the 5/4. Let me show you the proof. I'm going to construct a set of points and a set of lines based on the set A. And the set of points in R2 is going to be pairs x comma y, where the horizontal coordinate lies in the sum set, A plus A, and the vertical coordinate lies in the product set, A times A. And the set of lines is going to be these lines-- y equals to a times x minus a prime, where a and a prime lie in A. So these are some points and some lines.
And I want to show you that they must have many incidences. So what are the incidences? So note that the line y equals to a times x minus a prime-- it contains the point (a prime plus b, ab), which lies in P for all b in A. You plug it in. If you plug in a prime plus b into here, you get ab. And this point lies in P, because the first coordinate is in the sum set. The second coordinate lies in the product set. So each line in L contains many incidences-- each line in L contains size-of-A many incidences, one for each b in A. Also, we can easily compute the number of lines and the number of points. The number of points is the size of A plus A times the size of A times A. And the number of lines is just the size of A squared. So we have a lower bound on the number of incidences, noting this fact here. We have many incidences: each line contributes size-of-A incidences, so the total is at least the number of lines times the size of A. But we also have an upper bound coming from the Szemeredi-Trotter theorem. So plugging in the upper bound, we find that you have-- so now I'm just directly plugging in the statement of Szemeredi-Trotter. The main term is the first term. You should still check the latter two terms, but the main term is the first term. So plugging in the values for P and L, we find this is the case, plus some additional terms, which you can check are dominated by the first term. So let me just do a big O over there. Now you put left and right together, and we obtain some lower bound on the product of the sizes of the sum set and the product set, thereby yielding the claimed bounds. So this is some lower bound on the sum product problem. And you see, we went through the crossing number inequality to prove Szemeredi-Trotter, a basic result in incidence geometry. And viewing sum product as an incidence geometry problem, one can obtain this lower bound over here. Any questions? I want to show you a different proof that was found later, that gives an improvement. And there's a question, can you do better than 5/4? So it turns out that there was a very nice result of Solymosi sometime later that gives you an improvement. Solymosi proved in 2009 that if A is a subset of positive reals, then the size of A times A multiplied by the size of A plus A squared is at least the size of A to the 4th divided by 4 ceiling log of the size of A, where the log is base 2. So don't worry about the specific constants. A being in the positive reals is no big deal, because you can always separate A into positive and negative parts and analyze each part separately. So as a corollary to Solymosi's theorem, we obtain that for A, a subset of the reals, the sum set and the product set, at least one of them must have size at least A raised to the 4/3 divided by 2 times log base 2 of the size of A raised to the 1/3. So basically, A to the 4/3 minus little o 1 in the exponent, so better than before. And this is a new bound. I want to note that in this formulation, where we are looking at lower bounding this quantity over here, this is tight up to logarithmic factors, by considering A to be just the interval from 1 to n. If A is the interval from 1 to n, then on the left-hand side, A plus A is around size n, so its square is around n squared. And A times A is also, as I mentioned, around size n squared. So this inequality here is tight. The consequence is not tight, but the first inequality is tight. So in the remainder of today's lecture, I want to show you how to prove Solymosi's lower bound.
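Both the Elekes bound and the Solymosi bound are easy to probe numerically on small sets. The sketch below (the three example sets and their size are arbitrary illustrative choices) computes the sum set and product set sizes for an arithmetic progression, a geometric progression, and a random set, and prints them next to the size of A raised to 5/4 and to 4/3.

import random
from itertools import combinations_with_replacement

def sumset(A):
    return {a + b for a, b in combinations_with_replacement(A, 2)}

def productset(A):
    return {a * b for a, b in combinations_with_replacement(A, 2)}

n = 200
examples = {
    "arithmetic progression": list(range(1, n + 1)),
    "geometric progression": [2 ** i for i in range(n)],
    "random set": random.sample(range(1, 10 ** 6), n),
}
for name, A in examples.items():
    s, p = len(sumset(A)), len(productset(A))
    print(f"{name:24s} |A+A| = {s:6d}  |A*A| = {p:6d}  max = {max(s, p):6d}  "
          f"|A|^(5/4) = {n ** 1.25:.0f}  |A|^(4/3) = {n ** (4 / 3):.0f}")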
And it has some similarities to the one that we've seen, because it also looks at some geometric aspects of the sum product problem. But it doesn't use the exact tools that we've seen earlier. It does use some tools that were related to the lecture from Monday. So last time, we discussed this thing called the "additive energy." You can come up with a similar notion for the multiplication operation, so the "multiplicative energy," which we'll denote by E sub, with the multiplication symbol, A. So the multiplicative energy is like the additive energy, except that instead of doing addition, we're going to do a multiplication instead. So one way to define it is the number of quadruples such that there exists some real lambda such that a, comma, b equals to lambda c, comma, d. So basically the same as additive energy, except that we're using multiplications instead. By the Cauchy-Schwartz inequality-- and this is a calculation we saw last time, as well-- we see that if you have a set with small product, then it must have high multiplicative energy. So last time, we saw small sum set implies high additive energy. Likewise, small product set implies high multiplicative energy. In particular, the multiplicative energy of A, you can rewrite it as sum over all elements x in the product set of the quantity, which tells you the number of ways to write x as a product, this number squared and then summed over all x. By Cauchy-Schwartz, we find that this quantity here is lower bounded by the size of A to the 4th divided by the size of A times A. So to prove Solymosi's theorem, we are going to actually prove a bound on the energy, instead of proving it on the set. We're going to prove it on the energy. So it suffices to show that the multiplicative energy is at most 4 times the sum set size times-- so let me divide the energy by log of A. So when you plug this into this inequality, it would imply that. So it remains to show this inequality over here upper bounding the multiplicative energy. There's an important idea that we're going to use here, which is also pretty common in analysis, is that instead of considering that energy sum here, we're going to consider a similar sum, except we're going to chop up the sum into pieces according to how big the terms are, so that we're only looking at contributions of comparable size. And so this is called a "dyadic decomposition." The idea is that we can write the multiplicative energy similar to above, but instead of summing over x in the product set, let me sum over s in the quotient set. So you can interpret what this quotient A is. This is the set of all A divided by B, where A and B are in A. A is a set of positive reals, so I don't need to worry about division by 0. So what remains, then, is the intersection of s times A and A squared. Remember, s times A is scaling each element of A by s. So we have this quantity over here. So I want to break up the sum into a bunch of smaller sums, where I want to break up the sum according to how big the terms are, so that inside each group, all the terms are roughly of the same size. And easiest way to do this is to chop them up into groups where everything inside the same collection differs by at most a factor of 2. So that's why it's called a dyadic decomposition, going from 0 to-- the maximum possible here is basically A. So let's look at i going from 0 to log base 2 of A. So this is the number of bins. 
And partition the sum into sub-sums where I'm looking at the i-th sub-sum consisting of contributions involving terms with size between 2 to the i and 2 to the i plus 1. Break up the sum according to the sizes of the summands. By pigeonhole principle, one of these summands must be somewhat large. So by pigeonhole, there exists a k such that setting D to be the s such that that corresponds to the k-th term in the sum. So one has that this sum coming from just contributions from D is at least-- so it's at least the multiplicative energy divided by the number of bins. All of that many bins-- by pigeonhole, I can find one bin that's a pretty large contribution to the sum. And the right-hand side, we can upper bound each term over here by 2 to the 2k plus 2, and the number of terms as the size of D. Let me call the elements of D S1 through Sm, where S1 through Sm are sorted in increasing order. Now let me draw you a picture of what's going on. Let's consider for each element of D, so for each i and m, let's consider the line given by the equation y equals to s sub i times x. Let me draw this picture where I'm looking at the positive quadrant, so I have a bunch of points in the positive quadrant. And specifically, I'm interested in these points whose coordinates, both coordinates are elements of A. And I want to consider lines through points of A, but I want to consider lines where it intersects this A cross A in the desired number of points. And we find those set, and then let's draw these lines over here, where this line here, L1 has slope exactly S1, and L2, L3, and so on. I want to draw one more line, which is somewhat auxiliary, but just to make our life a bit easier. Finally, let's let L of m plus 1 be the vertical line, or rather be the vertical ray, which goes to the minimum element of A above Lm. So it's this line over here. That's Lm plus 1. So in A cross A, I draw a bunch of lines. So now all the lines-- so all these lines involve some point of A and the origin, but I don't draw all of them. I draw a select set of them. And what we said earlier says that the number of lines, the number of points on each of these strong lines, is roughly the same for each of these lines. Let's let capital L sub j denote the set of points in A cross A that lie on the j-th line. So that's L1, L2, and so on. I claim that if you look at two consecutive lines and look at the sum set of the points in A cross A that intersect, you're looking at two lines, and you're adding up points on those two lines. So you form a grid. So you end up forming this grid. And the number of points on this grid is precisely the product of these two point sets. Moreover, the sets Lj plus L sub j plus 1 are disjoint for different j. And this is where we're using the geometry of the plane here. Because the sum of L1 and L2 lies in the span, the sum of L2 and L3 in a different span, so they cannot intersect. So they lie in-- so since they span disjoint regions, L1 plus L2 lies here, L2 plus L3 lies there, and so on. But they're all disjoint. Now let's put everything that we know together. Remember, the goal is to upper bound the multiplicative energy as a function of the sum set. So in other words, we want to lower bound the sum set. So I want to show you that this A plus A has a lot of elements. There's a lot of sums. And I have a bunch of disjoint contributions to these sums. So let's add up those disjoint contributions to the sums. You see that the size of A plus A squared is the same as the size of the product set A plus A. 
So this is a Cartesian product. Here is-- this is a Cartesian product, in other words, the grid that is drawn up there. I add this product to itself. So I should get the same set here. But how big is this sum set? That grid, that lattice grid added to itself, how big should it be? I want to lower bound the number of sums. And the key observation is up there. We can look at contributions coming from distinct spans. In particular, this sum here, so this sum set here, its size is lower bounded by these distinct Lj plus L j plus 1's. I threw away a lot. I only keep the points on the lines L, and I only consider sums between consecutive L's. That should be a lower bound to the sum set of the grid with itself. But you see, and here, we're using that these-- for different j's, these contributions are disjoint. But by what we said up there, Lj plus L j plus 1 is a grid. So it has size Lj times L j plus 1. And the size of each Lj is at least 2 to the k. So the sum here is at least m times 2 to the 2k. But we saw over here that the energy essentially lower bounds this m times 2 to the 2k. So we have a lower bound that is the multiplicative energy of A divided by 4 times the log base 2 of the size of A. So don't worry so much about the constant factors. That's just the order of magnitude that is important. And that's it. Yep. AUDIENCE: How do you know that the size of big L sub m plus 1? YUFEI ZHAO: Great. The question is, what do we know about the size of big L sub m plus 1? So that's a good point. The easiest answer is, if I don't care about these constant factors, I don't need to worry about it. You can think about what is the number of points on this line above that. It's essentially the number of elements of A above the biggest element of s m, above s m. It's a good question. I think we don't need to worry about it. I'm being slightly sloppy here. Yeah. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: I think the question is, how do we know for j equals to m that you have this bound over here? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: Great. So yes. AUDIENCE: [INAUDIBLE] YUFEI ZHAO: So there are some ways to do it. You can notice that the vertical line has at least as many points as the first slanted line. So details that you can work on. So this proves Solymosi's theorem, which gives you a lower bound on the sum set and the product set sizes and the maximum of those two. It's based on-- it's very short. It's very clever. It took a long time to find. And it gave a bound on the sum product problem of 4/3 that actually remained stuck for a very long time, until just fairly recently there was an improvement-- so by Konyagin and Shkredov, where they improved the Solymosi bound from 4/3 to 4/3 plus some really small constant c. So it's some explicit constant. I think right now-- so that's being improved over time, but right now, I think c is around 1 over 1,000 or a few thousand. So it's some small but explicit constant. It remains a major open problem to improve this bound and prove the Erdos-Szemeredi conjecture, that if you have n elements, then one of the sum set or the product set must be nearly quadratic in size. And people generally believe that that's the case. Any questions?
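The Cauchy-Schwarz step that relates the multiplicative energy to the product set is also easy to verify directly on a small set. The sketch below uses an arbitrary illustrative set A; it counts the energy as the number of quadruples (a, b, c, d) with a times b equal to c times d, which gives the same number as the quadruple count in the definition above, and checks that it is at least the size of A to the 4th divided by the size of A times A.

from collections import Counter

def multiplicative_energy(A):
    # Number of quadruples (a, b, c, d) in A^4 with a*b = c*d, computed as the
    # sum over x of r(x)^2, where r(x) is the number of ordered pairs (a, b)
    # in A^2 with a*b = x.
    r = Counter(a * b for a in A for b in A)
    return sum(v * v for v in r.values())

A = [1, 2, 3, 5, 7, 11, 13, 17, 19, 23]      # arbitrary illustrative set
energy = multiplicative_energy(A)
products = {a * b for a in A for b in A}
print("E_x(A)        =", energy)
print("|A|^4 / |A*A| =", len(A) ** 4 / len(products))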
And then we went down to other tools, like Szemeredi's regularity lemma that allows us to deduce important arithmetic consequences, such as Roth's theorem. It's also an extremal problem if you have a set without a three-term arithmetic progression, how many elements can it have? And so the important tool of Szemeredi's regularity lemma then later showed up in many different ways in this course, especially the message of Szemeredi's regularity lemma, that when you look at an object, it's important to decompose it into its structural component and its pseudo-random component. So this dichotomy, this interplay between structure and pseudo randomness, is a key theme throughout this course. And it showed up in some of the later topics as well, when we discussed spectral graph theory, quasi-randomness, graph limits, and also in the later Fourier analytic proof of Roth's theorem. All of these proofs, all of these techniques, involve some kind of interplay between structure and pseudo-randomness. In the past month or, so we've been looking at Freiman's theorem, this key result in additive combinatorics concerning the structure of sets under addition. And there, we also saw many different tools that came up, and also connections I mentioned a few lectures ago, connections to really important results in geometry to group theory. And it really extends all around. And a few takeaways from this course-- one of them is that graph theory, additive combinatorics, they are not isolated subjects. They're connected to a lot within mathematics. And that's one of the goals I want to show you in this course, is to show these connections throughout mathematics and some to analysis, to geometry, to topology. And even simple questions can lead to really deep mathematics. And some of them I try to show you, try to hint at you, or at least I mentioned throughout this course. And what we've seen so far is just the tip of the iceberg. And there is a lot of still extremely exciting work that's to be done. And I've also tried to emphasize many important open problems that have yet to be better understood. And I expect in some future iteration of this course, some of these problems will be resolved, and I can show the next generation of students in your seats some new techniques, new methods, and new theorems. And I expect that will be the case. This is a very exciting area. And it's an area that is very close to my heart. It's something that I've been thinking about since my PhD. The bulk of my research work revolves around better understanding connections between graph theory, on one hand, and additive combinatorics on the other hand. It's been really fun teaching this course, and happy to have all of you here. Thank you. [APPLAUSE]
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
21_Structure_of_set_addition_I_introduction_to_Freimans_theorem.txt
YUFEI ZHAO: All right, today we're going to start a new topic in additive combinatorics. And this is a fairly central topic having to do with the structure of set addition. So the main players that we're going to be seeing in this chapter have to do with, if you start with a subset of some abelian group under addition-- not necessarily finite. So the abelian groups that I'm going to keep in mind, the ones that will come up generally, are the integers, Z mod n, or the finite field model. We're going to be looking at objects such as a sum set, so A plus B, meaning the set of elements that can be written as a sum, where you take one element from A and another from B. Likewise, you can also have A minus B defined similarly, now taking a minus b. We can iterate this operation. So kA, so 2A, 3A, 4A, for instance, means I add A to itself k times, not to be confused with a dilation, which we'll denote by k dot A. So this is notation for multiplying every element of A by the number k. So given a subset of integers I can do these operations to the set. And I want to ask, how does the size of the set change when I do these operations? For example, what is the largest or the smallest? So how large or small can A plus A be for a given set size, A? So if I allow you to use 10 elements, how can you make A plus A as big as possible? And how can you make it as small as possible? So this is not a hard question. How can you make it as big as possible? So what's the maximum size A plus A can be as a function of A? Well, I'm looking at pairwise sums, so if there are no collisions between different pairwise sums, this is as large as possible. And then it's not hard to see that the maximum possible is the size of A plus 1, choose 2. Since there are at most this many pairs, and this is possible if all sums are distinct. So for example, in the integers, you can take 1, 2, 2 squared, and so on. So that will achieve this bound. The minimum possible is also not too hard. We're allowed to work in a general abelian group. So in that case, the minimum could be just the size of A. The size is always at least the size of A. And this is tight if A is a subgroup. If you have a subgroup, then it's closed under addition. So the set does not expand under addition. In the integers, you don't have any finite subgroups. So if I give you k integers, what's the smallest the sum set can be? AUDIENCE: 2k minus 1. YUFEI ZHAO: 2k minus 1, right? So the example is when you have an arithmetic progression. So in the integers, the minimum is 2k minus 1. And it's achieved for an arithmetic progression. So let me just give you the one-line proof why you always have at least this many elements: if A has elements sorted like this, then the following elements are distinct in the sum set. So you start with A1 plus A1. And then you have A1 plus A2, A1 plus A3, and so on, to A1 plus Ak. And then you move the first element forward, up to Ak plus Ak. OK, so here you already see 2k minus 1 distinct elements in A plus A. OK, so these are fairly simple examples, fairly simple questions. So now let's get to some more interesting questions, which is, what can you say about a set if you know that it has small doubling? If it doesn't expand by very much, what can you tell me about the set? And for that, let me define the notion of a doubling constant. So the doubling constant of A is defined to be the number which we often denote by k, the number obtained by dividing the size of A plus A by the size of A.
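Doubling constants are easy to compute for small examples. The sketch below (set sizes and ranges chosen arbitrarily for illustration) compares an arithmetic progression, a geometric progression, and a random set of integers of the same size.

import random
from itertools import combinations_with_replacement

def doubling_constant(A):
    # |A + A| / |A| for a finite set of integers.
    sums = {a + b for a, b in combinations_with_replacement(A, 2)}
    return len(sums) / len(A)

k = 100
print("arithmetic progression:", doubling_constant(list(range(0, 5 * k, 5))))
print("geometric progression :", doubling_constant([3 ** i for i in range(k)]))
print("random set            :", doubling_constant(random.sample(range(10 ** 8), k)))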
And we would like to understand-- and this is the main question that's addressed in the upcoming lectures-- what is the structure of a set with bounded doubling constant? So for instance, think of k as fixed. Let's say k is 100. If you know a set has doubling constant at most 100, what can you tell me about the structure of the set? So that's the main question. Let me show you in a second a few examples of sets that have bounded doubling constant. It's easy to check that those examples indeed have bounded doubling constant. And what this question amounts to is what is often known as an inverse question. So it's an inverse problem that asks you to describe things in reverse-- it's easy to check in the upcoming examples that all of those examples have bounded doubling constant. And what we want to say is, in reverse, that if a set has bounded doubling constant, then it must in some sense look like one of our examples. That's the harder inverse question. OK, so let me give you some examples of sets with small doubling constant. One example we already saw earlier is that if you have an arithmetic progression. If you have an arithmetic progression, then the size of A plus A is always 2 times the size of A minus 1. So the doubling constant is always at most 2. That's pretty small. That's basically as small as you can get in the integers. But if you start with an arithmetic progression and now I take just a subset of the elements of this progression, so if I take an AP, and if I cross out a few elements, just a small number of elements from this progression, or even cross out most, but keeping a constant fraction of elements still remaining, I claim that's still a pretty good set. So if A can be embedded inside an AP, call it P, whose size is no more than a constant factor C times that of A, then the size of A plus A is at most the size of P plus P, which is at most 2 times the size of P. So the doubling constant of A is at most 2C. So if you have a set which is at least a 1/10 fraction of an AP, then your doubling constant is at most 20-- bounded. So this is another class of examples. So it's kind of a modification, some alteration of the arithmetic progression. Another more substantial generalization of APs is that of a two-dimensional arithmetic progression. So you think of an arithmetic progression as equally spaced points on a line. But you can extend this in multiple dimensions, so like a grid. So this is a two-dimensional arithmetic progression, but I still want to work inside the integers. So what we are going to do is project this picture onto the integers. So that's a two-dimensional arithmetic progression. And specifically, we have a set of the form: x0, the starting point, plus l1 times x1 plus l2 times x2, where the little l's are non-negative integers up to the big L's. So that's a two-dimensional arithmetic progression. So the picture that you can have in mind is, on the number line, we can write down first an AP and then a few more points like that so that you can have a two-dimensional arithmetic progression. We say that this set, this two-dimensional arithmetic progression, is proper if all terms are distinct. And if that's the case, then I can write A plus A in a very similar format. So A plus A contains elements still of the same form, but now the indices go up to 2L minus 1. So you see that A plus A has size at most 4 times the original set, A. Also easy to see from this blue picture up there-- you expand that grid.
It goes to, at most, 4 times the size. Yes? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: So the question is, should it be? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: 2x0? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: What do you mean? AUDIENCE: 2x0 plus l1 x0 plus 1? YUFEI ZHAO: Ah, thank you, so 2x0, thank you. Yeah, 2x0, great. OK, so that's the size. And of course, you can generalize this example of a fairly straightforward way to d dimensional arithmetic progressions. And we call those things generalized arithmetic progressions. So a Generalized Arithmetic Progression, which we will abbreviate by the letters GAP, is a set of numbers of the form as above, except now you have d different directions and indices, are also straightforward generalizations of what was earlier. So this is the notion of a generalized arithmetic progression. So think about projection of a d dimensional grid onto the integers. And for GAPs, we say that it's proper if all the terms are distinct. We call d the dimension of the GAP. And for a GAP, whether it's proper or not, we call the size to be the product of the lengths. And this is potentially larger. So this is larger than the number of distinct elements if it's not proper. So when I refer to the size of a GAP-- so I view the GAP more than just as a set, but also with the data of the initial point and the directions. If I talk about the size, I'm always referring to this quantity over here. Great. So you see, if you take a GAP or a fraction of a GAP, then, as with earlier examples, you have small doubling. So if P is a proper GAP, of dimension d, then P plus P is, at most, 2 raised to power d times the size of P. And furthermore, if A is an arbitrary subset of P and such that A has size-- such that the GAP has size, at most, a constant fraction bigger than that of A, then A has small doubling as well. So all of these are examples of constructions of sets where, for some fixed constant, the doubling constant, we can find a family of sets with doubling constant bounded by that number. And the natural question though is, are these all the examples? So have we missed some important family of constructions not covered by any of these examples? And so that's the kind of inverse question I was referring to earlier. So all of these examples, easy to check that they indeed have small doubling constant. Can you go in reverse? So can you ask the inverse question, if a set has small doubling constant, must it look like, in some sense, one of these sets? It turns out this is not such an easy problem. And there is a central result in additive combinatorics known as Freiman's theorem which gives a positive answer to that question. So Freiman's theorem is now considered a central result in additive combinatorics. And it completely describes, in some sense, the sets that have small doubling. And let me write down the statement. So if A is a subset of Z and has bounded doubling, then A is contained in a GAP of bounded dimension and size bounded by some constant times the size of the set. This is a really important result in additive combinatorics. The title of this chapter, "Structure of Set Addition," Freiman's theorem tells us something about the structure of a set with small doubling. The next few lecturers are going to be occupied with proving this theorem. So this theorem will have-- its proof is involved and probably the most involved proof that we have in this course. And the proof will take the next several lectures. And we'll see a lot of different ingredients, a lot of really nice tools. 
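As a sanity check on these examples, here is a minimal sketch (in Python, with illustrative names) that builds a proper two-dimensional GAP and confirms the doubling bound 2 to the d:

    from itertools import product

    def gap(x0, steps, lengths):
        # {x0 + l1*x1 + ... + ld*xd : 0 <= li < Li}
        return {x0 + sum(l * x for l, x in zip(ls, steps))
                for ls in product(*(range(L) for L in lengths))}

    P = gap(5, steps=(1, 1000), lengths=(10, 7))   # proper, since the steps are far apart
    size = 10 * 7                                  # the "size" L1 * L2
    PP = {a + b for a in P for b in P}
    print(len(P) == size, len(PP) <= 4 * size)     # proper, and doubling at most 2^2 = 4

Both checks print True; if the two steps were chosen so that terms collide, the first check would fail and the GAP would no longer be proper.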
Fourier analysis will come up at some point, but also other tools like the geometry of numbers and also some more classical additive combinatorics ideas. But before starting on a proof, I want to offer a few remarks and historical remarks to just give you some more context about Freiman's theorem, but first, a few mathematical comments. In the conclusion of Freiman's theorem, I didn't mention properness. And that's mostly a matter of convenience. So you can, in fact, make the conclusion proper as well at the cost of increasing the numbers somewhat, but still with constants depending only on k-- you can guarantee properness as well. So there is an extra step involved which we'll not cover, because it's not entirely trivial, but it's also not too hard. Freiman's original proof-- so it's named after Freiman. He proved it in the '60s. But at that time, the proof was considered rather obscure. It actually did not get the attention and the recognition that it deserved until much later. So this was kind of a forgotten result, a forgotten proof, for a very long time, until quite a bit later when Ruzsa-- Ruzsa's name will come up many times in this chapter-- came and gave a different proof of Freiman's theorem, significantly cleaned up the proof, and offered many new ideas. So much of what we'll see today are results that we now attribute to Ruzsa. And the theorem is sometimes also called the Freiman-Ruzsa theorem. But this result was really brought in as a highlight of additive combinatorics in the work of Gowers, when he gave his new proof of Szemerédi's theorem with much better bounds. So he had to use quite a bit of serious additive combinatorics. And many of the ideas that went into Gowers' proof of Szemerédi's theorem came from this line of work, Freiman and Ruzsa. And their work was, again, brought into prominence as a result of Gowers' Fields-Medal-winning work on Szemerédi's theorem. So this is some of the history. And now Freiman's theorem is considered a central result in the area. You can see, it's a beautiful result. And it's also quite a deep result. Let me mention a few things about bounds. So what do we know about this d of k and f of k? But first, an example-- so if the set A is dissociated in the sense of having no arithmetic structure, no coincidental sums colliding, so for example, if A is of this form, then, as we saw, the size of A plus A is the size of A plus 1, choose 2. So in this case, the doubling constant is the size of A plus 1 divided by 2, so roughly on the same order as the size of A. But what do you need to take in Freiman's theorem for d and for f? So how can I embed this A in a generalized arithmetic progression? See, there is not a great way to do it. So I want to keep the size small. There is not a great way to do it. So one way to do it is to use one direction for each of these elements. Now of course, there is always a trade-off between dimension and size. But usually it's not such an important trade-off. Certainly A is contained in a GAP of dimension the size of A minus 1 and size 2 to the size of A minus 1, by thinking about A as a cube. And you can convince yourself that you basically cannot do much better. So the best possible bound that we can hope to prove is of the form d being at most linear in k, and f being at most exponential in k. So you see already that in the bounds you have to lose something. Yes?
AUDIENCE: Why can't we just make the dimension 1 and just let our arithmetic progression be 1 through 2 to the size of A minus 1? YUFEI ZHAO: OK, great, so that's a great question. So why can't we just make the dimension 1 and have the entire thing be part of a single linear arithmetic progression? So you can do that, but then I can cook up other examples where I blow up this cube. So I ask you to think about how to do that. So you can try to blow up this cube so that you really do need the dimension to not be constant, so exercise. So the best result is not quite this claim. So this is still open. So the best result so far is due to Tom Sanders, whose name came up earlier, as he has basically the best bound on Roth's theorem. And you know, many of these results are all related to each other. So Sanders showed that Freiman's theorem is true with d being basically k, but you lose a poly log factor. I think the big O is maybe 3 or 4, something like that, so not substantial. And then f of k is also basically exponential, but you lose a poly log factor in the exponent. Just a minor note about how to read this notation-- so I mean, it's written slightly sloppily as log k raised to big O of 1. You should think of k as a constant, but somewhat big, because if k were 2, this notation actually doesn't make sense. So just think of k as at least 3 when you read that notation. All right, so we will prove Freiman's theorem. We will show a worse bound. It actually will be basically exponentially worse, but it will be a constant. So it will be just a function of k. And that will take us the next several lectures. So we'll begin by developing some tools that are, I think, of interest individually. And they can all be used for other things. So we'll develop some tools that will eventually lead us to Freiman's theorem. And I'll try to structure this proof in such a way that there are several goal posts that are also interesting. So in particular, just as what we did with Roth's theorem, we'll begin by proving a finite field analog of Freiman's theorem. So what would that mean, a finite field analog? So what would a problem like this mean in F2 to the n? So in F2 to the n, so this is the finite field analog. If A plus A is small-- so I'm not yet asking the inverse question. But what are examples of sets in F2 to the n that have small doubling? AUDIENCE: 2 to the n. YUFEI ZHAO: So 2 to the n, so you can take the entire space. Any other examples that have small doubling? AUDIENCE: You can take a subspace. YUFEI ZHAO: Exactly, I can take a subspace. So a subspace, well, it doesn't grow. So A plus A is the same as A. All right, and also, as before, you can take a subset of a subspace. So then the analog of Freiman's theorem will say that A is contained in a subspace of size at most a constant times the size of A. So this is the analog of Freiman's theorem in F2. And we'll see, this will be much easier than the general result about Freiman's theorem, but it will involve a subset of F2 to the n. And we'll see this theorem first. So we'll prove that next lecture. Of course, this is much easier in many ways, because here, unlike before, I don't even have to think about what subspace to take. I can just take the subspace generated by the elements of A. All right, any questions so far? Yes? AUDIENCE: Is the f of k here still exponential in k? YUFEI ZHAO: OK, so the question, is the f of k here still exponential in k? So the answer is, yes.
And the construction is if you take A to be a basis. OK, so let's start with some techniques and some proofs. So in this chapter, many things are named after Ruzsa. And at some point, it becomes slightly confusing which ones are not named after Ruzsa. But the first thing will be named after Ruzsa. So it's the Ruzsa Triangle Inequality. All right, the Ruzsa Triangle Inequality tells us that, if A, B, and C-- so unless I tell you otherwise, and I'll try to remind you each time, basically we're always going to be looking at finite sets in an arbitrary abelian group, always written under addition-- then one has the following inequality on the sizes of these sets. The size of A times the size of B minus C is upper bounded by the size of A minus B times the size of A minus C. So that's the Ruzsa Triangle Inequality. Let me show you the proof. We will construct an injection from A cross B minus C to A minus B cross A minus C. Of course, if you can exhibit such an injection, then you prove the desired inequality. To obtain this injection, we start with an element a, d. And for this a, d, so for each d-- if d is an element of B minus C, let us pick, arbitrarily but sticking with those choices, an element b of d in the set B and an element c of d in the set C such that d equals b of d minus c of d. So because d is in the set B minus C, it can be represented as a difference of one element from each set. So it may be represented in many ways. But from the start, you pick a way to represent it. And you stick with that choice. And you label those functions b of d and c of d. Now I map a, d to the element a minus b of d and a minus c of d. So this is a map. I want to show that it is injective. Why is it injective? Well, to show something is injective, I just need to show that I can recover where I came from if I tell you the image. So I can recover a and d from these two numbers. So if-- sorry, new board. OK, so you basically can think about how you can recover a and d from the image elements. So if the image-- so I label that map phi. So that's phi up there. So if the image is given, then I can recover d. So how can we recover the element d? So you subtract these two numbers: the second coordinate minus the first is a minus c of d, minus a minus b of d, which is b of d minus c of d, which is d. And once you recover d, you can also then take a look at the first element. And you can recover a. So now you know d. I can now recover a. OK, so you can check this is an injection. And that proves the Ruzsa Triangle Inequality. OK, so it's short, but it's tricky. It's tricky. OK, so why is this called Ruzsa's Triangle Inequality? Where is the triangle in this? The reason that it's given that name is that you can write the inequality as follows. Suppose we use rho A, B to denote this quantity obtained by taking the log of the size of A minus B divided by the square root of the product of their individual sizes, then the inequality says that rho of B, C is at most rho of A, B plus rho of A, C, which looks like a triangle inequality. So that's why it's called Ruzsa's Triangle Inequality, because this is-- don't take it too seriously, because this is not a distance. So rho of A, A is not equal to 0. But it certainly has the form of a triangle inequality, hence the name. How should you think of Ruzsa's triangle inequality? So in this chapter, there's going to be a lot of symbol pushing around. And it's easy to get lost and buried in all of these symbols. And I want to tell you about how you might think about what's the point of Ruzsa's Triangle Inequality. How would you use it?
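(Before getting to how the inequality is used, here is a quick brute-force sanity check of the statement itself-- a minimal sketch in Python over random sets of integers; the function names are just illustrative.)

    import random

    def diffset(A, B):
        return {a - b for a in A for b in B}

    random.seed(0)
    for _ in range(500):
        A, B, C = (set(random.sample(range(60), 8)) for _ in range(3))
        # Ruzsa triangle inequality: |A| * |B - C| <= |A - B| * |A - C|
        assert len(A) * len(diffset(B, C)) <= len(diffset(A, B)) * len(diffset(A, C))
    print("no counterexample found")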
And the idea is that if you have a set with small doubling, we want to use Ruzsa's triangle inequality and other tools to control its further doublings. So in particular-- so I'll say, applications. So suppose you knew that 2A minus 2A has size at most k times the size of A. So this is a stronger hypothesis than just saying A has small doubling. Even if you iterate it several times, you still have size at most a constant times the size of A. I would like to start from this hypothesis and control further iterations, further sumsets of A. And Ruzsa's Triangle Inequality allows us to do it, because by the Ruzsa Triangle Inequality, setting B and C to be 2A minus A, we find that the size of 3A minus 3A is at most the size of 2A minus 2A squared over the size of A. So plug it in. This is what you get. So if the size of 2A minus 2A is at most k times the size of A, then the size of 3A minus 3A blows up by a factor of at most k squared. So it controls further doublings. And of course, we can iterate. If we now set B and C to be 3A minus 2A, then what we get is that the size of 5A minus 5A is at most the size of 3A minus 3A squared divided by the size of A. And so now you have a bound which is k to the 4 times the size of A. And you can continue. You can continue. OK, so this is all a consequence of Ruzsa's triangle inequality. So starting with this hypothesis, now I get to control all the further doublings, the further sumset iterations. I call them doublings, but they're no longer doublings, but further sumsets. But this is a stronger hypothesis than the one that we start with in Freiman's theorem, because if you have that, then this 2A minus 2A is at least as large as the size of 2A. So can we start with just the doubling constant and then obtain bounds on the iterations? It turns out you can. It will require another theorem. So this theorem is called Plunnecke's inequality. But actually, these days, in the literature, it's often referred to as the Plunnecke-Ruzsa inequality. So Plunnecke initially proved it. But nobody understood his proof. And Ruzsa gave a better proof. And actually, recently, there was an even better proof. And that's the one I will show you. So the Plunnecke-Ruzsa inequality tells us that if A is a subset of some abelian group, and has doubling constant at most k, then for all non-negative integers m and n, the size of mA minus nA is at most k to the m plus n times the size of A. So if you have bounded doubling, then the further iterations, the further sumset iterations, are also controlled in size. I want you to think of polynomial transformations in k as negligible. So don't worry about the fact that we're raising k to a power here. k is constant. You should think of m and n as constant. So I'm changing k to some other constant. And in fact, I'm only changing it by a polynomial. So this is, like, almost no change at all. So this is tricky. So we'll do it after a short break. All right, let's prove Plunnecke's inequality, the Plunnecke-Ruzsa inequality. So the history of Plunnecke's inequality has some similarities with Freiman's theorem. So Plunnecke initially proved it, but his proof was hard to understand and was sort of left not understood for a long time, until Ruzsa came in and really simplified the proof. But even then, the proof was not so easy. And if I were teaching this course about 10 years ago, I would have just skipped this proof, maybe sketched some ideas, but I would have skipped the proof. And the proof, actually, it's a beautiful proof, but it uses some serious graph theory. It uses Menger's theorem about flows. You construct some graph.
And then you try to understand its flows. It's very pretty stuff. And I do encourage you to look it up. And then about eight years ago, Petridis found a proof, so a proof by Petridis, who was a PhD student of Tim Gowers at the time. And that was surprisingly short, and beautiful, and kind of surprised everyone that such a short proof exists, given that this theorem sat in that state for such a long time. And it's a pretty central step in the proof of Freiman's theorem. We'll prove Plunnecke-Ruzsa via a slightly more general statement. So you see, it generalizes the earlier statement. Instead of having one set, it will be convenient to have two different sets. So let A and B be subsets of some abelian group, as usual. If the size of A plus B is at most k times the size of A, then mB minus nB has size at most k to the m plus n times the size of A for all non-negative integers m and n. So instead of having one set, I have two sets, A and B. Of course, then you derive the earlier statement by setting A and B to be equal. So we'll prove this more general statement. The proof uses a key lemma. And the key lemma says that if x is a non-empty subset of A that minimizes the ratio of the size of x plus B divided by the size of x, and we let k prime be this ratio, this minimum ratio, then the conclusion says that x plus B plus C has size at most k prime times the size of x plus C for all sets C. So that's the statement. I'll explain how you should think about the statement. These ratios which you see in both the hypothesis and the conclusion-- how you should think about them is that there is this graph. Let's say it's the bipartite graph with the group elements on both sides. And the bipartite graph has edges where, from each vertex a on the left, we draw an edge to a plus b for each element b of B. So I expand by B. So if you have this graph and you start with some A on the left, then its neighbors on the right will be A plus B. And those ratios up there are the expansion ratios. So quantities like this, they are expansion ratios. You start with some set on the left and see by what factor it expands if you look at the neighborhood. So let's read the statement of the key lemma. It says, if you have a set x-- look, so I have a set A. And I'm choosing a subset of A that minimizes the expansion ratio, so choose a non-empty subset that minimizes the expansion ratio. And if this minimum expansion ratio is k prime-- so x minimizes the expansion ratio and its expansion ratio is k prime-- then x plus C also has expansion ratio at most k prime as well. So that's the statement. I mentioned earlier that the previous proofs of this theorem went through some graph theory and Menger's theorem, that type of graph theory. You can kind of see where it might come in. We're not going to do that. We're going to stick with additive combinatorics. We're going to stick with playing with sums, playing with additive combinatorics. So let's see how we can prove the statement up there, so using the key lemma. So assuming the key lemma, let's prove the statement, the theorem up there. So take a non-empty subset x of A that minimizes the ratio of the size of x plus B divided by the size of x. And let k prime be this minimum ratio. Note that k prime is at most k, because if you plug in x equals A, you get k. But I'm choosing x to be possibly even lower. So k prime is at most k.
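To make the setup of the key lemma concrete, here is a brute-force search for the minimizing subset x and its ratio k prime (a minimal sketch in Python, only feasible for tiny sets, and the particular A and B are just illustrative); it also confirms k prime is at most k:

    from fractions import Fraction
    from itertools import combinations

    def sumset(A, B):
        return {a + b for a in A for b in B}

    def minimizing_subset(A, B):
        # brute force over all non-empty subsets x of A, minimizing |x + B| / |x|
        best, best_ratio = None, None
        for r in range(1, len(A) + 1):
            for xs in combinations(sorted(A), r):
                ratio = Fraction(len(sumset(set(xs), B)), r)
                if best_ratio is None or ratio < best_ratio:
                    best, best_ratio = set(xs), ratio
        return best, best_ratio

    A, B = {0, 1, 2, 10, 11, 30}, {0, 1, 5}
    x, k_prime = minimizing_subset(A, B)
    k = Fraction(len(sumset(A, B)), len(A))
    print(x, k_prime, k_prime <= k)   # x = A is one of the candidates, so k' <= k always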
Now, applying the lemma, so applying the key lemma with C equal to B, we find that x plus 2B-- so plug in C equals B-- has size at most k prime times the size of x plus B. But the size of x plus B is at most k prime times the size of x. So we get at most k prime squared times the size of x, which is at most k squared times the size of x. So we're already in good shape. If you iterate expansion twice-- so imagine there are several chains of these bipartite graphs. If you iterate this expansion twice, you still do not blow up by too much. So we can iterate further, so apply the lemma with C being now 2B, and then later 3B, and so on. So you find that x plus nB has size at most k raised to the power n times the size of x for all non-negative integers n. What do we want to control? So we want to prove a bound on the size of mB minus nB. Take a look at the statement of the Ruzsa Triangle Inequality. Applying the Ruzsa Triangle Inequality-- in the version where the two differences on the right-hand side are replaced by sums, which the same injection proof gives-- we find that if we want to control mB minus nB, we can upper bound it by the size of x plus mB times the size of x plus nB divided by the size of x. Because each of these two factors in the numerator are small expansions of x, we can now upper bound the whole expression by k to the m plus n times the size of x. And because x is a subset of A, we can do one more upper bound and obtain the bound that we are looking for. OK, so that proves the key lemma. It's OK? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: Sorry, that proves the theorem, assuming the key lemma. Thank you, that's what I meant to say. Yeah, so that proves the theorem, assuming the key lemma. So now we do need to prove the key lemma. Great, we need to prove the key lemma. And so Petridis' proof of the key lemma is quite surprising, in that it uses induction. And basically, we have not used induction in this course ever since the first or maybe the second lecture, and for good reason. So everything in this course is fairly analytic. You know, you have these rough bounds, and adding one extra element often doesn't really help. OK, so here, we're going to use induction on the size of C. OK, I just want to emphasize again that the use of induction here was surprising. So the base case-- always check the base case-- when the size of C is 1, then plus C is a translation. It just shifts the set over. And so you can basically erase the plus C, and the conclusion follows from the hypothesis. So in this case, the size of x plus B plus C is equal to the size of x plus B, which is at most k prime times the size of x, by the definition of k prime-- and that is equal to k prime times the size of x plus C. The base case is easy. Now we do the induction step. So let's assume that the size of C is bigger than 1 and C is C prime together with an additional element, which we'll call gamma. So let's decompose this expression, x plus B plus C, by separating it according to whether the contribution came from C prime or not. The contributions that came from C prime, I can write like that. And then there are other contributions, namely those that came from this extra element. But I may have some redundancies in doing this. So I may have some redundancies coming from the fact that some of the elements in this set might have already appeared earlier. So let me take out those elements, taking out the elements where everything already appeared earlier. So this means I'm looking at the set Z of elements little z in x such that little z plus B plus gamma is already a subset of x plus B plus C prime. So the stuff in yellow, I can safely discard, because it already appeared earlier.
So because of the definition of Z, we see that Z plus B plus gamma is contained in x plus B plus C prime. So that union is valid. Now, Z is a subset of x. So the expansion ratio for Z is at least k prime, because we chose x to minimize this expansion ratio. We would like to understand how big x plus B plus C is. So let's evaluate the cardinality of that expression up there. I can upper bound the cardinality of the union by the sum of the sizes of the components. So up there, I just do a union bound on that expression. And now you see Z is a subset of x. So I can split this expression up even further. All right, now let's use the induction hypothesis. So we have some expression involving x plus B plus C prime. So now we apply the induction hypothesis over here to this expression that has plus C prime. And we obtain an upper bound which is k prime times the size of x plus C prime. And the two expressions on the right, well, one of them here is, by definition, coming from the expansion ratio of x. And the other, we gave a bound just now. OK, so we're almost there. So we are trying to upper bound the size of this quantity. So we decomposed it into pieces according to the contribution coming from this extra element. And we analyzed these pieces individually. But now I want to understand the right-hand side, so x plus C. So let's try to understand the right-hand side. See, x plus C, I can likewise write as earlier, by decomposing it into contributions from C prime and those from the extra element. And as earlier, we can take out contributions that were already appearing earlier, which we now call W plus gamma, where W is the set of elements little w in x such that little w plus gamma is already contained in x plus C prime. So this part was already included earlier. We don't need to include it any more. A couple of observations that are different from earlier-- now this union, I claim, is a disjoint union. So this union is a disjoint union. So there are actually no more overlaps. And furthermore, W is contained in the set Z from earlier. Any questions? All right, therefore, the size of x plus C is equal to, because this is a disjoint union, the size of x plus C prime plus the size of x minus the size of W, which, because W is contained in Z, is at least the size of x plus C prime plus the size of x minus the size of Z. Now you compare these two expressions. And that proves the key lemma. OK? That's it. Yeah? AUDIENCE: Can you explain one more time why it's a disjoint union? YUFEI ZHAO: OK, great, so why is this a disjoint union? Now, I have the set here. So I'm looking at this x plus gamma. So think about, let's say, gamma equal to 0. So we translate, think about if gamma equals 0. So I include x, but if some element of x was already here, I take it out. So here is x plus C prime. And let's say this set is x. This W would then be their intersection. So now x minus W is just this set. So it's a disjoint union. The point is, here you're adding a single element, whereas there you're adding sets. So there you cannot necessarily make it a disjoint union. But here it's OK. It's tricky. Yeah, it's tricky. And you know, this took a long time for people to find. It was found about eight years ago. And yeah, it was surprising when this proof was discovered. People did not expect that this proof existed. And it's also tricky to get right. So the details-- I do it slowly. But in the execution, the order in which you use the minimality is important. It's easy to mess up this proof. OK, any questions?
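Here is a small numerical check of the key lemma and of the Plunnecke-Ruzsa bound it leads to (a minimal sketch in Python on random sets; all comparisons are done with integer cross-multiplication to avoid rounding, and the particular ranges and seeds are arbitrary):

    import random
    from itertools import combinations

    def sumset(A, B):
        return {a + b for a in A for b in B}

    def diffset(A, B):
        return {a - b for a in A for b in B}

    def minimizing_subset(A, B):
        # non-empty x subset of A minimizing |x+B|/|x|, compared by cross-multiplication
        best = None
        for r in range(1, len(A) + 1):
            for xs in combinations(sorted(A), r):
                x = set(xs)
                if best is None or len(sumset(x, B)) * len(best) < len(sumset(best, B)) * len(x):
                    best = x
        return best

    random.seed(1)
    A = set(random.sample(range(40), 6))
    B = set(random.sample(range(40), 6))
    x = minimizing_subset(A, B)

    # key lemma: |x + B + C| * |x| <= |x + B| * |x + C|, i.e. |x + B + C| <= k' |x + C|
    for _ in range(100):
        C = set(random.sample(range(200), random.randint(1, 6)))
        assert len(sumset(sumset(x, B), C)) * len(x) <= len(sumset(x, B)) * len(sumset(x, C))

    # Plunnecke-Ruzsa with m = 3, n = 2: |3B - 2B| * |A|^4 <= |A + B|^5
    B2, B3 = sumset(B, B), sumset(sumset(B, B), B)
    assert len(diffset(B3, B2)) * len(A)**4 <= len(sumset(A, B))**5
    print("checks passed")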
Let me show you, just as an aside, an application of this key lemma. So earlier we saw Ruzsa's Triangle Inequality. And you may wonder, what if you replace the minus signs in the theorem by plus signs? If you only replace the two minus signs on the right-hand side by plus signs, the same proof works. But if you replace all the minus signs by plus signs, you see, the proof doesn't work anymore. Just give yourself a moment to convince yourself of that. If you replace all the minus signs by plus signs, it doesn't work anymore, but it's still true. So this is more of an aside. We will not use it. But it's nice. It's fun. So we have the inequality: the size of A times the size of B plus C is bounded by the size of A plus B times the size of A plus C. So hopefully you've convinced yourself that if you follow the previous proof, you're not going to get it. You're not going to prove this that way. But it's still true. So how can we prove it? So we are going to use the key lemma. So first, the statement is trivial if A is empty. So let's assume that's not the case. Let x be a subset of A that minimizes the expansion ratio, the size of x plus B divided by the size of x, as in the key lemma. So let k denote the quantity A plus B over A, so the expansion ratio for A, and k prime be the expansion ratio for x. So these quantities came up earlier. k prime is at most k, because of our choice of x. So we want to bound the size of B plus C-- and this is where it's really amazing what's happening. It seems like we're just going to throw in some extra stuff. So I'm going to upper bound it by the size of x plus B plus C. I'm just going to throw in some extra stuff. (Since x is non-empty, translating by any single element of x shows that the size of B plus C is at most the size of x plus B plus C.) And then by the lemma, I can upper bound this expression by k prime times the size of x plus C. So that's what the lemma gives you. And because x is a subset of A, we can upper bound it by k prime times the size of A plus C. And now k prime is at most k. So you have that. But now look at what the definition of k is. And that's it. So that's how you can prove this harder version of Ruzsa's Triangle Inequality. Yes, question? AUDIENCE: Are there equality cases for this? YUFEI ZHAO: All right, question, are there equality cases for this? Yes, so I mean, if you're in a subgroup, then all things are equal-- for example, if A, B, and C are all the same subgroup of some finite abelian group. AUDIENCE: What if you're working in the integers? YUFEI ZHAO: Great, yeah, so the question is, what if you're working in the integers? That's a good question. I mean, you can certainly get expansion ratio of two if you have-- no, OK. Right, yeah, so that's a good question. Can you get equality cases? Maybe if you set A, B, and C to be sets of very different sizes. Yes? AUDIENCE: If you set A to be a single element and B and C to just be sets that have full expansion, so that B plus C is as big as possible? YUFEI ZHAO: Great, yeah, so you take A to be a single-element set, then it could be that the size of B plus C is the same as the size of B times the size of C if B and C have no additive interactions. Yeah? AUDIENCE: Are there other known proofs of this that are less involved? YUFEI ZHAO: OK, are there other known proofs of this? I don't know. I'm not aware of other proofs. It would be nice to find a different proof. More questions? Yeah? AUDIENCE: How did he come up with this? YUFEI ZHAO: How did he come up with this? You know, Petridis did a very long PhD. He spent, I think, seven or eight years in his PhD. And he eventually came up with this proof. So he must have thought a lot about this problem.
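A quick check of this sum-set version, together with the equality case just discussed (a minimal sketch in Python; the particular sets are only illustrative):

    import random

    def sumset(A, B):
        return {a + b for a in A for b in B}

    random.seed(2)
    for _ in range(500):
        A, B, C = (set(random.sample(range(80), 6)) for _ in range(3))
        # |A| * |B + C| <= |A + B| * |A + C|
        assert len(A) * len(sumset(B, C)) <= len(sumset(A, B)) * len(sumset(A, C))

    # equality: A a single element, B and C chosen so that all sums b + c are distinct
    A, B, C = {0}, {1, 10, 100, 1000}, {2, 20, 200, 2000}
    print(len(A) * len(sumset(B, C)) == len(sumset(A, B)) * len(sumset(A, C)))  # True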
But the earlier proofs are still nice. The earlier proofs, I think, are worth looking at. They look at expansion ratios in graphs. So you take a sequence of graphs, multi-partite graphs. And you think about expansion. And you think about flows. It's, again, not easy at all, but maybe more motivated if you're used to thinking about expansion and flows in graphs. And this one really distills the core ideas of that proof, but it's something you can teach in half a lecture, whereas before this proof came about, I could have taught the proof, but most likely, I would have just skipped it. To just give you a sense of what's coming up ahead, so going forward, the first thing we'll do in the next lecture is we'll see the proof of Freiman's theorem in the finite field setting, so in F2 to the n. There is one more thing, one more very quick lemma called the covering lemma, the Ruzsa Covering Lemma, that I will tell you about. And then once we have that, we can prove Freiman's theorem in the finite field setting. But then, moving on to the integers, we'll need to understand how to think about the integers. Well, if you start with a subset of integers, even if you have a small number of elements, they could be spread out, really, all over the place. But because you only care about the additive structure within the integers, you can try to model that very spread-out set of integers by something that is very compact. So there is something called the modeling lemma, Ruzsa's Modeling Lemma, that we'll see next time. And that will play a pretty important role. Before finishing off, I also want to mention that Freiman, in his work, had this result. And he also wrote a book, I think called The Structural Theory of Set Addition, or something like that, that emphasized this connection. He tried to draw this analogy sort of comparing additive combinatorics to geometry in the sense of Klein, where in order to understand sets, you don't think about sets. You think about maps between sets, which was kind of an obscure idea at the time. But we'll see next lecture that this actually is a very powerful, a very influential idea, to really think about sets of integers under transformations that only preserve their additive structure. So we'll see this next time.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
9_Szemerédis_graph_regularity_lemma_IV_induced_removal_lemma.txt
YUFEI ZHAO: We've been spending the past few lectures discussing Szemeredi's Regularity Lemma. And one of the first applications that we discussed of the Regularity Lemma is the triangle removal lemma. So today, I want to revisit this topic and show you a strengthening of the removal lemma for which new regularity techniques are needed. But first, recall the graph removal lemma. In the graph removal lemma, we have that for every graph H and epsilon bigger than zero, there exists some delta such that if an n vertex graph has fewer than delta n to the number of vertices of H many copies of H, then it can be made H-free by removing fewer than epsilon n squared edges. Even in the case when H is a triangle, when this is called the triangle removal lemma, even in that case, basically the regularity method is more or less the only way that we currently know how to prove this theorem. So we saw this a few lectures ago. What I would like to discuss today is a variant of this result where instead of considering copies of H, we're now considering induced copies of H. OK? So this is the induced graph removal lemma, where the only difference is that the hypothesis is now going to be changed to induced copies of H. And the conclusion is that you can make the graph induced H-free. So let me remind you of the difference between an induced subgraph and the usual subgraph. So we say that H is an induced subgraph of G if one can obtain H from G by deleting vertices of G. You're not allowed to delete edges, but only allowed to delete vertices. So in other words, the four cycle is not an induced subgraph because, well, if you select four vertices, you don't generate this four cycle. You get extra edges. So it is a subgraph, but not an induced subgraph. So it is a theorem, the induced graph removal lemma. So it's a theorem, and let's discuss how we may prove that theorem. Question. OK, the question is, why is it stronger than the graph removal lemma? So it's not stronger, but we'll see the relationship between the two. So I claim that it is more difficult to prove this theorem. Any more questions? So let's pretend for a second that whatever's in here is not quite true. So here's an example. For example, if your H is three isolated vertices. So what is that saying? We're looking at induced copies of H, which are three isolated vertices. So really you are looking at triangles in g complement. So this is exactly the triangle removal lemma in the complement of g, but you can't get rid of these guys by removing edges. So we need to make the modification where instead of only removing edges, we are allowed to both remove and add-- adding or deleting, maybe at the same time. So you're allowed to add some edges, delete some edges. But in total, you change no more than epsilon n squared edges. That is sometimes also known as the edit distance. You're allowed to change edges. So you can add edges and delete edges. Any questions about the statement? All right, so let's think about how you would prove this result following the proof that we did for the triangle removal lemma. So let's pretend that we go through this proof and think about what could go wrong. So remember, in the application of the removal lemma, the recipe has three steps. In the first step we do a partition. So we partition by applying Szemeredi's regularity lemma. And the second step is to do a cleaning, and the two key things that happen in the cleaning are that we remove low density pairs of parts and irregular pairs.
And in the third step we claim that once we do the cleaning, once we remove those edges, the resulting graph should be H-free. Because if it's not H-free, then by considering the vertex parts where a copy of H lies and applying the counting lemma, you can generate many more copies of H. So these were the three main steps in the proof of the triangle removal lemma. So let's see what happens when we try to apply this strategy to the induced version. I mean, for the partition you still do the regularity partition. Nothing really changes there. So let's see what happens in the cleaning step. For low density pairs-- well, now we need to think about not just low density pairs, but also high density pairs. Because in the induced setting, we think about edges and non-edges at the same time. So you might think of a strategy like this: if the edge density is less than epsilon, then you remove all those edges. And if the edge density is bigger than 1 minus epsilon, then you add all of those edges in. So this is the natural generalization of our strategy for the triangle removal lemma to the induced setting. So so far, everything's still OK. But now what would you do for the irregular pairs? That's problematic. Previously, for the triangle removal lemma, we just said if a pair is irregular, get rid of that pair and it will never show up in the counting stage. But that strategy no longer works. Because for example, if your graph H being counted is this here, you do the regularity partition, and one of your pairs is irregular. So you, let's say, get rid of all those edges in between. Then maybe you have some embedding of H where you are going to use the removed edges. And now you don't have a counting lemma. You cannot say, I found this copy of H in my changed graph, and by the counting lemma I can get many copies of H, because you have no control over this irregular pair anymore. So the fact that you have to add and remove makes it unclear what to do here, and this is a big obstacle in the application of the regularity lemma to the induced removal lemma. Any questions about this obstacle? So make sure you understand why this is an issue. Otherwise you won't really appreciate what will happen next. So somehow we need to find some kind of regularity partition with no irregular pairs. So the question is, is there a way to partition so that there are no irregular pairs? For those of you who have started your homework problems on time, you realize that the answer is no. So one of the homework problems concerns a specific graph known as the half graph. There is an example in the homework that for the half graph-- so you'll see in the homework what this graph is-- you cannot partition it so that you get rid of all irregular pairs. Irregular pairs are necessary in the statement of the regularity lemma. So what I want to show you today is a way to do what's called a strong regularity lemma, in which you obtain a somewhat different consequence that will allow you to get rid of irregular pairs in a more restricted setting. So this is the issue, the irregular pairs. Before telling you what this regularity lemma is, I want to give you a small generalization of the induced graph removal lemma, or just a different way to think about the statement. And you can think of it as a colorful version: instead of induced, where you have edges and non-edges, you can also have colored edges. So a colorful removal lemma, although this name is not standard.
So colorful-- so when we talk about graphs, it's a colorful graph removal lemma. So for every k, r, and epsilon, there exists delta such that the following holds. Suppose curly H is a set of r-edge-colorings of the complete graph on little k vertices. An edge coloring just means using r colors to color the edges. So there are no restrictions about what is allowed, what is not allowed. So just a set of possible colorings. Then-- let me say it slightly differently-- every r edge coloring of the complete graph on n vertices with fewer than a delta fraction of its k vertex subsets, say k vertex subgraphs, inducing a coloring in script H can be made curly H free by recoloring, using the same r colors, less than an epsilon fraction of the edges of this Kn. So in particular, the version that we just stated, the induced version, so the induced graph removal lemma, is the same as having two colors and curly H consisting of exactly one red-blue coloring of the complete graph on the same number of vertices as H. So you color red the edges and blue the non-edges, for instance. And you're saying, I want to color the big complete graph with red and blue in such a way that there are very few copies of that pattern. So then I can recolor the red and blue in a small number of places to get rid of all such patterns. So having a colored pattern somewhere in your graph, in this complete graph coloring, is the same as having an induced subgraph. Yeah? AUDIENCE: So after "then"-- the statement after "then" is a really long sentence. Can I-- YUFEI ZHAO: Yeah, OK. So every r edge coloring of Kn with a small number of the patterns can be made curly-H-free by recoloring a small fraction of the edges. So like in the triangle removal lemma, every graph with a small number of triangles can be made triangle-free by removing a small number of edges. Any other questions? So this is a restatement of the induced removal lemma with a bit more generality. It's OK if you like this one more or less, but let's talk about the induced version from now on. The same proofs that I will talk about also apply to this version where you have somewhat more colors. So the variant of the regularity lemma that we'll need is known as a strong regularity lemma. To state the strong regularity lemma, let me recall a notion that came up in the proof of Szemeredi's regularity lemma. And this was the notion of an energy. So recall that if you have a partition, denoted P-- so if this is a partition of the vertex set of a graph G, and here n is the number of vertices-- we defined this notion of energy to be this quantity denoted q, which is basically a squared mean of the densities between vertex parts, appropriately normalized if the vertex parts do not all have the same size. In the proof of Szemeredi's regularity lemma, there was an important energy increment step, which says that if you have some partition P that is not epsilon regular, then there exists a refinement, Q. And this refinement has the property that Q has a small number of pieces, or not too large as a function of P. So it's bounded in terms of P. But also, if P is not epsilon regular, then the energy of Q is significantly larger than the energy of P. So remember, this was an important step in the proof of the regularity lemma. So to state the strong regularity lemma, we need that notion of energy.
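For reference, a minimal sketch of the energy computation (in Python, assuming the graph is given as a set of two-element frozensets; it follows the definition just recalled, with the |Vi| |Vj| / n^2 weights):

    def edge_density(G, S, T):
        # fraction of ordered pairs (u, v), u in S, v in T, u != v, that are edges of G
        e = sum(1 for u in S for v in T if u != v and frozenset((u, v)) in G)
        return e / (len(S) * len(T))

    def energy(G, parts, n):
        # q(P) = sum over ordered pairs of parts of |Vi| * |Vj| / n^2 * d(Vi, Vj)^2
        return sum(len(S) * len(T) / n**2 * edge_density(G, S, T)**2
                   for S in parts for T in parts)

    # toy usage: the path 0-1-2-3 with the partition {0, 1}, {2, 3}
    G = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3)]}
    print(energy(G, [[0, 1], [2, 3]], 4))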
And the statement of the strong regularity lemma, if you've never seen this kind of thing before, will seem a bit intimidating at first, because it involves a whole sequence of parameters. But we'll get used to it. So instead of one epsilon parameter, now you have a sequence of positive epsilons. And part of the strength of this regularity lemma is that, depending on the application you have in mind, you can make the sequence go to zero pretty quickly, thereby increasing the strength of the regularity lemma. So there exists some bound m, which depends only on your epsilons, such that every graph has not just one, but now we're going to get a pair of vertex partitions P and Q with the following properties. So first, Q refines P. So it's a pair of partitions, one refining the other. The number of parts of Q is bounded, just like in the usual regularity lemma. The partition P is epsilon 0 regular. And here is the new part, the most important one. Q is very regular. So it's not just epsilon 0 regular, it's epsilon sub the number of parts of P regular. So you should think of this as extremely regular, because you get to choose what the sequence of epsilons is. And finally, the energy difference between P and Q is not too big-- at most epsilon 0. This is the statement of the strong regularity lemma. It produces for you not just one partition, but a pair of partitions. And in this pair of partitions, you have one partition, P, which is similar to the one that we obtained from Szemeredi's regularity lemma-- it's epsilon 0 regular-- but we also get a refinement Q. And this Q is extremely regular. So you can think: that is P, and then Q is an extremely regular refinement of P. Any questions about the statement of the strong regularity lemma? So the sequence of epsilons gives you flexibility on how to apply it, but let's see how to prove it. And the proof, once you understand how this works, is conceptually pretty short. But let me do it slowly so that we can appreciate this sequence of epsilons. And the idea is that we will repeatedly apply Szemeredi's regularity lemma. So start with the regularity lemma. We'll apply it repeatedly to generate a sequence of partitions. So first, let me remind you of a statement of Szemeredi's regularity lemma. This is slightly different from the one that we stated, but comes out of the same proof. So for every epsilon, there exists some m0, which depends on epsilon, such that for every partition P-- so if you start with some partition of the vertex set of g-- there exists a refinement P prime of P, with each part of P refined into at most m0 parts, such that P prime, the new partition, is epsilon regular. So this is a statement of Szemeredi's regularity lemma that we will apply repeatedly. So in the version that we've seen before, we would start with a trivial partition. And we apply refinements repeatedly in the proof to get a partition into a bounded number of parts such that the final partition is epsilon regular. But if, in the proof of the regularity lemma, you start not with the trivial partition but with a given partition, and run this exact same proof, you get this statement-- except now you can guarantee that the final partition is a refinement of the one that you are given. So let's apply the statement, and we obtain a sequence of partitions of g-- the vertex set of g-- starting with P0 being a trivial partition, and so on.
Such that each partition, each P sub i plus 1, refines the previous one, and such that each P sub i plus 1 is epsilon sub the number of parts of P sub i regular. So you apply the regularity lemma with a parameter based on the number of parts you currently have. Applied to the current partition, you get a finer partition that's extremely regular. And you also know that the number of parts of the new partition is bounded in terms of the previous partition. All right. Any questions so far? So now we get this sequence of partitions. We can keep on doing this. So g could be arbitrarily large, but eventually we will be able to obtain the last condition here, which is the only thing that is missing so far. So since the energy is bounded between 0 and 1, there exists some i at most 1 over epsilon 0 such that the energy goes up by less than epsilon 0. Because otherwise your energy would exceed 1. So now let's set P to be this P sub i, and Q to be the refinement-- the next term in the sequence. And what we find is that you then have basically all the conditions. So P is epsilon 0 regular, because it is epsilon sub the number of parts of the previous term regular, which is at most epsilon 0. And you have this one as well, and this one as well. And we want to show that the number of parts of Q is bounded. And that's basically because each time there was a bound on the number of parts which depends only on the regularity parameters, and you're repeating that bound a bounded number of times. So Q is bounded as a function of the sequence of epsilons-- this infinite vector of epsilons-- but it is a bounded number. You're only iterating this bound a bounded number of times. And that finishes the proof. Any questions? It may be somewhat mysterious to you right now why we do this, so we'll get to the application in a second. But for now, I just want to comment a bit on the bounds. Of course, the bounds depend on which epsilon i's you use. And typically, you want the epsilon i's to decrease as you have more parts. And with almost all reasonable applications of this regularity lemma, the strong regularity lemma-- so for example, with epsilon i being some epsilon divided by, let's say, i plus 1, or any polynomial of the i's-- or you can even let it decay quicker than that, as well. You see, basically what happens is that you are applying this m0 bound in succession 1 over epsilon 0 times. In the regularity lemma, we saw that the m0 that comes out of Szemeredi's graph regularity lemma is the tower function. So the tower function-- tower of i is defined to be the exponential function iterated i times. So of course, I'm being somewhat loose here with the exact dependence, but you get the idea that now we want to apply the tower function i times. Instead of iterating the exponential i times, now you iterate the tower function i times. And some of you are laughing-- this is an incredibly large number. It's even larger than the tower function. So in the literature, especially around the regularity lemma, this function where you iterate the tower function i times is given the name wowzer. [LAUGHTER] As in, wow, this is a huge number. So it's a step up in the Ackerman hierarchy. So if you repeat the wowzer function i times, you move up one ladder in the Ackerman hierarchy, this hierarchy of rapidly growing functions. But in any case, it's bounded, and that's good enough for us. Any questions so far? Yeah? AUDIENCE: What do you call like [INAUDIBLE] YUFEI ZHAO: Yes, so the question is, what do you call wowzer iterated?
I'm not aware of a standard name for that. Actually, even the name wowzer somehow is very common in the combinatorics community, but I think most people outside this community will not recognize this word. Any more questions? Another way to say it: it's a step up in the Ackerman hierarchy, which is enumerated one, two, three, four, you know, as you keep going up. All right. Another remark about this strong regularity lemma is that it will be convenient for us-- actually, somewhat more essential compared to our previous applications-- to make the parts equitable. So P and Q equitable, meaning the partitions are such that all the parts have basically the same number of vertices. So I won't make it precise, but you can do it. It's not too hard to do it. And you can prove it similar to how I described modifying the proof of the regularity lemma. So I won't belabor that point, but we'll use the equitable version. All right, so how does one use this regularity lemma? Let me state a corollary, and let me call this a corollary star, because you actually need to do some work to get it to follow from the strong regularity lemma. But the corollary is the version that we will apply: if you start with a decreasing sequence of epsilons, then there exists a delta such that the following is true. Every n vertex graph has an equitable vertex partition, call it V1 through Vk, and a subset Wi of each Vi, such that the following properties hold. First, all the W's are fairly large. They're at least a constant proportion of the total vertex set. Second, every pair Wi, Wj is epsilon sub k regular. And this is the point I want to emphasize. Here there are no irregular pairs anymore-- it is every pair. So no irregular pairs between the Wi's, and we also include the case when i equals j. So each Wi is regular with itself. And furthermore, the edge densities between the V's are similar to the edge densities between the corresponding W's. And here it is for most pairs-- for all but at most epsilon 0 times k squared pairs-- that the densities differ by at most epsilon 0. Any questions about the statement? So let me show you how you could deduce the corollary from the strong regularity lemma. So first, let me draw you a picture. So here you have a regularity partition. And so these are your V's, and inside each V I find a W such that if I look at the edges between pairs of blue sets, including each blue set with itself, it is always very regular. And also, the edge densities between the blue sets are mostly very similar to the edge densities between their ambient white sets. OK, so let me say a few words-- I won't go into too many details-- about how you might deduce this corollary from the strong regularity lemma. So first let me do something which is slightly simpler, which is to not yet require that the blue sets, the Wi's, are regular with themselves. So without requiring this, we can obtain the Wi's by picking a uniformly random part of the final partition, Q, inside each part of P in the strong regularity lemma. So you have the strong regularity lemma, which produces for you a pair of partitions like that. So it produces for you a pair of partitions. And what we will do is pick one of these guys as my W-- pick one of these guys at random, and pick one of those guys at random. Because Q is so extremely regular, most of these pairs will be regular.
So with high probability, you will not encounter any irregular pairs if you pick the W's randomly as parts of Q. So that's the key point. Here we're using that Q is extremely regular. So all the Wi Wj is regular for all i not equal to j with high probability. But the other thing that we would like is that the edge densities between the W's are similar to those between the V's. And for that, we will use this condition about their energies being very similar to each other. So the third consequence, C, is-- it's a consequence of the energy bound. Because recall that in our proof of the Szemeredi regularity lemma there was an interpretation of the energy as the second moment of a certain random variable which we called z. And using that interpretation, I can write down this expression like that. We are here assuming for simplicity that Q is completely equitable, so all the parts have exactly the same size. Z of Q is defined to be the edge density between Vi and Vj for random ij. So this is a random variable z. So you pick pair of parts uniformly, or maybe with some weights if they're not exactly equal. And you evaluate the edge density. So this energy difference is the difference between the second moments. And because Q is a refinement of P, it is the case that this difference of L2 norms is equal to the second moment of the difference of the random variables. So we saw a version of this earlier when we were discussing variance in the context of the proof of the similar irregularity lemma. Here it's basically the same. You can either look at this inequality part by part of V, or if you like to be a bit more abstract then this is actually a case of Pythagorean theorem. If you view these as vectors in a certain vector space, then you have some orthogonality. So you have this sum of squares identity. Where does part A come from? So part A, we want the parts, that Wi's to be not too small, but that comes from a bound on the number of parts of Q. So so far this more or less proves the corollary except for that we simplified our lives by requiring just that the i not equal to j, the Vi Vj's are regular. But in the statement up there, we also want the Vi's-- so the Wi's ice to be regular with themselves, which will be important for application. So I won't explain how to do that, and part of the reason is that this is also one of your homework problems. So in one of the homework problems problem set 3, you were asked to prove that every graph has a subset of vertices that is of least constant proportion such that it is regular with itself. And the methods you use there will be applicable to handle the situation over here, as well. So putting all of these ingredients together, we get the corollary whereby you have this picture, you have this partition. I don't even require the Vi's to be regular. That doesn't matter anymore. All that matters is that between the Wi's they are very regular, and that there are no irregular parts between these Wi's. And now we'll be able to go back to the induced graph removal lemma where previously we had an issue with the existence of irregular pairs in the use of Szemeredi regularity partition, and now we have a tool to get around that. So next we will see how to execute this proof, but at this point hopefully you already see an outline. Because you no longer need to worry about this thing here. Let's take a quick break. Any questions so far? Yes? 
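The energy computation sketched above can be recorded as follows, assuming for simplicity that Q is equitable, so that Z_P and Z_Q are the edge densities between uniformly random pairs of parts and Z_P is the conditional expectation of Z_Q given P.

```latex
q(Q) - q(P) \;=\; \mathbb{E}\big[Z_Q^2\big] - \mathbb{E}\big[Z_P^2\big]
\;=\; \mathbb{E}\big[(Z_Q - Z_P)^2\big] \;\le\; \epsilon_0,
\qquad\text{so}\qquad
\mathbb{E}\big[\,|Z_Q - Z_P|\,\big] \;\le\; \sqrt{\epsilon_0}
```

by Cauchy–Schwarz; up to adjusting the constants (or reindexing the epsilons), this is what forces most pairs to have the density between the W's close to the density between the corresponding V's.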
AUDIENCE: Why are we able to [INAUDIBLE] YUFEI ZHAO: OK, so the question was, there was a step where we were looking at some expectations of squares. And so why was that identity true? So if you look back to the proof of Szemeredi's regularity lemma, we already saw an instance of that inequality in the computation of the variance. So you know that the variance of x, on one hand it is equal to where mu is the mean of x. And on the other hand, it is equal to this quantity. So you agree with this formula? And you can expand it to prove it, and the thing that-- the question that you raised basically you can prove by looking at this formula part by part. Any more questions? So let's now prove the induced graph removal lemma. And we'll follow the regularity partition, but with a small twist that Instead of using Szemeredi's regularity lemma, we will use that corollary up there. So let's prove the induced graph removal lemma. So the three steps. First, we do partition. So let's suppose you have a-- so we suppose g is like above. You have very few induced copies of H. Let's apply the corollary to get a partition of the vertex set of g into k parts. And inside each part I have a W. Satisfying the following properties that each Wi Wj is regular with the following parameter which will come out of later when we need to use the counting lemma. But it's some number, but don't worry too much about it. So here I'm going to-- so let's say H has little H vertices. So between Wi Wj it is this regular. So we actually have not yet used the full strength of the corollary where I can make the regularity even depend on k. So we will not need that here, but we'll need it in a later application. So the exponent is little H. OK, so other properties are that the densities between the Vi's and the Wi's do not differ by more than epsilon over 2 for all but a small fraction-- so epsilon k squared over 2-- pairs. And finally, the sizes of the Wi's are at least delta 0 times n where delta 0 depends only on epsilon. Epsilon and H. This is the partition step, so now let's do the cleaning. In the cleaning step, basically we're not going to-- I mean, there is no longer an issue of irregular pairs if we only look at the Wi's. So we just need to think about the low density pairs or whatever the corresponding analog is. And what happens here is that for every i less than j, and crucially including when i equals to j, if the edge densities between the W's is too small then we remove all edges between Vi and Vj. And if the edge density between the Wi's is too big, then we remove all edges. So we add all edges between Vi and Vj. How many edges do we end up adding or removing? So the total number of edges added or removed from g is-- in this case, so if the edges density in g between the Vi's and Vj's is also very small, then you do not remove very many edges. But most pairs of Vi and Vj have that property. So you tidy up what kind of errors you can get from here and there, and you find that the total number of edges that are added or removed from g is less than, let's say, epsilon n squared. Maybe even get an extra factor of 2, but you know, upon changing some constant factors, it's less than epsilon n squared. So this is some small details you can work out. Here we're using-- asking, how is the density between Vi and Vj related to Wi and Wj? Well, for most pairs of i and j they're very similar. And there's a small fraction of them that are not similar, but then you lump everything in to this bound over here. 
So maybe I need to-- let me just put a 2 here just to be safe. All right. So we deleted a very small number of edges, and now we want to show that the graph that has resulted from this modification does not have any induced H sub-graphs. And the final step is the counting step. So suppose there were any induced H left after the modification. So I want to show that, in fact, there must be a lot of H's-- induced H's originally in the graph, thereby contradicting the hypothesis. So where does this induced H sit? Well, you have the V's, and inside the V's you have the W's. So suppose my H is that graph for illustration. And in particular, I have a non-edge. So I have an edge, and I also have a non-edge. So between these two, that's the non-edge. So suppose you find a copy of H in the cleaned-up graph. Where can that cleaned up-- this copy of H sit? Suppose you find it here. The claim now is that if this copy of H existed here, then I must be able to find many such copies of H in the corresponding yellow parts. Because between the yellow parts you have regularity, and you also have the right kinds of densities. Because if they didn't have the right kind of density, we would have cleaned it up already. So that's the ideal. If you had a copy of this H somewhere, then I zoom into the yellow parts, zoom into these W's, and I find lots of copies of H in between the W's. So suppose-- let me write this down. So suppose the little V's, so the vertices, lies in the-- so I'm just indexing where a little v lies. The little v lies in big V sub phi V for some phi which since the vertices of H2 went through k. So now we apply counting lemma to embed induced copies of H in g where the vertex V in H is mapped to a vertex in the corresponding W. And we would like to know that there are lots of such copies. And the counting Lemma-- or rather, some variant, but I should read the counting lemma that we did last time and view it as a multi-partite version. Apply this so far part to part. So we find that the number of such induced copies is within a small error. So that regularity parameter multiplied by the number of edges of H, which we already canceled out, multiplied by the product of these Wi's. So it's within this error of what you would suspect if you naively multiply the edge densities together along with the vertex densities. So these factors are for the edges that you want to embed, and then I also need to multiply the densities for the long edges. So 1 minus these edge densities. So one way you can think of it is just consider the complement in g. So consider the complement of g to get this version here. And then finally, the product of the vertex set sizes. And the point is that this is not a small number. So hence the number of induced copies of H in g is at least on the order of-- well, OK? So it's at least some number, which is basically this guy over here. So epsilon over 4 raised to-- all of these are constants, so that's the point. All of these guys are constants, minus-- so here is the main term, and then the error term. And then the product of these vertex set sizes, and we saw that each vertex set is not too small. So you have lots of induced copies of H in g. Yep? AUDIENCE: How do you do in the case where the density between [INAUDIBLE] YUFEI ZHAO: OK, so can you repeat your question? AUDIENCE: How are you dealing with the [INAUDIBLE] YUFEI ZHAO: OK. So question, how do we deal with the all but epsilon over two pairs? 
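One way to record the counting-step bound symbolically, writing h for the number of vertices of H, gamma for the regularity parameter chosen in the partition step, and epsilon over 4 for the density guarantee coming out of the cleaning step; this is a sketch with the exact constants left loose, as in the lecture.

```latex
\#\{\text{induced copies of } H \text{ with each } v \mapsto W_{\phi(v)}\}
\;\ge\; \Big( \prod_{uv \in E(H)} d\big(W_{\phi(u)}, W_{\phi(v)}\big)
\prod_{uv \notin E(H)} \big(1 - d(W_{\phi(u)}, W_{\phi(v)})\big)
\;-\; \gamma \tbinom{h}{2} \Big) \prod_{v \in V(H)} \big|W_{\phi(v)}\big|
\;\ge\; \Big( (\epsilon/4)^{\binom{h}{2}} - \gamma \tbinom{h}{2} \Big) (\delta_0 n)^{h},
```

which is at least delta times n to the h for a suitable delta depending only on epsilon and H, once gamma is chosen small enough, giving the contradiction with the hypothesis.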
So that comes up in the cleaning step in what I wrote in red in dealing with the number of total edges that are added or removed. So think about how many edges are added or removed. In these non-exceptional pairs, the number of edges that are added or removed-- let's just think about added edges. So if the density of V is controlled by that of W, then the number of edges added-- or removed, in that case-- from all such pairs along with-- yeah. So you have epsilon n squared edges changed. On the other hand, if this is not true then you only have epsilon k squared such pairs ij for which this cannot be true. So you also only have at most epsilon n squared edges added or removed in such cases. That answers your question? Yes? AUDIENCE: Is that number 0? YUFEI ZHAO: Is which number 0? AUDIENCE: The number of induced edges for the [INAUDIBLE] YUFEI ZHAO: The-- AUDIENCE: Yeah, the top board. YUFEI ZHAO: Top board? Good. So asking about this number. So that should have been 2. Yes? AUDIENCE: I don't see k anywhere. YUFEI ZHAO: OK, so question, you don't see k appearing anywhere. So the k in the corollary, do you mean? AUDIENCE: Yeah. YUFEI ZHAO: So that hasn't come up yet. So it comes up implicitly because we need to lower bound the sizes of these W's. So this is partly why we need a bound on the number of parts, but it is true that we do not need epsilon k to depend on k in this application yet. I will mention a different application in the second where you do need that k. OK, so the number of induced H in g is at least this amount. And that's a small lie. You need to maybe consider this is the number of homomorphic. Well, actually, no, we're OK. Never mind. So you can set delta to be this quantity here, and then that finishes the proof. So you have lots of induced copies of H in your graph which contradicts the hypothesis. So that finishes the proof of the induced removal lemma, and basically the proof is the same as the usual graph removal lemma except that now we need some strengthened regularity lemma which allows us to get rid of irregular parts but in a more restricted setting. Because we saw you cannot completely get rid of irregular parts. Any questions? Yes? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: So I want to address the question of why did I state this corollary in this more general form of a decreasing sequence of epsilons? So first of all, with strong regularity lemmas, the strength is sometimes always nice to-- it's always nice to state it with this extra strength. Because it's the right way to think about these types of theorems. That the regularity on the parts depends-- you can make it depend on the number of parts so that you get much stronger control on the regularity. But there are also some applications. For example, whether I will state next, an application where you do need that kind of strength. So here's what's known as the infinite removal lemma. Here we have not just a single pattern or a finite number of patterns we want to get rid of. For now we have infinitely many patterns. So for every curly H, which is a possibly infinite set of graphs. The graphs themselves are always finite, but this may be an infinite list. And an epsilon parameter. There exists an H0 and a delta positive parameter such that every n vertex graph with at most delta-- so less than delta-- V to the H induced copies of H for every H in this family with fewer than H0 vertices. So every graph with this property can be made curly H free. 
So it means free of-- induced curly H free by adding or removing fewer than epsilon n squared edges. So now instead of a single pattern you have a possibly infinite set of induced patterns and a want to make your graph curly H free-- induced curly H free. And the theorem is that if there exists some finite bound, H0, such that if you have few copies-- so for all the patterns up to that point-- then you can do what you need to do. So take some time to even digest this statement, but it's somehow infinite versions-- the correct infinite version of the removal lemma if you have infinitely many patterns that you need to remove. And I claim that the proof is actually more or less the same proof as the one that we did here, except now you need to take your epsilon case, as in this corollary, to depend on k. You need to in some way look ahead in this infinite pattern. So here in proof, this epsilon k from corollary depends on k. And also it depends on your family of patterns H. Finally, I want to mention a perspective-- a computer science perspective on these removal lemmas that we've been discussing so far. And that's in the context of something called property testing. And basically, we would like an efficient-- efficient meaning fast-- randomized algorithm to distinguish graphs that are triangle-free from those that are epsilon far from triangle-free. Where being epsilon far from triangle-free means that you need to change more than epsilon n squared edges here. n is, as usual, the number of vertices to make the graph triangle-free. So the distance, the [INAUDIBLE] distance is more than epsilon away from being triangle-free. So somebody gives you a very large graphing. n is very large. You cannot search through every triple vertices. That's too expensive. But you want some way to test if a graph is triangle-free versus very far away from being triangle-free. So there's a very simple randomized algorithm to do this, which is to just try randomly sample a random triple of vertices and check if it's a triangle. So you do this. And just to make our life a bit more secure, let's try it some larger number of times. So some c of epsilon some constant number of times. And if you find a triangle-- so if you don't find a triangle, then we return that it's triangle-free. Otherwise we return that it is epsilon far from triangle-free. So that's the algorithm. So it's a very intuitive algorithm, but why does it work? So we want to know that, indeed, somebody gives you one of these two possibilities. You run that algorithm, you can succeed with high probability. Question? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: So let's talk about why this works. So theorem, for every epsilon, there exists a c such that algorithm succeeds with probability bigger than 2/3, and 2/3 can be any number. So any number that you like because you can always repeat it to boost that constant probability. So there are two cases. If g is triangle-free, then it always succeeds. You'll never find this triangle, and it would return triangle-free. On the other hand, if g is epsilon far from triangle-free, then triangle removal lemma tells us that g has lots of triangles. Delta n cubed triangles. So if we sample c being, let's say, 1 over delta times-- delta here is a function of epsilon from the triangle removal lemma. So we find that the probability that the algorithm fails is at most-- so you have a lot of triangles. So very likely you will hit one of these triangles. 
So the probability that the algorithm fails is at most 1 minus delta n cubed over the total number of triples, all raised to the power 1 over delta. And this is at most 1 minus 6 delta raised to the power 1 over delta, which is at most e to the minus 6. So less than 1/3 in particular. So this algorithm succeeds with high probability. Now, how big of a c do you need? Well, that depends on the triangle removal lemma. So it's a constant. It's a constant that does not depend on the size of the graph. But it's a large constant, because we saw in the proof of the regularity lemma that it can be very large. But you know, this theorem here is basically the same as the triangle removal lemma. So it's highly non-trivial, even though the algorithm is extremely naive and simple. I just want to finish off with one more thing. Instead of testing for triangle-freeness, you can ask, what other properties can you test? So which graph properties are testable in that sense? So distinguishing something which has the property P versus something epsilon far from this property P. And you have this tester, which is, you sample some number of vertices. So this is called an oblivious tester. So you sample k vertices, and you try to see if it has that property. So there's a class of properties called hereditary. So hereditary properties are properties that are closed under vertex deletion. And lots of properties that you've been seeing are of this form. So for example, being H-free, being planar, being induced H-free, being three-colorable, being perfect-- they're all examples of hereditary properties. Properties such that if your graph is three-colorable and you take out some vertices, it's still three-colorable. And all the discussions that we've done so far, in particular the infinite removal lemma, if you phrase it in the form of property testing given the above discussion, imply that every hereditary property is testable. In fact, it's testable in the above sense with one-sided error using an oblivious tester. One-sided error means that, as up there, if it's triangle-free, then it always succeeds. So here one of the cases always succeeds. And the reason is that you can characterize a hereditary property as being induced curly-H-free for some family curly H. Namely, you put into curly H everything that does not have this property. This is a possibly infinite set of graphs, and that completely characterizes this hereditary property. And if you read out the infinite removal lemma, it says precisely, using the above interpretation, that you have a property testing algorithm.
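As a concrete illustration of the sampling tester described above, here is a minimal sketch in Python. The adjacency-set representation, the function names, and the hard-coded example graph are choices made here for illustration, and num_samples stands in for the constant depending on epsilon that comes out of the triangle removal lemma.

```python
import random

def spans_triangle(adj, u, v, w):
    # adj maps each vertex to the set of its neighbors (undirected graph).
    return v in adj[u] and w in adj[u] and w in adj[v]

def test_triangle_free(adj, num_samples):
    """Oblivious tester: sample random triples of vertices and look for a triangle.

    If the input is triangle-free it always answers "triangle-free"; if the input
    is epsilon-far from triangle-free, the triangle removal lemma guarantees about
    delta * n^3 triangles, so with num_samples on the order of 1/delta the test
    reports "far from triangle-free" with probability > 2/3.
    """
    vertices = list(adj)
    for _ in range(num_samples):
        u, v, w = random.sample(vertices, 3)
        if spans_triangle(adj, u, v, w):
            return "far from triangle-free"
    return "triangle-free"

# Tiny usage example: a 4-cycle, which is triangle-free.
c4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(test_triangle_free(c4, 1000))
```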
[SQUEAKING] PROFESSOR: Last time, we started discussing graph limits. And let me remind you some of the notions and definitions that were involved. One of the main objects in graph limits is that of a graphon, which are symmetric, measurable functions from the unit squared to the unit interval. So here, symmetric means that w of x, comma, y equals to w of y, comma, x. We define a notion of convergence for a sequence of graphons. And remember, the notion of convergence is that a sequence is convergent if the sequence of homomorphism densities converges as n goes to infinity for every fixed F, every fixed graph. So this is how we define convergence. So a sequence of graphs or graphons, they converge if all the homomorphism densities-- so you should think of this as subgraph statistics-- if all of these statistics converge. We also say that a sequence converges to a particular limit if these homomorphism densities converge to the corresponding homomorphism density of the limit for every F. OK. So this is how we define convergence. We also define this notion of a distance. And to do that, we first define the cut norm to be the following quantity defined by taking two subsets, S and T, which are measurable. Everything so far is going to be measurable. And look at what is the maximum possible deviation of the integral of this function on this box, S cross T. And here, w, you should think of it as taking real values, allowing both positive and negative values, because otherwise, you should just take S and T to be the whole interval. OK. And this definition was motivated by our discussion of discrepancy coming from quasi randomness. Now, if I give you two graphs or graphons and ask you to compare them, you are allowed to permute the vertices in some sense, so to find the best overlay. And that notion is captured in the definition of cut distance, which is defined to be the following quantity, where we consider over all possible measure-preserving bijections from the interval to itself of the difference between these two graphons if I rotate one of them using this measure-preserving bijection. So think of this as permuting the vertices. So these were the definitions that were involved last time. And at the end of last lecture, I stated three main theorems of graph limit theory. So I forgot to mention what are some of the histories of this theory. So there were a number of important papers that developed this very idea of graph limits, which is actually somewhat-- if you think about all of combinatorics, we like to deal with discrete objects. And even the idea of taking a limit is rather novel. So this work is due to a number of people. In particular, Laszlo Lovasz played a very important central role in the development of this theory. And various people came to this theory from different perspectives-- some from more pure perspectives, and some from more applied perspectives. And this theory is now getting used in more and more places, including statistics, machine learning, and so on. And I'll explain where that comes up just a little bit. At the end of last lecture, I stated three main theorems. And what I want to do today is develop some tools so that we can prove those theorems in the next lecture. OK. So I want to develop some tools. In particular, you'll see some of the things that we've talked about in the chapter on Szemerédi's regularity lemma come up again in a slightly different language. 
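For reference, the quantities recalled above can be written as follows; this is standard notation, which may differ slightly from the blackboard notation used in the lecture.

```latex
t(F, W) = \int_{[0,1]^{V(F)}} \prod_{ij \in E(F)} W(x_i, x_j) \prod_{v \in V(F)} dx_v,
\qquad
\|W\|_{\square} = \sup_{S, T \subseteq [0,1]} \Big| \int_{S \times T} W(x, y)\, dx\, dy \Big|,
\qquad
\delta_{\square}(U, W) = \inf_{\phi}\, \big\| U - W^{\phi} \big\|_{\square},
\quad \text{where } W^{\phi}(x, y) := W\big(\phi(x), \phi(y)\big),
```

the infimum being over measure-preserving bijections of the unit interval; a sequence of graphons converges if the density t(F, W_n) converges for every fixed graph F.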
So much of what I will say today hopefully should already be familiar to you, but you will see it again from the perspective of graph limits. But first, before telling you about the tools, I want to give you some more examples. So one of the ways that I motivated graph limits last time is this example of an Erdos-Renyi random graph or a sequence of quasi-random graphs converging to a constant. The constant graphon is the limit. But what about generalizations? What about generalizations of that construction when your limit is not the constant? So this leads to this idea of a w random graph, which generalizes that of an Erdos-Renyi random graph. So in Erdos-Renyi, we're looking at every edge occurring with the same probability, p, uniform throughout the graph. But what I want to do now is allow you to change the edge probability somewhat. OK. So before giving you the more general definition, a special case of this is an important model of random graphs known as the stochastic block model. And in particular, a two-block model consists of the following data where I am looking at two types of vertices-- let's call them red and blue-- where the vertices are assigned to colors at random-- for example, 50/50. But any other probability is fine. And now I put down the edges according to which colors the two endpoints are. So two red vertices are joined with edge probability Prr. If I have a red and a blue, then I may have a different probability joining them, and likewise with blue-blue, like that. So in other words, I can encode this probability information in the matrix, like that. So it's symmetric across the diagonal. So this is a slightly more general version of an Erdos-Renyi random graph where now I have potentially different types of vertices. And you can imagine these kinds of models are very important in applied mathematics for modeling certain situations such as, for example, if you have people with different political party affiliations. How likely are they to talk to each other? So you can imagine some of these numbers might be bigger than others. And there's an important statistical problem. If I give you a graph, can you cluster or classify the vertices according to their types if I do not show you in advance what the colors are but show you what the output graph is? So these are important statistical questions with lots of applications. This is an example of if you have only two blocks. But of course, you can have more than two blocks. And the graphon context tells us that we should not limit ourselves to just blocks. If I give you any graphon w, I can also construct a random graph. So what I would like to do is to consider the following construction where-- OK, so let's just call it w random graph denoted by g and w-- where I form the graph using the following process. First, the vertex set is labeled by 1 through n. And let me draw the vertex types by taking uniform random x1 through xn-- OK, so uniform iid. So you think of them as the vertex colors, the vertex types. And I put an edge between i and j with probability exactly w of xi, xj, so for all i less than j independently. That's the definition of a w random graph. And the two-block stochastic model is a special case of this w random graph for the graphon, which corresponds to this red-blue picture here. So the generation process would be I give you some x1, x2, x3, and then, likewise, x1, x3, x2. And then I evaluate, what is the value of this graphon at these points? And those are my edge probabilities. 
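Here is a minimal sketch, in Python, of the W-random graph construction just described; the two-block step function used at the end is only an illustration, and all names are chosen here rather than taken from the lecture.

```python
import random

def sample_w_random_graph(n, w):
    """Sample G(n, W): latent uniform labels x_i, edge ij present with probability w(x_i, x_j)."""
    x = [random.random() for _ in range(n)]
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < w(x[i], x[j]):
                edges.add((i, j))
    return x, edges

# Example: a 2-block stochastic block model written as a step-function graphon.
def two_block(p_rr, p_rb, p_bb):
    def w(x, y):
        if x < 0.5 and y < 0.5:
            return p_rr
        if x >= 0.5 and y >= 0.5:
            return p_bb
        return p_rb
    return w

labels, edges = sample_w_random_graph(200, two_block(0.8, 0.1, 0.6))
print(len(edges), "edges")
```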
So what I described is a special case of this general w random graph. Any questions? So like before, an important statistical question is if I show you the graph, can you tell me a good model for where this graph came from? So that's one of the reasons why people in applied math might care about these types of constructions. Let me talk about some theorems. I've told you that the sequence of Erdos-Renyi random graphs converges to the constant graphon p. So instead of taking a constant graphon p, now I start with w random graph. And you should expect, and it is indeed true, that this sequence converges to w as their limit. So let w be a graphon. So let w be a graphon. And for each n, let me draw this graph G sub n using the w random graph model independently. Then with probability 1, the sequence converges to the graphon w. So in the sense that I've shown above, described above. So this statement tells us a couple of things-- one, that w random graphs converge to the limit w, as you should expect; and two, that every graphon w is the limit point of some sequence of graphs. So this is something that we never quite explicitly stated before. So let me make this remark. So in particular, every w is the limit of some sequence of graphs, just like every real number, in analogy to what we said last time. Every real number is the limit of a sequence of rational numbers through rational approximation. And this is some form of approximation of a graphon by a sequence of graphs. OK. So I'm not going to prove this theorem. The proof is not difficult. So using that definition of subgraph convergence, the proof uses what's known as Azuma's inequality. So by an appropriate application of Azuma's inequality on the concentration of martingales, one can prove this theorem here by estimating the probability that-- to show that the probability that the F density in Gn, it is very close to the F density in w with high probability. OK. Any questions so far? So this is an important example of one of the motivations of graph limits. But now, let's get back to what I said earlier. I would like to develop a sequence of tools that will allow us to prove the main theorem stated at the end of the last lecture. And this will sound very familiar, because we're going to write down some lemmas that we did back in the chapter of Szemerédi's regularity lemma but now in the language of graphons. So the first is a counting lemma. The goal of the counting lemma is to show that if you have two graphons which are close to each other in the sense of cut distance, then their F densities are similar to each other. So here's a statement. So if w and u are graphons and F is a graph, then the F density of w minus the F density of u, their difference is no more than a constant-- so number of edges of F times the cut distance between u and w. So maybe some of you already see how to do this from our discussion on Szemerédi's regularity lemma. In any case, I want to just rewrite the proof again in the language of graphons. And this will hopefully-- so we did two proofs of the triangle counting lemma. One was hopefully more intuitive for you, which is you pick a typical vertex that has lots of neighbors on both sides and therefore lots of edges between. And then there was a second proof, which I said was a more analytic proof, where you took out one edge at a time. And that proof, I think it's technically easier to implement, especially for general H. But the first time you see it, you might not quite see what the calculation was about. 
So I want to do this exact same calculation again in the language of graphons. And hopefully, it should be clear this time. So this is the same as the counting lemma over epsilon-regular pairs. So it suffices to prove the inequality where the right-hand side is replaced not by the cut distance but by the cut norm. And the reason is that once you have the second inequality by taking an infimum over all measure-preserving bijections phi-- and notice that that change does not affect the F density. By taking an infimum over phi, you recover the first inequality. I want to give you a small reformulation of the cut norm that will be useful for thinking about this counting lemma. Here's a reformulation of the cut norm-- namely, that I can define the cut norm. So here, w is taking real values, so not necessarily non-negative. So the cut norm we saw earlier is defined to be the supremum over all measurable subsets of the 0, 1 interval of this integral in absolute value. But it turns out I can rewrite this supremum over a slightly larger set of objects. Instead of just looking over measurable subsets of the interval, let me now look at measurable functions. Little u. So OK, let me look at functions. So u and v from 0, 1 to 0, 1-- and as always, everything is measurable-- of the following integral. So I claim this is true. So I consider this integral. Instead of integrating over a box, now I'm integrating this expression. OK. So why is this true? Well, one of the directions is easy to see, because the right-hand side is strictly an enlargement of the left-hand side. So by taking u to be the indicator function of S and v to be the indicator of function of T, you see that the right-hand side, in fact, includes the left-hand side in terms of what you are allowed to do. But what about the other direction? So for the other direction, the main thing is to notice that the integral or the integrand, what's inside this integral, is bilinear in the values of u and v. So in particular, the extrema of this integral, as you allow to vary u and v, they are obtained. So they are obtained for u and v, taking values in the endpoints 0, comma, 1. It may be helpful to think about the discrete setting, when, instead of this integral, you have a matrix and two vectors multiplied from left and right. And you had to decide, what are the coordinates of those vectors? It's a bilinear form. How do you maximize it or minimize it? You have to change every entry to one of its two endpoints. Otherwise, it can never be-- you never lose by doing that. OK, so think about it. So this is not difficult once you see it the right way. But now, we have this cut norm expressed over not sets, but over bounded functions. And now I'm ready to prove the counting lemma. And instead of writing down the whole proof for general H, let me write down the calculation that illustrates this proof for triangles. And the general proof is the same once you understand how this argument works. And the argument works by considering the difference between these two F densities. And what I want to do is-- so this is some integral, right? So this is this integral, which I'll write out. So we would like to show that this quantity here is small if u and w are close in cut norm. So let's write this integral as a telescoping sum where the first term is obtained by-- so by this, I mean w of x, comma, y minus u of x, comma, y. And then the second term of the telescoping sum-- so you see what happens. I change one factor at a time. 
And finally, I change the third factor. So this is the identity. If you expand out all of these differences, you see that everything intermediate cancels out. So it's a telescoping sum. But now I want to show that each term is small. So how can I show that each term is small? Look at this expression here. I claim that for a fixed value of z-- so imagine fixing z. And let x and y vary in this integral. It has the form up there, right? If you fix z, then you have this u and v coming from these two factors. And they are both bounded between 0 and 1. So for a fixed value of z, this is at most w minus u-- the cut norm difference between w and u in absolute value. So if I left z vary, it is still bounded in absolute value by that quantity. So therefore each is bounded by w minus u cut norm in absolute value. Add all three of them together. We find that the whole thing is bounded in absolute value by 3 times the cut normal difference. OK, and that finishes the proof of the counting lemma. For triangles, of course, if you have general H, then you just have more terms. You have a longer telescoping sum, and you have this bound. OK. So this is a counting lemma. And I claim that it's exactly the same proof as the second proof of the counting lemma that we did when we discussed Szemerédi's regularity lemma and this counting lemma. Any questions? Yeah. AUDIENCE: Why did it suffice to prove over the [INAUDIBLE]?? PROFESSOR: OK. So let me answer that in a second. So first, this should be H, not F. OK, so your question was, up there, why was it sufficient to prove this version instead of that version? Is that the question? AUDIENCE: Yeah. PROFESSOR: OK. Suppose I prove it for this version. So I know this is true. Now I take infimum of both sides. So now I consider infimum of both sides. So then this is true, right? Because it's true for every phi. But the left-hand side doesn't change, because the F density in a relabeling of the vertices, it's still the same quantity, whereas this one here is now that. All right. So what we see as a corollary of this counting lemma is that if you are a Cauchy sequence with respect to the cut distance, then the sequence is automatically convergent. So recall the definition of convergence. Convergence has to do with F densities converging. And if you have a Cauchy sequence, then the F densities converge. And also, a related but different statement is that if you have a sequence wn that converges to w in cut distance, then it implies that wn converges to w in the sense as defined for F densities. So qualitatively, what the counting lemma says is that the cut norm is stronger than the notion of convergence coming from subgraph densities. So this is one part of this regularity method, so the counting lemma. Of course, the other part is the regularity lemma itself. So that's the next thing we'll do. And it turns out that we actually don't need the full strength of the regularity lemma. We only need something called a weak regularity lemma. What the weak regularity lemma says is-- I mean, you still have a partition of the vertices. So let me now state it for graphons. So for a partition p-- so I have a partition of the vertex set-- and a symmetric, measurable function w-- I'm just going to omit the word "measurable" from now on. Everything will be measurable. What I can do is, OK, all of these assets are also measurable. 
I can define what's known as a stepping operator that sends w to this object, w sub p, obtained by averaging over the steps si cross sj and replacing that graphon by its average over each step. Precisely, so I obtain a new graphon, a new symmetric, measurable function, w sub p, where the value on x, comma, y is defined to be the following quantity-- if x, comma, y lies in si cross sj. So pictorially, what happens is that you look at your graphon. There's a partition of the vertex set, so to speak, the interval. Doesn't have to be a partition into intervals, but for illustration, suppose it looks like that. And what I do is I take this w, and I replace it by a new graphon, a new symmetric, measurable function, w sub p, obtained by averaging. Take each box. Replace it by its average. Put that average into the box. So this is what w sub p is supposed to be. Just a few minor technicalities. If this denominator is equal to 0, let's ignore the set. I mean, then you have a zero measure set, anyway, so we ignore that set. So everything will be treated up to measure zero, changing the function on measure zero sets. So it doesn't really matter if you're not strictly allowed to do this division. OK. So this operator plays an important role in the regularity lemma, because it's how we think about partitioning, what happens to a graph under partitioning. It has several other names if you look at it from slightly different perspectives. So you can view it as a projection in the sense of Hilbert space. So in the Hilbert space of functions on the unit square, the stepping operator is a projection unto the subspace of constants, subspace of functions that are constant on each step. So that's one interpretation. Another interpretation is that this operation is also a conditional expectation. If you know what a conditional expectation actually is in the sense of probability theory, so then that's what happens here. If you view 0, 1 squared as a probability space, then what we're doing is we're doing conditional expectation relative to the sigma algebra generated by these steps. So these are just a couple of ways of thinking about what's going on. They might be somewhat helpful later on if you're familiar with these notions. But if you're not, don't worry about it. Concretely, it's what happens up there. OK. So now let me state the weak regularity lemma. So the weak regularity lemma is attributed to Frieze and Kannan, although their work predates the language of graphons. So it's stated in the language of graphs, but it's the same proof. So let me state it for you both in terms of graphons and in graphs. What it says is that for every epsilon and every graphon w, there exists a partition denoted p of the 0, 1 interval. And now I tell you how many sets there are. So it's a partition into-- so not a tower-type number of parts, but only roughly an exponential number of parts-- 4 to the 1 over epsilon squared measurable sets such that if we apply the stepping operator to this graphon, we obtain an approximation of the graphon in the cut norm. So that's the statement of the weak regularity lemma. There exists a partition such that if you do this stepping, then you obtain an approximation. So I want you to think about what this has to do with the usual version of Szemerédi's regularity lemma that you've seen earlier. So hopefully, you should realize, morally, they're about the same types of statements. But more importantly, how are they different from each other? 
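In symbols, the stepping operator and the weak regularity lemma just stated read as follows, writing lambda for Lebesgue measure.

```latex
W_{\mathcal{P}}(x, y) = \frac{1}{\lambda(S_i)\,\lambda(S_j)} \int_{S_i \times S_j} W(u, v)\, du\, dv
\quad \text{for } (x, y) \in S_i \times S_j;
\qquad
\text{for every } \epsilon > 0 \text{ there is a partition } \mathcal{P} \text{ of } [0,1]
\text{ into at most } 4^{1/\epsilon^2} \text{ measurable parts with } \|W - W_{\mathcal{P}}\|_{\square} \le \epsilon.
```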
And now let me state a version for graphs, which is similar but not exactly the same as what we just saw for graphons. So let me state it. For graphs, the weak regularity lemma says the following. First, let me define what it means for a partition p of the vertex set to be weakly epsilon regular. It is the case that whenever I look at two vertex subsets, A and B, of the vertex set of G, the number of edges between A and B is what you should expect based on the density information that comes out of this partition. Namely, I sum over all pairs of parts of the partition, look at how many vertices of A lie in one part and how many vertices of B lie in the other part, and then multiply by the edge density between those two parts. So that's your predicted value based on the data that comes out of the partition. So this is the actual number of edges, and this is the predicted number of edges. And those two numbers should be similar to each other, differing by at most epsilon n squared, where n is the number of vertices. So this is the definition of what it means for a partition to be weakly epsilon regular. So it's important to think about why this is weaker. It's called weak, right? So why is it weaker than the notion of epsilon regularity? Previously, we had an epsilon-regular partition in the definition of Szemerédi's regularity lemma. And here, the notion of weakly epsilon regular. So why is this a lot weaker? It is not saying that individual pairs of parts are epsilon regular. And eventually, we're going to have this number of parts-- so I'll state a theorem in a second-- so the sizes of the parts are much smaller than an epsilon fraction. But what this weak notion of regularity says is that if you look at it globally-- so not looking at specific pairs of parts, but looking at it globally-- then this partition is a good approximation of what's going on in the actual graph. OK, so it's really worth thinking about what's the difference between this weak notion and the usual notion. But first, let me state this regularity lemma. So the weak regularity lemma for graphs says that for every epsilon and every graph G, there exists a weakly epsilon-regular partition of the vertex set of G into at most 4 to the 1 over epsilon squared parts. Now, you might wonder why Frieze and Kannan came up with this notion of regularity. It's a weaker result if you don't care about the bounds, because an epsilon-regular partition will automatically be weakly epsilon regular-- maybe with small changes of epsilon if you wish, but basically, this is a weaker notion compared to what we had before. But of course, the advantage is that you have a much more reasonable number of parts. It's not a tower. It's just a single exponential. And this is important. And their motivation was a computer science and algorithmic application. So I want to take a brief detour and mention why you might care about weakly epsilon-regular partitions. In particular, the problem of interest is approximating something called the max cut. The max cut problem asks you, given a graph G, to find the maximum over all subsets of vertices of the number of edges between that set and its complement. That's called a cut. I give you a graph, and I want to find a set S that has as many edges going across as possible. This is an important problem in computer science, an extremely important problem.
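In symbols, the two notions just introduced: a partition of the vertex set of an n-vertex graph into parts V_1 through V_k is weakly epsilon regular if the first condition below holds, and the max cut is the quantity on the right.

```latex
\Big| e(A, B) - \sum_{i,j=1}^{k} d(V_i, V_j)\, |A \cap V_i|\, |B \cap V_j| \Big| \le \epsilon n^2
\quad \text{for all } A, B \subseteq V(G);
\qquad
\mathrm{MAXCUT}(G) = \max_{S \subseteq V(G)} e\big(S,\, V(G) \setminus S\big).
```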
And the status of this problem is that it is known to be difficult to get it even to within 1%. The best algorithm is due to Goemans and Williamson. It's an important algorithm that was one of the foundational algorithms in semidefinite programming-- the words "semidefinite programming" came up earlier in this course when we discussed Grothendieck's inequality. So they came up with an approximation algorithm-- here, I'm only talking about polynomial time, so efficient algorithms-- an approximation algorithm with approximation ratio around 0.878. So one can obtain a cut that is within basically 13% of the maximum. So it's an approximation algorithm. However, it is known that the problem is hard in the sense of complexity theory: it is hard to approximate beyond the ratio 16 over 17, which is around 0.94. And there is an important conjecture in computer science called the unique games conjecture such that, if that conjecture were true, then it would be hard to approximate beyond the Goemans-Williamson ratio. So this indicates the status of this problem. It is difficult to do an epsilon approximation. But if the graph I give you is dense-- "dense" meaning a quadratic number of edges, where n is the number of vertices-- then it turns out that the regularity-type algorithms-- so that theorem combined with the algorithmic versions-- allow you to get polynomial time approximation schemes. So one can approximate the max cut up to epsilon n squared additive error in polynomial time. So in particular, if I'm willing to lose 0.01 n squared, then there is an algorithm to approximate the size of the max cut. And that algorithm basically comes from-- without giving you any details whatsoever-- first finding a weak regularity partition. So the partition breaks the set of vertices into some number of pieces. And now I search over all possible ratios to divide each piece. So there is a bounded number of parts. For each one of those, I decide, do I cut this up half-half? Do I cut it up 1/3, 2/3, and so on? And those numbers alone, because of this definition of weakly epsilon regular-- once you know what the intersection of the candidate set and its complement is with each individual part-- basically determine the number of edges. So I can approximate the size of the max cut using a weakly epsilon-regular partition. So that was the motivation for these weakly epsilon-regular partitions, at least the algorithmic application. OK. Any questions? OK. So let's take a quick break. And then afterwards, I want to show you the proof of the weak regularity lemma. All right. So let me start the proof of the weak regularity lemma. And the proof is by this energy increment argument. So let's see what this energy increment argument looks like in the language of graphons. So energy now means L2, so L2 energy increment. The statement of this lemma is that if you have w, a graphon, and p, a partition of the 0, 1 interval-- always into measurable pieces; I'm not going to even write it, it's always measurable pieces-- such that the cut norm difference between w and w averaged over the steps of p is bigger than epsilon-- so this is the notion of being not weakly epsilon regular-- then there exists a refinement, p prime, of p, dividing each part of p into at most four parts, such that the L2 norm increases by more than epsilon squared under this refinement.
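In symbols, the energy increment lemma just stated:

```latex
\text{If } \|W - W_{\mathcal{P}}\|_{\square} > \epsilon, \text{ then there is a refinement } \mathcal{P}' \text{ of } \mathcal{P},
\text{ splitting each part into at most 4 parts, with } \|W_{\mathcal{P}'}\|_2^2 > \|W_{\mathcal{P}}\|_2^2 + \epsilon^2.
```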
So it should be similar. It should be familiar to you, because we have similar arguments from Szemerédi's regularity lemma. So let's see the proof. Because you have violation of weak epsilon regularity, there exists sets S and T, measurable subsets of 0, 1 interval, such that this integral evaluated over S cross T is more than epsilon in absolute value. So now let me take p prime to be the common refinement of p by introducing S and T into this partition. So throw S and T in and break everything according to S and T. And so each part becomes at most four subparts. So that's the at most four subparts. I now need to show that I have an energy increment. And to do this, let me first perform the following calculation. So remember, this symbol here is the inner product obtained by multiplying and integrating over the entire box. I claim that that inner product equals to the inner product between wp and wp prime, because what happens here is we are looking at a situation where wp prime is constant on each part. So when I do this inner product, I can replace w by its average. And likewise, over here, I can also replace it by its average. And you end up having the same average. And these two averages are both just what happens if you do stepping by p. You also have that w has inner product with 1 sub S cross T the same as that of p prime by the same reason, because over S cross T. So S cross T is a union of the parts of p prime. So S is union of parts of p prime. OK. So let's see. With those observations, you find that-- so this is true. This is from the first equality. So now let me draw you a right triangle. So you have a right angle, because you have an inner product that is 0. So by Pythagorean theorem, so what is this hypotenuse? So you add these two vectors. And you find out this wp prime. So by Pythagorean theorem, you find that the L2 norm of wp prime equals to the L2 norm of the sum of the L2 norm squares of the two legs of this right triangle. On the other hand, this quantity here. So let's think about that quantity over there. It's an L2 norm. So in particular, it is at least this quantity here, which you can derive in one of many ways-- for example, by Cauchy-Schwarz inequality or go from L2 to L1 and then pass down to L1. So this is true. So let's say by Cauchy-Schwarz. But this quantity here, we said was bigger than epsilon. So as a result, this final quantity, this L2 norm of the new refinement, increases from the previous one by more than epsilon squared. OK. So this is the L2 energy increment argument. I claim it's the same argument, basically, as the one that we did for Szemerédi's regularity lemma. And I encourage you to go back and compare them to see why they're the same. All right, moving on. So the other part of regularity lemma is to iterate this approach. So if you have something which is not epsilon regular, refine it. And then iterate. And you cannot perceive more than a bounded number of times, because energy is always bounded between 0 and 1. So for every epsilon bigger than 0 and graphon w, suppose you have P0, a partition of 0, 1 interval into measurable sets. Then there exists a partition p that cuts up each part of P0 into at most 4 to the 1 over epsilon parts such that w minus w sub p is at most epsilon. So I'm basically restating the weak regularity lemma over there but with a small difference, which will become useful later on when we prove compactness. Namely, I'm allowed to start with any partition. 
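Returning to the increment step for a moment, the chain of (in)equalities used there can be recorded as follows, where S and T are the witnessing sets and P prime is the common refinement described above; this is a sketch of the computation in the lecture.

```latex
\|W_{\mathcal{P}'}\|_2^2 - \|W_{\mathcal{P}}\|_2^2
\;=\; \|W_{\mathcal{P}'} - W_{\mathcal{P}}\|_2^2
\;\ge\; \big\langle W_{\mathcal{P}'} - W_{\mathcal{P}},\, \mathbf{1}_{S \times T} \big\rangle^2
\;=\; \Big( \int_{S \times T} \big(W - W_{\mathcal{P}}\big) \Big)^2
\;>\; \epsilon^2.
```

The first equality is the Pythagorean identity (the difference W_{P'} minus W_P is orthogonal to W_P), the inequality is Cauchy–Schwarz since the indicator of S cross T has L2 norm at most 1, and the middle equality uses that S cross T is a union of cells of P prime, so W and W_{P'} have the same integral over it.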
Instead of starting with a trivial partition, I can start with any partition. This was also true when we were talking about Szemerédi's regularity lemma, although I didn't stress that point. That's certainly the case here. I mean, the proof is exactly the same with or without this extra P0; this extra P0 really plays an insignificant role. What happens, as in the proof of Szemerédi's regularity lemma, is that we repeatedly apply the previous lemma to obtain a sequence of partitions of the 0, 1 interval where, at each step, either we obtain some partition p sub i that is a good approximation of w in cut norm, in which case we stop, or the L2 energy increases by more than epsilon squared. And since the energy is always at most 1-- so it's always bounded between 0 and 1-- we must stop after at most 1 over epsilon squared steps. And if you calculate the number of parts, each part is subdivided into at most four parts at each step, which gives you the conclusion on the final number of parts. OK, so very similar to what we did before. All right. So that concludes the discussion of the weak regularity lemma. So basically the same proof, with a weaker conclusion and better quantitative bounds. The next thing, and the final thing I want to discuss today, is a new ingredient which we haven't seen before but that will play an important role in the proof of compactness-- in particular, the proof of the existence of the limit. And this is something where I need to discuss martingales. So a martingale is an important object in probability theory. It's a random sequence-- we'll look at discrete sequences, indexed by the non-negative integers. And a martingale is such a sequence where, if I'm interested in the expectation of the next term, then even if you know all the previous terms-- so you have full knowledge of the sequence up to time n, and you want to predict in expectation what the next term is-- you cannot do better than simply predicting the last term that you saw. So this is the definition of a martingale. Now, to do this formally, I need to talk about filtrations and whatnot in measure theory. But let me not do that. OK, so this is how you should think about martingales, and here are a couple of important examples of martingales. The first one comes from-- the reason why these things are called martingales is that there is a gambling strategy which is related to such a sequence. Let's say you consider a sequence of fair coin tosses. So here's what we're going to do. Suppose we consider a betting strategy, and x sub n is equal to your balance at time n. And suppose that we're looking at a fair casino where the expectation of every game is exactly 0. Then this is a martingale. So imagine you have a sequence of coin flips, and you win $1 for each head and lose $1 for each tail. Say at time five you have $2 in your pocket. Then at time five plus 1, you expect to also have that many dollars. It might go up. It might go down. But in expectation, it doesn't change. Is there a question? OK. So they're asking, is there some independence condition required? And the answer is no. There's no independence condition that is required. The definition of a martingale is just that, even with complete knowledge of the sequence up to a certain point, the difference going forward is 0 in expectation.
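In symbols, suppressing the measure-theoretic setup with filtrations, the defining property of a martingale is:

```latex
\mathbb{E}\big[\, X_{n+1} \mid X_0, X_1, \ldots, X_n \,\big] = X_n \qquad \text{for all } n \ge 0.
```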
OK, so here's another example of a martingale, which actually turns out to be more relevant to our use-- namely, that if I have some hidden-- think of x as some hidden random variable, so something that you have no idea. But you can observe it at time n based on information up to time n. So for example, suppose you have no idea who is going to win the presidential election. And really, nobody has any idea. But as time proceeds, you make an educated guess based on the information that you have, all the information you have up to that point. And that information becomes a larger and larger set as time moves forward. Your prediction is going to be a random variable that goes up and down. And that will be a martingale, because-- so how I predict today based on what are all the possibilities happening going forward, well, one of many things could happen. But if I knew that my prediction is going to, in expectation, shift upwards, then I shouldn't have predicted what I predict today. I should have predicted upwards anyway. OK. So this is another construction of martingales. So this also comes up. You could have other more pure mathematics-type explanations, where suppose I want to know what is the chromatic number of a random graph. And I show you that graph one edge at a time. You can predict the expectation. You can find the expectation of this graph's statistic based on what you've seen up to time n. And that sequence will be a martingale. An important property of a martingale, which is known as the martingale convergence theorem-- and so that's what we'll need for the proof of the existence of the limit next time-- says that every bounded martingale-- so for example, suppose your martingale only takes values between 0 and 1. So every bounded martingale converges almost surely. You cannot have a martingale which you expect to constantly go up and down. So I want to show you a proof of this fact. Let me just mention that the bounded condition is a little bit stronger than what we actually need. From the proof, you'll see that you really only need them to be L1 bounded. It's enough. And more generally, there is a condition called uniform integrability, which I won't explain. All right. OK. So let me show you a proof of the martingale convergence theorem. And I'm going to be somewhat informal and somewhat cavalier, because I don't want to get into some of the fine details of probability theory. But if you have taken something like 18.675 probability theory, then you can fill in all those details. So I like this proof, because it's kind of a proof by gambling. So I want to tell you a story which should convince you that a martingale cannot keep going up and down. It must converge almost surely. So suppose x sub n doesn't converge. OK, so this is why I say I'm going to be somewhat cavalier with probability theory. So when I say this doesn't converge, I mean a specific instance of the sequence doesn't converge or some specific realization. If it doesn't converge, then there exists a and b, both rational numbers between 0 and 1, such that the sequence crosses the interval a, b infinitely many times. So by crossing this interval, what I mean is the following. OK. So there's an important picture which will help a lot in understanding this theorem. So imagine I have this time n, and I have a and b. So I have this martingale. It's realization curve will be like that. So that's an instance of this martingale. And by crossing, I mean a sequence that-- OK, so here's what I mean by crossing. 
I start below a and-- let me use a different color. So I start below a, and I go above b and then wait until I come back below a. And I go above b. Wait until I come back. So do like that. Like that. So I start below a until the first time I go above b. And then I stop that sequence. So those are the upcrossings of this martingale. So upcrossing is when you start below a, and then you end up above b. So if you don't converge, then there exists such a and b such that there are infinitely many such crossings. So this is just a fact. It's not hard to see. And what we'll show is that this doesn't happen except with probability 0. So we'll show that this occurs with probability 0. And because there are only countably many rational numbers, we find that x sub n converges with probability 1. So these are upcrossings. So I didn't define it, but hopefully you understood from my picture and my description. And let me define by u sub n to be the number of upcrossings up to time n, so the number of such upcrossings. Now let me consider a betting strategy. Basically, I want to make money. And I want to make money by following these upcrossings. OK. So every time you give me a number and-- so think of this as the stock market. So it's a fair stock market where you tell me the price, and I get to decide, do I want to buy? Or do I want to sell? So consider the betting strategy where at any time, we're going to hold either 0 or 1 share of the stock, which has these moving prices. And what we're going to do is if xn is less than a, is less than the lower bound, then we're going to buy and hold, meaning 1, until the first time that the price reaches above b and then sell as soon as the first time we see the price goes above b. So this is the betting strategy. And it's something which you can implement. If you see a sequence of prices, you can implement this strategy. And you already hopefully see, if you have many upcrossings, then each upcrossing, you make money. Each upcrossing, you make money. And this is almost too good to be true. And in fact, we see that the total gain from this strategy-- so if you start with some balance, what you get at the end-- is at least this difference from a to b times the number of upcrossings. You might start somewhere. You buy, and then you just lose everything. So there might be an initial cost. And that cost is bounded, because we start with a bounded martingale. So suppose the martingale is always between 0 and 1. We start with a bounded martingale. But on the other hand, there is a theorem about martingales, which is not hard to deduce from the definition, that no matter what the betting strategy is, the gain at any particular time must be 0 in expectation. So this is just the property of the martingale. So 0 equals the expected gain, which is at least b minus a times the expected number of upcrossings minus 1. And thus the expected number of upcrossings up to time n is at most 1 over b minus a. Now, we let n go to infinity. And let u sub infinity be the total number of upcrossings. By the monotone convergence theorem in this limit, the limit of these u sub n's, it can never go down. It's always weakly increasing. It converges to the expectation of the total number of upcrossings. So now, in particular, you know that the total number of upcrossings is at most some finite number. So in particular, the probability that you have infinitely many crossings is 0. 
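To see the betting-strategy bookkeeping in action, here is a small Python sketch (my own illustration, with a made-up price sequence taking values between 0 and 1): it runs the buy-below-a, sell-above-b strategy on one realization and checks, path by path, that the total gain is at least b minus a times the number of completed upcrossings, minus 1 for a possible open position at the end.

import random

def run_upcrossing_strategy(prices, a, b):
    """Hold 0 or 1 share: buy once the price drops below a, sell once it rises above b."""
    holding, buy_price, gain, upcrossings = False, None, 0.0, 0
    for p in prices:
        if not holding and p < a:
            holding, buy_price = True, p
        elif holding and p > b:
            holding = False
            gain += p - buy_price
            upcrossings += 1
    if holding:                       # a position still open at the end
        gain += prices[-1] - buy_price
    return gain, upcrossings

# A made-up price path confined to [0, 1], so any open position loses at most 1.
random.seed(1)
x, prices = 0.5, []
for _ in range(10000):
    x = min(1.0, max(0.0, x + random.choice((0.01, -0.01))))
    prices.append(x)
a, b = 0.4, 0.6
gain, ups = run_upcrossing_strategy(prices, a, b)
print(gain, ups, (b - a) * ups - 1)   # gain is at least (b - a) * ups - 1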
So, to wrap up: with probability 0, you cross infinitely many times, which proves the claim over there and which concludes the proof of the claim that the martingale converges almost surely. OK, so that proves the martingale convergence theorem. So next time, we'll combine everything that we did today to prove the three main theorems that we stated last time on graph limits.
MIT_18217_Graph_Theory_and_Additive_Combinatorics_Fall_2019
22_Structure_of_set_addition_II_groups_of_bounded_exponent_and_modeling_lemma.txt
YUFEI ZHAO: The goal for the next few lectures is to prove Freiman's theorem, which we discussed last time. And so we started with this tool that we proved called the Plunnecke-Ruzsa inequality, which tells us that if you have a set A now in an arbitrary abelian group, and if A has controlled doubling, it has bounded doubling, then all the further iterated sumsets have bounded growth, as well. So this is what we proved last time. And so we're going to be using the Plunnecke-Ruzsa inequality many times. But there are some other tools I need to tell you about. So the next tool is known as the Ruzsa covering lemma. All right. So let me first give you the statement, and then I explain some intuition. I think the technique is more important than the statement. But in any case, here's what it says. If you have X and B-- they're subsets of some arbitrary abelian group-- if you have the inequality that X plus B is at most K times the size of B-- so the size of X plus B is at most K times the size of B-- then there exists some subset T of X, with the size of T being, at most, K, such that X is contained in T plus B minus B. So that's the statement of the Ruzsa covering lemma. So let me try to explain what it is about. So the idea here is that, if you're in a situation where if it looks like-- so you should think of B as a ball. So if it looks like X plus B might be coverable by K translates of B-- so here, you're supposed to think of B like a ball in the metric space-- then X is actually coverable. So if it looks like, meaning just by size alone, by size info alone-- so just based on the size, if it looks like X plus B might be coverable by K different translates of B, then actually X is coverable, but you have to use slightly larger balls, by K copies of B minus B. So you should think of B minus B as slightly larger balls than B itself. So if B were an actual ball in Euclidean space, then B minus B is the same ball with twice the radius. And so the Ruzsa covering lemma is a really important tool. The proof is not very long. And it's important to understand the idea of this proof. So this is a proof not just in additive combinatorics but something that happens-- it's a standard idea in analysis. So it's very important to understand this idea. And the key idea-- here, I think the proof is more important than the statement up there-- the key idea here is that if you want to produce a covering, one way to produce a covering is to take a maximal packing. A maximal packing with balls, for instance, implies a covering with balls twice as large. So let me illustrate that with a picture. Suppose you have some space that I want to cover. But you get to use, let's say, unit balls. So how can I make sure I can cover the space using unit balls? And I don't want to use too many unit balls. So what you can do is, let me start with a maximal set of disjoint balls, so with centers-- so these are half-unit balls, so the radius is 1/2. So I put in as many as I can so that I cannot put in any more. So that's what maximal means-- maximal here doesn't mean the maximum number. Although, if you put in the maximum number, that's also OK. But maximal means that I just cannot put in any more balls that are not overlapping. If I have this configuration, now what I do is I double the radii of all the balls. So whoever takes today's notes will have a fun time drawing this picture. So this has to be a covering of the original space. Because if you had missed some point, I could have put a blue ball in. Yes?
AUDIENCE: What if a space has some narrow portion? YUFEI ZHAO: Question-- what if a space has some narrow portion? It doesn't matter. If you formulate this correctly-- if you miss some point-- so here, it depends on how you formulate this covering. The point is that, if you take a maximal set of points, when you expand, you have to cover the whole space. Because if you missed some point-- so imagine if you had-- for example, suppose you missed-- suppose you had missed some point over here. Then I could have put an extra ball in. So if you had missed some point over here, that means that I should have been able to put in that ball there initially. So this is a very simple idea, but a very powerful idea. And it comes up all the time in analysis and geometry. And it also comes up here. So let's do the actual proof. So let me let T be a subset of X-- a maximal subset of X-- such that the sets little t plus B are disjoint for all elements little t of the set big T. So it's like this picture. I pick a subset of X so that if I center balls around these t's, then these translates of B are disjoint. Then you put in a maximal such set. So due to the disjointness, we find that the product of the sizes of T and B equals the size of the sumset T plus B, because there are no overlaps. But T plus B has size at most the size of X plus B, because T is a subset of X. But also, from the assumption we knew that the size of X plus B is upper bounded by the size of B times K. So in particular, we get that the size of T is at most K. So in other words, over here, the number of blue balls you can control simply by the volume. Now, since T is maximal, we have that for every little x, there exists some little t such that the translate of B given by x intersects one of my chosen translates. Because if this were not true, then I could have put in an extra translate of B. So in other words, there exist two elements, b and b prime, such that t plus b equals x plus b prime. And hence, x lies in little t plus B minus B, which implies that the set X lies in T plus B minus B. OK. So that's the Ruzsa covering lemma. So it's an execution of this idea I mentioned earlier that a maximal packing implies a good covering. Any questions? Right. So now that we have this tool, we can prove an easier version of Freiman's theorem where, instead of working in the integers, we're going to work in the finite field model. So usually, it's good to start with finite field models. Things are a bit cleaner. And in this case, it actually only requires a subset of the tools that we need for the full theorem. So instead of working in the finite field model, we're going to be working in something just slightly more general. But it's the same proof. So we're going to be working in a group of bounded exponent. Freiman's theorem in groups of bounded exponent. And the word "exponent" in group theory means the following-- so the exponent of an abelian group is the smallest positive integer, if it exists. So the smallest positive integer r so that rx equals 0 for all x in the group. So for example, if you are working in Fp to the n, then the exponent is p. So if you add any element to itself r times, then the element vanishes. The word "exponent" comes from-- I mean, you can define the same thing for non-abelian groups, in which case you should write this expression using the exponent. Instead of addition, you have multiplication. So that's why it's called exponent. So the name has stuck, even though we're still working in the additive setting.
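Before going further with groups of bounded exponent, let me loop back to the Ruzsa covering lemma for a moment, because the maximal-packing argument is easy to mimic computationally. Here is a minimal Python sketch (my own illustration with made-up sets, not something from the lecture): greedily build a maximal T inside X whose translates t + B are disjoint, and then verify that X is contained in T + B - B, with the size of T controlled by the size of X + B over the size of B.

def sumset(S, T):
    return {s + t for s in S for t in T}

def ruzsa_cover(X, B):
    """Greedily pick a maximal T inside X with the translates t + B pairwise disjoint."""
    T, used = [], set()
    for x in sorted(X):
        translate = {x + b for b in B}
        if translate.isdisjoint(used):
            T.append(x)
            used |= translate
    return T

# A made-up example in the integers, with B playing the role of a "ball".
B = set(range(10))
X = {0, 3, 7, 25, 26, 40, 41, 42, 90}
T = ruzsa_cover(X, B)
cover = sumset(sumset(set(T), B), {-b for b in B})   # this is T + B - B
print(X <= cover)                                    # True: X is covered
print(len(T), len(sumset(X, B)) / len(B))            # |T| is at most |X + B| / |B|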
We're going to write this, so the angle brackets, to mean the subgroup generated by the subset A, where A is a subset of the group elements. And so the exponent of a group, then, is equal to the maximum number of-- so if you pick a group element, look at how many group elements does it generate. So it's equal to the max. All right. I mean, the example you can have in mind is F2 to the n, which has exponent 2. So in general, we're going to be looking at a special case in a group with bounded exponent. And Freiman's theorem, in this case, is due to Ruzsa, who showed that if you have a finite subset in an abelian group with exponent r-- so a finite exponent-- if A has doubling constant at most K, then what do we want to say? We want to say, just like in Freiman's theorem, that if A has bounded doubling, then it is a large portion of some structured set. And here "structured set" means subgroup. Well, if you're going to be looking at subgroups that contain A, you might as well look at the subgroup generated by A. So the claim, then, is that the subgroup generated by A is only a constant factor bigger than A itself. So let me say it again. If you have a set A in a group of bounded exponent, and A has bounded doubling, then conclusion is that A is a large proportion of some subgroup. Conversely-- we saw last time-- if you take a subgroup, it has doubling constant 1, so if you take a positive proportional constant proportion of the subgroup, it also has bounded doubling. And this statement is in some sense the converse of that observation. This bound here is not optimal. I'll make some comments about what we know about these bounds. But for now, just view this number as a constant. Any questions about the statement? All right. So let's prove this Ruzsa's theorem, giving you Freiman's theorem in groups with bounded exponent. So we're going to be applying the tools we've seen so far, starting with Plunnecke-Ruzsa. So by Plunnecke-Ruzsa inequality, we find that-- so then I'm going to write down some expression. You may wonder, why this expression? Because we're going to apply covering lemma in a second. So this set by Plunnecke-Ruzsa, its size is bounded, because the set can also be written as 3A minus A, so its size is bounded by K to the fourth times the size of A. And so Plunnecke-Ruzsa is a very nice tool to have. It basically tells you, if you have bounded doubling, then all the other iterated sums are essentially controlled in size. And then we're using that here. OK. So now we're in the setting where we can apply the Ruzsa covering lemma. So by covering lemma, we're going to apply the covering lemma-- so using the notation earlier-- with X equal to 2A minus A and B equal to A. By covering lemma, there exists some T being a subset of 2A minus A, with T not too large-- because of our earlier estimate-- and such that 2A minus A is contained in T plus A minus A. So it's easy to get lost in the details. But what's happening here is that I start with A, and I'm looking at how big these iterated growing sumsets can be. And if I keep on doing this operation, like if I just apply Plunnecke-Ruzsa, I can't really control the iterated sums by an absolute bound. This bound will keep growing if I take bigger iterations. But Ruzsa covering lemma gives me another way to control the bounded iterations. It says there is some bounded set T such that this iterated sum is nicely controlled. So let me iterate this down even further. We're going to iterate the containment by adding A to both sides. 
And we obtain that 3A minus A is contained in T plus 2A minus A. But now, 2A minus A was contained in T plus A minus A. So we get 2T plus A minus A. So now we've gained quite a bit, because we can make this iteration go up, but not at the cost of iterating A but at the cost of iterating T. But T is a bounded size. T is bounded size, so T will have very nice control. We'll be able to very nicely control the iterations of T. So if we keep going, we find that for all integer n, positive integer n, n plus 1 A minus A is contained in nT plus A minus A. But for every integer n, the iterated sums of T are contained in the subgroup generated by T. So therefore, if we take n to be as large as you want, we see that the left-hand side eventually becomes the subgroup generated by A, and the right-hand side does not depend on n. So you have this containment over here. We would like to estimate how large the subgroup generated by A is. So we look at that formula, and we see that the size of the subgroup generated by T-- and here is where we're going to use the assumption that the group has bounded exponent. So think F2 to the n, if you will. So in F2 to the n, if I give you a set T, what's the subspace spanned by T? What's the maximum possible size? It's, at most, 2 raised to the size of T. So in general, you also have that the subgroup generated by T has size at most r raised to the size of T in a group of exponent r, which we can control, because T has bounded size. And the second term, A minus A, so we also know how to control that by Plunnecke-Ruzsa. Therefore, putting these two together, we see that the size up here is at most r to the K to the 4th, times K squared, times the size of A, which is the bound that we claimed. Any questions? So the trick here is to only use Plunnecke-Ruzsa somehow a bounded number of times. If you use it too many times, this bound blows up. But you use it only a small number of times, and then you use the Ruzsa covering lemma so that you get this bounded set T that I can iterate. Let me make some comments about the bounds that come out of this proof. So you get this very clean bound, following this proof. But what about examples? So what kinds of examples can you think of where you have a set of bounded doubling, but the set generated, the group generated by that set, is potentially large? So even in F2 to the n, so if A is an independent set-- so a basis or a subset of a basis, for instance-- then K-- so A is an independent set, so all the pairwise sums are distinct-- so K is about the size of A/2. That's the doubling constant. Whereas, the group generated by A has size 2 to the size of A, which is around 2 to the 2K times the size of A. OK. So you see that you do need some exponential blow-up from K to this constant over here. And it turns out, that's more or less correct. So the optimal constant for F2 to the n is now known very precisely. And so if you give me a real value of K, then I can tell you there are some recent results that tell you exactly what is the optimal constant you can put in front of the A. So very precise. But asymptotically, it looks like 2 to the 2K divided by K. So that's what it looks like. So this example is basically correct. For general r, we expect a similar phenomenon. So Ruzsa conjectured that-- in the hypothesis of the theorem, the constant you can take is only exponential in K, so r to some constant C times K. And it has been verified for some values of r, but not in general-- for some r, for example primes. OK. Any questions? All right.
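The independent-set example is easy to check by brute force. Here is a minimal Python sketch (my own illustration, not from the lecture): take A to be the standard basis of F_2^n, encoded as bitmasks so that addition is XOR, compute the doubling constant, and compare with the size of the subgroup-- here, the subspace-- generated by A.

n = 10
A = [1 << i for i in range(n)]              # the standard basis of F_2^n, as bitmasks

doubling = {a ^ b for a in A for b in A}    # A + A; addition in F_2^n is XOR
K = len(doubling) / len(A)

# The subgroup generated by A: close A under addition, starting from {0}.
span = {0}
for a in A:
    span |= {v ^ a for v in span}

print(len(A), K, len(span))   # K is about |A| / 2, while the span has size 2^n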
So this is a good milestone. So we've developed some tools, and we were able to prove an easier version of Freiman's theorem in a group of bounded exponent. And you can ask yourself, does this proof work in the integers? And well, literally no. Because if you look at this proof, this set here is infinite, unlike in the finite field setting. In the integers, well, that's not very good. So the strategy of Freiman's theorem, the proof of Freiman's theorem, is to start with the integers, and then try to, not work in the integers, but try to work in a smaller group. Even though you start in a maybe very spread out set of integers, I want to work in a much smaller group so that I can control things within that group. And this is an idea called modeling. So I have a big set and want to model it by something in a small group. So we're going to see this idea. But to understand what does it mean to have a good model for a set in the sense of additive combinatorics, I need to introduce the notion of Freiman homomorphisms. So one of the central philosophies across mathematics is that if you want to study objects, you should try to understand maps between objects and understand properties that are preserved under those maps. So if you want to study groups, I don't really care how you label your group elements-- by 1, 2, 3, or A, B, C. What I care about is the relationships. And those are the data that I care about. And then, of course, then you have concepts like group homomorphisms, group isomorphisms, that preserve all the relevant data. Similarly in any other area-- in geometry you have manifolds. You understand not specifically how they embed into space but what are the intrinsic properties. So we would like to understand what are the intrinsic properties of a subset of an abelian group that we care about for the purpose of additive combinatorics, and specifically for Freiman's theorem. And what we care about is what kinds of additive relationships are preserved. And Freiman homomorphisms capture that notion. So roughly speaking, we would like to understand maps between sets in possibly different groups-- in possibly different abelian groups-- that only partially preserve additive structure. So here's a definition. Suppose we have A and B, and they're subsets in possibly different abelian groups. Could be the same, but possibly different. And everything's written under addition, as usual. So we say that a map phi from A to B is a Freiman s-homomorphism. So that's the term-- Freiman s-homomorphism, sometimes also Freiman homomorphism of order s, so equivalently I can call it that, as well-- if the following holds. If we have the equation phi of a1 plus dot, dot, dot plus phi of a sub s equal to phi of a1 prime plus dot, dot, dot plus phi of a s prime, whenever a1 through a s and a1 prime through a s prime satisfy the equation a1 plus dot, dot, dot plus a s equal to a1 prime plus dot, dot, dot plus a s prime. OK. So that's the definition of a Freiman s-homomorphism. It should remind you of the definition of a group homomorphism, which completely preserves additive structure, let's say, between abelian groups. And for Freiman homomorphisms, I'm only asking you to partially preserve additive structure. So the point here is that if I only care about, let's say, pairwise sums, if that's the only data I care about, then Freiman homomorphisms preserve that data. To give you some-- OK, so one more thing.
If phi from A to B is, furthermore, a bijection, and both phi and phi inverse are Freiman s-homomorphisms, then we say that phi is a Freiman s-isomorphism. So it's not enough just to be a bijection, but it's a bijection and both the forward and the inverse maps are Freiman homomorphisms. So these are the definitions we're going to use. Let me give you some examples. So every group homomorphism is a Freiman homomorphism of every order. So group homomorphisms preserve all additive structure, and Freiman homomorphisms only partially preserve additive structure. A composition-- so if phi 1 and phi 2 are Freiman s-homomorphisms, then phi 1 composed with phi 2 is a Freiman s-homomorphism. So compositions preserve this property. And likewise, instead of homomorphisms, if you have isomorphisms, then that's also true, as well. So these are straightforward things to check. So a concrete example that shows you a difference between group homomorphisms and Freiman homomorphisms is, suppose you take an arbitrary map phi from a set that has no additive structure. So it's a four-element set, has no additive structure. And I map it to the integers, claim that this is a Freiman 2-homomorphism. So you can check. So whenever this is satisfied, but that's never non-trivially satisfied. So an arbitrary map here is a Freiman 2-homomorphism. And if furthermore-- so if you have, let's say, bijection between two sets, both having no additive structure, if it's a bijection, it's a Freiman isomorphism of here, order 2. Let me give you a few more examples. When you look at homomorphisms between finite groups, so you know that if you have a homomorphism and it's also a bijection, then it's an isomorphism. But that's not true for this notion of homomorphisms. So the natural embedding that sends the Boolean cube to the Boolean cube viewed as Z mod 2 to the n. So what's happening here? This is a part of a group homomorphism. And so if you look at Z to the n, and I do mod 2, and I restrict to this Boolean cube, that's the group homomorphism If I view this as a subset of a bigger group. So it is a Freiman homomorphisms of every order. And it's bijective. But it is not a Freiman 2-isomorphism. Because you have additive relations here that are not present over here. So if you read the definition, the inverse map, there are some additive relations here that are not preserved if you pull back. Here's another example that will be more relevant to our subsequent discussion. So the mod N map, which sends Z to Z mod N, so this is a group homomorphisms. So hence, it's a Freiman homomorphism of every order. But it's not-- OK, so if you look at this map, and even if I restrict to here, so it's not a Freiman isomorphism just like earlier. However-- let me go back to Z. So if A is a subset of integers with diameter less than N/s, then this map mod N maps A Freiman s-isomorphically onto its image. So even though mod N restricted to 1 through N is not a Freiman isomorphism of order 2, if I restrict to a subset that's, let's say, contained in some small interval, then all the additive structures are preserved. So let me show you why. So this is not too hard once you get your head around the definition. 
So indeed, if you have group elements a1 through as, a prime 1 through a prime s, and if they satisfy the equation-- so if they satisfy this equation, so we're trying to verify that it is a Freiman s-isomorphism, namely the inverse of this map is a Freiman s-homomorphism, so if they satisfy this equation, so this is satisfying this additive relation in the image, in Z mod N, then note that the left-hand side-- so all of these a's are contained in a small interval, because the diameter of the set is less than N/s. So if you look at how big a1 minus a1 prime can be, it's, at most-- or, it's less than N/s in size. So the left-hand side, in absolute value, viewed as an integer-- so the left-hand side is less than N in absolute value, since the diameter of A is less than N/s. So you have some number here, which is strictly less than N in absolute value, and it's 0 mod N, so it must be actually equal to 0 as an integer. So this verifies that the additive relations up to s-wise sums are preserved under the mod N map, if you restrict to a small interval. Any questions so far? So in additive combinatorics, we are trying to understand properties, specific additive properties. And the notions of Freiman homomorphism and Freiman isomorphism capture what specific properties we need to study and what are the maps that preserve those properties. And the next thing we will do is to understand this model lemma, the modeling lemma, that tells us that if you start with a set A with small doubling, initially A may be very much spread out in the integers-- it may have very large elements, very small elements, very spread out. But if A has small doubling, then I can model A inside a small group, such that all the relevant data, namely relative to these Freiman homomorphisms, are preserved under this model. So let's move on to the modeling lemma. The main message of the modeling lemma is that if A has small doubling, then A can be modeled-- and here, that means being Freiman isomorphic-- to a subset of a small group. So first as a warm-up, let's work in the finite field model, just to see what such a result looks like. And it contains most of the ideas, but it's much more clean. It's much cleaner than in the integers. So in the finite field model, specifically F2 to the n, what do we want to say? Suppose you have A, a subset of F2 to the n, and suppose that m is some number such that 2 to the m is at least as large as the size of sA minus sA. So remember, from Plunnecke-Ruzsa, you know that if A plus A is small, then this iterated sum is small. So suppose we have these parameters and sets. The conclusion is that A is Freiman s-isomorphic to some subset of F2 raised to m. So initially, A is in a potentially very large vector space, or it could be all over the place. And what we are trying to say here is that if A has small doubling, then by Plunnecke-Ruzsa, sA minus sA has size not too much bigger than A itself-- only a bounded constant times the size of A itself. So I can take an m so that the size of this group is only a constant factor larger than the size of A itself. So we're in a pretty small group. So we are able to model A, even though initially it sits inside a pretty large abelian group, by some subset in a pretty small group. So let's see how to prove this modeling lemma. So the finite field setting is much easier to-- it's not too hard to deal with, because we can look at linear maps. So the following are equivalent for linear maps phi, so for group homomorphisms.
So phi is a Freiman s-isomorphism when restricted to A. So here, phi, I'm going to let it be a linear map from F2 to the n to F2 to the m. The following are equivalent. So we would like phi to be a Freiman s-isomorphism when restricted to A. Because this means that when I restrict to A and I restrict to its image, it Freiman isomorphically maps onto the image. So what does that mean? So phi is already a homomorphism. So it's automatically a Freiman s-homomorphism. For it to be an s-isomorphism, it just means that there are no additional linear relations in the image that were not present earlier, which means that phi is injective on sA. So let's just think about the definition. And in the definition, if you know additionally that phi is a homomorphism, everything's much cleaner. It is also equivalent to saying that phi of x is non-zero for every non-zero element x of sA minus sA. So this is a very clean characterization of what it means to be a Freiman s-isomorphism when you are in an abelian group and you have linear maps or homomorphisms. So if we start by taking phi to be a uniformly random linear map-- so for example, you pick a basis, and you send each basis element to a uniformly random element-- then we find that if 2 to the m is at least the size of sA minus sA, then-- so let me call these properties 1, 2, 3-- then the probability that 3 is satisfied is positive. Because each element of sA minus sA-- I can also ignore 0-- so each non-zero element of sA minus sA violates this property with probability exactly 2 to the minus m, because everything is uniform. So if there are very few elements, and the space is large enough, then the third bullet is satisfied with positive probability. So you get a Freiman s-isomorphism. Any questions? To get this model, in this case in the finite field setting, it's not so hard. So you kind of project the whole set, even though initially it might be very spread out into a lot of dimensions. You project it down to a small-dimensional subspace randomly, and that works. So then with high probability, it preserves all the additive structures that you want, provided that you have small doubling. Now, let's look at what happens in Z. So in Z, things are a bit more involved. But the ideas-- actually, a lot of the ideas-- come from this proof, as well. So Ruzsa's modeling lemma tells us that if you have a set of integers-- always a finite set-- and integers s and N are such that N is at least the size of sA minus sA, then-- so it turns out you might not be able to model the whole set A. But it will be good enough for us to model a large fraction. So then there exists an A prime subset of A, with A prime being at least a 1 over s fraction of the original set. And A prime is Freiman s-isomorphic to a subset of Z mod N. So same message as before, with an extra ingredient that we did not see before. But the point is that if you have a set A with controlled doubling, then well, now you can take a large fraction of A that is Freiman isomorphic to a subset of a small group. This is small, because we only need N to exceed the size of sA minus sA, which is only a constant factor more than the size of A. Yeah? AUDIENCE: Is s greater than 2 comma N? YUFEI ZHAO: Sorry-- s is an integer that's at least 2, and N is some integer. Thank you. So in our application, s will be a constant. s will be 8. So think of s as some specific constant. Any questions? So let's prove this modeling lemma. We want to try to do some kind of random map. But it's not clear how to start doing a random map if you just start in the integers.
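Before the integer case, here is a quick computational sketch of the finite field warm-up we just did (my own illustration, with arbitrary made-up parameters): pick a uniformly random linear map from F_2^n to F_2^m, with 2^m at least the size of sA minus sA, and check how often no non-zero element of sA minus sA is sent to zero.

import random

def random_linear_map(n, m):
    """A uniformly random linear map F_2^n -> F_2^m, given by the images of the basis."""
    images = [random.randrange(1 << m) for _ in range(n)]
    def phi(x):
        out = 0
        for i in range(n):
            if (x >> i) & 1:
                out ^= images[i]
        return out
    return phi

def iterated_sumset(A, s):
    """The s-fold sumset sA; addition in F_2^n is XOR."""
    S = {0}
    for _ in range(s):
        S = {v ^ a for v in S for a in A}
    return S

random.seed(0)
n, s = 12, 2
A = {random.randrange(1 << n) for _ in range(8)}
sA = iterated_sumset(A, s)
D = {u ^ v for u in sA for v in sA}           # sA - sA (subtraction is also XOR)
m = max(1, (len(D) - 1).bit_length())         # smallest m with 2^m >= |sA - sA|
good = 0
for _ in range(200):
    phi = random_linear_map(n, m)
    if all(phi(d) != 0 for d in D if d != 0):
        good += 1
print(len(D), m, good / 200)   # a positive fraction of random maps work, as the union bound predicts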
So what we want to do is first place ourself in some group where we can consider random automorphisms. So we start by-- perhaps we're very wastefully choosing a prime q bigger than the maximum possible sA minus sA. And so just choose a large enough prime. I don't care how large you pick. q can be very, very large. Pick a prime. And now I work inside Z mod q. I noticed that if you make q large enough, then A sits inside Z mod q Freiman isomorphically, or s-isomorphically. So just pick q large enough so that you don't have to worry about any issues. So Yeah. So the mod q map from A to Z mod q-- so this is Freiman s-isomorphic-- onto its image. So let's now consider a sequence of maps. And we're going to denote the sequence like this. So we start with Z. That's where A originally sits. And maps to Z mod q-- so that was the first map that we saw. And now we want to do a random automorphism, kind of like the random map earlier. But in Z mod q, there are lots of nice random automorphisms, namely multiplication by some non-zero element. And finally, we can consider the representative map, where every element of Z mod q, I can associate to it a positive integer from 1 to q which agrees with the Z mod q. So the final step is not a group homomorphism. So we need to be more careful. So let me denote by phi this entire map. So from the beginning to the end, this composition I'll denote by phi. And lambda here is some element between 1 and q minus 1. Now, remember what we said earlier, that this map, this final map here, might not be a Freiman homomorphism, because there are some additive relations here that are not preserved over here. But if I restrict myself to a small interval, then it is a Freiman homomorphism if we restrict to that interval. If you restrict yourself to an interval, you cannot have extra relations over here, because they cannot-- the interval is small enough, you can't wrap around. So let's consider restrictions to small intervals. I start with A over here. So I want to restrict myself to some interval so that I still have lots of A in that restriction. And you can do this by pigeonhole. So by pigeonhole, for every lambda there exists some interval we'll denote by I sub lambda inside q. So the length of this interval will be at most q/s, such that if I look at the restriction of this interval, I pull it all the way back to the beginning then I still get a lot of elements of A. So A sub lambda, namely the elements of A whose map gets sent to this interval, has at least A/s elements. So for instance, you can chop up q into s different intervals. So one of them will have lots of elements that came from A. And this is why in the end we only get a large subset of the A. So we're going to forget about everything else and focus our attention on the set here. So thus, by our earlier discussion having to do with the final map being a Freiman s-homomorphism when you're working inside a short interval, we see that phi, if you restrict to this A sub lambda, is a Freiman s-homomorphism. Each step is a Freiman homomorphism, because it's a group homomorphism. And the final step in the restriction is also a Freiman s-homomorphism, because what we said about working inside short intervals. So this part is very good. All right. So now let me consider one more composition. So at the end of the day, we would like to model this A lambda inside some small cyclic group. So far, we don't have small cyclic groups here. But I'm going to manufacture a small cyclic group. 
So we're going to consider the map where, first we take our phi all the way until the end, and now you take the mod m map. So if I don't write anything, if it goes to Z mod m, it means the mod m map. So let me consider psi, which is the composition of these two maps. All right. So we would like to say that you can choose this lambda so that this A sub lambda gets mapped Freiman s-isomorphically all the way to the end. So far, everything looks pretty good. So you have a Freiman s-homomorphism, and you have a group homomorphism. So the whole thing is a Freiman s-homomorphism. So psi, restricted to A sub lambda, is a Freiman s-homomorphism. But now the thing that we really want to check is if there are some relationships that are present at the end in Z mod m that were not present earlier. And so we need to check that-- we claim that if psi does not map A sub lambda Freiman isomorphically, then something has to have gone wrong. So if it does not map A sub lambda Freiman isomorphically onto its image, then what could have gone wrong? Claim that there must be some d which depends on lambda in sA minus sA, and d not 0, such that phi of d is 0 mod m. So we'll prove this. But like before, it's a very similar idea to what's happening earlier. The idea is that if you have-- we want to show that there are no additional additive relations in the image. So we would like-- so if it's not a Freiman isomorphism, then there has to be some accidental collision. And that accidental collision has to be witnessed by some d. So this requires some checking. So suppose-- indeed, suppose the hypothesis-- suppose that psi does not map A sub lambda Freiman s-isomorphically onto its image. Then there exist a1 through as, a1 prime through as prime, in A lambda such that they do not have the additive relation, but their images do have this additive relation. Their images all the way until the end having the additive relation means that phi has this additive relation mod m. OK. So how can this be? Recall that since the image-- so all of these elements-- lie inside some short interval. The interval has length at most q over s. So we saw this argument, a very similar argument, earlier, before the break. Because everything lies in the short interval, we see that this difference between the left- and the right-hand sides, this difference is strictly less than q. Now, by switching the a's and a primes if necessary, we may assume that this difference is non-negative. Otherwise, it's just a labeling issue. Otherwise, I relabel them. So then this here-- so what's inside this expression-- we call this expression inside the absolute value, we call it star. So star is some number between 0 and strictly-- so at least 0 and strictly less than q. Right. So if we set d to be this expression, so the difference between these two sums, on one hand this d here-- sorry, that's what I want to say. Suppose you don't have-- if you are not mapping Freiman isomorphically onto the image, then I can exhibit some witness for that non-isomorphism, meaning a bunch of elements that do not have additive relations in the domain. But I do have the additive relation in the image. So if we set this d, then it's some element of sA minus sA. And it's non-zero, because we assumed that the a's and the a primes do not satisfy the additive relation in the domain. And so then, what can we say about phi of d? So phi of d, I claim, must be this expression over here, the difference of the corresponding sums in the image. Because the two sides are congruent mod q-- two sides are congruent mod q.
And furthermore, they are in the interval from 0 to strictly less than q. So this is a slightly subtle argument. But the idea is all very simple. Just have to keep track of the relationships between what's happening in the domain, what's happening in the image. Somehow, I think the finite field case is quite illustrative of-- there, what goes wrong is similar to what goes wrong here. Except here, you have to keep track of a few more things. OK. So consequently, thus phi of d is congruent to 0 mod m, which is what we're looking for. So that proves the claim. Any questions? Yeah? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: OK. So this part? All right. So we set d to be this expression. I claim this equality. So why is it true? First, the left-hand side and the right-hand side, they are congruent to each other mod q. So why is that? AUDIENCE: [INAUDIBLE] YUFEI ZHAO: Sorry? Yeah. AUDIENCE: [INAUDIBLE] every part of phi should preserve-- the first two parts are group homomorphisms. And then we just take [INAUDIBLE] mod q. YUFEI ZHAO: Exactly. If you look at phi-- where was it?-- up there, you see that everything is preserved. Even though the very last step is not a group homomorphism, mod q is preserved. So even though the last step is not a group homomorphism, it is all mod q. So if you're looking at mod q, everything is a group homomorphism. So here we're good. Both are in this interval. The right-hand side is in the interval because of our assumption about short-- everything living inside a short interval. And the left-hand side is by definition. Because the image of phi is in that interval, especially given that d is not equal to 0. Is that OK? So it's not hard, but it's a bit confusing. So think about it. All right. So we're almost done. So we're almost done proving the Ruzsa modeling lemma in Z mod m. So let me finish off the proof. So for each non-zero d in this iterated sumset, basically, we would like to pick a lambda so that that map up there does what we want to do. If it doesn't do what we want to do, then it is witnessed by some d, like this. Those are the bad lambdas. So for each d that potentially witnessed some bad lambda, the number of bad lambdas, i.e., such that phi of d is congruent to 0 mod m-- so here we're no longer even thinking about group homomorphisms anymore or Freiman homomorphisms. It's just a question of, if I give you a non-zero integer, how many lambdas are there so that phi of d is divisible by m? Remember that we picked q large enough so that, initially, you are sitting very much inside-- everything's really between 0 and q. So this times-lambda map up there acts uniformly. So the number of such bad lambdas is exactly the number of elements in this interval that are divisible by m. So everything's more or less a bijection if you restrict to the right places. And the number of such elements is, at most, q minus 1 over m. So therefore, the total number of bad lambdas is, at most-- for each element d of sA minus sA, a non-zero element, we have, at most, q minus 1 over m bad lambdas. So the total number of bad lambdas is strictly less than q minus 1. So there exists some lambda such that psi, when restricted to A sub lambda, maps Freiman s-isomorphically onto the image. Somehow, I think it's really the same kind of proof as the one in the finite field case, except you have this extra wrinkle about restricting to short-diameter intervals, to short intervals. But the idea is very similar. OK.
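Since everything in this proof is phrased in terms of Freiman s-homomorphisms and s-isomorphisms, a brute-force checker for the definition makes a useful sanity check. Here is a small Python sketch (my own illustration, not from the lecture) for s equals 2; it confirms on a toy example that the mod N map is a Freiman 2-isomorphism on a set of diameter less than N/2, and exhibits the failure on a set that wraps around.

from itertools import product

def is_freiman_hom(A, phi, s, add_dom, add_im):
    """Whenever an s-wise additive relation holds in the domain, it must hold for the images."""
    A = list(A)
    for xs in product(A, repeat=s):
        for ys in product(A, repeat=s):
            if add_dom(xs) == add_dom(ys) and add_im(map(phi, xs)) != add_im(map(phi, ys)):
                return False
    return True

def is_freiman_iso(A, phi, s, add_dom, add_im):
    image = {phi(a): a for a in A}
    if len(image) != len(A):                  # phi must at least be a bijection onto its image
        return False
    inverse = lambda b: image[b]
    return (is_freiman_hom(A, phi, s, add_dom, add_im)
            and is_freiman_hom(image, inverse, s, add_im, add_dom))

N = 20
add_Z = lambda xs: sum(xs)                    # addition in Z
add_ZN = lambda xs: sum(xs) % N               # addition in Z mod N
phi = lambda a: a % N

small = {3, 5, 7, 11}     # diameter 8, less than N/2 = 10
wide = {0, 1, 19}         # diameter 19: 1 + 19 is congruent to 0 + 0 mod 20, but 20 is not 0
print(is_freiman_iso(small, phi, 2, add_Z, add_ZN))   # True
print(is_freiman_iso(wide, phi, 2, add_Z, add_ZN))    # False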
So that's the modeling lemma in the integers. And let me summarize what we know so far. And so that will give you a sense of where we're going in the proof of Freiman's theorem. So what we know so far is that if you have a subset of integers, a finite subset, such that A plus A has size at most K times the size of A-- doubling constant at most K-- then there exists some prime N at most 2K to the 16th times the size of A and some subset A prime of A such that A prime is Freiman 8-isomorphic to a subset of Z mod NZ. So it follows from two things we've seen so far. Because by the Plunnecke-Ruzsa inequality, 8A minus 8A is, at most, K to the 16 times the size of A. And now we can choose a prime N between K to the 16 times the size of A and 2 times K to the 16 times the size of A, and apply the modeling lemma. So that's where we are at. So you start with a set of integers with small doubling. Then we can conclude that by keeping a large fraction of A-- I forgot to say this, so very important: there exists some A prime, which is a large fraction of A-- I can model this large subset of A by some subset of a cyclic group, where the size of the cyclic group is only a constant times more than the size of A. So now, we are going to work inside a cyclic group, working with a set inside a cyclic group that's a constant fraction of the cyclic group. The question is, why 8? So that will come up later. So basically, you need to choose some numbers so that you preserve the structure of GAPs, generalized arithmetic progressions. So that will come up later. And now we're inside a cyclic group. And you have a constant fraction of a cyclic group. Where have we seen this before? So when we proved Roth's theorem, that was the setting. So in a cyclic group, you have a constant fraction of the cyclic group, and you can do Fourier analysis. Initially, you could not do Fourier analysis starting with Freiman's theorem, because the set may be very, very spread out. But now, you are a large fraction of a cyclic group. So we're going to do Fourier analysis next time to show that such a set must contain lots of structure, just from the fact that it is large. So that will be the next step. Good. Happy Thanksgiving.
MIT_18S190_Introduction_To_Metric_Spaces_IAP_2023
Lecture_1_Motivation_Intuition_and_Examples.txt
[SQUEAKING] [RUSTLING] [CLICKING] PAIGE BRIGHT: So welcome to 18.S190, Intro to Metric Spaces. My name is Paige BRIGHT, though sometimes you'll see it as Paige Bright online. That's just because I'm going to change my name, but you can just call me Paige. And, yeah, this is Intro to Metric Spaces, where today we're going to talk about the connection between what's covered in 18.100A and what's covered in 18.100B, or, respectively, P and Q. And for those of you who don't know, there's this nice little rubric that I like to draw every time I teach this class, because it was drawn for me in my first year, and it helped me really understand what the differences between the courses were: not CI-M versus CI-M, the communications class, and real analysis on Euclidean space versus-- or I guess I should say-- more than Euclidean space. And then the classes go as follows: 18.100A, B, P, and Q. But realistically what I hope to highlight today is the fact that there's not too much different between these two courses. It's just a conceptual leap. But this conceptual leap is really important to have for the next courses, so 18.101, 102, 103, 901. There's any number of classes that come after this that having intuition of metric spaces will be deeply helpful for. So, yeah, metric spaces are going to be what lies in the interim between A, B and P, Q. A little bit about me before I jump right in. I'm a third year at MIT. I've been teaching this class for two years. This is my second year. And I started teaching it because one of my friends was in 18.102 and having a really hard time with things like norms, which give rise to a special kind of metric, which we'll talk about a little bit in this class. But it's rather unfortunate if you get to the next class, and it seems like you're unprepared for one of the introductory tools, but one that comes with a lot of baggage, a lot of conceptual baggage, to keep in mind. And hopefully today, through today's lecture, you'll see what some of those concepts are. But let's start with a simpler example before we jump right into metric spaces. We're just going to talk about what makes real analysis work. What's the basic tool of real analysis that makes it work? And as I'm certain y'all have realized through the definitions that we use in real analysis, like convergent sequences, continuous functions, all of it relies on a notion of absolute values and Euclidean distance. And so let me write out what that Euclidean distance is. So given two points x and y in Euclidean space, we call the distance between them, written as the absolute value of x minus y in Rn, the square root of the sum of the squared differences of the components-- that is, the quantity (x1 minus y1) squared plus dot, dot, dot plus (xn minus yn) squared, all raised to the 1/2. This is just the Pythagorean theorem. And this definition is known as Euclidean distance, as I'm certain y'all know from 18.02. Mostly speaking, though, we focus on R, just because most of the theory breaks down to studying everything in R. So if you prefer not to think in n dimensions, you can always think in one dimension, and, for the most part, everything will be fine. And in fact, we'll see why it will be fine in a moment. But what are the most important properties of this distance? Well, firstly, we have that it's symmetric, meaning that the distance from x to y is the same as the distance from y to x over Rn. And this is something we should expect. We should expect the distance from me to you is the same as the distance from you to me.
We also have that it's positive, or positive definite, specifically that the distance between points is always bigger than or equal to 0. And if the distance is exactly equal to 0, this is true only if-- if, and only if-- the two points are the same. So the distance between you and someone else is only 0 if you are the same person. And if any of the notation I use doesn't make sense, I'm happy to elaborate. But this is the notation for if and only if. And then finally, the most important one, arguably, though the hardest one to show most of the time, is the triangle inequality, that the distance between x and z is less than or equal to the distance from x to y plus the distance from y to z. These are the three properties that make real analysis work. Now if you think back to all those definitions that I was talking about earlier, convergent sequences, Cauchy sequences, and so on, all of them have something to do with these absolute values. And that's because these absolute values are, in fact, a metric. When I define what a metric space is, which will just be in a moment, it's simply going to be a set with these three properties. So let me go ahead and write that down. A metric space is a set x with a function d, which will be called our metric, which takes in two points in x and spits out a real number. In fact, it doesn't just spit out a real number. It spits out one that is between 0 and infinity, and not including infinity. And it satisfies the three properties listed here. And d satisfies-- I can rewrite these definitions if y'all would like, but just replace the-- what's really happening is you replace the absolute values between x and y with the distance from x to y. Does this make sense to everyone? And one point of confusion last year was, what does the notation with this times thing mean? It just means that we're taking in one point in x and another point in x, so the distance from x to y. This is the definition of a metric space. It's not terribly insane. But what it allows us to do is study all the tools that we did in real analysis all of a sudden in a completely new setting. So now the first few examples I'm going to talk about are ones on Euclidean space, just because we have a lot of intuition about that already. But soon, as we'll see, we can study this on plain old sets. You can study this on topological spaces. You can study this on functions. It's a very powerful tool that generalized a lot of math when it was first invented. All right, so let's start off with some more of these examples. So another distance on Rn that we can consider-- hi-- is one that takes in two points, x and y. In fact, I'll call this one d infinity for the metric-- that takes in x and y and spits out the maximum over every component of the distances. So in other words, if I have a vector here in 3-dimensional space x, and one that goes here, the largest distance would probably be the vertical one, I will just say. And that would be the d infinity distance. It's the maximum distance between the components. So if I write x as x1 through xn, the metric just gives out the maximum value of the differences of these components. Since you just walked in, let me just briefly explain what's happening. So we defined what a metric space is: simply a set with a function that acts like a distance. And the three properties we want it to have are: we want it to be symmetric. So the distance from me to you is the same as you to me. We want it to be positive definite.
We don't want our distances to be negative, and the triangle inequality, which we should be a little bit more familiar with. So, yeah, that's the definition of a metric space, and I'm just defining a new example. But if I ask you on a problem set, which I will do, to prove that something is a metric, you have to prove the three properties, which isn't, in this case, terrible to do, because-- I guess prove, technically-- one, it's definitely positive-- or it's definitely positive because we're using absolute values. It's not going to give us something negative. But the thing that's a little bit harder to check sometimes is you have to check it's positive definite, so definite. So if the distance d infinity from x to y is 0, what does that tell us? Well, that tells us that for all i, the distance between xy and yi must be 0. Why is this true? Well, if it wasn't 0, if it was something slightly bigger than 0, then the supremum metric on this d infinity would make this have to be bigger than zero, right? So this implies the x is, in fact equal, to y. That's how we define equality on Euclidean space. And the opposite direction, usually a little bit easier, if x is equal to y, then the same conclusion holds. So usually that direction is a little bit easier if you assume equality showing that it's 0. But the other direction can be a little bit harder. Assume it's 0. What can you say? Or assume it's not 0. What can you say to do a proof by contradiction? And then lastly, we have to-- or, sorry, no, two more. It's definitely Symmetric the fact that it's symmetric just follows from the fact that you can just swap any of the two in the maximum. It's not too bad. And three, you have to check the triangle inequality. And this is where it can be a little bit more tricky because you might be inclined to just say, oh, it's the maximum of things that satisfy the triangle inequality. But we have to be a little bit more careful than that, and I'll explain why in a moment. So let's consider two-- three points, x, y, and z in Euclidean space, Rn. And let's suppose we're considering the distance d infinity from x to z, which I'll write down again is the maximum of i between 1 and n of the different distances between the components. Now we want to go from these to information about y. And what we want to do is exploit the fact that we know that the absolute value satisfies the triangle inequality. But this maximum function makes it a little bit more difficult. Any ideas what we could do before we use the triangle inequality? Well, in this case, there's only finitely many terms that we're considering. So we know that a maximum has to exist. So I'll just call that term j. This is as opposed to looking at the maximum, I'll just consider the distances of xj to zj. And then I can apply the triangle inequality. And this is where it's important to double-check that everything is working out. We had to know that a maximum existed before we could apply the triangle inequality. So then this is less than or equal to the distance from xj to yj plus the distance from yj to zj. And now we can apply the maximum operator again to both of these because we know that this one is going to be less than or equal to the maximum over i of xi to yi. And we know that this term will be less than or equal to the maximum of that side, so plus the maximum yi to zi. And then we're done. Any questions? Happy to reiterate any of these points. So that shows that it is, in fact, a metric. And it's a pretty useful one at that. 
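If it helps to see these checks in code, here is a minimal Python sketch (my own, with arbitrary random test points) of the Euclidean and d infinity metrics, spot-checking the triangle inequality on sampled triples; of course a finite check like this is only an illustration, not a proof.

import random

def d_euclid(x, y):
    return sum((a - b) ** 2 for a, b in zip(x, y)) ** 0.5

def d_inf(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

random.seed(0)
for metric in (d_euclid, d_inf):
    for _ in range(1000):
        x, y, z = ([random.uniform(-5, 5) for _ in range(3)] for _ in range(3))
        assert metric(x, z) <= metric(x, y) + metric(y, z) + 1e-12
print("triangle inequality held on every sampled triple, for both metrics")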
What this is telling you is if I want to study things on Rn, I can essentially just study them on R, and things will work out the same, because we can just study the maximum of the differences between their components. Let's look at one more example on Euclidean space. Now as opposed to looking at the maximum, I could instead sum over all of the terms, which will, in fact, be a little bit easier. So I'll call this the d1 distance between x and y in Euclidean space. This is the sum of the distances between the components, so xi to yi. And I'll leave you to prove that this is, in fact, a metric. But let's run through the checklist one time. One, we know that it's definitely positive. Positive definite is a little bit harder, right? I mean, if they're equal, then we definitely have that the distances between these ones are all 0. If the distance is 0, then note that we're taking the sum over non-negative things. So if the thing on the left-hand side is 0, every single term in the sum must be 0. And that allows us to use the same conclusion. And then you can apply the triangle inequality right now because we are just summing over terms. So the triangle inequality automatically applies. So if you want to write through that, feel free to. The thing that's more important that I want to note here is that this is known as the l1 metric. In fact, if we wanted to, I could just replace this d1 with a dp, raise this to the pth power, and raise this to the 1 over p. And that would give me the lp metric, which is deeply important in functional analysis. It's a tool that-- it's a surprise tool that will help us later. So if you take a class on functional analysis, this is likely one of the first ones that you'll consider, though to check that the lp metric is, in fact, a metric is a little bit more difficult. There's a problem on the first problem set that's optional if you want to work through those details. And there's a lot of hints, so that's helpful. So now that we've gone through all of these examples on Euclidean space, before I go much further, I want to explain how this actually relates to the definitions that we already have, because you might be sitting here, thinking to yourself, why does this matter? We've been studying real analysis, and now we're going back to studying a function that acts like a distance. So what? Well, these four definitions we're going to be able to rewrite in terms of the metric on a metric space. So let's write out what a convergent sequence is. Let xi be a sequence in the metric space x, d. And let x be just a point in the metric space. Here our sequence is just as we defined it in real analysis: it's a function from the natural numbers into the set. And then we define convergence: for all epsilon bigger than 0, there exists an N in the natural numbers such that for all n bigger than or equal to N, the distance between xn and x is less than epsilon. So this is what we mean for converge-- a sequence that converges to the point x. And this is essentially just the same definition that we've been dealing with in real analysis. Just replace the distance with absolute values. And I'm just going to go through these other definitions as well. So, same setup: let xi be a sequence in x, d. Then a Cauchy sequence is such that for all epsilon bigger than 0, there exists an N in the natural numbers such that for all n and m bigger than or equal to N, the distance between individual points of the sequence is less than epsilon.
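Both of these definitions are easy to play with numerically. Here is a small Python sketch (my own example, not from the lecture): for the sequence x_n = (1/n, (-1)^n / n^2) in R^2 with the d infinity metric, it finds, for a few choices of epsilon, an N that works in the convergence definition. Here d infinity of x_n and the origin is exactly 1/n, which is decreasing, so once the distance drops below epsilon it stays below epsilon for every later n, and the same sequence is of course also Cauchy.

def d_inf(x, y):
    return max(abs(a - b) for a, b in zip(x, y))

def x(n):
    # a concrete sequence in R^2 converging to (0, 0)
    return (1 / n, (-1) ** n / n ** 2)

for eps in (0.5, 0.1, 0.01):
    N = 1
    while d_inf(x(N), (0.0, 0.0)) >= eps:
        N += 1
    print(eps, N)   # d_inf(x_n, 0) = 1/n here, so N comes out to be roughly 1/eps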
So here's the definition of a Cauchy sequence. Who here hasn't heard of a definition of an open set before? No worries if not. Some classes don't cover it. Yeah, let me just briefly explain up here what an open-- actually, I'll do it down here-- what an open set is. Actually, no, I'll do it up here so that y'all can keep taking notes if you need to. So an open set is just a generalization of what an open interval is. So an example of an open set in R is, for instance, 0 to 1. It's such that if I'm considering the interval from 0 to 1, the thing that makes it an open interval is the fact that for any point I consider, let's say, that one, that I can choose a ball of radius epsilon around it, so a distance epsilon on both sides, such that the ball of radius epsilon around the point x is contained between 0 and 1. And I can do this for every single point in the interval 0 to 1. This is what makes it an open interval. And the definition of an open set is going to be essentially the same, where instead, the definition of the ball of radius epsilon, this is defined as the set of points in your metric space such that the distance from x to y is less than epsilon. So that's the only difference between what's happening on a metric space rather than Euclidean space. Before, again, this would be absolute values. So to define an open set, a set A contained in x where x is a metric space is open if for all points A and A, there exists an epsilon bigger than 0 such that the ball of radius epsilon around A is contained in A. Pictorially, if you prefer to think about it this way, this essentially means that there is no boundary on your set. So if I were to draw a conceptual diagram of what's happening here, here's my set x. Here's my subset A that lies in it. And I'm just cutting out the boundary. And this allows us to say, OK, I can get for any point, even as close as we want to the boundary of A, I can still squeeze in a little ball there. And in fact, thinking about it in this conceptual drawing is deeply important. And I'll give an example of this in a moment. But this is the definition of an open set. Does everyone feel a little bit comfortable with what this is? Cool. I know it's just a little bit weird, because some classes don't cover it, because on 100A, you don't particularly need it as much. Now continuous functions are going to be a little bit weirder, because before when we studied continuous functions, we studied ones from R to R. It takes in values in the real numbers, and it spits out a real number. But now that we have a notion of a metric space, we can now study continuous functions between them, because all we really need again, is absolute values, right? So let f be a function between x and y where both of these are metric spaces. In fact, I'll write that out now. I'm going to say that x has metric dx, and y has metric dy. And this is the appropriate notation for noting which one goes with which one. Then I say that f is continuous, continuous, if for all epsilon bigger than 0 there exists a delta bigger than 0, such that if the distance between two points x and y is less than delta-- or, sorry, dx-- then the distances in the space y of f of x and f of y is less than epsilon. Now let's double-check that this definition actually makes sense. This is a very helpful way to remember what metric goes where. x and y, again, are two points in x, right? So it only makes sense to consider the distance on the metric space x acting on them. 
And then f takes points in x to points in y. So here on the right-hand side, we should have the metric on y. And this is the definition of continuous. I'm going to prove that a very specific special operator is continuous today. And that will be helpful conceptually. But this is far more general. If you prefer, you can just let y be the set of real numbers. And this is still a really powerful tool already, because now we can study functions between a metric space x and the real numbers. So I just wanted to highlight what these four definitions convert over to in this class. On Thursday, we'll be proving quite a few theorems about all of these definitions. In fact, I'm calling that the general theory of metric spaces. It will be a very intense class in terms of proof writing. But I wanted to bring them up today so that y'all saw that it wasn't just random stuff. So though this is all good and dandy, we've only talked about Euclidean metrics. And this is fine. But things get a little bit weirder once you go from finite dimensions to infinite dimensions. And in fact, the definitions of, for instance, a continuous function will look a little bit different, right? So let's give an example of one that's just a little bit weirder than a set of points in Euclidean space. Actually, I'm going to rearrange this a little bit. First, I want to give a really weird example that's pretty simple, but interesting nonetheless. Let x be any set. Then I define the metric on x, so the metric between two points x and y, as follows-- 1 if x is not equal to y, and 0 otherwise. So if x is equal to y, then it's going to be 0. Why is this important? Because this is telling you that every single set can be given a metric. Now, this metric isn't all too interesting, but it is a useful example to keep in mind, right? And let's prove that this is a metric, because even though it's a simple function, it's going to be a little bit weird. One, it's definitely non-negative. Now we just have to check positive definiteness. So if x is equal to y, then the distance from x to y is 0. And the distance from x to y is 0 only if x is equal to y. That's the definition of the metric. Two, it's definitely symmetric, because equality is symmetric. And finally, three, the hardest one, the triangle inequality. Why is this one the hardest? Because now we have three points, and we have a binary-valued function acting on them, right? So let x, y, and z be in x. Then we have three possibilities, up to relabeling. One, none of them are equal to each other. Two, exactly two of them are equal-- say x is equal to y, but y is not equal to z. Or three, all of them are, in fact, equal. Why are these essentially the only cases? Well, because x, y, and z that we're choosing are just arbitrary, right? So if I wanted to handle the case where y is equal to z, but x is not equal to y, I would just relabel them. And the case where x is equal to z but y is different from both is even easier, because then the left-hand side of the triangle inequality is just 0. So these are the cases we have to check to make sure that our metric is actually a metric, which happens in cases like this, where our metric is defined via a binary condition or in similarly simple cases. So let's check each of these. Actually, I'll just do it here. I can squeeze it in here. If none of them are equal, then the distance from x to z is just going to be 1, by definition.
And this is certainly less than or equal to 2, which is the sum of the other two distances. So we're good there. Two, the distance from x to z-- what's this going to be? Anyone? If x is equal to y, but y is not equal to z, what's the distance from x to z? Yeah? STUDENT: 1? PAIGE BRIGHT: 1, exactly, because x being equal to y means x is also not equal to z. And this is equal to 1, which is the distance from x to y plus the distance from y to z, because only one of them is 1. And then, of course, in the last case, when all of them are equal, we're just going to get 0 is equal to 0 plus 0. So we're all good there. Now, you're going to have an example on your homework that is similarly based off of a binary condition, where instead the condition is based off of whether the points are colinear with the origin-- I'll just state it for two points x and y-- where if x and y are in R2, then the distance between x and y is just going to be the regular distance in R2 if x, y, and 0 are colinear, and the sum of their two distances to the origin otherwise. So to draw a picture of what I mean here, if x and y lie on the same line through the origin, then the distance is just as normal. If they're not, then I have to add up their two distances to the origin. This is similarly a binary condition. And when you're doing this on your homework, one thing I would suggest is breaking it up into similar casework. Either all of the relevant points are colinear with the origin, or none of them are, or only some of them are. That's the sort of thing that you should do on a problem like this on the homework. This casework can be a little bit annoying, but it is useful to do. So even though this metric is vaguely interesting in that it makes any set a metric space, let's start studying a set that we care about a little bit more-- continuous functions. And we can, in fact, define a metric on them. So I'm going to define c0 of a, b to be the set of continuous functions from the interval a, b to R. And I'm going to define a metric on them. And the metric-- so, example-- or I guess I should say this is the definition. If I'm considering two functions f and g in the set of continuous functions on the interval a to b, then the distance between these two functions is going to be the supremum over all x in a to b of the distance between f of x and g of x. This is what I'm claiming is a metric. And it's going to be a little bit difficult to prove in one of the steps-- if you had to guess, probably the triangle inequality, because that's where everything messes up. But let's go ahead and check that this is a metric. Well, specifically, first, it's definitely going to be symmetric, because the distance from f to g is equal to the supremum of the absolute values of f of x minus g of x, which is certainly equal to the supremum of the absolute values of g of x minus f of x. Now, one thing to be careful about here, though, is-- you don't have to be as careful about it with symmetry, because with symmetry it's a little bit more clear that you can mess around with these operations. With the triangle inequality, it becomes a little bit harder. There is a technical thing I'll come back to later about this step. But essentially, symmetry is done. Two, positive definiteness. Well, it's definitely going to be non-negative, because it's a supremum of absolute values. And we have to check definiteness. Well, if f is equal to g, then by definition, f of x is equal to g of x for all x, which implies that the distance between f and g is 0. Now what if the distance between f and g is 0?
How do we conclude that f and g are, in fact, equal? You know that because it's the supremum of non-negative quantities, and that supremum is 0, at every single point they have to be equal. You can also make an argument via continuity, which makes use of the extreme value theorem. But we don't have to get into that much detail. So that's positive definiteness. The hardest part, though, is going to be the triangle inequality, because here we do have to use the extreme value theorem. The triangle inequality-- so let f, g, and h be continuous functions on a to b. Then let's consider the distance from f to h, which is the supremum of the absolute value of f of x minus h of x. We want to go from this to information about f and g and about g and h, so we can apply the triangle inequality. Well, to do so-- this is the only part where we use the fact that the functions are continuous, because knowing that they're continuous, we can apply the extreme value theorem to them, right? So we know that this supremum has to be attained somewhere. And I'm going to just let that point be y. So this is going to be equal to the distance from f of y to h of y. So what now? Well, now we can just directly apply the triangle inequality, because these are absolute values. And we know that this is going to be less than or equal to the absolute value of f of y minus g of y plus the absolute value of g of y minus h of y. And then we can take the supremum of both of these individually, and we'll conclude the proof. But the thing we had to make sure of first was we couldn't just apply the triangle inequality right away. The supremum operator acting on it made it so that we had to go through these steps individually. And in fact, if we wanted to be really careful, it might have made sense to do it for symmetry as well, where we know the supremum exists and is equal if I swap the two orders. But really, it's important in the triangle inequality. And the thing to note here is that this is precisely the same sort of argument we had to do right here for the infinity metric. We knew that the maximum existed, so we just went with it and ran. And that's, generally speaking, good advice. If we're dealing with something that's continuous, try narrowing your focus down as much as possible to a single point or to a single function-- whatever you can do to finish off your proof. Now, depending on your background with analysis, you might be wondering to yourself, why do we only care about c0? What is c0-- why is that 0 there? Well, in fact, the reason it's there is because we can also study differentiable functions, specifically continuously differentiable functions. So, definition-- actually, I'll just define it for ck. This is the set of continuous functions on a, b such that the first k derivatives of f, one, exist, and, two, are continuous. This is the set of what are known as continuously differentiable functions, so ones in which, when I take the derivative, the derivative is going to be continuous. And once we have this new set defined, we can find even weirder metrics. Well, I guess not all that much weirder, but ones that are slightly more complicated, I guess I should say. So, example-- let's just consider the ones on c1 of a, b. I'm going to define what the metric is first, d c1 of f and g, and what we want to show is that this is, in fact, a metric. This is going to be the supremum over x in a to b of the absolute value of f of x minus g of x, plus the supremum over x in a to b of the absolute value of f prime of x minus g prime of x. This is going to be our new metric.
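Before the proof, here is a quick numerical sketch of these two distances-- the sup metric on c0 and the c1 version just written down-- approximating the suprema on a grid, with two made-up functions whose derivatives are known in closed form:

```python
import numpy as np

a, b = 0.0, 1.0
xs = np.linspace(a, b, 10_001)          # grid approximation of the interval [a, b]

# Two hypothetical functions in C^1[a, b] with known derivatives.
f  = np.sin
g  = lambda t: t
df = np.cos
dg = lambda t: np.ones_like(t)

d_C0 = np.max(np.abs(f(xs) - g(xs)))            # approximates sup |f - g|
d_C1 = d_C0 + np.max(np.abs(df(xs) - dg(xs)))   # adds sup |f' - g'|

print(d_C0, d_C1)   # roughly 0.159 and 0.618
```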
Proof that this is, in fact, a metric-- proof-- one, definitely non-negative. But we have to check positive definiteness. Well, if f is equal to g, then f of x is equal to g of x everywhere. And in fact, f prime will be equal to g prime, which implies the same result, because you can just take the derivative. So that concludes one direction. If the distance on c1 of f to g is 0, what can we say? Well, again, we're summing over non-negative things. So if the sum of two non-negative things is 0, that implies that both terms must be 0. This fact-- I cannot reiterate it enough-- is deeply important. When you're working on your problem set, you will have to use this fact repeatedly, that the sum of non-negative things being 0 implies that the individual terms must be 0 as well, to prove positive definiteness. That's where it mostly comes up. So what this implies is that the supremum over x in a, b of the absolute value of f of x minus g of x equals 0. And we've already explained above how this implies that f equals g. So this is one of those examples where you want to boil it down to the examples you've already done before. So that proves positive definiteness. Symmetry is pretty immediate. And the triangle inequality follows by the same argument up here, right? We know that the supremum exists for each of the terms, because all of the terms are continuous. But the thing I want to note here is, does anything stop us at c1? Can we do this at ck? The answer is yes, we can, and it's not too much more difficult. You instead sum over the 0th derivative, the first derivative, and so on and so forth, up until the kth derivative. And that's fine. What if I want to study it on smooth functions, which I'll define right now? We define c infinity of a, b to be the set of smooth functions, i.e., infinitely differentiable ones. Notice that if a function is infinitely differentiable, each of the derivatives must be continuous, because if one of them weren't continuous, then the next derivative wouldn't exist. So this is the set of smooth functions. What stops us from just taking the sum over all of these terms, over all infinitely many terms? The issue is that there are infinitely many, right? Again, it must be the case for a metric that the value you get out is not infinity. I guess in theory you could mess around with this a little bit, and you'd get weirder types of metrics. But for our purposes, you don't want it to be infinity. But remark-- there is a not-too-bad addendum to this. On the problem set, there's an example that you can work through-- it's an optional one-- where you define a metric on this space, where what you do is sum over the interesting fraction d ck of f, g over 1 plus d ck of f, g, typically with a weight like 1 over 2 to the k in front of each term so that the series converges. So it's summing over a bunch of metrics. You have to, in fact, show that this is a metric, which is one of the other problem set problems. But it's not that you can't define a metric on smooth functions. It's that you just have to be careful of the fact that there are infinitely many terms. And one small thing to note-- is it possible that we could have gotten away with not including the first term, the supremum of f minus g, in the sum? The answer is no. And I'll let you think about this some more. But the idea ends up coming from the following question: if we take that term away, and the resulting distance is 0, what does the distance between f prime and g prime being 0 actually imply about f and g? So I'll let y'all think about that some more, because that is a problem set problem. But the answer is no. And it's a bit interesting to think about why.
And that reasoning is exactly why we have to be careful about the infinitely differentiable case. So now that we've done this, now that we've defined a metric on c1 and c0-- yeah? STUDENT: Can we use the very first metric as a metric on all of these spaces? Does it still remain valid? PAIGE BRIGHT: Yeah, yeah, there's nothing wrong with that. The issue is that it's not encapsulating as much information as we want it to. So as we'll see in a moment, we want to understand differentiation and integration as functions that are continuous. So that's-- great question. Yes, we could have just considered the first one as a metric on all of them, in the same way that we can consider the trivial metric-- this one is known as the trivial metric-- to be a metric on all of them, right? But we want more information, when possible. Great question. So I will come back over here. So now that we have c0 and c1 defined, we can, in fact, state that differentiation and integration are continuous. And I'll do that right now. So I guess this is a proposition. If I consider differentiation as a map from c1 of a, b, so continuously differentiable functions, to c0 of a, b, the continuous functions, my claim is that differentiation is continuous as a function-- or I guess map is a better word-- a map between metric spaces. Does this notation make sense to everyone, differentiation as a map? Cool. So let's check that this is, in fact, a continuous map, which is pretty nice to do. By this setup, it's made to be nice, which addresses your question about why we sum over the two terms. So to prove continuity, what's often best to do is just to consider what the distances, in fact, are. Let's write out the left-hand side and the right-hand side of this implication. So let f and g be in c1 of a, b. And then we consider the distance on c1 of f and g. And we want to say that if this is less than delta, then the distance on c0 of the derivatives-- so I'll just say f prime and g prime-- is less than epsilon. We want to find, for all epsilon bigger than 0, a delta bigger than 0 such that this is true. Well, let's state what these two metrics actually are. And that'll make it clear that the reduction is not that bad. So again, the first one is equal to the supremum over x in a to b of the absolute value of f of x minus g of x, plus the supremum over x in a to b of the absolute value of f prime of x minus g prime of x. And the second one is simply the supremum over x of the absolute value of f prime of x minus g prime of x. The reason why this step is nice to do first is because we notice that this term is precisely the same as the second one on the left-hand side. And all the terms are non-negative. So let delta equal epsilon. If you let delta equal epsilon, then d c1 of f to g being less than delta implies that the derivative term must be less than delta-- or, in fact, less than epsilon, by construction. So that implies that the distance between the derivatives is less than epsilon, which is what we wanted to show. So this shows-- or that's the end of the proof. This shows that differentiation is a continuous map. And here, we used the fact that we're summing over the two terms. Now, on your problem set, what you'll do is show that integration from a up to a point t, for t between a and b, is, in fact, also a continuous operator. So that, I think, is pretty interesting. It's going to be a little bit harder than this one, because there the metric on the domain doesn't hand you both terms directly. But I think it will be a worthwhile exercise to work through.
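A quick numerical illustration of the inequality driving this proof, that the c0 distance between derivatives can never exceed the c1 distance between the functions-- again with made-up functions and grid-based approximations of the suprema:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 10_001)

# Hypothetical functions in C^1[0, 1], with their derivatives written out by hand.
f  = lambda t: t**2
df = lambda t: 2 * t
g  = np.sin
dg = np.cos

d_C0_derivs = np.max(np.abs(df(xs) - dg(xs)))            # approximates d_C0(f', g')
d_C1 = np.max(np.abs(f(xs) - g(xs))) + d_C0_derivs       # approximates d_C1(f, g)

# The distance between the derivatives is one of the two non-negative terms that
# make up d_C1, so it can never exceed d_C1 -- which is the whole proof.
print(d_C0_derivs <= d_C1)   # True
```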
Let's see what's next. So while you're going to show that integration is a continuous operator, that doesn't stop us from studying a metric built out of integration right now, because we can define it without too much trouble. So we define i1 to be a metric on c0-- actually, I'll work on the interval 0, 1-- so a map from c0 of 0, 1 times c0 of 0, 1 to the interval from 0 to infinity, given by i1 of f and g. It's simply the integral from 0 to 1 of the absolute value of f of x minus g of x, dx. All right, we're going to show that this is, in fact, a metric. Now, what are the three components? One, symmetry. Here we have to use the fact from Riemann integration that because the absolute value of f of x minus g of x is equal to the absolute value of g of x minus f of x, the integrals from 0 to 1 are, in fact, the same. Has everyone seen this fact about integration? Cool. One way you can prove it is that if one function is less than or equal to another, then the inequality passes through the integral, and you can apply the inequality both ways. So I just wanted to say that. But symmetry works out pretty nicely. Two, we know it's non-negative. But positive definiteness is going to be a little bit harder this time. If f is equal to g, then clearly i1 of f to g is 0. This comes from the fact that the integrand is just literally 0. But how do we do the other direction? Because this is where it can be a little bit more complicated. What if, for instance, the two functions differed at just a single point? Then the integral could still be 0 even though the functions aren't equal-- so what saves us? Let me write this out. What if i1 of f, g is equal to 0? Well, here's-- yeah, go for it. STUDENT: Well, it would be continuous, though. So it would have a little ball where it all is, like where the [INAUDIBLE] is. PAIGE BRIGHT: Exactly, right. So because the functions are continuous-- let's do a proof by contradiction, and suppose that f is not equal to g. And I'll draw a nice picture of what's happening here. Let's say this is f, and this is g-- they're pretty close by. If f is not equal to g, then there must exist at least one point at which they're different. Otherwise, if there were no single point at which they were different, then they would be equal everywhere, which is exactly what we want to show. And what continuity tells you is that there must exist a little ball around that point-- a ball of radius epsilon, let's say-- on which they're, in fact, a definite distance apart. But we can use this to show that the integral over the entire interval 0 to 1 must therefore not be 0, because the contribution from that ball alone is strictly positive. That contradicts the assumption that i1 of f, g is 0, so we conclude that f is equal to g. And I'll let you all work through the precise statements of that. But, yeah, it is precisely the fact that if there's a point where the integrand is non-zero, then there must be a ball around that point on which it is not 0. Cool. And this just highlights why continuity is important here yet again-- not just for proving nice things about continuous functions. Lastly, the triangle inequality-- I just want to state this, because it's not too bad. We just note, before even integrating, that the absolute value of f of x minus h of x is less than or equal to the absolute value of f of x minus g of x plus the absolute value of g of x minus h of x. And then integrate both sides of the inequality. And that allows us to reach our conclusion.
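Here's a small numerical sketch of this integral distance, using a plain Riemann sum on a grid and two hypothetical continuous functions on [0, 1]:

```python
import numpy as np

xs = np.linspace(0.0, 1.0, 10_001)
dx = xs[1] - xs[0]

def i1(f_vals, g_vals):
    """Riemann-sum approximation of the integral over [0, 1] of |f - g|."""
    return np.sum(np.abs(f_vals - g_vals)) * dx

f = xs**2            # hypothetical sample functions, evaluated on the grid
g = np.sin(xs)

print(i1(f, g))      # the integral of |x^2 - sin x| over [0, 1], roughly 0.15
print(i1(f, f))      # 0.0, as positive definiteness demands
```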
So in statements like this, one technique is choosing a point where the supremum is attained-- if the metric is based off a supremum, that's a good way to do it. The other technique is to try to utilize facts about the absolute values before you even apply the metric. So before you even apply the integration, see what you can say. These are the few techniques that are deeply helpful. And there's one thing I want to note here. Do I still have it up? I do not, because it was a while ago. We can view integration as a sum. It's an infinite sum-- sure, it's Riemann integration, but a sum nonetheless. What this is called-- I'll note it here-- this is known as the capital L1 metric on c0 of a, b. Or specifically, the notation would be L1 of a, b. The reason it's called L1 might remind you of the fact that we had a little l1 that we defined earlier, where there we're summing over finitely many terms. It's the same exact notion. The only difference here is that we're summing over infinitely many terms. And that's why the notation is the same. Now, this notion is deeply important for 18.102, because it leads to the definition of Lebesgue integration, which we'll talk a little bit about in this class. The real issue is that integration as we already understand it, via Riemann integration, behaves really badly here, right? What if we don't want to study things on continuous functions, but instead on ones with finitely many discontinuities? The Riemann integral just becomes so much more annoying to deal with. So the Lebesgue integral is the way around that. And it's called capital L1 because of Lebesgue, the person who invented it. But one thing I want to note is that all of the spaces that we've considered so far are vector spaces. So for those of you who have studied linear algebra, this will be slight review. But I just want to note it here because it is somewhat important and will come up a little bit later in the class. A vector space is simply a space in which you can add two elements together, and the result will stay in the same space. This isn't a class on linear algebra, so there are not going to be too many times when the definition of a vector space comes up. But it's a useful thing to keep in mind and to know about. So, vector space. This is the TLDR. I'm not going to write out the entire definition, because it's, in fact, quite lengthy. But it's a space V such that we have addition, which maps from V cross V to V, and scalar multiplication, which maps from R cross V to V. And one way to think about this is, for instance, the set of continuous functions, c0 of a, b. We have addition defined on this, where we define f plus g to just be f of x plus g of x everywhere. And we define a constant times a function-- which is what the R here is doing-- as just being the constant times f of x everywhere. The basic idea for a vector space is that addition acts how you would want it to. It maps two points in the vector space to a point in the vector space. And scalar multiplication acts the same way. It takes a scalar and a point in your vector space and maps it to another point in your vector space. Why do I note this? Because metric spaces do not have to live on vector spaces. We can have things be much weirder. A good example of this-- and if you did not catch the basic idea of what a vector space is, this example might highlight it. Example-- consider the sphere S1 to be the set of x in, let's say, R3 such that-- actually, you know what?
I'll just do a circle, the circle of radius 1 in R2. This is not a vector space under the usual pointwise addition, because if I take a point on my circle-- let's say that one-- let's call it x, and I do x plus x, I'm going to end up with a point that's not on the circle. So this is not a vector space in the usual sense, because we want addition to land back in the space. Why is this important? Because we can still define distances on the circle, right? We can define distances on the circle via just the regular distance, so the distance from x to y being the usual distance on R2. You can also define it via the length of the shortest path along the circle between them, which is known as a geodesic distance. I don't like this chalk, but anyways. So the point that I'm trying to highlight here is you don't always have addition defined, which is going to limit the number of theorems we can actually state about metrics, right? Nowhere in our definition of a metric does it tell us how to add two elements together, because sometimes we simply can't. And when we can, it starts becoming more like functional analysis, which is 18.102. We'll talk briefly about that two lectures from now. But this notion of it not being a vector space is pretty important. Was there anything else I wanted to note? We might end today a little bit early, which will be nice. The main other thing I wanted to note here, something related to what I noted earlier for the little l1 spaces, is that you can also define capital Lp spaces, where the distance is the integral of the absolute value of f of x minus g of x raised to the pth power, all raised to the 1 over p. These are known as Lp spaces. So the functions for which this quantity is finite form an Lp space. All right, so I actually went through this 20 minutes faster than last time. So if y'all have any questions, I'm happy to talk about more of the material. But we might just end today a little bit early. I don't want to get into the general theory quite yet. What I would highly suggest is trying to sit with this notion a little bit, because though it seems like a simple notion, what we've really done here today is gone from understanding functions as things that take in points and spit out points, to understanding them as objects that can be manipulated-- as things that have a distance between them. And once we study sequences of functions, which is deeply important in analysis, this is going to become more and more important.
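A small sketch of those two distances on the unit circle, with points given by their angles: the chordal distance is the straight-line R^2 distance, and the geodesic distance is arc length along the circle.

```python
import numpy as np

def chordal(theta1, theta2):
    """Straight-line R^2 distance between two points on the unit circle."""
    p = np.array([np.cos(theta1), np.sin(theta1)])
    q = np.array([np.cos(theta2), np.sin(theta2)])
    return np.linalg.norm(p - q)

def geodesic(theta1, theta2):
    """Arc-length distance: the shorter way around the circle."""
    d = abs(theta1 - theta2) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

# Two antipodal points: the chord has length 2, but the arc has length pi.
print(chordal(0.0, np.pi), geodesic(0.0, np.pi))
```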
MIT_18S190_Introduction_To_Metric_Spaces_IAP_2023
Lecture_2_General_Theory.txt
[SQUEAKING] [RUSTLING] [CLICKING] PAIGE BRIGHT: So today, in case you were thinking that there wasn't enough theorems on Tuesday, we're going to prove a lot of them today on the general metrics based theory. I like this notion of a theory of the spaces. This is a notation that Dr. Casey Rodriguez taught me, loosely speaking, where you're just trying to encapsulate what are all the things we can say about the most fundamental objects in the space that you're considering? And essentially, a number of the theorems that we'll prove are essentially the same as what we would picture in Euclidean space, but we have to be slightly more careful sometimes. And all of this theory is going to be building up to understanding something that you probably haven't seen before if you haven't done 100B, known as compact sets. So we're going to be building up to talking about compact sets. But first, we have to understand what are the fundamental tools in our tool belt with metrics based theory? And I do want to note one thing. Even though last time, I said we can put a metric on every set, which is true, not every set has a nice metric on it. Have any of y'all taken topology before? No worries if not. It's a class that usually comes after real analysis, but in topology talk, not every topological space is metrizable-- i.e. there's not necessarily a metric on it that will make the topology nice. And we'll talk about what topology is a little bit today. So in case you're interested in what that statement is, we'll come to it, but the basic idea is even though we can put a metric on every set, sometimes the trivial one is not that interesting. So all right. Let's go ahead and jump into the metrics based theory for today. So first off, we're going to be talking about convergent sequences, just like we did in RN. So throughout this, let xn and yn be the sequences we're considering. In the first statement theorem, suppose xn converges to x. What should we expect about the point that it converges to? Well, in Euclidean space, we knew that this limit point had to be unique. And in fact, we can prove that this must be the case. So then x is unique. To prove a statement like this, we choose a different point y in our set to consider. So what we're going to show now is that convergent sequences have unique limit points. So proof. Suppose there exists a y in your metric space x-- or I should say sequences in your metric space xd, so metric x with metric d. And suppose that there exists a y and x such that xn converges to y. Our end goal is to show then that x must be equal to y, in fact. Now, to do this, what do we know about equality for points? Well, by the axioms of what we're defining a metric space to be, two points are going to be the same if the distance is precisely 0. It's an if and only if statement. So that's what we want to prove. We want to prove that the distance between x and y is 0. So how can we do that? Well, firstly, let's write down what the definitions of them converging to x and y is respectively. So xn converges to x. Means that for all epsilon bigger than 0, there exists an n in the natural numbers such that for all n bigger than or equal to n-- and I'll call this n1-- we have that the distance from xn to x is less than epsilon. This is the definition of convergent sequence that we're dealing with. And similarly, xn converges to y if there exists an n2 in the natural numbers for this epsilon such that for n bigger than or equal to n2, the distance from xn to y is less than epsilon. 
So now, what do we want to do? We want to use this to show that the distance between x and y is small. And to do this, we will apply the triangle inequality to understand the distance from x to y. So by the triangle inequality, this is less than or equal to the distance from xn to x plus the distance from xn to y, where here I'm applying symmetry so that I can move the terms around, right? And we know that for all epsilon bigger than 0, there exists an n-- let's say this is equal to the maximum of n1 and n2 in the natural numbers such that both of these terms are now less than epsilon. In fact, you know what? To make it nicer, I'll make this epsilon over 2 both spots. So then this is less than epsilon over 2 plus epsilon over 2, which is equal to epsilon. Now, does this imply immediately that the distance between them is 0? Essentially, we have to note one more thing, which is that distances are positive definite. So this is going to be bigger than or equal to 0. So what this tells you is that because the distance between x and y gets arbitrarily small and arbitrarily small and close to 0, it has to be 0. This is from the real number theory that we already know. So this implies that the distance from x to y is 0. And therefore, x is equal to y, which is what we wanted to prove. So notice that a lot of this used all of the axioms that we were talking about. At one point, we use symmetry, which wasn't too important, but was useful to note so that we can swap xn and x around and apply the triangle inequality. We applied the triangle inequality, and we had to use positive definiteness. So these are the bare bones of what we really need in the theory. All right. So now what we're going to do is show that not only do limit points exist nicely, but the distances between limit points act nicely. So to state that more clearly, theorem-- let y be an x and xn converge to x. Then my claim is that the distance from xn to y will converge from the distance from x to y, which makes relative sense. But notice here the statement is slightly different. Before, we were purely dealing with sequences of a metric space. And so we had to deal with the metric d itself. But when we're saying that this converges to d xy, what space am I considering these distances in? Anyone? No worries. We're looking at it in r, right? Because for every single n, this is just a real number. And so when I say the distance from xn to y converges to the distance from x to y, I mean that this happens in Euclidean space. And in fact, you can likely-- I don't want to state explicitly, but I'm pretty certain can prove this as well in a general setting, so between two metric spaces. OK, so let's prove this. To do this, ultimately, we want to show that for all epsilon bigger than 0, there exists an n in the natural numbers such that the distance from xn to y minus the distance from x to y is less than epsilon in absolute values. And to do so, we can just find an upper and lower bound on the distance from xn to y. So let's do that. The first direction is nice. The distance from xn to y is less than or equal to, by the triangle inequality, the distance from xn to x plus the distance from x to y. And we can make this term arbitrarily small. So I'm just going to choose the same n to be the one for this convergent sequence. So this is less than epsilon plus the distance from x to y. So this gives us the upper bound that we're wanting on the distance from xn to y. Let's prove the lower bound. 
And the lower bound is very similar-- it's just a slightly different manipulation. This time, I'll look at the distance from x to y. By the triangle inequality, this is less than or equal to the distance from x to xn plus the distance from xn to y. And then what we can do is subtract. So specifically, we'll get that the distance from x to y minus the distance from x to xn is less than or equal to the distance from xn to y. And we can make the term we're subtracting arbitrarily small. So for n bigger than or equal to capital N, the distance from xn to y is bigger than the distance from x to y minus epsilon. And notice that this implies the result, because together with the upper bound, the distance from xn to y is trapped between the distance from x to y minus epsilon and the distance from x to y plus epsilon. So what this tells us is that the distance from xn to y minus the distance from x to y gets arbitrarily small, right? I can write that out, if y'all would prefer, but that's just the next step-- you write it in terms of the absolute values. OK? Any questions? So not only are limit points unique, but the distances to limit points also behave nicely, at least in the real number sense, which is pretty nice. I'll remark-- one proposition, which is pretty interesting-- you can study this for two convergent sequences at once. I'll state this in two parts. Suppose xn converges to x and yn converges to y. Then the distance from xn to yn converges to the distance from x to y. This relatively makes sense, but we can also state this slightly differently-- a different type of this theorem. Suppose xn and yn are Cauchy. So they're Cauchy sequences-- they get arbitrarily close together, but don't necessarily have a limit point. Then we know that the distance from xn to yn converges. Notice that I'm not saying what it converges to. I don't know that Cauchy sequences have limit points in my metric space. And we'll get back to that in a moment. But nonetheless, the sequence of distances itself, in the real numbers, will converge. And that uses the fact that the real numbers are Cauchy complete. Now, I'm not going to prove these two facts. These are facts that are on your problem set, but I do want to point out-- because this happened a lot last year-- you cannot assume that the limit points in the second part exist. You can note that the second statement implies the first one, by uniqueness, because convergent sequences are Cauchy, as we're going to prove right now. So if you prove the second one first, then you can make a small remark-- you have to explain why the limit of the distances is the distance from x to y, but that's not too bad. So let's prove that theorem. Is that the next theorem I wanted to say? Yeah, I'll do that one now. Convergent sequences are Cauchy sequences. Suppose xn is the convergent sequence we're considering, and that xn converges to x. Then we know-- and I'm going to say this over and over again, just like we do in real analysis, but at a certain point it just becomes second nature-- that for all epsilon bigger than 0, there exists an N in the natural numbers such that for all n bigger than or equal to N, the distance from xn to x is less than epsilon. So what does this tell us? Well, what we're ultimately interested in is the distance from xn to xm, right?
Let m be bigger than or equal to n. What we're interested in is the distance from xn to xm to state that it's a Cauchy sequence. So what can we do? Anyone? Want y'all to think about it because we want to bound this and show that this is less than epsilon. AUDIENCE: Triangle inequality. PAIGE BRIGHT: Exactly. The triangle inequality. This is less than or equal to the distance from xn to x plus the distance from x to xm. And we can make both of these less than epsilon. Now, if I was really careful, I could have made these epsilons over 2. And then this would have been exactly epsilon. Most of the time, if you show it's less than a constant times epsilon, you're good. You would just have to relabel things. So all good things here. But yeah, exactly. You apply the triangle inequality. Oh. And then this is precisely what we want for Cauchy sequence, right? This implies that it is, in fact, a Cauchy sequence. So we're done. Now, the real question is, if Cauchy sequences are convergent sequences-- and as we know from the real numbers, from the real numbers, we know that there it's Cauchy complete, but here we don't. And what do I mean by Cauchy complete, which I've stated a few times? Definition-- a space is Cauchy complete if and only if-- i.e. the definition-- the Cauchy sequences are convergent. So this is the definition of what Cauchy complete actually means. And this is a very powerful fact, right? We use Cauchy completeness everywhere in real analysis to state that limit points exist, to state that differentiability was nice. Things like that-- we needed Cauchy completeness there. Now, not every space is Cauchy complete, unfortunately, but the ones that we really want them to be are, in fact. So proposition-- this is on your homework-- we have that yeah, the set of continuous functions on 0 to 1 is Cauchy complete. This is a statement on your homework that I've broken up into individual steps. So this should be a little bit nicer. But yeah, a very useful proposition to show. OK. Now, let me go back to what I meant to do slightly earlier. OK. We have a few more things to say about convergent sequences. And to do so, I'm going to define what it means for a set to be bounded, right? When we are looking at real analysis, one of the most useful theorems we had in our tool belt was the Bolzano-Weierstrass theorem. Now, we're not going to have an analog of it, but we still have some statements akin to it that we want to show. And this will be especially important as we build up to compact sets. So definition-- a sequence xn is bounded by, let's say, b bigger than 0 if, for all n in the natural numbers-- oh, sorry. I should say if there exists a point p in your metrics base such that for all n in the natural numbers the distance from xn to p is less than B-- which is what we would expect. We want the sequence itself to be-- the distance between the point and your sequence to be bounded. That's what it means for a sequence to be bounded. And if you want to picture this, I always like to draw little pictures in my notes about this. If this is our metric space x, here's our point p. And what this is stating is that there exists a radius large enough of radius B such that it completely contains x. Now, granted, what does it mean to have stuff outside of x? Nothing, really. So really, we just mean the region in the middle is contained in the ball of radius B. But that's a nice picture, if you prefer. And not only can we say this about sequences in general, we can also define what this means for a set. 
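Before the set version, here is a quick numerical sketch of the bounded-sequence definition above: a hypothetical convergent sequence in R^2 under the d-infinity metric, a choice of center point p, and a bound B that dominates every distance.

```python
import numpy as np

def d_inf(x, y):
    return np.max(np.abs(np.asarray(x) - np.asarray(y)))

# Hypothetical sequence x_n = (1/n, 1 + 1/n), which converges to (0, 1).
seq = [np.array([1.0 / n, 1.0 + 1.0 / n]) for n in range(1, 1001)]
p = np.array([0.0, 1.0])                          # a convenient center point

distances = [d_inf(xn, p) for xn in seq]
B = max(distances) + 1                            # any number above the maximum works
print(all(d < B for d in distances))              # True: the sequence is bounded by B
```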
A set A in x is bounded by B if, for all p in x-- oh, sorry. If there exists a p in x such that for all A in A the distance from A to p is less than B. Does the set theory notation that I'm using make sense, everyone? There exists, and for all? Cool. So this is what it means for a sequence in a set to be bounded. Why would I bring this up? Because convergent sequences are bounded sequences. We have this from real analysis. But I'm going to prove it now in metric spaces. So proposition-- or I'll just write it out how I've been writing out the other ones. If xn converges to x, then xn is the bounded sequence. OK. Let's start off with the proof, which will start off, as all of our statements have so far, with the statement of what convergence means. In the statement of what convergence means, we're going to choose the epsilon that I want so that I'm not dealing with all the possible epsilons. So I'm going to note for epsilon equal to 1 bigger than 0, we know that there exists an n in the natural numbers such that for all n bigger than or equal to n, the distance from xn to x is less than epsilon, which again, I'm assuming is 1. Now, does this complete our statement? The answer is no. Here we only have that this is true for all n bigger than or equal to capital N, right? But what we want for our statement is to show that for every single natural number, this sequence is bounded. But this isn't an issue because what we can note is that there are finitely many terms less than n in the natural numbers, right? So what we can do is that B be the maximum of the finitely many terms of the distance from xi to x, or, let's say, 1 for i between 1 and capital N. So what I'm doing here is I'm noting that here I have my convergent sequence. I know that most of them-- in fact, infinitely many of them-- are contained in a ball of radius 1. And I only have finitely many terms outside of that ball. So what I'm doing is I'm saying, OK, well, it's either in this ball, or in the next one, or the next one, or so on and so forth. This is what the statement is that we wanted to show. And we know that this is, in fact, finite, as there is only finitely many terms. So this is the B that it's bounded by. I can write that out some more if y'all would prefer, but this is essentially the statement. Cool. So I think that's mostly-- oh, there's one more thing we want to say about convergent sequences, which is that their subsequences act nicely, which will be essentially how we want it to be. So proposition-- let xn converge to x and xnk be a subsequence of xn. Then my claim is that xnk is convergent. And in fact, to show that it's convergent, we're going to show that it converges to x, which is what we should expect of a subsequence. Does everyone here know what a subsequence is? Happy to redefine. Cool. So let's go with this statement. So proof-- notice that my subsequence here is arbitrary. I'm not going to rewrite that out, but we could choose any subsequence of our sequence. And what we're interested in is we want to show for all epsilon bigger than 0, there exists an n in the natural numbers such that for all nk now bigger than or equal to n, the distance from xnk to x is less than epsilon. And now, again, any guesses as to what we should do? Triangle inequality. Precisely. Yeah. So we're considering the distance from xnk to x. What we can do is apply the fact that we know that convergent sequences are Cauchy. 
So we can write this as less than or equal to the distance from xnk, let's say, to xm for m bigger than or equal to n plus the distance from xm to x. And now, I can choose a larger natural number, if necessary, such that this term gets less than epsilon over 2. And this term, because it's Cauchy, is less than epsilon over 2. So without writing all those steps, this is less than epsilon. So this shows that the distance from xnk to x gets arbitrarily small. So therefore, it's convergent. Yeah. This is fairly important. And it highlights one of the many ways in which Cauchy sequences are deeply important, as we'll talk about in the fifth lecture in the specific module of this class. Having things be Cauchy complete-- really, really helpful. OK. Oh, one more note. This is something that I vaguely noted on Tuesday, but these are all of the major theorems and propositions that we're going to need for convergent sequences. You might be wondering why there aren't more. We had a ton more in real analysis. In fact, it took a month of our time. The reason we don't have more is because the real numbers are a vector space. I can add two real numbers and get a real number. I can multiply them and get a real number. Everything there was nicer, and we had addition defined. So one thing we could state in real analysis is that for instance, the sum of two convergent sequences is convergent. Here that doesn't make sense, necessarily. You can't always add two points in a metric space and get a point in your metric space, let alone have addition be well defined. Similarly, we have the squeeze theorem in real analysis. We have a version of it based off of distances, but we don't have a squeeze theorem for points because we don't necessarily have an ordering on our set, right? You can picture the complex numbers. The complex numbers don't have an ordering on them. You don't have a sense in which one complex number is bigger than the other. Yeah, I'm just going to leave it there. If you've done complex analysis, then you'll know what I'm talking about. But yeah. So that's why we don't have more theorems as we used to. So now, we're going to move on to the next major theorem that I've talked about, or the next major definition that I've talked about. We've talked about convergent sequences. We've talked a little bit about Cauchy sequences. Cauchy sequences are helpful, but in terms of a metric space, we've mostly stated what we need for the moment being. Now, we're going to move on to open sets. So I'm going to recall what this definition is because it's the one that can be a little bit the weirdest. So recall-- a set A contained in your metric space x is open if, for every single point in A, there exists an epsilon bigger than 0 such that the ball of radius epsilon around A is completely contained in your set. And this is the set of y in your metric space such that the distance from y to A is less than epsilon. Just to bring it up because sometimes, the notation comes up, this is also sometimes denoted the ball around A of radius epsilon. So use whichever notation you prefer. Here-- I can lift it a little bit up. So use whichever notation you prefer. I prefer this one just because it means less commas, but whichever. OK. So open sets are going to have a huge connection between topology, and continuity, and other definitions that are important. Now, the first thing that we're going to prove is topological properties of open sets. So because y'all haven't taken topology, this is, in fact, really, really helpful. 
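Before those properties, a quick numerical sketch of the ball and open-set definitions just recalled, for the open interval (0, 1) in R: for any point a inside, the radius epsilon = min(a, 1 - a) gives a ball that stays inside the interval.

```python
def eps_for(a):
    """For a point a of the open interval (0, 1), an epsilon whose ball stays inside (0, 1)."""
    assert 0 < a < 1
    return min(a, 1 - a)

for a in [0.5, 0.001, 0.999]:                 # even points very close to the boundary work
    eps = eps_for(a)
    # the open ball (a - eps, a + eps) sits inside (0, 1), since a - eps >= 0 and a + eps <= 1
    print(a, eps, a - eps >= 0 and a + eps <= 1)   # True in every case
```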
We're going to show three major properties of open sets in metric spaces. So a theorem-- and you can call this topological properties, if you want. Or I'll write it out because-- topological properties of open sets. OK. Firstly, we have that the empty set and the entire metric space itself are open. Two, given Ai are open sets, then the union of all of these sets from i equals, let's say, 1 to infinity-- or I'll just use 0, why not-- is open. So in other words, the arbitrary union of open sets is open. And finally, the finite intersections of open sets are open. In other words, if I intersect from i equals 0 to capital M of Ai, then this is open. The fact that we can't intersect infinitely many of them will become apparent in the proof. So in the proof of these three properties, it will become apparent why we can't do infinitely many of them. But let's see this. One, so consider the empty set. How do we know that the empty set is open? Well, it's vacuously true. It's true all the time because for every single point in the empty set, there are no points in the empty set. So as soon as that statement's false, we have that the empty set is open. So this is open vacuously. Two, let's consider the set x. Well, for all epsilon-- or sorry, for every single point x in x, does there exist a ball around x in x? Yes, there is because just pick your favorite epsilon bigger than 0. And then the ball of radius epsilon around that point x is, by definition, a subset of your metric space, right? Because the definition of a ball of radius epsilon is only the point in x. So we're not going to end up with some weird thing outside of x. So this shows that x itself is open. Two, we want to show that the union of open sets is open. How do we do that? i equals 0 to infinity is open. This is what we want to show. Well, as with the definition, we're just going to pick a point in the union. So pick arbitrary x in the union-- equals 0 to infinity of Ai. And we want to show that there exists a ball of radius epsilon around x that's contained in the set. Well, because it's in the union, there has to exist at least 1 Ai that contains x. So therefore, there exists an Aj in your set of open sets Ai such that x is in Aj. And then we apply the fact that Aj itself is open. So therefore, there exists an epsilon bigger than 0 such that the ball of radius epsilon around x is contained in Aj, but this is a subset of the union. So this shows that the ball of radius epsilon is contained in the infinite union, or potentially infinite union. This also works if it's finite, notice. But that's just a subcase. OK. And can I squeeze it in here? No, I cannot. So I will move over here. Any questions so far? Actually, let me move this. Any questions? Open sets are weird in that when we learn about them in real analysis, I feel like they're pretty unintuitively important. Everyone says that they're important, but I never got why. And I'll explain why they are just in a moment. But yeah, open sets are deeply important in real analysis and topology. And we're going to prove some more statements besides these topological properties that will be important and will relate to the major definitions in our toolbox. OK. So now, we're interested in the third case, the intersection from i equals 0 to m of the Ai. Well, we're going to do exactly as we did before. If I have some x in the intersection of the Ai from i equals 0 to m, then what do I know? Before, we knew that it was in any of the Ai's. But here we know even more information. 
Here we know that x is in Ai for every single i. So what can we do? Well, we know that, because each Ai is open, there exists an epsilon i bigger than 0 such that the ball of radius epsilon i around x is contained in Ai. The issue is that Ai itself might not be contained in the intersection, so a ball that only sits inside one Ai isn't enough. So what can we do? We have such a ball for every single i, and we have finitely many of them. So choose epsilon to be equal to the minimum of the epsilon i's, from i equals 0 to capital M. This is bigger than 0, right? Because we're only choosing among finitely many. This is where we're using the fact that it's a finite intersection. If you had an infinite intersection, you can picture taking more and more, smaller and smaller epsilon i's such that the infimum is 0. For instance, the intervals from minus 1 over n to 1 over n are all open, but their intersection over all n is just the single point 0, which is not open. That's the issue-- but if it's finitely many, we can take the minimum of them. And then what we know is that the ball of radius epsilon around x is a subset of Ai for all i, because it's contained in the ball of radius epsilon i around x, which is contained in Ai for all i. And that implies that the ball of radius epsilon around x is a subset of the finite intersection. All right? So why is this called the topological properties of open sets? Because if you were to take 18.901-- which is a class that I would highly suggest you take at some point if you're interested in this sort of analysis-- as we're going to see, open sets allow us to state things about continuity. You can define continuity in terms of open sets. You can define convergent sequences in terms of open sets. Basically, everything we can redo in terms of open sets. And that is a topology. If I have a collection of subsets such that these three properties hold, that's known as a topology. And in 18.901, that's the basic toolbox that you're given. And then you go from there in terms of redefining everything. So this is just an example of something more general than metric spaces. I'm going to talk a little bit about that in the final lecture, where I talk about where things go from here. But for now, this is the basic idea. All right. Now what I'm going to state is some facts about closed sets. But what is a closed set, which I haven't defined before? So this is useful to know. A subset A of x is closed if the complement of A, which is the set x minus A, is open. Now, if a set is closed, does that imply that it's not open? The answer is no. How can we see that? Well, let's consider the real numbers as an example. Notice-- what's the complement of the empty set in the real numbers? Anyone? It's a good exercise in thinking about what the complement means. It's the real numbers minus the empty set. So it's just going to be all of the real numbers. But this, as we've stated before in our first topological property-- this is open. So therefore, the empty set is closed, by definition. But we also know, by our topological properties again, that the empty set is open. What the heck? Well, this is just one of those things in topology. It's weird. You can prove properties of closed sets instead of open sets, and everything's fine. I'll note-- there's a notion called connectedness, which I'll briefly define, but ultimately it will be more of a problem set problem. But if your metric space is connected-- let me state it that way.
If your metric space x is connected, which I'll define in a moment, this is true if and only if the only open and closed sets-- so sets that are both open and closed-- are the empty set and the metric space itself. Now, what does it mean for your metric space to be connected? We have a nice picture of it in our heads because connected is a pretty visual thing, but this is one definition. Once you prove this equivalence, you could take it as the definition. How do you prove this? Well, you prove it based off of the definition of disconnected. So definition-- x is disconnected by definition if there exist two open sets, U1 and U2, that are disjoint-- disjoint-- and nonempty such that the union of them is exactly x. This is what it means for your space to be disconnected. And from here a set is connected if it is not disconnected. So connected is just defined as the negation of disconnected. And then once you have that, you can prove this note, which I'll put in the third problem set, which is an interesting problem. But I just want to note-- showing something is closed does not necessarily tell you whether or not it's open. I'll give one short example of why this is true. And then I'll move on to more properties of open sets. Anyone have questions? Cool. Feel free to shout out questions if you have any, or interrupt. I'm more than happy to be interrupted. OK, so I'm just going to give one example of a metric space that is disconnected, which is one that makes relative sense, at least on the face of it. Example-- I can take a union of two open intervals, 0 to 1 and 1 to 2, and give it the usual metric on R. What do I mean by usual metric? I mean the distance between two points in the set is just the distance normally in R-- absolute values. This is an example of one that's disconnected, right? It is the union of two open sets that are disjoint. And you can notice here the interval 0, 1 is both open and closed because the complement of it is just 1, 2, and that's open. So this is an interesting example to think through. This is known as the subspace metric or the subspace topology, technically. But yeah. OK. Let's go back to proving things about the general theory of open sets. Proposition-- I'm first going to point out that closed sets are very similar to open sets. In fact, we can define everything in terms of closed sets, if we wanted to. Properties of closed sets. Here we have that the empty set and x are closed. Two, the potentially infinite intersection of closed sets, Ai closed, is closed. And three, the finite union of closed sets is closed. Nearly identical to the topological properties of open sets. How do we prove this? It's not too bad. I'm not going to actually do it. The fact that the empty set and x are closed is just the proof that I did above for R, right? You can replace R with just any metric space. And we have that this first property holds. How about the other two? Well, for the other two, I'm just going to quickly note what are known as De Morgan's laws. So this is a nice lemma from set theory. These are known as De Morgan's laws. I'm going to switch which chalk I'm using. This states that the complement of a union of sets Ai is the intersection of the complements of the Ai. And similarly, the complement of an intersection of the Ai is equal to the union of the complements. These are known as De Morgan's laws-- I'll write them out symbolically below. They make relative sense? You can prove them for two sets as opposed to infinitely many of them. But once you have these two properties, then you can rewrite these two properties in terms of open sets.
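For reference, here are De Morgan's laws in symbols; this is just a restatement of what was said on the board, with an arbitrary index set of sets Ai:

\[
\Big(\bigcup_i A_i\Big)^c = \bigcap_i A_i^c, \qquad \Big(\bigcap_i A_i\Big)^c = \bigcup_i A_i^c.
\]

Applying these to complements is exactly how the closed-set properties follow from the open-set ones.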
That's why there's this duality, right? I can write the complement of an intersection of closed sets as a union of open sets. That's how you apply De Morgan's laws. And I'm not going to prove that, or I'm not going to prove these topological properties. This is in the lecture notes, but I do want to note-- you could do everything in terms of closed sets if you really truly wanted to. We just don't because open sets are nice. OK. We'll show why some of those things are actually nice. What more things do I want to note? OK, cool. Now, I want to state some specific examples of open sets in metric spaces, which we can already do quite a bit of. Example-- did I erase it already? Oh, wait, no. I did it here. Where did I do it? Oh, I did at the very top. The ball of radius epsilon around a point is sometimes referred to as an open ball, but you should prove that the ball is, in fact, open, which will be our first example. So given some point x in your metric space X and epsilon bigger than 0, we have that the ball of radius epsilon around x is open. How do we do this? Well, we choose an arbitrary point in your ball of radius epsilon, right? So proof-- choose some y in the ball of radius epsilon around x. We want to find a ball of radius, let's say, delta around y contained in the ball of radius epsilon around x. That's the definition of open set. Well, notice that the distance from x to y is less than epsilon by definition, right? Because we're assuming that the distance between points in our ball and the center is less than epsilon. So let delta be epsilon minus the distance from x to y, which is bigger than 0. Why does this work? Because then we'll notice that the ball of radius delta around y has to be contained in the ball of radius epsilon around x. Why is this true? Essentially, the triangle inequality, right? We have a ball of radius epsilon centered at x. We have a point y here. And I'm noting that the distance from here to here is less than epsilon, by definition. And I'm just choosing a ball around y that uses up the remaining distance. And then you can apply the triangle inequality to show that this ball must actually be contained in the bigger one. Anyone want me to work through that? I'm happy to. Oh, wait a moment. Let me write it up. You know what? I'll write it out on the back board. So let z be in the ball of radius delta around y. Then we have that the distance from x to z is less than or equal to, by the triangle inequality, the distance from x to y plus the distance from y to z. And the distance from y to z is less than delta, which is epsilon minus the distance from x to y. So the whole thing is less than epsilon. This is in the lecture notes, and a clean version of the argument is written out just below. I would highly suggest drawing up the picture and working through it separately. But loosely speaking, it's just this diagram on the right-hand side. It's an application of the triangle inequality. OK. So the ball of radius epsilon is an open ball. In fact, you can state much more than this. This is an optional problem on your second problem set. And for the record, I do suggest y'all look at the optional problems because they will potentially give you more intuition, even if you don't solve them.
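For reference, here is the open-ball argument from above written out compactly in symbols-- just a restatement of the proof, nothing new:

\[
y \in B_\varepsilon(x),\quad \delta := \varepsilon - d(x,y) > 0,\quad z \in B_\delta(y) \implies d(x,z) \le d(x,y) + d(y,z) < d(x,y) + \delta = \varepsilon,
\]

so \(B_\delta(y) \subseteq B_\varepsilon(x)\), and therefore \(B_\varepsilon(x)\) is open.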
But yeah, this is just one of those cases where this isn't a topology class, so I'm not going to ask that y'all do a bunch of topology, but it's pretty nice. OK. So just a small note. Any open set U contained in your metric space can be written as a union, a potentially infinite union, of open balls. I'll outline briefly how to do this right now. So if y'all want to do it later, you can do it. But the idea is that for any single point in your open set, there exists a ball of radius epsilon around that point. And then you can just take the union of all those balls of radius epsilon. The epsilon might change for every single point, but that doesn't matter, right? Because it's an infinite union. So I'll let y'all work through those details, but this is just a proposition you can show. OK, let's also just prove an example of a closed set, which will come important for next time. Just an example. Oh, sorry. I should say let x be a point in your metric space. Then the set x is closed. So this is just a singular point in your metric space. How are we going to prove this? Well, we only have one definition of closed. We have that the complement is open, right? So proof-- consider the complement of x, which is x not including x, and let y be a point here. What we want to show is that there's a ball of radius epsilon-- want to show-- around y-- or not in-- a subset of x not including x. Let's draw a picture of what this should actually look like, right? Well, I didn't want to do that. Here's our metric space. Here's our singular point x. And here's our point, let's say, y. We want to choose a ball around y that doesn't contain x. How can we do it? Anyone? You could choose whatever epsilon you want, so it's a question of what epsilon you want to choose. We can choose epsilon to be 1/2 the distance between x and y. I think that's what I was mixing up on the other proof, but it's in the lecture notes, so I'll let y'all do the other one. But choose epsilon to be the distance from x to y over 2. And then pictorially, we have the rest of the problem, but let's actually work through the details. So then let z be in your ball of radius epsilon around y. Then we want to show that z cannot be x, right? Well, notice-- this implies that the distance from y to z has to be less than epsilon, right? This is by construction, but can we get a lower bound on this? We can. We can note that the distance from-- I want to make sure I'm getting this right. Actually, you know what? I'll do this directly, or I'll do this by contradiction. Suppose, for the sake of contradiction, x was in the ball of radius epsilon around y. Then this would imply that the distance from x to y is less than epsilon, but this can't be the case because again, we're assuming that epsilon is the distance from x to y over 2. And the distance from x to y cannot be less than the distance from x to y over 2. So that is our contradiction. I'll make one more small note here, which will become important for Tuesday, which is, as opposed to looking at a singular point, you can do this with a finite union of points, right? And the argument would be essentially the same as our finite intersection of open sets argument. You have a finite Union of open sets-- or sorry, a finite union of closed things. So it should be closed. In fact, you could use that to prove it immediately. But you can also prove it directly, like this. OK. So now, what we're going to do is talk about how this definition of open sets relates to convergent sequences and to continuity. 
Proposition-- to do this, I'm going to do it in the real numbers, but note that everything can be done in your metric space. I just want to use the notation from the real numbers. Let xn be a sequence-- am I writing it backwards? I am-- in R. Then xn converges to x if and only if, for all epsilon bigger than 0, all but finitely many xi are in x minus epsilon to x plus epsilon. So this is the ball of radius epsilon around x, right? How do we prove this if and only if? Proof-- well, OK. Let's prove the forward direction first. So if xn converges to x, then for all epsilon bigger than 0, there exists a capital N in the natural numbers such that for all n bigger than or equal to capital N, xn is in the interval x minus epsilon to x plus epsilon. This is true because the distance from xn to x is less than epsilon. This is just the open set way to view it, right? And this capital N-- notice that there are only finitely many terms with index less than it. So this shows that infinitely many of the terms are in this open set. And in fact, only finitely many are outside of it. So that proves the forward direction. Let's prove the opposite direction. I should write-- suppose, for all epsilon bigger than 0, all but finitely many xi are in x minus epsilon to x plus epsilon. I want to show that the sequence converges to x then, right? Well, how can we do that? We can construct a subsequence that is going to be convergent and use the hypothesis to pin down the limit. OK. So take epsilon equal to 1 over m for m in the natural numbers. Then choose a term x sub n m in the interval x minus 1 over m to x plus 1 over m. What does this tell us? I claim that this subsequence converges to x. Well, to show that, we need to write down the definition. When Professor Rodriguez was erasing the board, he'd always tell some joke, but I don't have any jokes right now, unfortunately. Though he would do them two at a time-- so he could actually have time to actually say the joke before he finished. OK. So we want to show that x sub n m converges to x, right? Proof-- for all epsilon bigger than 0, we can choose capital M large enough such that for all m bigger than or equal to capital M, we have that 1 over m is less than epsilon, right? Because epsilon is bigger than 0, I can find a large enough natural number such that 1 over that natural number is less than epsilon. That's a fact from real analysis. So what this tells us is that we know, then, that x sub n m is in x minus epsilon to x plus epsilon, because it is in the interval from x minus 1 over m to x plus 1 over m, which is a subset of the ball of radius epsilon, by construction, right? Because 1 over m is less than epsilon. So what this tells you is that the distance from x sub n m to x is less than epsilon, which implies that the subsequence converges to x. So to reiterate how the proof went-- we built a subsequence that converges to x. And since, for every epsilon, only finitely many terms of the original sequence lie outside x minus epsilon to x plus epsilon, past some index every term of the original sequence is within epsilon of x, so the whole sequence converges to x as well. So we're done. All right. I won't write out the analogous statement for metric spaces. You just change x minus epsilon to x plus epsilon to the ball of radius epsilon-- it's written out symbolically below.
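For completeness, here is the metric-space version of this proposition in symbols-- the only change is replacing the interval with a ball, as just described:

\[
x_n \to x \ \text{in } (X,d) \iff \forall\, \varepsilon > 0,\ x_n \in B_\varepsilon(x) \ \text{for all but finitely many } n.
\]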
I just stated it in real numbers because I prefer the interval notation to think about it. And it's a fact that not everyone fully sees in real analysis because sometimes, you don't know what an open set is in 100A. It depends on who's teaching it, realistically. OK. We're going to prove one more-- oh, wait. What do I want to say? Yeah, OK. This was just how I wanted to show that it's related to convergent sequences. Now, what I'm going to do is show that it's related to continuous functions. And I'm going to recall what that definition is because that can, again, be a little bit weird between metric spaces. So a function f from a metric space x to a metric space y, let's say, with metrics dx and dy, is continuous if, for all epsilon bigger than 0, there exists a delta bigger than 0 such that if the distance between two points x and y is less than delta, then the distance in the metric space y of the images of x and y is less than epsilon. This is the definition of continuous in the metric space setting. And so now what we're going to do is prove a statement akin to convergent sequences, but for continuity. We can write the definition of continuity in terms of open sets. And not only can we do that, we can write continuity in terms of convergence of sequences. So I'll do that right now. Theorem-- under all the same assumptions here I'm going to let x and y be metric spaces with distances dx and dy. And yeah, OK. Claim-- f from x to y is continuous at a point c in x if and only if, whenever xn is a sequence that converges to c, then f of xn converges to f of c. This is my statement. When I say continuous at the point c, I mean just replace y with c in the definition up there. And f is continuous if it's continuous at every point x in x, right? This is the same as in real analysis. We had continuity at a point and continuity everywhere. This is just continuity at a point. OK. Proof-- I always prefer going the downwards direction first. So let f be continuous at c. Then by definition, for all epsilon bigger than 0, there exists a delta such that this implication holds. What does this actually tell us? Well, I'm going to suppose the thing in the second hypothesis. Suppose xn is a sequence in x that converges to c. Then what can I say? Fix epsilon bigger than 0. By continuity at c, there exists a delta bigger than 0 such that whenever the distance from xn to c is less than delta, the distance from f of xn to f of c is less than epsilon. And by the definition of convergent sequence, there exists a capital N in the natural numbers such that for all n bigger than or equal to capital N, the distance from xn to c is less than delta. Putting those together, for all n bigger than or equal to capital N, the distance from f of xn to f of c is less than epsilon. And so what this tells you is that f of xn must converge to f of c, right? Because for every epsilon there exists a capital N past which the distance between the images is less than epsilon, which is the definition of convergent sequence.
And then to prove the other direction, I'm just going to state it because-- are we running out of time? No, I'll write it out. I'll write it out over here. To prove the other direction, we're going to do it by contradiction. We're going to assume that it's not continuous at c, and then choose a sequence that converges to c but such that the images don't converge. OK. Suppose f is not continuous at c. This is the proof of the upwards direction. What does this tell you? Well, saying something is not continuous is a little bit weird, but I'll state out what that negation means. There exists some epsilon bigger than 0 such that no delta works-- that is, for every delta bigger than 0, there is a point within delta of c whose image is at least epsilon away from f of c. So for each n, take delta to be 1 over n, and pick an xn such that the distance from xn to c is less than 1 over n, but the distance from f of xn to f of c is bigger than or equal to epsilon. And so then what you can show is that therefore, xn converges to c, but f of xn does not converge to f of c. How do we know that it doesn't converge to f of c? Because we just proved up here that if a sequence converges, then all but finitely many of its terms must lie in any ball of radius epsilon around the limit. And we just showed here that there are, in fact, infinitely many terms-- all of them-- outside the ball of radius epsilon around f of c. So that proves the other direction. This is known as sequential continuity, sometimes. If you want to study this in more generality, as opposed to assuming continuous, you can assume this definition on a topological space. And that's slightly different. But for metric spaces, the two definitions are the same, which will be the case a lot of the time. A lot of the time in this class, a definition that's specific to metric spaces will imply other very useful definitions that might not hold in more generality. OK. I just have one more statement to show, which is how continuity relates to open sets. Let me write out what this lemma will be. Oh, sorry. This is poor board manners. I should have raised this and lowered this. OK, OK. Just one more lemma. f is continuous at c if and only if-- oh, I need to state a definition first. I'll state it over here. Definition-- a neighborhood of a point y is simply an open set containing y. For our purposes, though, you can just think of this open neighborhood as a ball, right? Because I've stated that every open set can be written as a union of (possibly infinitely many) open balls. So this is what the definition of a neighborhood is, and that lets us state the lemma, which is-- for every open neighborhood of f of c-- this will be our y-- we have that the inverse image, f inverse of-- oh, sorry. I should say for every open neighborhood U of f of c, f inverse of U is an open neighborhood of c, because we're in the inverse image. OK. Has anyone seen this definition before regarding open sets? Yeah. It doesn't come up all the time because it is essentially topology. If you take a topology class, you would see this definition of-- this would be the definition of continuity in a topology class. But here we're going to actually prove it for a metric space.
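Written out in symbols, and being slightly careful since we are only assuming continuity at the single point c, the statement being proved can be read as follows (an editorial restatement of the lemma above):

\[
f \ \text{continuous at } c \iff \text{for every open } U \ni f(c),\ \text{there exists } \delta > 0 \ \text{with } B_\delta(c) \subseteq f^{-1}(U).
\]

If f is continuous at every point, this becomes the familiar topological statement that U open in Y implies f inverse of U open in X.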
And because not all of y'all have seen it before, we'll just go through that proof. So proof-- we'll prove the downwards direction. So we have that f is continuous at c. And I'm going to let U be an open neighborhood of f of c. And so then I want to show that f inverse of c-- oh, sorry, f inverse of U-- is an open set of c, open neighborhood of c, right? What I'm going to do instead-- instead of considering this entire open neighborhood, I'm going to consider a ball of radius epsilon around f of c. So because U is open, we know that there exists an epsilon bigger than 0 such that the ball of radius epsilon with metric dy around f of c is contained in U, right? We can find a ball around f of c such that is contained in U because it's open. What does this tell you? Well, by continuity, furthermore, there exists a delta bigger than 0 such that if the distance from x to c is less than delta, then the distance of f of x to f of c is less than epsilon. What does this tell you? Well, this tells you that the image of the ball of radius delta around c is contained in the ball of radius epsilon around f of c, right? Because if I look at the images of these points, I'm going to get that the distance is less than epsilon around f of c. And what you can check-- I won't state this right now, or I won't prove it-- but you can check that if I do f inverse here, it's going to be contained in f inverse here. Why is this helpful? Because this is a subset of f inverse of U. So what we've done is we've gone from an open neighborhood of f of c to an open neighborhood of c, which is what we wanted to show, right? We wanted to show that f inverse of U is an open neighborhood of c. OK. So we now need to prove the other direction, which is sometimes a little bit harder. Only a little bit harder, but should be able to fit right here. OK, so now we're going to suppose that let epsilon be bigger than 0. And consider the open neighborhood of f of c given by the ball-- and consider the ball of radius epsilon around f of c. Then we know, by assumption, that f inverse of this ball of radius epsilon of f of c-- 1, 2, 3-- is an open neighborhood of c. But because this is an open set around c, there exists a delta bigger than 0 such that the ball of radius delta around c is contained in f inverse of the ball of radius epsilon of f of c, right? Because this is an open set. And therefore, we know that there exists a radius such that the ball of radius delta of that radius is contained in the open set. And then we're done, right? Because then we can apply f to both sides. And we get that f is a ball of radius delta around c is contained in the ball of radius epsilon of f of c, which is exactly the statement of continuity. You can just write this out in terms of the metrics, right? Because then the distance from x to c being less than delta implies that the distance from f of x to f of c is less than epsilon. So this concludes the proof. OK. This was the general theory of metric spaces. We've talked about convergent sequences, which was essentially just like the real numbers. We talked about open sets-- specifically, the topological properties of them, which is important for topology. And then we related those two concepts to continuity, right? We talked about how continuity is sequentially continuous at a point c if it's continuous on the metric space. And we've talked about how continuity can be defined in terms of these open sets. This is by no means a simple connection. 
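To keep the different characterizations straight, here is a summary of the equivalences from this lecture, for a map f between metric spaces and a point c-- this is only a restatement of what was proved above:

\[
f \ \text{continuous at } c
\iff \big(x_n \to c \implies f(x_n) \to f(c)\big)
\iff \text{for every open } U \ni f(c),\ f^{-1}(U) \ \text{contains some ball } B_\delta(c).
\]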
It can take a while to feel comfortable with these ideas, but I highly suggest, if you're finding this a little bit confusing, to try and draw out the pictures. Why does this conclusion hold? Why can I apply f to both sides and have things to be fine? It's just a practice in set theory, but it is helpful to do at some point. OK? Starting next time, we're going to start talking about something you might not have seen before at all known as compact sets. Until then have a great weekend. Remember that the p set is due on the 13th, OK? All right, have a great day.
MIT 18.S190 Introduction to Metric Spaces, IAP 2023 -- Lecture 3: Compact Sets in Rⁿ
[SQUEAKING] [RUSTLING] [CLICKING] PAIGE BRIGHT: So far this class, we've talked about a lot of examples which were over and over again the same three concepts. And then we talked about the general theory, which is new because it's on metric spaces, but it's very similar to the stuff we've already seen for Euclidean space, right? Compact sets will be slightly different. It's not that compact sets don't exist in Euclidean space. It's that they're pretty nice there. So I hope to highlight why it's so nice on Euclidean space, but also highlight the definitions that will become more difficult to manage on metric spaces. All right. So what is so amazing about compact sets? Well, if you look up what's the intuition behind compact sets, the first thing you'll see on Stack Exchange is a conversation about finiteness. Now, we have finiteness in two senses-- one in terms of integrals. One is an integral finite. And secondly, we have it in terms of finite sets. We can consider what statements can I make about finite sets? These two notions drive the understanding of compact sets. And so today, my goal is to start off with motivation using norms to describe finiteness, and then I'm going to talk about finite sets and the analysis we could do on those. OK, so let's start with norms. So to define a norm, which you might have heard about before, either in this class or in 1802, we're going to need the notion of a vector space. And so let's define what that is now. A vector space is a set V with addition which takes in 2 vectors-- so V cross V-- and maps it to another vector. So when I add 2 vectors, I get another vector. And secondly, multiplication-- if I multiply by, let's say, a real number on a vector, I should get another vector back out. So this is very fancy notation, but essentially, what this is saying, again, is that if I have 2 vectors, I can add them together and get another. And if I multiply it by a real number, I'll get another vector. And specifically, a vector space is a set V with these 2 operations on it with nice properties. So those nice properties that we want it to define or want it to have are as follows. I'll just state a few of them just to point out why they're so nice. But essentially, it's the field axioms. We want this vector space to be one, commutative. So the order in which I add two things doesn't matter. "-tative." Two, associative, meaning that A plus B plus C is A plus B plus C. We want there to be a distributive law. "-butative." And I'll just state one more. We have the identity, by which I mean 0 and 1-- or sorry. I should say 0 is in your vector space, and minus V is in your vector space, so I can always get back to 0. There's a few more axioms than this to define a vector space. This is mostly covered in 1806. But I'll just state a few of them right now. Now, the reason that vector spaces are so helpful is because of this notion of addition, right? As we've talked about before, for metric spaces, there's not too much more we can do. We can't state facts about how you add two elements in a metric space because it doesn't always make sense. We don't have addition always defined. So just to recap, today, we're talking about compact sets. And I just somewhat rigorously defined what a vector space is. So didn't miss too much. All right, so now that we have this definition of a vector space, we're now going to define what a norm is. A norm. A norm is essentially going to be like a metric on your vector space. 
It's a map that's denoted like this-- which will make a bit more sense in a moment-- a map taking in vectors in a vector space V and spitting out a non-negative number. And in particular, we want this norm to satisfy three properties, just like we did for metric spaces. One, of course, we want positive definiteness, where here we want the norm of V bigger than or equal to 0 for all V. And the norm of V is equal to 0 if and only if V equals 0. So that's the definition of positive definiteness. Now, why didn't this work on metric spaces? Because there we didn't have a notion of what it meant for there to be a zero element in your space. But here we do, right? In the vector space, we're assuming that we have an element 0 in our vector space. So we're all good there. Positive definiteness is essentially the same. Secondly, we have homogeneity, which essentially just states that we can pull out constants. So the norm of lambda V, where lambda is just a constant and V is a vector, is equal to the absolute value of lambda times the norm of V. And finally, we have the triangle inequality. But here we don't need to have three vectors because the norm itself is just defined in terms of one. So all we need is that the norm of V plus W is less than or equal to the norm of V plus the norm of W. This is it. These are the three properties that define what a norm is. And in fact, you can see right here-- the setup of this is very, very similar to metric spaces, the only difference being that we now know what it means to multiply by constants. We now know what it means for a vector to be 0. And we have addition defined on our space. OK? Now, let's look at some examples of norms. And the examples that we're going to look at are, in fact, actually very similar to metrics themselves. So example one-- suppose we're looking at the set of continuous functions on the interval a to b-- or actually, I'll just do 0 to 1 for now so that it's simpler. Then consider the norm acting on this C0 of 0 to 1, mapping to non-negative numbers, with the norm of a function f being equal to the supremum over all x in 0 to 1 of absolute value f of x. I claim that this is, in fact, a norm. So let's show the three properties. Proof-- well, the first one-- positive definiteness-- this one's mostly done. We know that it's positive or non-negative. More accurately, it's non-negative, but let's look at the case where it is equal to 0. So if the norm of f equals 0, which again, under the definition, is the supremum over x in 0 to 1 of absolute value f of x, this implies that the function must be 0 everywhere, right? Assume it wasn't 0 at one point. Then the supremum would be at least the absolute value of f at that point, which is bigger than 0. So this implies that f must be equal to-- this is true if and only if f equals 0 everywhere. So we're good on that front. Secondly, homogeneity-- this one isn't that bad as well because then we have the norm of lambda f being equal to the supremum of absolute value of lambda f of x. And of course, we can pull out this lambda now because it's just a constant times the function. So this is, in fact, equal to the supremum of absolute value lambda times absolute value f of x. And then we can pull out lambda. So then this would be absolute value lambda times the supremum of absolute value f of x, which is absolute value lambda times the norm of f. So we're all good on homogeneity. And I wasn't going to prove the last one again. Actually, I'll just do it because why not? Because we're going to look at one more example after this anyways. So for the triangle inequality, here we have the absolute value of f of x plus g of x.
This is less than or equal to absolute value f of x plus absolute value g of x. And then on the right-hand side, I can apply the supremum, right? So applying the supremum on the right-hand side to both f and g will get that it's less than or equal to the supremum of absolute value f of x plus the supremum of absolute value g of x, which is, of course, the norm of f plus the norm of g. And then I can take the supremum of the left-hand side to complete the proof, right? So that's the whole proof-- all we used was bounding by the supremum, which is also how we could have done the corresponding proof for the metric. This is just a slightly more elegant way to do it. All right. So that's it on this example. And this is, again, essentially what we looked at last time when we were looking at the metric, except there we were looking at the distance between two functions. Here we can just look at the norm of a single function. All right. Let's look at one more example, which would be akin to what we were looking at before. So I'm going to define an L1 norm. This notation will come up a bit later in lecture 5, but here we have the norm acting on C0 again-- so continuous functions on 0 to 1 and spitting out a non-negative number, where here specifically, the norm of a function f-- let's say L1-- is equal to the integral from 0 to 1 of absolute value f of x dx. And now, let's just talk through why this is, in fact, a norm. One-- positive definiteness, we've already talked about in regards to metrics, right? This is definitely going to be non-negative because it's absolute values on the inside here, and we're integrating over 0 to 1. And its positive definiteness comes from continuity, right? Because if f is not 0 at some point, then by continuity it's not 0 on a little interval around that point, so the integral is bigger than 0. Homogeneity follows from homogeneity of the integral, right? Because we can pull out constants if we multiply them by f. And finally, the triangle inequality is precisely the same as before because we can apply the triangle inequality to, let's say, f plus g, and then separate out using linearity of the integral. So again, the three statements are very analogous. So you might be wondering, why do we care, right? Why are we looking at norms or norm spaces? Oh, I should say if you have a vector space with a norm on it, then it's called a norm space. I'll write that down, in fact. Definition-- a vector space V with a norm on it is called a norm space. Space-- just like a metric space. So why do we even care? Well, firstly, 18102 talks about this concept quite a bit further, as you can note. We're doing all of this on vector spaces. And what this lets us do, essentially, is linear algebra. Once we start studying norms, we can look at linear algebra on vector spaces, and in fact, look at infinite dimensional vector spaces. That's the major difference. If that terminology does not make sense because y'all haven't done a linear algebra class, necessarily, totally fair. Don't worry about it. It's just what happens in 18102, which is a class I highly recommend. But secondly, the process of figuring out that a norm is, in fact, a norm is very similar to showing that a metric is, in fact, a metric. It's just checking all three properties and going through the details. So that's the second reason why I bring it up here. But the third reason, in particular with compact sets, is that it's going to help us understand this notion of finiteness.
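For reference, the two norms from these examples written out in symbols-- just a restatement of the boards above, on the space of continuous functions on the unit interval:

\[
\|f\|_\infty = \sup_{x \in [0,1]} |f(x)|, \qquad \|f\|_{L^1} = \int_0^1 |f(x)|\, dx, \qquad f \in C^0([0,1]).
\]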
So for instance, one question we could ask-- question-- when is the L1 norm of a function finite? Well, here it's pretty straightforward, right? Because here we're only looking at functions on 0 to 1. So if it's continuous on 0 to 1, we know that it attains a maximum value. And we know that this integral is always going to be finite. But I guess I should say more rigorously, this is clear if we're on 0 to 1. But what about when we're looking at functions on R? Well, here we can similarly define L1 functions on R to be the set of functions-- this is loosely-- functions f such that the integral from minus infinity to infinity of absolute value f of x dx is less than infinity. So then this turns into a question of when are these integrals, in fact, finite? And there's a few answers to this, but the one that I want to point out is perhaps the most natural. The L1 norm of a function f on R will be finite if f is 0 outside of some interval. So let's just say the interval is minus n to n. Right? If I have a function that is oscillating wildly in between two points, that doesn't matter, as long as outside of some interval-- let's say minus n to n-- it is 0, right? Then we're going to know that the integral is, in fact, finite. This is not an if and only if statement, of course. We could have functions like a Gaussian. So note-- this statement is not an if and only if. We can have functions like e to the minus x squared, where here it never quite goes to 0, but rapidly decays to 0. So note that the L1 functions are not simply ones that eventually go to 0, but this is a pretty good answer to our question, right? This is a big set of functions. I can just take functions that are continuous and then eventually go to 0. That's a pretty big set. To be more explicit here, this is how it's defined on R, but we can be even more general than functions on R. So if a function satisfies this, we say that it has compact support. But what is the support of a function? Well, definition-- consider some f, a function on R. And then consider the set where it's non-0. So the x such that f of x is non-0. One question I could ask you before I get into what the actual definition is that we're looking at is, is this set open, or is it closed? Any ideas? Yeah, it's open-- at least if f is continuous. We can write this equivalently as f inverse of the complement of 0, right? We're looking at the points such that the image of them is non-0. And we know that the singleton set 0 is a closed set, which means that the complement is open, and the inverse image of an open set under a continuous function is open. So this is open. And then we define the support of a function to be the closure of the set. So I should say, the x such that f of x is non-0-- we define the support of the function to be the closure of that set. I'll restate this in symbols below. Now, I haven't defined what closure is. What closure is is the smallest closed set that contains it. All this is doing is adding the boundary points, right? So if I have a function-- here's a little aside. If I have a function that's non-0 from minus a to a-- I can just draw out a little picture, a function that acts like this for minus a to a. Then the support will be adding the boundary points. So this is a little confusing to anyone who's just watching the board right now, but verbally, that's what's happening. We're just adding the boundary points of our intervals. And this is a closed set. So when I say that a function has compact support, I mean that the support of the function is a compact set. I haven't defined what a compact set is.
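In symbols, the definition just given-- with the bump-function picture from the board as the example, where the particular function is only illustrative:

\[
\operatorname{supp}(f) = \overline{\{x \in \mathbb{R} : f(x) \neq 0\}},
\]

so if f is non-zero exactly on the open interval from minus a to a, then its support is the closed interval from minus a to a, which is closed and bounded.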
I'm going to do that in just a moment, but this is how it's connected to the notion of finiteness, right? Again, the way that we got here was asking a very simple question. If I have a function on the real numbers, when is its integral going to be finite? And a nice answer is if it eventually goes to 0. And that is somehow connected to the notion of a compact set. So any questions thus far? I know I'm being very hypothetical here. I know that I'm talking about a definition without saying what it is quite yet, but this is the logic that we're following. Any questions? Cool. All right. So let's look at some analysis on finite sets. So this was the first part on norms. Now, we're looking at finite sets. So let A be a subset of a metric space x with metric d. And suppose that A is finite. Then I claim that we have three properties of A. One-- let me switch-- every convergent sequence-- actually, I'll just say every sequence of points of A-- so let's say xi-- contained in A has a convergent subsequence. And I'm going to prove these things, right? If you're looking at this and wondering how are these statements true, I'm going to quickly prove them, but they aren't too hard to show. Two, A is closed and bounded. And three, if f is a function on A-- so f is a function that maps A to R-- then f has a maximum and a minimum. These are three properties of finite sets. And let's actually prove these three properties, right? It's pretty nice to show them. OK. So let's prove these three properties. Proof-- consider some sequence xi in A. How can we find a convergent subsequence then? Well, because there's only finitely many points we can consider, and a sequence has infinitely many terms, we know that one of those terms must be repeated infinitely many times, right? Well, let's just say there exists an xj in A with xj appearing infinitely many times in our sequence. And because it appears infinitely many times, this is going to imply that the sequence of xnk's, which are simply the points in our sequence which are equal to xj-- this is a convergent subsequence. The fact that it's convergent is simply stating that if I have a sequence that's the same point over and over and over again, it's going to converge to that point. Right? So again, to reiterate-- here we had a finite set. And we had a sequence on our finite set. And we wanted to say that it has a convergent subsequence. We know that this has to be true because there is at least one term in our set A that's repeated infinitely many times in our sequence. And then we can just choose the subsequence to be that point over and over and over again. So that's our convergent subsequence. Two-- how do we know that it's closed and bounded? Well, the closed part follows from last time, because we showed that if I'm looking at a single point, we know that that is a closed set. And we know that a finite union of closed sets is closed. So we know that it's closed. The fact that it's bounded comes from the fact that there's only finitely many points. So what we can consider is, fix any point p in your metric space x-- so any point-- and consider our bound b to be the maximum of the distances from p to xi where xi is in set A. And here we know that this maximum actually exists, right? Because it's a finite set. So if I'm looking at the maximum of a finite thing, this bound will be finite. So this implies that it's closed and it's bounded, right? We were able to find an upper bound on our set. Cool.
And lastly, three-- the fact that a function on A to R has a max and min follows from the fact that we can just consider the set f of A. This will be a finite set in R. And because it's finite, I can order them, right? So I can just choose the point where the maximum is achieved and choose the point where the minimum is achieved. The thing that's important to note is that the maximum and the minimum are achieved at a point of your set A. But yeah, these are the three properties of finite sets that we can do analysis on. And notice here that we're not assuming that the sequence is convergent. We're not assuming that the function f is continuous. Everything is super nice when we're looking at finite sets. Now, these three properties are ones that we're, in fact, going to show are true about compact sets. Either we're going to show that they're true or we're simply going to define them to be so. Let's go ahead and do that. So this is for the motivation about compact sets. I'm going to define what a compact set is now and prove some nice facts about it. But before I do so, again, any questions about what I've done so far? A lot of motivation before we've actually defined what we're looking at. All right. So I'll go ahead and erase the boards. And I'll move on to the definitions of compact sets. For the definition of compact sets, we're going to have two of them. I'll explain why we have two of them in a moment. But really, we're going to show that on metric spaces, they're the same. But before I define what a compact set is, I just have to say what a cover is. So a cover of a set A is simply a union of sets-- let's say those sets are called Ui-- such that A is contained in the union. Or I should say a collection of sets rather than a union. It's a collection of sets such that A is contained in their union, all right? This is the definition of a cover. And an open cover is the same exact thing, except then I'm going to assume that my Ui's are open. I'm messing with the mic too much. So same definition as a cover. I'm just assuming that the sets that are covering my set A are open. All right. So let's define what a compact set is. And why we needed covers will become apparent with this first definition. Definition-- a set A contained in a metric space x is compact if-- because we're going to have two definitions, we're going to be careful here. I'm going to say it's topologically compact if every open cover of A has a finite subcover. Now, what do I mean by finite subcover? I mean that I can just choose finitely many of these Ui's to still cover our set, all right? And notice that this only depends on open sets, which is why it's called topologically compact, right? This definition of compactness only requires that we know what open sets are, which is why we can do it in topology, 18901. All right. Our second definition is going to be one that has a much closer connection to finite sets-- at least apparently-- or it'll be more apparent. So suppose again that we have a set A in x. And this will be sequentially compact if every sequence of A has a convergent subsequence. And now, this should become apparent-- how it directly relates to finite sets, right? Because there we had sequences. And we knew that every single sequence had a convergent subsequence, right? So these are our two definitions of compactness-- I'll restate both in symbols below. They are not equivalent always. In fact, in topological spaces, these are very different.
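Since both definitions will be used over and over, here they are once more in symbols. This is just a restatement of the definitions above; the one thing made explicit is that, as the examples below use it, the limit of the subsequence is required to lie in A:

\[
A \ \text{topologically compact:} \quad A \subseteq \bigcup_i U_i,\ U_i \ \text{open} \implies A \subseteq U_{i_1} \cup \dots \cup U_{i_m} \ \text{for some finite subcollection}.
\]
\[
A \ \text{sequentially compact:} \quad \text{every sequence } (x_n) \subseteq A \ \text{has a subsequence } x_{n_k} \to x \ \text{with } x \in A.
\]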
But on metric spaces, our ultimate goal for this class is to show that they're the same. But before we get to that on metric spaces, let's just talk about this on the real numbers because on the real numbers, there's quite a bit we can actually say. I just want to note one more notational thing. As opposed to writing out "compact" every single time, I'm just going to write A is a subset of x with a special symbol if it's compact, where the symbol is two subset signs, one inside the other. And if you're using LaTeX to type up your p sets-- this is just slash capital S subset. So no worries there. But that's just the notation for compact subsets that I'll be using. And it's, in fact, the common one to use. OK. So what sets do we know that are sequentially compact from the real numbers? You might have an answer for this already in your head, especially if you've already read the lecture notes, but we'll go through them. An immediate example that we can consider, which is pretty nice, is simply the set of real numbers themselves. So let's do that. So example-- consider the entire set of real numbers contained in R. Any guesses as to whether this will be compact or not? Yeah, it will not be a compact set. It's not going to be compact in either sense of the term. Proof-- the proof for sequential compactness is not that bad, right? I can just choose the sequence that I'm considering-- xn-- to simply be equal to n for every natural number. Firstly, this is not convergent, which is what we should expect, right? If the sequence that I'm considering were convergent, then every subsequence would be convergent. So we start off with a divergent sequence. But furthermore, any subsequence of this I choose is going to go off to infinity, right? So no subsequence of this will converge. So this shows not sequentially compact. But furthermore, how do we show that the entire set of real numbers is not compact topologically? Just choose an open cover that's nice. So consider the open cover given by the union of open intervals minus n to n for n in the natural numbers. So here I'm just considering nested open intervals, right? One inside the other inside the other. We know that this is going to contain the real numbers. Any real number that you choose is going to be in one of these open intervals, right? Just choose one large enough. But does there exist a finite subcover? The answer is no. Suppose there existed a subsequence nk such that the union of nk in the natural numbers-- or I should say the union from k equal to 1 to some m of minus nk to nk-- so this is our finite subcover. Suppose that this contains the real numbers. This cannot be the case, right? Choose the largest possible nk, right? If I choose the largest possible nk, then I know that there exists a real number larger than that. And so then I know that the real numbers are not contained in this finite subcover. And this is true for every possible finite subcover. Realistically, yeah, so in statements like this, you need to show it fails for every finite subcover of the open cover you chose. OK? But this shows both not topologically compact and not sequentially compact. Any questions here? These examples are very important to understand, so I'm happy to reiterate any points about them. Cool. Second example-- what about instead of considering all the real numbers, we consider just nice intervals? Let's consider one that's half open and half closed. So 0 to 1, but not including 0-- this is also not compact. And we can get this, right?
Because, as I'm claiming, topological compactness and sequential compactness are going to agree on metric spaces. And we know that the set is not sequentially compact. How do we know that? Because 0 is not contained in the set. So just consider this xn to be 1 over n for n in the natural numbers. This is a sequence that converges in R, right? What's it going to converge to? 0. So xn-- so the sequence does converge to 0. It doesn't converge in our subset, but it does converge. What this tells us is that every possible subsequence xnk also converges to 0. But 0 is not in our half-open interval from 0 to 1, which implies not sequentially compact. Right? How do we show it's not topologically compact? Well, we just choose an open cover that works out in our favor again. The one I'm going to consider is the open cover given by the open intervals from 1 over n to 2. Why do I go to 2? So that I know that it contains 1, right? If I'm going to cover the interval, it has to contain 1. So I'm going to go slightly past it. In fact, you could choose this 2 to be 1.5 if you really wanted to. The union runs from n equal to 1 to infinity. I know that this contains my interval from 0 to 1, not including 0. How do I know this is true? Choose any point. We know that there exists an n large enough such that that point is in one of these open intervals. But again, if you choose any finite subcover of this, there's going to exist a number between 0 and 1 over that largest nk that doesn't exist in our finite subcover. So I can write this out again, but it's the same as this one, right? Here the issue was any finite subcover I have, I can choose a real number larger than that maximum number. Here any finite subcover I have will have a positive lower bound, and I can find a point of my set smaller than that. So this implies not topologically compact, which is all great, right? These are all things that we want to be true about our definitions because if they weren't true, then they wouldn't be equivalent, which is what we're hoping to show. Do I want to use this one? I'll use this one. So now, we're going to look at an example that is compact. But notice-- that's going to be a little bit harder to show, at least on the face of it, because here for these examples, we only had to come up with examples of sequences and of open covers that fail to be sequentially and topologically compact. So when we're looking at an actual compact set, we're going to have to consider every open cover and every sequence, which would be a little bit harder. So example-- here I'm just going to add in the point 0. It'd be the closed interval from 0 to 1. This is a compact subset of the real numbers. How do we actually show this? Well, I'm going to show topologically compact, but how do we show sequentially compact? Sequentially compact, I'm not actually going to show right now, but I'll make a note of it. Consider the sequence xn, which is contained in 0 to 1. How do I know that this has convergent subsequences? This is a fact from 18100A, but does anyone remember the name of it? No worries. It's one of the ones that's not used too often. But it's the Bolzano-Weierstrass theorem. The Bolzano-Weierstrass theorem tells us-- so Bolzano-Weierstrass-- tells us there exists a convergent subsequence, where here the only assumptions Bolzano-Weierstrass has is that your set be closed and bounded. So again, to reiterate, Bolzano-Weierstrass says if I have a closed and bounded subset of a metric space-- or not a metric space-- I should say it's the real numbers.
If I have a closed and bounded subset of the real numbers, Bolzano-Weierstrass says that every sequence has a convergent subsequence. Unfortunately, unless y'all would like me to next lecture, I will not be presenting this proof. This is one that I'm going to black-box for the moment being because it's one that we did in 100A. But if y'all would like me to, feel free to send you an email. And I'm happy to do it next time. All right? But the one that I think is slightly more interesting is the finite subcover, right? We want to show that for every open cover of the closed and bounded set 0 to 1, we want to show that there exists a finite subcover. And I think that's slightly more interesting. So let me go through that proof. Let 0 to 1 be contained in the union of Ui's from i equal to 1 to infinity. And I'm going to assume that this is an open cover. So each of these Ui's are open. In fact, I can assume slightly more because last time what we talked about is that every open set in a metric space can be covered in open balls. This is a fact that is showing up on your second problem set, but is one that I feel is generally safe to assume. So if you prefer, you can think about this as an open cover by open intervals, OK? Now, how do I show that there is this finite subcover? Well, to do so, what I'll do is I'll consider the following set. Consider the set of elements c such that the interval 0 to c-- or I should say c between 0 and 1 such that equal 0 to 1-- such that the interval 0 to c has a finite step subcover. Right? So I want to consider the subset of element c between 0 to 1 such that the closed interval 0 to c has a finite subcover. What is my goal here? My goal is to show the supremum of such c is 1, right? Because if I show it that that c is 1, then I'm done, right? But how do we know that a supremum exists? How do we know the supremum exists-- in particular, the supremum of this set? This is a property of real numbers. And I want to recall this because it's one that-- it's been months at this point, if you've done real analysis. It's one that you might have forgotten. This is a bounded set, right? It's bounded because it's just a subset of 0 to 1. But it's also-- yeah, it's a bounded subset of the real numbers, which means that we have the least upper bound property, which guarantees that the supremum exists. That's how we define the real numbers. So that's the answer to the question. We know that a supremum exists by the least upper bound property. And because I know that one exists, I might as well give it a name. So I'm going to call this bound c prime. And I want to show that c prime is, in fact, equal to 1. Suppose for the sake of contradiction that c prime was less than 1. What would this tell us? Well, let's consider this pictorially because this will be the easiest way to see what's happening. Here we have the interval from 0 to 1. And I have some c prime less than that, right? And I'm covering the interval from 0 to c prime in finitely many open intervals. So maybe it looks something like that. Union with that. Union with that. Union with something like that. The issue is that when I'm covering this in open intervals, c prime cannot be the maximum of such elements. How do we rigorously see this? This is a point that I want to be careful about. The fact that there exists this wiggle room between them follows from openness. So notice then that c prime is an element of the union of this subcover-- so k equal to 1 to m. Right? So I take this finite cover from 0 to c prime. 
And I know that then c prime is an element of that. What this tells us-- this set is open. So there exists an epsilon bigger than 0 such that the ball of radius epsilon around c prime is contained in our set. So it's contained in the union of these Ui. Right? This is the definition of open. We have a union of open set, so I know that it's open. c prime is an element of this open set, so there exists a ball of radius epsilon around that c prime. But then notice that c prime plus epsilon over 2 is bigger than c prime, but it also has a finite subcover, right? Because the finite subcover that works for c prime will also work for c prime plus epsilon over 2. So this is a contradiction, right? Because this implies that we have an element bigger than the supremum essentially that it has a finite subcover. And that's our contradiction. OK, so what does this imply? This implies that c prime must be equal to 1, which is what we wanted to show. All right? So I'll move this up so y'all can see. Any questions about this proof? Feel free to shout them out as I'm erasing the board, if any. OK. So this shows that 0 to 1 is going to be-- or the closed interval 0 to 1 is compact. And in fact, I'll make a small remark, which is-- let me just get some new chalk. And I'll say this how out loud. The new remark thing to note is that this would have worked for any closed interval a to b, right? The proof would work by considering the set of c between a to b such that the interval a to b has a finite subcover. And the proof would work exactly the same. So remark-- a to b is compact. In fact, if I'm considering this in Rn-- so let's say an R2, I know that a, b cross c, d is compact. The proof of the second statement would also work out the same way. One, we can prove it directly. I can prove-- for sequential compactness, I can apply Bolzano-Weierstrass again. For topological compactness, I can just do one element at a time, just choose one active set at a time and go through the proof. So these two problems-- to show these two things rigorously are on your problem set, which is an optional problem. But it is a helpful thing to work through. So if you're interested in working through it, I would highly recommend this problem. But it, in fact, gives us the next direction to go, right? Because what can we know about all of these sets? They are both closed and bounded, which on the Bolzano-Weierstrass theorem is a good thing, right? Because the Bolzano-Weierstrass theorem, which is over there, says that if your set is closed and bounded, it has a convergent subsequence. So it would be great if topologically compact sets in Rn are both closed and bounded, right? Because that will be halfway to proving that the two are the same, at least on Euclidean space. And so let's actually start proving that because in fact, that will be true. So proposition-- now, we're going back to proofs or the general theory. Compact sets in R are closed and bounded. Let's prove this. Proof-- the fact that it's bounded is not too bad. Sorry, let me just-- yeah, the fact that it's bounded is not too bad to show. The proof of bounded-- just fix any point p in x-- oh, sorry. I should be more careful. I should say let a be a compact subset of x. Now, we're going to prove closed and boundedness of the set A. Boundedness is nice. Just consider the union-- or I should say fix p in x and consider the union from i equal to 1 to infinity of the balls of radius i around p, right? Then what do we know about this union? 
We know if we choose our p correctly, that this is going to contain our set a, right? Because again, this is just because we're taking the unions of infinitely large balls, right? But this is compact, right? So I know there exists a finite subcover. And the finite subcover will look like this. A is the subset of the union from i equal 1 to, let's say, m of balls of radius i around p. But each of these balls are contained in one another, right? As I make the radius larger, it's going to be contained to the next one. So this is a subset of the ball of radius m in our p, or in fact, is equal to p, I should say. So what this tells us is our set A is bounded, right? This is the definition of boundedness. There exists point p in a finite radius such that A is contained in the ball around p of that radius. So that implies boundedness. And notice here nowhere did I use the real numbers. In fact, this statement is going to be true about compact sets in a metric space. So good things there. Let's prove closure now. Or I should say closed. Proving closedness is going to be a little bit harder, as one would expect, right? There should be some point in this proof set that it becomes quite a bit harder. To prove closure, what we're going to want to do is show that R not including A is open. Realistically, if you feel comfortable enough with it now, you can replace this R with any metric space. The proofs that I present today will be true for any metric space. And now, let me draw a picture of what we're going to do essentially in our proof. So let's say that this blob is our metric space x. And I'm going to consider my set A should be this yellow blob. I want to show that x minus A, i.e. the region outside of A, is going to be open. So let's consider some p out here. How do I show that there exists a ball around p of radius epsilon such that it doesn't intersect with A? That's what we want to show for openness, right? Well, what I can do is compare p to every single q in A. So let's say q1, q2, q3. And what I can do is just cover this and the ball of 1/2 of that radius. So let's say that's the ball of around p of 1/2 the radius and another ball of 1/2 this radius, and so on and so forth. My goal is to take the intersection of all of these such that they don't intersect with A, right? I know that this ball won't intersect with q1, and so on and so forth. The issue here, though, the issue that we're going to have to work around, is the fact that the intersection of infinitely many open sets is not inherently an open set, right? And we know that the intersection of finitely many sets is open, but we don't know that the intersection of infinitely many sets is open. And that's the issue. So in our proof of showing openness, we're going to essentially use topological compactness to go from this definition to-- to complete the proof, to get a finite subcover. And that will be enough for our proof. So that's the outline of how this proof is going to go, but let's do it 10 times as rigorously as the proof. OK. So fix your point p in x minus A. And what we're going to do is start constructing an open cover of the set A and go from there. And this group will highlight where we use topological compactness very directly. So let p be in x minus A, or again, you can just let this be the real numbers if you prefer. 
And consider for all q in A-- I'm going to consider the open cover of A by-- I'm going to consider vq to be the ball around p of radius distance from p to q over 2 and wq to be the ball around q of the same radius, right? So what I'm doing now is I'm creating both an open cover of A and an open cover of p-- or sorry, yeah, an open cover of p. You want to find a neighborhood of p. So is this, in fact, an open cover of A? Is A subset of the union of wq's for q in A? The answer is yes. It's pretty much on the face of it, right? Because every element q is in wq. So the answer is yes. And so what this tells us is I can choose a finite subcover because A, we're assuming, is compact. So A is a subset of i equal to 1 to, let's say, m of wq i, which I'll again write out what this is. This is the union from i equal 1 to m of balls around qi of radius distance p to qi over 2. Everyone follow so far? Now, the goal is to claim that intersection of-- now, the corresponding v qi is not going to intersect A. So I claim that that is true. Claim-- the intersection of v qi from i equal 1 to m does not intersect A. How do I know that this is true? Well, suppose-- I'll just state this verbally. Suppose that there existed an element of this intersection that was also in A. Then that would have to be in one of these balls, right? The ball from qi to p of 1/2 the radius. We know that this cannot be true by the triangle inequality, right? Because this ball is open. And we know that it must be-- if it's contained in one of these balls, it's not going to intersect the neighborhood of p. Right? I would highly suggest working through this detail. It's a very specific point, but it's one to double-check and make sure you fully understand. But what this is telling us is that one, the set doesn't intersect A; two, our point p is definitely in this intersection because it's in all of the balls; and three, this tells us that we're done because this set is a finite intersection of open set. So this is open. So we're done because we've found an open neighborhood around p that doesn't intersect with A. And that tells us that A is a closed set. So we're done. This implies closedness. Any questions? This proof was certainly difficult, especially if you don't have much experience with topology specifically. But I hope that this picture helps quite a bit. OK. So we've shown that compact sets are both closed and bounded. Now, is the converse true? Are closed and bounded sets compact? The answer will be not all of the time. In fact, it will be true for the real numbers, but it will not be true all of the time. To prove that it's true with real numbers-- because again, we're looking at compact subsets of R today-- it's not too bad. I'll just note one small lemma. If f is a subset of a compact set k, say in x, is closed-- so here I'm assuming that f is closed-- then f is itself compact. I wrote this out in notationally, but what this tells you, again in words, is that a closed subset of a compact set is compact itself. Let's go through the proof. Well, because f this closed, we know that the complement is open. That's the definition of closed. So in fact, this is if and only if. f complement is open. And we want to show topological compactness of f. So let Ui be an open cover of f. We want to go from this to a finite subcover, right? That will show compactness of f. Well, how do we go from this to using the fact that k is topologically compact? Well, let's draw out the picture. 
Here we have our set x, which doesn't really matter too much in this position. Here we have k, which we know is compact. And let's consider our set f. What we've done now is we've covered our f in an open cover. And in fact, how do we go from this to an open cover of k? Well, we know that f complement-- everything outside of f-- is going to be open. So what this tells us is that k is contained in the union of these Ui's from i equal 1 to, let's say, infinity union with f complement. And this is an open cover of k. This is the open cover that we're interested in, right? So we want to go from this to a finite subcover of f or to a finite subcover. And we know that we can do so because k is compact. So because k is compact, we know that k is contained in the union from i equal 1 to m of Ui. Potentially, union this f complement-- it's not going to hurt the union one more set, so let's keep it for now. And what we know is that f is a subset of k. So we've gone from an open cover of f to an open cover of k. We've gone from that open cover of k to a finite subcover. And we note that that finite subcover of k is also going to cover f. So we're done, right? Because we showed that every open cover of f-- I'll just take one more line to be extra clear-- f is therefore contained in the union from i to 1 to m of Ui because no element of f is in f complement. So we've gone from an open cover of f to a finite subcover, which implies topological compactness. right? That's the definition of topological compactness. And we're going to use this very directly to show that closed and bounded sets in Euclidean space are compact. So how do we do that? Well, we're just going to shove our compact set in a closed and bounded interval. So proposition-- compact subsets of R are precisely closed and bounded sets of R, right? This is the statement. We've proven one of the directions so far. We've shown that compact implies closed and bounded. Let's go the other direction, that closed and bounded implies compact. Proof-- let A be subset of R be closed and bounded. Because A is closed and bounded, we know that A must be contained in some minus n to n because of boundedness. And here n is finite. And now, what can we do to show that A is compact from here? We can use the lemma we just dated. A is a closed subset of this integral. So by closedness, this implies that A is compact because it's a closed subset of a compact-- it is a closed subset of a compact set. And the fact that minus n to n is compact is precisely the proof we did earlier, right? So I won't go for that proof again. But this implies the result that we wanted, so we're done. I'll move it back down in case you want to see it again. This statement is known as the Heine-Borel theorem. It's not, on the face of it, the most easy thing to show, but we're able to cover it in a lecture. But the issue is that this statement is not going to be true about metric spaces, right? We are not going to have that closed and bounded sets are the same as compact sets in a metric space. But it's so important on Euclidean space that I wanted to bring this proof up today because it does give a good example of how we think about things in Euclidean space and why metric spaces are so important. All right? So now what we're going to do is show that closed and bounded is the same as sequentially compact in Euclidean space. And that will prove that the same-- all three are equivalent. All right? 
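One small aside before that: here is the triangle-inequality detail from the closedness proof written out, the one I suggested you work through. Nothing here is new; it is just that claim spelled out. If some point z were in both balls V of q i and W of q i, then

\[
d(p, q_i) \;\le\; d(p, z) + d(z, q_i) \;<\; \tfrac{1}{2}\, d(p, q_i) + \tfrac{1}{2}\, d(p, q_i) \;=\; d(p, q_i),
\]

which is impossible. So each V of q i is disjoint from the corresponding W of q i, and since A is covered by the finitely many W of q i, the finite intersection of the V of q i misses A entirely.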
So note-- theorem-- by Bolzano-Weierstrass-- I'm just going to state it again-- closed and bounded implies sequentially continuous. Sorry, not sequentially continuous-- sequentially compact. So to show that sequentially compact is the same definition as compact subset R, we want to prove the other direction, right? Yeah. Actually, yeah. Yeah. So let's prove the other direction. Proposition-- sequentially compact implies closed and bounded. Well firstly, to show closure, what I'll note is that this is going to show up on your second problem set. On problem set 2, you'll show that closed sets are the same as ones which contain all of their limit points, meaning if I take a sequence, and I know that it converges, the thing it converges to must be in your closed set. This is a fact that you'll prove on the second problem set, so I won't write it out right now. But what this tells you is that by sequential compactness, if I take a sequence that converges anywhere, then it's going to have a-- sorry, let me say this again. If I take a sequence in my closed set, I know that it's going to-- if I take a sequence, I know that it's going to have a convergent subsequence, which tells us this contains all of its limit points, which tells us that it's closed, right? This is a fact that I will leave to you to show because I believe that you all can do it. Two, to show boundedness, it's going to be a little bit easier to do because assume A is unbounded, where A here is the sequentially compact set I'm considering. Then what this tells us is I can find a sequence xi which goes to infinity. And this is a subset of A. Right? So if I'm assuming my set A is unbounded, I can find a sequence which goes off to infinity. What this tells us is that there's not going to be any convergent subsequence of the sequence, right? This is the same thing that we did before, right? We wanted to show that the real numbers were not sequentially compact. Just take a sequence that goes off to infinity, and we're done. And the same is true here. Every subsequence of xi or xn diverges, which is a contradiction-- or sorry, not a contradiction. We showed the unbounded implies not sequentially compact. So we know that the converse is true. Sequentially compact implies bounded. So we've proven the contrapositive. All right? And this shows closed and bounded. So I know that I'm leaving part of the proof left to you, but I would highly recommend working through it, working through that portion yourself. OK? And in fact, we'll talk more about it on Thursday. So what things do I want to note now? We've shown compact subsets are the same as closed and bounded sets in R. And we've in fact showed that closed and bounded is the same as sequential compactness. So in the case of Euclidean space, we're all done, right? We've done what I've set off to do. I want to show that topological compactness is the same as sequentially compact in R. Great. The issue, again, is closed and bounded sets are not always the same as compact subsets of R because here we inherently used the fact that I can shove A into a compact subset, right? I can set A to be a subset of minus n to n if it's bounded. This fact is not going to be true about metric spaces, right? Because what does minus n to n mean in a metric? Space it doesn't make sense. But I'll make one small note. 
Note-- if a metric space has the following property, the property being that closed and bounded implies compact, then I say that the metric space has the Heine-Borel property. So it's not true all the time, but if it is true that closed and bounded implies topologically compact, then there's a specific name for it. It's a special name, the Heine-Borel property, which makes sense because the theorem is called the Heine-Borel theorem. But that's what's going to fail for general metric spaces. (The standard example: an infinite set with the discrete metric is closed and bounded in itself, yet the cover by the singleton balls of radius 1/2 has no finite subcover.) Notice that we should expect that to be the breaking point, because the proof that sequentially compact implies closed and bounded, once you've fully worked through it, is going to work out in any metric space as well. So we should guess that if anything is going to break, it's going to be closed and bounded implying topologically compact, OK? So that's it for today. I do want to reiterate that this is not meant to be the easiest portion of this class. This is, in fact, one of the hardest portions of this class. But it's, in fact, probably the most important. If you ask any analysis professor what's the major difference between 100B and 100A, it's experience with compact sets. It's hard, and it's brutal sometimes, but it is worthwhile to do.
MIT 18.S190 Introduction to Metric Spaces, IAP 2023. Lecture 5: Complete Metric Spaces.
[SQUEAKING] [RUSTLING] [CLICKING] PAIGE BRIGHT: Let's just go ahead and get started. So as a recap of what we've been up to, the first day, we talked about what a metric space is and went through a ton of examples, which was mostly what that day was supposed to be for. The next day, we went through some of the general theory, which was helpful, I believe. But it's really short, all things considered, right? It's just a single lecture of most of what real analysis took up for us, right-- what are sequences, what are Cauchy sequences, et cetera. That's what the beauty of metric spaces is to cover now, because because we have done all of that work in real analysis, now we can simply state and prove the theorems that we know and love for metric spaces because they're very, very much analogous. The one thing that wasn't analogous, though, was compact metric spaces. If you've never seen compact metric spaces, that's totally fair because if you only studied it-- if you only studied real analysis on Euclidean space, they're just closed and bounded sets. And those you can define without compactness. But there's clearly a ton of important statements you can make about compactness. We had the four-part huge theorem last time, where we showed that topologically compact is the same as sequentially compact, which was the same as totally bounded and Cauchy complete, and another property with the finite intersection property. But the main thing I want to talk about today is complete metric spaces because there's a ton can say about metric spaces once you know that they're complete. In particular, I want to talk about two specific examples-- one, the Banach fixed point theorem, which is a very useful theorem for differential equations. But secondly, I'm going to then talk about completions of metric spaces, which show up all of the time. So, yeah, let's go ahead and jump right in. So to motivate the conversation of the Banach fixed point theorem, I'm going to start by introducing what are known as Lipschitz functions-- Lipschitz. So a Lipschitz function is one that is continuous. It's pretty nice. It's a function f such that-- it's a function f from a metric space x to y such that there exists a K in the real numbers such that the distance between the image points, f of x, f of y, is bounded by K times the distance between x and y. Now, let's think to ourselves what values of k are actually interesting here for a moment. One, if k is negative, then positive definiteness tells us that everything is just 0, which is particularly nice, right? But if it's positive, then we have to do a little bit more work. But, yeah, this is the definition of Lipschitz, or K-Lipschitz, which is that there exists a K such that the distance between the image points is less than or equal to K times the distance of the original points. All right. So I claim that this is continuous, or functions like this are continuous. And that fact is not too hard to show. So proposition-- "Lipschitz implies continuous. The proof of this is not too bad because we're going to let epsilon be bigger than 0. And we want to choose a delta such that the distance between the image points is less than epsilon, where the distance between these two points is less than delta. But we can choose delta to be equal to epsilon over K, where K is the specific constant that works for our Lipschitz function. So then what we'll have is that the distance between the image points, f of x, f of y, will be less than or equal to K times the distance between x and y. 
But this, under assumption, is going to be less than K times delta, which is equal to epsilon. So this shows that the distance between points in the image gets arbitrarily small once you choose a small enough region around x. So this implies continuous. But, in fact, it implies a little bit more than continuous. Does anyone know what it implies, the stronger thing? Totally fair if not-- it's something you might not have seen before. Yeah, it's uniform continuity. So, in fact, Lipschitz implies uniform continuity because it doesn't depend on x at all. It only depends on f and epsilon. So that's our next section of discussion. I'm going to redefine what uniform continuity means for metric spaces. But most of the theory will still hold up. So definition, uniform continuity-- a function f from metric space x to metric space y is uniformly continuous if, for all epsilon bigger than 0, there exists a delta bigger than 0 such that if the distance between two points x and y in x is less than delta, then the distance between the image points in y, f of x, f of y, is less than epsilon. And the key difference between this definition and the definition of continuity is that this delta does not depend on the point x. The definition of continuity is that for every x, there exists this statement. OK. So let's prove some nice statements about uniform continuity, including but not limited to how it is affected by compactness. So proposition-- suppose f is a function from x to y continuous, and x is compact. Then what we can show is that, in fact, f is uniformly continuous, which is something that we should expect, since in real analysis continuity on a closed and bounded interval implied uniform continuity, even though continuity alone does not give you uniform continuity on an open interval. So then, f is uniformly continuous. The proof of this, I think, is particularly interesting because it's going to introduce something that we talked about a few lectures ago-- or reintroduce something. We're going to use the Lebesgue number lemma. So proof-- let epsilon be bigger than 0, fixed throughout this proof. First we're going to start by covering our metric space x by a bunch of balls, just like we would do if we're doing a proof using compactness. So for all c in x, there exists a delta, perhaps depending on c, which I'll write as delta c, bigger than 0, such that if the distance between x and c is less than delta c, then the distance between the image points f of x and f of c is less than epsilon. In fact, while I'm at it, I'm going to say that this is less than epsilon over 2 because we're going to use the triangle inequality later on in this proof. So what we know, then, is that these balls of radius delta c are going to cover our set x because we're just taking a ball centered at every point. So the balls of radius delta c around c cover x. So what we know is by the Lebesgue number lemma, which was stated for sequentially compact spaces, but we know that sequentially compact is the same as topologically compact. So by the Lebesgue number lemma, there exists a delta bigger than 0 such that for all x in x, there exists a c in x such that the ball of radius delta around x is fully contained in the ball of radius delta c around c. Now, does this make sense to everyone? I know that this was a huge statement with a bunch of quantifiers. But this is what the Lebesgue number lemma says. We are covering our set x, this compact set x, with these balls.
And we can choose a delta such that for any point in x, I can squeeze this ball of radius delta into one of these sets in our open cover. That's what the Lesbegue number lemma says. Now, why is this so helpful? Because we know that to do-- then we know that if the distance between x and y is less than delta, then, in fact, y is already in the ball of radius delta c, which lets us know that the distance between the image point of f of y and f of c is less than epsilon over 2. And so now we can directly imply the triangle inequality, where here we'll have that the distance in y between f of x and f of y-- when the distance between x and y is less than delta, then this distance is less than the distance between-- or sorry - equal to the distance between f of x and f of c plus the distance between f of c and f of y. But both of these are less than epsilon over 2. So this delta, remember, from the Lesbegue number lemma, does not depend on x. So we've proven uniform continuity. Everyone follow? Cool. I think that this is a very interesting statement. And let's look at an example of how we can actually use this. It's going to be a very powerful application. Specifically, it's an application to integral operators, which the name of it will become apparent by the proposition itself. So proposition-- let f be a function from the interval a, b cross c, d to R continuous. So, in fact, what we know right now is that this implies that it's uniformly continuous, which is great. That would be helpful. But two, what we're going to look at is the function g of y being equal to the integral from a to b of f of x, y dx. So, in other words, we're considering the function g of y where I plug in y into this integral. And now what I claim is that this function g-- so consider g such that this is true. I claim that g is continuous. And the proof of this is going to be relatively short because we're going to use uniform continuity for the integral. Now, if you haven't seen this before, it is very powerful. It's a powerful statement. And I'll say what the real analysis statement is that we're using in just a moment. But let's just go through the proof. Let yn be a sequence in cd such that yn converges to y. To show continuity by our second lecture on the general theory, we want to show that g of xy-- sorry, yn-- goes to g of y as n goes to infinity because this will imply that g is continuous, right? This goes with our statement of continuity, otherwise known as sequential continuity. The proof of this is not going to be so bad because we're just going to write out what the limit is. We're going to have the limit as n goes to infinity of g of yn. Well, this is the limit as n goes to infinity of the integral a to b, f of x yn dx by definition. But by uniform continuity, for real analysis, we know that we can swap the limit and the integral, right? Has everyone seen this theorem before? It's very, very useful. If you need to relearn it, I would highly suggest doing so. But it's one that is mostly used with Lesbegue integration. Or, specifically, the lemma uses Riemannian integration, which makes it kind of annoying to prove. So I will not do so here. But the point is that we can switch these two limits as n goes to infinity of f of x, yn. But by the fact that f is continuous, we know that this is going to converge to f of x, y. So this is integral from a to b of f of x, y dx, which is g of y. So we're done with the proof. It was a very simple proof. But we highlighted the importance of uniform continuity. 
We can simply state that we can swap the limits using uniform continuity. And then our job is nearly done. But this integral operator is, in fact, going to come up a little bit later today. There's one on the face of it that is pretty straightforward but has very useful applications to differential equations. All right. So now we're turning back to the definition of Lipschitz functions. I'm going to define what a contraction is, which is going to lead us to the Banach fixed point theorem, which is our main goal of this first section. OK. Definition, contraction-- a function f from a metric space x to itself is a contraction if it is k-Lipschitz for k between 0 and 1, for k bigger than or equal to 0 and less than 1. In other words, the distance between two points f of x and f of y is less than or equal to k times the distance in x of x to y, where here k, small k, is between 0 and 1. Now, why is this called a contraction? Well, we can just sort of draw a picture of what's happening. If this is my metric space x, and this is my initial points x and y, I know that the distance between the images of them must be less than k times the distance from x to y. So if this is the distance from x to y, then for k really small, this will be the distance between f of x and f of y. And we can continue applying this over and over and over again. So what is this-- what do you think will actually happen in the end? Well, these points are going to get really close together. So our question is, does there exist a point such that its image is the same as its initial value? And I'll state that more explicitly in a moment. Specifically, we want to know if there exists a fixed point. I should have probably drawn that bigger, but that's fine-- a fixed point. If f is, again, a function from x to itself, x is a fixed point if f of x equals x. All right. So our claim for the Banach fixed point theorem, as you can probably guess from this image that I've drawn, is going to be that when I have a contraction on a Cauchy complete metric space, there is, in fact, going to be a fixed point. And let me write that out explicitly. Actually, this is more of a theorem-- sorry, theorem. It's called the Banach fixed point theorem but also sometimes called the contraction mapping theorem-- so whichever name you prefer to use. I usually use contraction fixed point-- sorry, contraction mapping theorem in the notes just because it's slightly shorter. So let x be a Cauchy complete metric space and f-- oh, sorry. It also needs to be nonempty-- because otherwise, if it's empty, then what does having a fixed point even mean? How can a point be fixed if there is no points in x?-- and then that f be a map from x to x, a contraction. Then the claim is that there exists a fixed point. In fact, it's going to be unique, which will be an interesting part of this proof, an interesting yet short part of this proof. Now let's restate what we already just so that we can make sure that we're all on the same page. We know that by definition, a contraction is a k-Lipschitz map. And as we've shown already up above, Lipschitz functions are uniformly continuous. Are we going to use this in our proof? Not explicitly. But it is helpful to just keep in mind, like, what are we actually doing here? We have a continuous map that just so happens to be a contraction, and we're going to show that there exists a fixed point. All right. The proof of this is relatively short. We're going to utilize Cauchy completeness. 
That's the main thing that we're going to utilize in this proof. We want to construct a sequence that is Cauchy-- the space is Cauchy complete, so the sequence will therefore have a limit. And that limit will be the fixed point. And to do so, we're just going to pick-- oh, proof-- pick an arbitrary x0 in big X. And this is where we use that it's nonempty. The fact that we can pick any x0 in X has to do with the fact that it's nonempty. The fact that it's nonempty is a relatively small point. But I do want to reiterate that this assumption is necessary. And then what we're going to do is define xn plus 1 to be f of xn. So our sequence will be x0, f of x0, f of f of x0, and so on. This is going to be our sequence. And our hope is that this is going to converge, i.e., hopefully, it will be Cauchy. And the proof of this is pretty straightforward. Well, we'll note that the distance between xi plus 1 and xi-- what can we state about this? Well, firstly, we can plug in what the definitions of them are. This is the distance between f of xi and f of xi minus 1. And what we know, by the fact that it's a contraction, is that this is less than or equal to k times the distance of xi to xi minus 1. And I can continue reiterating this over and over and over again, where here I'll get less than or equal to k to the i times the distance from-- is the exponent i or xi? Yeah, i-- the distance from x1 to x0. Everyone follow? And we're going to use this to consider-- to show that our sequence is in fact Cauchy, we're going to look at the distance between xn and xm. So I'll do that here. We're considering the distance from xn to xm, and we want to apply this simple fact about the distances between two nearby points, two neighboring points. Well, to do that, we can just look at the triangle inequality. We can write this as less than or equal to the sum of the distances from xi plus 1 to xi, where i goes from n to m minus 1. Does everyone see how I did this step? You can expand it out to see that it is precisely the triangle inequality. This is less than or equal to the distance from xn to xn plus 1, plus xn plus 1 to xn plus 2, so on and so forth until we get to xm. And here I'm assuming that m is bigger than n, which is fine because the two roles are symmetric. OK. Well, then, this is less than or equal to the sum of k to the i times the distance from x1 to x0, which is great. Notice already that this distance is just fixed. It doesn't depend on i at all. But now we want to figure out how we deal with the sum of k to the i from i equals n to m minus 1. And to do so, we can apply a pretty nice trick, which is to factor out k to the n. This will be equal to k to the n-- and I'll pull out the distance from x1 to x0-- times the sum from i equals 0 to m minus 1 minus n-- right, minus n? Yeah-- of k to the i. So here I'm just factoring out k to the n so that when I multiply it back in, I get the same exact sum. And so what can we do? This is then just a geometric series. This is less than or equal to k to the n times the distance from x1 to x0 times the sum from i equals 0 to infinity of k to the i, which is simply k to the n over 1 minus k times the distance from x1 to x0, where here I'm applying the geometric series formula to this geometric series. And now we're going to use the fact that-- now we're going to use the fact that k is less than 1. Or, specifically, it's between 0 and 1, right? I can choose n large enough-- large enough so that this term is less than epsilon. I'm not going to state explicitly what this n is. You can figure this out independently, but let me record the whole estimate in one chain so you can check it.
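Here is that estimate, collected into one display (nothing new, just the steps above written together, for m bigger than n):

\[
d(x_n, x_m) \;\le\; \sum_{i=n}^{m-1} d(x_{i+1}, x_i)
\;\le\; \sum_{i=n}^{m-1} k^i \, d(x_1, x_0)
\;=\; k^n \, d(x_1, x_0) \sum_{j=0}^{m-1-n} k^j
\;\le\; \frac{k^n}{1-k} \, d(x_1, x_0).
\]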
But we know that such an n exists, because k to the n is going to 0. So does this complete our proof? Not fully. We have that this is a Cauchy sequence. So we know it converges. We need to, one, show that the point it converges to is our fixed point, and two, show it's unique. So let's go ahead and do that. OK. So we want to show that this is, in fact, our limit point. And to do so-- or specifically, first I'll state we know there exists an x in X such that xn converges to x. How do we know this? Because it's Cauchy complete. I'll reiterate this point. When you're going through proofs of theorems, it's important to double-check that you're using the statements of your theorem in every possible-- at least once in your proof, or at least the implications of your statements. So we know that there exists an x such that xn converges to x by Cauchy completeness. And we want to show that x is its own fixed point. But how do we do that? Well, x is the limit as n goes to infinity of f of xn. This is literally how we defined our sequence. So what can we do? We can apply the fact that f is continuous because it's Lipschitz. So this is the same as f of the limit as n goes to infinity of xn, which is f of x. So we've shown that it's a fixed point. It's not too bad of a proof. But it does utilize the fact that it's continuous. So that's why I talked about that earlier. Secondly, we want to show that this fixed point is, in fact, unique. And the way to do so is very similar to what we've done before. So, two, let y be in X such that y is equal to f of y. Then what will we do? Well, we will have that the distance from x to y will be equal to the distance from f of x to f of y, because x and y are both fixed points. And then we can apply the fact that it's a contraction, right? This will be less than or equal to k times the distance from x to y. And we can move this term over. This says that 1 minus k times the distance from x to y is less than or equal to 0. Now, what does this tell us? Well, 1 minus k is going to be positive, right? k is between 0 and 1. And it doesn't include 1. So what this tells us is then that the distance between x and y is 0, which, by our metric space theory, implies that x equals y. Now, I'm going to quote the book by Lebl for a moment here because I think that he states it very well. "Not only does this proof tell us that there exists a fixed point, but this proof is constructive." It tells you how to find the fixed point itself. You just reiterate the process of applying the function to itself over and over and over again, which, if you're in the math lecture series, is something that Professor Staffilani talked about in the conversation of the nonlinear Schrödinger equation. I think it's very interesting and, in fact, has something to do with differential operators. So, to that point, I'm going to go through a proposition that is very directly tied to differential operators, or differential equations. So an example-- let lambda be in the real numbers and f and g be continuous on a to b. I'm going to define one more function. I'm going to let k be continuous on the interval a, b cross a, b. So k here is going to depend on two parameters, OK? So then my question is, for which lambda is T f of y equal to g of y plus lambda times the integral from a to b of k of x, y f of x dx-- for which lambda is this map T a contraction? I know it's a long statement. And I know I haven't largely motivated it. Let me at least write the operator out once as a display, so that the integration variable x stays separate from the free variable y.
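(This is just the formula above in symbols; C0 of a, b denotes the continuous functions on the closed interval from a to b with the supremum metric.)

\[
(Tf)(y) \;=\; g(y) \;+\; \lambda \int_a^b k(x, y)\, f(x)\, dx,
\qquad
T : C^0([a,b]) \longrightarrow C^0([a,b]).
\]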
And I'll get to why the motivation is at the end of the proof. But let's think about this for a moment. I have this integral operator where what it's doing is it's taking in a function f and spitting out g of x plus lambda times f applied to this integral operator. This is pretty directly applied to differential equations. The second thing I want to know is that in order for it to be a contraction, by definition, it has to be a map from the set to itself, right? A contraction is a map from x to itself. So the first question we should ask ourselves is that if it takes in a function that's continuous, does it spit out something that's continuous? The answer will be yes. How do we see that? Well, g of x is continuous. So we know that that's all good. But secondly, we've already shown that the integral from a to b of a continuous function is going to be continuous when I plug in the parameter y. That was our-- I think I might have erased it. Yeah, I did. But that was one of our statements was that this is, in fact, going to be continuous. So we know that this is a map from continuous functions to itself. And that's what makes this question so interesting because now we're looking at specific functions. We're not looking at the distance between x and y. We're looking at the distance between functions f and g. I want to reiterate this point. Metric spaces give us the ability to view functions as single points. There are things that we can manipulate and plug in and figure out the distances between. Now we're no longer viewing functions as things that take in inputs and spit out outputs, but rather things that we can manipulate. And this viewpoint-- deeply influential for all parts of mathematics moving forward. But the question still stands. For which lambda is this a contraction? Well, to go about proving for which lambda it's a contraction, let's look at the distance between Tf of f1 of x and Tf2 of x, where, again, what I'm changing here is f. It's not x that we're changing. It's f. All right? Well, then this is equal to, by definition, the difference of these two functions. So this will be the integral from a to b k x, y f1 of x. And while I'm at it, I'm going to combine this with f2 of x just because I'm running out of room on this board. So it's going to be the integral of k applied to f1 minus the integral of k applied to f2. And we want to make this small. How do we do that? Well, firstly, let's note one very useful fact, which is that k is a continuous function on a bounded interval, or on a bounded interval across itself. So this tells us is that, in fact, k is bounded. Why don't we want to apply the fact that the difference between f1 and f2 is bounded? Well, because we want to write this in terms of the distance between f1 and f2. So we don't want to do that. So how do we do that? Well, first we apply the triangle inequality to this integral. So this is less than or equal to b of k of x, y times f1 of x minus f2 of x. And now we want to get in somehow the distance, or the metric, on the continuous functions. To do so, I can just move in the supremum here. I know that this integral will be less than or equal to the term k of x, y times the supremum of the distance between f of 1 minus f of 2. So then this will be left equal to an integral from a to b k x, y times the supremum over x of f1 of x minus f2 of x dx. All right? So how are we going to actually use this? Well, notice right now this is just a constant. So we can pull this term out. Oh, sorry. 
I should have had a lambda here this entire time-- lambda, lambda, lambda. OK. So this term is a constant-- this term is just the distance between f1 and f2. How do we bound this term? Well, we're going to use the fact that k of x, y is bounded because it's continuous on a compact set. So suppose that it's bounded by c, where c is just a real number. Then what we'll have-- doo-doo-doo-- is that the distance between T of f1 and T of f2 is less than or equal to lambda times the distance in C0 of f1 to f2 times the integral from a to b of k of x, y dx. But, again, this is less than some constant c. So I can write this as c times the length of the interval a to b-- so b minus a. Everyone see how I did that? So for which lambda is this going to be a contraction? Well, it will be the lambda such that lambda is less than 1 over c times b minus a, right? Because once lambda is less than 1 over c times b minus a, then this suddenly states that the distance between T of f1 and T of f2-- i.e., the supremum of that difference-- will be less than or equal to a constant that's less than 1, times the distance between f1 and f2. So this is now a contraction. Why is this so helpful? Why did we do this? This is the one part of today's lecture that I was having a hard time motivating specifically, or figuring out how to state. Why did we do this? Well, it's deeply important for differential equations on the first hand, which I'll just state without argument because that would require a study of partial differential equations. But the second thing I want to note is that then notice that by the Banach fixed point theorem, there exists a function f such that it's a fixed point. Not only that, but we know that it's unique. What does this actually tell us? This tells us that there exists an f, for lambda less than 1 over c times b minus a, such that T of f-- which is g of y plus lambda times the integral from a to b of k of x, y f of x dx-- is, in fact, equal to f. We know this by the Banach fixed point theorem, where here we're using the fact that the continuous functions are Cauchy complete. This was on your second problem set, which is why I had y'all do it. So we know that f is equal to this right-hand side, which, if you differentiate it-- let me make a small remark. If g is, in fact, differentiable-- so C1 on a to b-- what does this tell us? Well, now, all of a sudden, I can differentiate the right-hand side. I'll get that f of x, or the derivative of f of x, is equal to-- how do I want to say this? Let me be more careful. One, we know that there exists an f that is continuous such that this is true. If g is, in fact, differentiable-- and if the kernel k is also nice enough, say differentiable in y-- then f must be differentiable, because then f is a differentiable function plus something that's differentiable because it's an integral with that nice kernel. So this tells us that, in fact, f is in C1, which is important in differential equations, right? This tells us a way to prove that there exists a unique solution to the differential equation that defines this integral operator. Now, of course, not every differential equation will result in an integral operator like this. But a huge class of them can be, which is very important. If you're interested in this question, I would highly suggest looking at section 7.6.2 on Picard iteration in Lebl's book. It's a very useful book to have, or at least a very useful section to read into. Let me give one tiny numerical sketch of that Picard idea below, and then the application that I think will take up the rest of our time is completions of metric spaces.
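Here is a minimal numerical sketch of that Picard-iteration idea. The specific choices-- the equation y prime equals y with y of 0 equal to 1, the interval 0 to 1/2, the grid, and the trapezoid rule-- are all just illustrative choices of mine, not anything from Lebl; the point is only to watch the iterates of an integral operator converge to its fixed point, which here is the solution exp of t.

```python
import numpy as np

# Picard iteration for y' = y, y(0) = 1 on [0, 1/2].
# The integral operator is (Tf)(t) = 1 + integral_0^t f(s) ds,
# which is a contraction on C([0, 1/2]) with constant 1/2 in the sup metric,
# and its unique fixed point is the solution y(t) = exp(t).

t = np.linspace(0.0, 0.5, 501)   # grid on [0, 1/2]
dt = t[1] - t[0]

def T(f):
    """Apply the integral operator, approximating the integral by a cumulative trapezoid rule."""
    increments = 0.5 * (f[1:] + f[:-1]) * dt
    integral = np.concatenate(([0.0], np.cumsum(increments)))
    return 1.0 + integral

f = np.zeros_like(t)             # start the iteration from the zero function
for n in range(8):
    f_new = T(f)
    print(n, np.max(np.abs(f_new - f)))   # sup-distance between successive iterates
    f = f_new

print("sup-error against exp(t):", np.max(np.abs(f - np.exp(t))))
```

The printed sup-distances between successive iterates shrink at least geometrically, exactly as the k to the n over 1 minus k estimate from the proof predicts.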
So any questions before I move on? Totally fair. OK, so next section-- ah how do i put it? - 2 completions of metric spaces. Has anyone heard of this before, completion of the metric spaces? Well, one way to motivate completions of metric spaces is through an example that you all have already seen. Example-- the real numbers are a completion of the rationals. So remember how we started our journey in real analysis, right? We started by looking at rational numbers. We stated things like they're a field, and we stated things like there aren't every real number-- or, sorry, every real number is not a rational one, right? The square root of 2 is an example. And then we said, well, hold on. Let's just throw in the square root of 2, right? We completed the rationals by filling in all of the holes. And doing so, this method, is known as a completion. Now, there's a few ways you might have seen this for the rational numbers. And I'll state them right now. One, you might have seen a proof of it using Dedekind cuts. This is how Rudin uses it in his book, Baby Rudin, Dedekind cuts. If you're interested in how this method works, it's in his Appendix 1 of his first chapter. This one is not super used. It's sort of difficult to get around. But you could have viewed it this way. Two, you also could have viewed it by using the least upper bound property. Has anyone seen this method before? Totally fair if not-- I'll simply state what it is if not. For this method, what we do is we define the real numbers as the smallest set that contains the rational numbers such that every bounded set, its upper bound is in the set. Great. Does this make sense verbally? I can also write it up more explicitly. But this is another way you might have seen it. Three, you might have learned it using equivalence classes of Cauchy sequences. Did anyone see it this way? Or, in fact, let me just ask, what ways have y'all seen it, the completion of the rationals? The second one, the least upper bound property? What about you? It's really fair. It was a while ago. [INAUDIBLE student question] Oh, interesting-- the equivalence class has the Cauchy sequences. Well, it was at least a full semester ago, if not longer. But the third one is the one I'm going to iterate on today, all right? Where, today, we're looking at Cauchy complete metric spaces, it makes sense to look at Cauchy sequences as our starting point. But the equivalence classes is not too bad to motivate. The idea is we say a Cauchy sequence am is equivalent-- or I'll use this sym, note symbol. So here an and bn are Cauchy sequences. I'm going to say that they're equivalent if the distance between an and bn goes to 0 as n goes to infinity. This is how we define equivalence classes of Cauchy sequences. What this let us do is define all of our limit points to be the same. The issue-- let me say this differently. The idea is we want to take sequences of rational numbers that will converge to the real numbers we want. We want them to converge to the square root of 2. We want them to converge to the square root of 3, things like this. The issue is we don't want to overdefine our space. We don't want multiple sequences to go-- or we don't want multiple square roots of 2. We only want one specific one. If we use equivalence classes, let's just define the limit points as being the same. So this is what lets us do it. We have equivalence classes of real number-- of rational numbers. And then what we say is that R is the set of equivalence classes. 
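For instance, here is a tiny sketch of two different Cauchy sequences of rationals both heading toward the square root of 2. The particular recipes-- the Babylonian iteration and decimal truncations-- are my own illustrative choices; the point is that their difference tends to 0, so in the completion they sit in the same equivalence class and define the same real number.

```python
from decimal import Decimal, getcontext
from fractions import Fraction

def babylonian(n):
    """n steps of x -> (x + 2/x) / 2 starting from 1; every iterate is a rational number."""
    x = Fraction(1)
    for _ in range(n):
        x = (x + 2 / x) / 2
    return x

def truncation(n):
    """The first n decimal digits of sqrt(2), read back as a rational number."""
    getcontext().prec = n + 5
    return Fraction(str(Decimal(2).sqrt())[: n + 2])

for n in range(1, 7):
    a, b = babylonian(n), truncation(n)
    # both sequences are Cauchy, and their difference goes to 0,
    # so they are equivalent in the sense defined above
    print(n, float(a), float(b), float(abs(a - b)))
```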
The notion of an equivalence relationship, I've put on the lecture notes. So I would highly suggest looking at that on Wikipedia if you haven't seen it before. But the fact that this is true, you can simply write out and prove the fact that this is an equivalence relation. In fact, I'll just state what the three properties are now, the three properties you would have to show, because there are ones that, at least on the face of them, we should be able to think through a little bit. Doo-doo-doo. OK. So we have this set of equivalence classes. What we want to show is that this is, in fact, an equivalence relationship. So we want to show first that the sequence an, if this is Cauchy, this is equivalent to an. But how do we know that this is true? Well, it's simply by the fact that an minus an equals 0 for every single n. So it's definitely going to tend to 0. Secondly, you want to show that if an is equivalent to bn, this is true if and only if bn is equivalent to an. Why is this going to be true? Well, we know that absolute values are symmetric. So we can just swap those around. All good on that front. And three, we want to show that if an is equivalent to bn and bn is equivalent to cn, then an is equivalent to cn. This last one, I'll iterate on. I'll expand upon this. How do we do that? Well, we look at the distance from an to cn, and we know that this is less than or equal to the distance from an to bn plus the distance from bn to cn. And we can make both of these terms arbitrarily small because the distances between the points goes to 0. So then this is less than epsilon, which tells us that this is going to converge to 0. This is less than epsilon for all epsilon bigger than 0. There exists an n such that this is true. OK. So this tells us that, in fact, the distance between an and cn are going to tend to 0. And notice that all of the things I've stated so far, everything in this equivalence relationship, only uses the metric, right? Here we're just using the metric defined by absolute values. But you can picture replacing this with any metric you care to use, which is why we can view completions of metric spaces as a whole topic for today. So notice we have the same equivalence classes notion for metric spaces. Why is this helpful? Well I'll give one short example, at least verbally, which is this metric is not the only one that can exist on the rational numbers. We can impose it with a bunch of other metrics if you wanted to and then look at the equivalence classes of that. And that leads to what is known as p-adic numbers. I wish I had more time to talk about that today, but it's one of the examples that shows up in number theory. So I'll leave that for you to look at. But how do we add in these limit points? That's the whole question, right? Well, there's a theorem that tells us that we can simply do so. And let me state what that theorem is. This theorem, I'll just write as completions because it's what tells us that there exists a completion. OK. Let M, d be a metric space. And it doesn't need to be Cauchy complete, right? The idea is that we want to complete it so that it is Cauchy complete. Then there exists an M bar such that, one, M is a subset of M bar; two-- doo-doo-doo-- the distance on M bar-- I should note that this is a metric space. And I'll call the metric on it d bar for simplicity. The distance on M bar restricts to the regular distance on M. So what this tells you is that we have the subspace metric on our space, right? 
The distance between two points and M is going to stay the same in this new metric d bar. And three, M bar is Cauchy complete. And, finally, four, the closure of M is M bar, which is where this notation comes from. If you've ever seen the definition of closure before in a topology book or in Lebl's book, it's M bar. Does everyone remember what the definition of closure is? It's the smallest closed set that contains your smaller set. So the smallest closed set that contains M is, in fact, M bar. You can think of it as just including all the boundary points as the main thing. Doo-doo-doo The reason why that's the right notion to consider is because we know that closed sets contain all of your limit points. Specifically, the issue are the ones at the boundary. But if you add those points in, then we're all good. Before we go into this proof, however, I need to simply state one small lemma. I won't go through the proof, because it's very similar to what's on your homework, but I'll definitely talk through it, which is the set C at lower infinity of M, which is the set of functions f, which are continuous, with the supremum of m and M of f of m bounded-- so this is less than infinity-- as a metric space. So what this is is the set of continuous bounded functions. And denoted it's c infinity of M. Notice it's in the small part because it's not smooth. Smooth goes in the upper part. So this is a metric space, specifically with metric given by the supremum or distance d infinity-- or always d upper infinity of f to g equal to-- no, I'll use lower infinity-- the supremum of m and M of the distance between f of x minus g of x. Oh, sorry. These should be m's. Firstly, I should ask how do we know that this supremum exists and is finite. Anyone? I'll let y'all think through it. Here we want to use the fact that f and g are bounded. So does that immediately imply that the supremum is bounded? Yeah. The answer is yes. You can just apply the triangle inequality, right? The absolute value of f of m minus g of m is less than or equal to absolute value f plus absolute value g. And we can take the supremum of that, and that will have to be less than infinity. So this supremum definitely exists, and it's finite, which is helpful. We want to be able to say metrics must be finite, unless you're looking at some weird version of a metric. So we know that this is, in fact, a metric. The proof of this is just using the same proof of supremums that we did for continuous functions on a bounded interval. The only difference is instead of the extreme value theorem, we used boundedness, right? That's the main difference, because we don't know that it's compact. So we don't know that there exists an extreme value. But we do know that it's bounded. So that's all we needed from the extreme value theorem. So that shows that this is, in fact, a metric. What's the next thing that we might want to say about this? Well, in fact, C infinity of M is Cauchy complete. The proof of this is just the same as it was on your homework. You take a Cauchy sequence of functions. So the function such that the distance between two points gets small. And then what we know is we can take the limit points of them by continuity. And then you have to show that, in fact, the limit point is bounded. But the proof of this is nearly exactly the same as your second problem set. So unless anyone wants me to work through the details right here, I would highly suggest doing it yourself. 
But I think it's the sort of thing that, because you've all done the problem set, you can believe me that this is true. All right. So how are you going to use this fact? Well, the fact that this is Cauchy complete is super helpful for us. We want to somehow view our metric space M as a subset of C infinity of M. Now, how do we do that? So I'm going to prove this theorem now. The way that we do that is we simply map points to-- we simply map points in our metric space to a function. So fix m prime M arbitrary and define the map from M and m. We're going to map this to the function g of m which is defined pointewise at p as the distance of p to m minus the distance from p to m prime. So-- oh, sorry-- yeah. So fix sum m prime and M, which is arbitrary. And we're defining the map from our point in a metric space to a continuous function. How do we know this is continuous? Well, firstly, you can show that a metric is continuous, which is what we did on our second day, right? We had that the distance between xn and y goes to the distance of x and y. And, in fact, distance of x and ym converges to d x, y. So we have already briefly, in loose terms, shown continuity of a metric. But how do we show that this is bounded? Well, the fact that this is bounded has to do with the fact that both of these metrics are less than infinity, right? That's by definition of the metric. So we're never going to get something that's nonfinite. So this tells us that we can take the supremum of it. So we now have a way to view our metric M as a subset of C infinity of M because you've mapped it to a continuous and bounded function. Now, our goal is simply to take the closure of this subset. But to do so, we need to clarify what we mean by this, quotation mark, "subset," right? What I mean by this is notice that gm of p-- how do I want to say this? Notice, then, that the distance from m1 to m2, this is, in fact, equal to the supremum of m and M of the distance between gm1 of p minus gm2 of p, because you can simply plug in-- oh, sorry this should be supremum over p-- because you can simply plug in m1 and m2 and take their difference. And then you just want the maximum distance between them. And notice that this is precisely the same metric as is on our C infinity space. So this is equal to the distance to infinity between gm1 and gm2. What this tells us is that this map-- this map that goes from m to gm-- is an isometry, which means that it just preserves distances. The distance between two points is the same as the distance between two functions in this different space. And, in fact, not only is it an isometry. It's going to be bijective, right? For each m, there exists a unique gm defined as this distance. So this tells us that we really can view m as a subset of C infinity because we're just viewing it as a subset such that distances are preserved. Does everyone get why this fact is true? OK. So move over to the next board. The proof from here is pretty straightforward. We have a subset of our metric space inside of a Cauchy complete metric space. Doo-doo-doo. So what we have is that M is a subset of C infinity of M. And notice that this is complete, infinite, Cauchy complete. So it contains all of its limit points. And secondly, we can just take the closure of M. So let M bar be equal to C infinity-- oh, sorry, not C infinity-- be the closure of M in C infinity of M. So is this the completion that we want? Well, to show that it's the completion that we want is not too bad. 
So, moving over to the next board, the proof from here is pretty straightforward. We have our metric space sitting inside a Cauchy complete metric space: M is a subset of C infinity of M, and C infinity of M is Cauchy complete, so it contains all of its limit points. And secondly, we can just take the closure of M. So let M bar be the closure of M inside C infinity of M. Is this the completion that we want? Well, showing that it's the completion we want is not too bad-- you just have to check each of the four properties. One, we know that M is definitely a subset of its closure; that's by definition. And by construction it also satisfies four. Secondly, the distances are preserved, because the distances from C infinity of M are preserved when we take the closure-- that's just the subspace metric. The last thing to check is that M bar is, in fact, Cauchy complete. But that follows from the fact that it's a closed subset of a Cauchy complete space. Recall: a closed subset of a Cauchy complete metric space is Cauchy complete. How do I check this? Take any Cauchy sequence in your closed subset. It has to converge to something, because it sits inside a Cauchy complete space. But the thing it converges to has to lie in your closed subset, because a closed set contains all of its limit points. So every Cauchy sequence in our closed subset converges in our closed subset, which means it's Cauchy complete. So then we're basically done, right? M bar is a subset of C infinity of M, but by this isometry it contains a distance-preserving copy of M, so it really is a completion of the original metric space. So the proof is done. So why is this so helpful? I've already given the example of the rationals: the real numbers are the completion of the rationals. But that's one we already knew-- one that has been known for quite some time in our real analysis studies. There are quite a few more examples that we can consider, and these examples show up, for instance, in 18.102 in very, very important ways. So, example: consider normed spaces. These are vector spaces with a norm put on them-- recall this from our third lecture. What if a normed space isn't Cauchy complete? We want Cauchy completeness for a number of our theorems, and it's very important that we have it. Well, we can just take the completion and end up with what are known as Banach spaces. And Banach space theory is literally the majority of 18.102. In fact, we can do slightly more. If you've heard of inner products, perhaps from 18.06, you can complete inner product spaces, which I won't define here, but I'll simply note that their completions are known as Hilbert spaces. And these are just names, right-- names that aren't necessarily too important to you right now-- but I think they're super cool because they show up in 18.102. But perhaps an example that would be more interesting right now is how we end up with integrable functions. Recall that Riemann integration has a ton of holes. For instance, take the indicator function of the rationals, where the indicator function at the point x is 1 if x is rational and 0 otherwise, i.e., if x is irrational. How do you integrate this function using Riemann integration? The answer is that you can't-- it doesn't work. Every lower Riemann sum is 0, because every subinterval contains irrational points, and every upper Riemann sum is 1, because every subinterval contains rational points, so there is no single limiting value. It's a huge issue, right? Riemann integration has a ton of flaws, this being just one of them.
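To see the failure numerically, here is a small sketch of mine: two Riemann-style sums for the indicator of the rationals on [0, 1], one with rational tags and one with irrational tags. Exact irrationals can't be represented in floating point, so the sketch simply marks rational tags with the Fraction type; the point is only that the value of the sum depends entirely on which tags you pick.

```python
from fractions import Fraction
import math

# Indicator of the rationals: 1 at rational points, 0 at irrational points.
def indicator_of_rationals(tag):
    return 1.0 if isinstance(tag, Fraction) else 0.0   # Fraction marks a rational tag

def riemann_sum(tags, n):
    # n equal subintervals of [0, 1], one tag per subinterval
    return sum(indicator_of_rationals(t) * (1.0 / n) for t in tags)

n = 1000
rational_tags   = [Fraction(k, n) for k in range(n)]
irrational_tags = [k / n + math.sqrt(2) / (10 * n) for k in range(n)]

print(riemann_sum(rational_tags, n))    # 1.0
print(riemann_sum(irrational_tags, n))  # 0.0 -- no common limiting value exists
```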
But, in fact, I can complete my way out of this: I can complete a space of nicely integrable functions. So, specifically, consider C c 0 functions-- compactly supported continuous functions-- on, let's say, a metric space... actually, I'll just say on R. So this is the set of compactly supported continuous functions on the real numbers. We've already talked about this before, on our third lecture, when we were talking about integration. What's so nice about this set is that for any f in the space, the integral over R of the absolute value of f of x dx is less than infinity, because f vanishes outside a bounded set. And, in fact, this gives us a metric on our space-- it's suddenly our L1 metric from day one. Our L1 metric-- I think I called it I1-- is I1 of f and g equals the integral over R of the absolute value of f of x minus g of x dx, and it's called L1 because the integrand is raised to the first power and the whole thing is then raised to the 1 over 1 power. So this is our metric on the set of compactly supported continuous functions. I can simply complete this space under this metric-- because, again, our definition of completion depends on which metric we're choosing. The metric on the completion has to restrict to the metric on our original metric space, so if I change this metric, it changes what the completion looks like. In our case, the completion of the metric space C c 0 of R with respect to the I1 metric gives us the set of L1 functions on R, which you can think of as the functions f such that the integral of the absolute value of f is less than infinity. Now our functions can look quite a bit weirder. They don't have to be Riemann integrable anymore; they can be Lebesgue integrable, which is what the L stands for, and this integral is then known as the Lebesgue integral. And this is used all the time, right? Because we want to be able to integrate functions that actually integrate well, and Riemann integrable functions are not it-- they have a ton of holes. In fact, we can generalize this quite a bit more. Remember on that first day I defined Ip, which is the integral of the difference raised to the pth power, all of which is raised to the 1 over p. If I complete the space of compactly supported continuous functions under Ip, then what I end up with are the Lp functions, where all of a sudden the integrand is raised to the pth power. So this is the space of Lp functions-- deeply, deeply important. This also shows up in 18.102, but it's perhaps slightly easier to see here why it's so important. So I'll note one more thing, I think, and then that will mostly be it, which is: recall that this set was, in fact, a normed space-- I can define this distance as coming from a norm. What this tells us is that the Lp spaces are Cauchy complete, right, because they're Banach spaces: they are the completion of a normed space. Now, this doesn't mean we've said everything there is to say about Lp functions. There are still questions like: what theorems about limits do we have? What theorems about approximation do we have? But this does give us a useful intuition for what's happening. All that's happening is we're looking at equivalence classes of Cauchy sequences. That makes a lot of sense when it comes to the real numbers, but it might feel weirder to think about with respect to functions, or with respect to topologies and weird things like that.
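Here is a small numerical sketch of the I p metrics on compactly supported continuous functions-- again my own illustration, with a triangular "bump" supported in [-1, 1] and the integral approximated by the trapezoid rule on a fine grid.

```python
import numpy as np

# I_p(f, g) = ( integral over R of |f(x) - g(x)|^p dx )^(1/p).
# Both functions vanish outside [-1, 1], so integrating there is enough.
def I_p(f, g, p, a=-1.0, b=1.0, n=200_001):
    x = np.linspace(a, b, n)
    return np.trapz(np.abs(f(x) - g(x)) ** p, x) ** (1.0 / p)

def bump(x):                       # continuous, compactly supported in [-1, 1]
    return np.maximum(0.0, 1.0 - np.abs(x))

half_bump = lambda x: 0.5 * bump(x)

print(I_p(bump, half_bump, p=1))   # about 0.5  (half the area under the bump)
print(I_p(bump, half_bump, p=2))   # about 0.41
```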
So, yeah, I'll just leave y'all with that thought: equivalence classes of Cauchy sequences are a useful intuition behind all of these constructions. I should note that the last metric on the board is the Lp metric-- I forgot to change that one. OK. Is there anything else I want to say? I think that's it. Yeah, so unless you all have any questions, we can end today slightly early. But, yeah, this was very neat. Next time, I'm going to talk about more applications to 18.101, 18.102, 18.103, 18.152-- oh, and 18.901, if you're planning to take 18.901. There's a ton of applications of what we've talked about in this class so far to these later classes, and I'm going to bring up how that actually works. So see y'all next time.
MIT_18S190_Introduction_To_Metric_Spaces_IAP_2023
Lecture_6_Where_We_Go_from_Here.txt
PAIGE BRIGHT: So welcome to the last lecture of the class. Today we're going to talk about where all of this material particularly goes from here-- so for instance, how this material applies to 18.102, a little bit of 18.101, 901, for sure, and a little bit of 152. If those numbers don't mean anything to you right now, that's totally fair. I'm going to write out what the number and the title of it is. But loosely speaking, that's the general plan for today. But before we do that, I want to introduce just a little bit of brief history to metric spaces in the first place because I think that there's this weird notion that all of these mathematical tools just pop up. But in fact, everyone knew each other in the field, which I think is really cool. So firstly, the main thing to know is that, in the early 1900s, mathematics was far less axiomatized. It was far more localized in each field, like-- oh, in this field who's studying this type of function? We can study what this means for the entire space. What does being Lipschitz mean what? Is a vector space, things like that? But because of this, a bunch of different people had all different types of convergence. They just weren't unified. And now, this is fine, but it makes it difficult to see if it's something that's true specifically about the space or if it's something true in higher generality. So in 1906, Fréchet invented metric spaces. The reason I think this is particularly interesting is because this concept is one that he introduced in his PhD dissertation. So he went throughout his PhD course load and then created metric spaces in his dissertation. It's a very, very powerful tool. And as we've seen in this class so far, it unified all these different types of convergence at once. No longer did people have to state convergence in their field and then prove all the properties. They can simply state the distance on it and then state what convergence means, if, it is, in fact, a metric. So this, in particular, really unified notions of convergence and other types of open sets and things like that. So in particular, this lets you prove facts about metric spaces and then state it about your space. That's the main idea. And then later on, in 1914, a mathematician known as Hausdorff, which will be a familiar name if you've done topology, popularized metric spaces. So in particular, he wrote this book called Principles of Set Theory. And in this book, he was able to define, or-- this book was very, very popular, and in it, he includes metric spaces. So it made it more and more popular. But in fact, he did quite a bit more than that in this book. In this book, he introduced topological spaces. And topological spaces are, of course, what is studied in Introduction to Topology, 18.901. Now, throughout the day, I'm going to use this diagram, where what I'm going to talk about is this idea that topological spaces are more general than metric spaces. Let me write this down. Topological spaces-- so a way to see this is that the definition of a topology, which we'll get to in a moment, is simply a definition of what it means to be open. And of course, we have a notion of what it means to be open under the metric. You can just cover it with epsilon balls and everything's fine. And so topological spaces are far more general than metric spaces, and there's quite a bit to study here. So let me introduce what the definition is of a topological space. So a topological space is simply a set that has a topology on it. 
And I'll define what it means to be a topology in a moment. A topology T on a set X is a collection of subsets of X such that, one, the empty set and the entire set X are in your topology; two, if the T sub i are members of T, then the arbitrary union of the T sub i is in T; and three, any finite intersection of members T sub i of T is in T. So, again, just to reiterate, a topology is just a collection of subsets of the set X such that the empty set and the entirety of the set are in the topology, arbitrary unions of members of the topology are in the topology, and finite intersections are as well. And notice that these are the same as the topological properties of open sets we proved for metric spaces, which is a good thing, because this is exactly what we would want if metric spaces are to be a special case of topological spaces. And what is a topological space? Well, a topological space is simply a set with a topology on it. So I'll just write X with T, where T is the topology. With these three properties, we have a notion of openness in particular. This is more of a terminology thing: a set A inside a topological space X is open if A is in the topology, and closed if X minus A is in the topology, which is the same as saying that the complement of a closed set is open. So, yeah, these are the properties of what are known as open sets in this much, much more general setting. Now, you might be asking yourself why we should particularly care-- this is a very abstract definition. But I'll remind you that when we first started talking about metric spaces, we also had a very, very general, somewhat abstract definition. At first it seems like not too much-- it's just the topological properties of open sets we've already discussed-- but with this generality we can prove quite a bit more. Yeah, I already noted what the topology on a metric space is: it's simply the collection of unions of epsilon balls. If you were in a topology class, this would be phrased as the epsilon balls forming a basis for the topology, but I'll leave it there for now. And when is a topological space, in fact, a metric space? That is what's known as a metrizable space. A topological space X is metrizable if there exists a metric d inducing the topology on X. Here, the topology on a metric space is the one induced by these epsilon balls; in more generality, a topological space is metrizable if there exists a metric inducing its topology. Now, this notion of inducing is one that I'll leave for 18.901, but I just bring it up now because, as I mentioned on our second day, not every topological space is metrizable. Not every set that you're considering is going to be as nice as a metric space. Even though we can put a metric on any set, it doesn't mean that it gives us actually good information. So having metrizability is deeply important.
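As a tiny sanity check of the three axioms-- a sketch of mine on a finite set, where "arbitrary union" just means any union of members-- here is Python code that tests whether a candidate collection of subsets of X = {1, 2, 3} is a topology.

```python
from itertools import chain, combinations

X = frozenset({1, 2, 3})

def is_topology(X, T):
    # axiom 1: the empty set and X belong to T
    if frozenset() not in T or X not in T:
        return False
    # axiom 2: unions of any subcollection stay in T (T is finite, so this is exhaustive)
    subcollections = chain.from_iterable(combinations(T, r) for r in range(1, len(T) + 1))
    if any(frozenset().union(*sub) not in T for sub in subcollections):
        return False
    # axiom 3: pairwise intersections stay in T (finite intersections follow by induction)
    return all(A & B in T for A, B in combinations(T, 2))

print(is_topology(X, {frozenset(), frozenset({1}), frozenset({1, 2}), X}))  # True
print(is_topology(X, {frozenset(), frozenset({1}), frozenset({2}), X}))     # False: {1} union {2} is missing
```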
So let's take this time real quick to define the notions of convergence, open sets, and continuity in terms of the topology, just like we did on our first day: we defined what a metric was, and then we defined all of the terms we were used to in terms of that metric. So we're going to do the same now. One: a neighborhood-- I should write this out the first time-- a neighborhood of a point x inside your set X is simply an open set U containing x. Why do I bring this up? Because our definitions of convergence and of continuity will depend on this definition of a neighborhood. So, two: a sequence xn converges to a point x if, for every neighborhood of x, all but finitely many of the xi lie in that neighborhood. Now, recall that this matches what we had for metric spaces. We had the lemma that a sequence converges to a point x if and only if, for every epsilon ball around it, all but finitely many terms are inside it-- which was a pretty direct lemma. So this is how we define convergence for a topological space. Three, we can also state things about continuity. If I have a function f from a topological space X to Y-- and I'll say that the topologies are called T sub X and T sub Y-- then f is continuous if, for every single open set T in T sub Y, f inverse of T is in T sub X. In other words, the inverse images of open sets remain open, which, as we've shown, is the same as the definition of continuity we can have for metric spaces. So all of this is still very, very related to what we've been talking about. So you might wonder why we even study metric spaces at all, then, right? Because topological spaces are much, much more general. If I can prove a fact about topological spaces, I will have proven it for my metric space as well. And the short answer to that is, in my opinion, the same answer to why you would study real analysis after calculus. In theory, real analysis shows all the facts you need to know about calculus-- it proves them in more generality-- but you gain a lot of intuition from dealing with calculus in the first place. Before you prove the mean value theorem abstractly, it makes sense to have an idea of what's happening before you even do so. So, very similarly here, having intuition about metrics and metric spaces provides a lot of intuition about topological spaces, even though most topological spaces are not metric spaces. It still gives you the framework to move forward. For this reason, that's essentially why, whenever I could, I would draw pictures of what's happening as a blob, because then you get the intuition of, oh, how can I start to problem-solve via a diagram? It just gives you that right frame of mind to move forward. OK, now, that being said, metric spaces, because they're much more specific than topological spaces, are a current area of research. Topology as a whole is not a closed field, but point-set topology-- the basic framework-- is mostly done being researched and is not as much of an active area, whereas metric spaces, on the other hand, very much still are. So that is yet another answer to the question of why you should care about metric spaces if you could talk about topological spaces. OK, so now we're going to talk about 18.102, unless you have some questions on the material we've talked about today. Cool. Did I write down 18.901? I didn't. I meant to do so. So this is the material that's covered in one of the first lectures of 18.901-- Introduction to Topology. OK, 18.102-- Functional Analysis, or Intro to Functional Analysis. Functional analysis is all about studying what are known as normed spaces, and we've already talked about them before. Normed spaces are just a special kind of metric space; we talked about this in lecture 3. It's slightly more specific than a metric space. Normed spaces.
So, again, recall a normed space is simply a vector space with a norm on it, denoted with absolute value bars. And specifically the three properties we want, again, are positive definiteness-- we want the norm to be bigger than or equal to 0, and equal to 0 only for the point 0 itself-- absolute homogeneity, and the triangle inequality. These are the three properties we want from a norm. And on the homework, you've already shown that the distance induced by the norm is, in fact, a metric. So normed spaces are a special case of metric spaces: every norm gives you a metric. OK, now, with this definition of a norm, we can, yet again, redefine all of our notions of convergence and neighborhoods and continuity as we've done before, so I'll quickly do so. The way to do it is just to write everything in terms of the induced metric. One: a sequence xn converges to x in a normed space if, for all epsilon bigger than 0, there exists an N in the natural numbers such that, for all n bigger than or equal to N, the distance between xn and x-- that is, the norm of xn minus x-- is less than epsilon. This is the definition in terms of the metric, but recall that we can write this metric in terms of the norm. So if you're in a class like Functional Analysis, you'll probably just use the norm notation as opposed to going back to the metric one. But this is the definition of convergence in a normed space. And again, we want all these definitions to be compatible, so if it feels like repetition, there's a good reason why it does. We want a new definition, but one that still works with the broader setting of metric spaces, so that we don't have to redo all of our work. Two: of course, you can define Cauchy sequences in the same way-- simply sequences such that, for all epsilon bigger than 0, there exists an N in the natural numbers such that, for all n and m bigger than or equal to N, the distance between xn and xm is less than epsilon. So this is the definition of Cauchy sequences. And, finally, we define a set to be open if and only if we can find a ball of some radius epsilon around each of its points. So, three: a subset A of X is open if, for all x in A, there exists an epsilon bigger than 0 such that the ball of radius epsilon around x is contained in A. And in terms of the norm, that ball is the set of y such that the norm of x minus y is less than epsilon-- so precisely the same as in a metric space. Now, to continue off where we left off last time in lecture 5: we started introducing completions of metric spaces because Cauchy completeness is very, very important-- so much so, in the setting of functional analysis, that we have a name for it. A Banach space is a normed space that's Cauchy complete, essentially. The only subtlety is that it's complete with respect to the metric induced by the norm, because, again, we have a notion of Cauchy completeness in terms of metric spaces, so we want it to be compatible. And as we talked about last time, you can take the completion of a normed space to get a Banach space. So Banach spaces are slightly more specific than normed spaces, but the tools that we get from them are very, very important.
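Since we keep leaning on those three norm axioms, here is a quick numerical spot check of them-- my own sketch, not a proof-- for the p-norms on R^4 with random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

def norm_p(x, p):
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

for p in (1, 2, 3):
    for _ in range(1000):
        x, y = rng.normal(size=4), rng.normal(size=4)
        a = rng.normal()
        assert norm_p(x, p) >= 0.0                                        # positive
        assert np.isclose(norm_p(a * x, p), abs(a) * norm_p(x, p))        # absolute homogeneity
        assert norm_p(x + y, p) <= norm_p(x, p) + norm_p(y, p) + 1e-12    # triangle inequality
print("all spot checks passed")
```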
In particular, as an example of a Banach space, you can show that C infinity of a metric space M-- or even of a normed space-- is a Banach space. We talked about this last time, and the proof is essentially the same, where, again, this is the set of functions f that are continuous and bounded-- so the supremum over m in M of the absolute value of f of m is less than infinity. So you can show that this is a Banach space. There are a few other classic examples, like Rn and Cn, and, among spaces of continuous functions, C0 of a closed interval a, b. But most examples are built off of these few-- these are the key ones to have in mind, and they're so useful that it makes sense to use them as our main source of intuition. In fact, we even have weirder examples of Banach spaces, and I'll introduce one of them now, where, specifically, you're interested in the dual of the vector space. Have you heard of the notion of a dual before? No worries-- yeah, it's totally fair. In the case of normed spaces it's not too bad: it's the set of functionals. A functional is a linear map-- let's say T-- from your normed space X into the real numbers, or complex numbers if you prefer. And, in fact, this is the definition of the dual of a vector space in general-- just replace X with the vector space. (In functional analysis one restricts to the bounded, i.e., continuous, functionals, so that the supremum below is finite.) And, in fact, you can show that the set of such functionals is a Banach space. But to do so, we need to introduce a norm: in order to even have the possibility of showing that a space is a Banach space, you need to have a notion of a norm. So the norm on T, known as the operator norm, is simply the supremum, over x in capital X with the norm of x equal to 1, of the absolute value of T of x. So it's just the largest size of the image over the unit sphere-- the vectors of norm 1. That's what the definition of the operator norm is. And you can show that this is, in fact, a norm. Homogeneity is not too bad. The triangle inequality is where things get worse, as always, but homogeneity and positive definiteness are nearly immediate. So with this norm, you can show that the set of functionals is a Banach space. And why is this important? Because it turns out that studying your normed space is pretty much analogous to studying its dual space. For functionals you can redefine continuity, you can redefine open sets, things like that, and you can show all of these properties-- and the fact that the dual is a Banach space is particularly helpful.
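Here is a concrete instance of the operator norm-- my own sketch. For the functional T(x) = v dot x on R^3 with the Euclidean norm, the supremum over the unit sphere equals the Euclidean norm of v (attained at x = v / ||v||), and random sampling of unit vectors approaches it from below.

```python
import numpy as np

rng = np.random.default_rng(1)
v = np.array([3.0, -4.0, 12.0])
T = lambda x: v @ x                     # a linear functional on R^3

# approximate sup over ||x|| = 1 of |T(x)| by sampling random unit vectors
samples = rng.normal(size=(200_000, 3))
unit_vectors = samples / np.linalg.norm(samples, axis=1, keepdims=True)
approx_operator_norm = np.max(np.abs(unit_vectors @ v))

print(approx_operator_norm)   # close to 13.0, from below
print(np.linalg.norm(v))      # exactly 13.0
```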
OK, now, I want to give one more example of a normed space before I move on, one that I briefly talked about before. I'll write it up here because we're done with topologies. Specifically, you can study what are known as inner product spaces, which is a term I introduced last time but will define right now. It's basically the same story as with normed spaces and metric spaces: you just introduce an inner product on your vector space. You can think of an inner product like the dot product on Rn. Having the dot product was very, very helpful in Rn because it lets you define things like magnitude, which is part of the reason we introduced norms in the first place-- to capture the notion of magnitude. So an inner product space is a vector space X with an inner product defined on it, and, specifically, we want this inner product, as usual, to have three main properties. The three properties are-- and I'm going to assume the inner product takes in two points of X and spits out a real number-- first, symmetry: the inner product of x and y should be the same as the inner product of y and x. Two, linearity: the inner product of ax plus by with z is the same as a times the inner product of x with z plus b times the inner product of y with z. And lastly, positive definiteness, but stated slightly differently: if x is not 0, then the inner product of x with itself is bigger than 0. It's stated as an implication rather than an if and only if; the inner product of two different vectors is certainly allowed to vanish-- that's orthogonality, as in Rn-- but a nonzero vector paired with itself must be strictly positive. These are the three properties of an inner product space. And the reason I bring this up right now is because you can consider, just as we did in calculus, the inner product of x with itself raised to the one half. This makes sense because that quantity is nonnegative, so taking the square root is totally fine. What you can show is that this induces a norm on X. In other words, given the inner product space, if you look at the square root of x inner product with itself, you want to show that this has the three properties you want for a normed space, just as we did in the proof for metric spaces. And this is really important, and it shows up all the time-- in quantum mechanics, in fact, if that's some part of math and physics that you're interested in. And, in fact, I should say we can Cauchy complete it, as usual. Once we have that our inner product induces a norm, we know that it then induces a metric. The one thing we don't know is whether it's a Banach space or not. And that's really fine: we can just Cauchy complete it and get what is known as a Hilbert space. A Hilbert space is a Cauchy complete inner product space. So you can view it as a completion; you can also view that as just the definition. And here, Cauchy completeness is with respect to the metric that's induced. I'm going to rearrange this diagram real quick, because not every inner product space is a Banach space, but inner product spaces do sit inside normed spaces. So, specifically, you can have something like this: inner product spaces are a subset of normed spaces, and Hilbert spaces are inner product spaces that are Cauchy complete-- so they are therefore Banach spaces. This is the end of my diagram-- no more drawing squares all the way down. But I just wanted to show that all these ideas are deeply related, and each one of them is interesting to study in its own right. Now, granted, all of the center squares-- normed spaces, Banach spaces, and inner product spaces-- are mostly talked about in functional analysis, which roughly makes sense, because you need vector spaces in order to move forward; functional analysis is done on normed spaces, and, more specifically, usually on Banach spaces.
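Before moving on, here is a small numerical aside on the inner-product material above-- a sketch of mine using a weighted inner product on R^3 (the positive weights are an arbitrary choice). It spot-checks Cauchy-Schwarz and the triangle inequality for the induced norm, which is the content of the claim that the square root of the inner product of x with itself is a norm.

```python
import numpy as np

rng = np.random.default_rng(2)
w = np.array([1.0, 2.5, 0.3])            # positive weights give positive definiteness

inner = lambda x, y: np.sum(w * x * y)   # a (non-standard) inner product on R^3
norm  = lambda x: np.sqrt(inner(x, x))   # the induced norm candidate

for _ in range(1000):
    x, y = rng.normal(size=3), rng.normal(size=3)
    assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12   # Cauchy-Schwarz
    assert norm(x + y) <= norm(x) + norm(y) + 1e-12        # triangle inequality
print("induced norm passes the spot checks")
```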
OK, I want to note one small application to 18.101, and then I'm going to talk about the application to differential equations. In 18.101, Analysis of Manifolds, you're studying, of course, manifolds. Now, what is a manifold? A manifold is just some smooth enough blob, and you can picture it in Euclidean space. To be even more specific, picture an orange. The main property that we like about an orange, or a sphere, is that, locally, it looks flat, just like in calculus a function locally looks like a line. In higher generality, a manifold is just a space that, locally, looks flat. The reason I bring this up is because, in the definition of a manifold, you assume it is metrizable-- it's a space carrying a metric, or at least a metric that induces its topology. So why is this so important? Why do we ask for metrizability of our manifold? Well, we want there to be a notion of distance on our manifold, which makes sense. The reason I bring up metrizability here is because, in reality, when you're studying metrizable spaces, you don't often care what the metric is. Sometimes you do-- sometimes you want to say, oh, once there exists a metric, then x, y, and z. But most of the time, you just use the properties of the open sets. So most of the time you just want to use the fact that balls of radius epsilon are open. And doing so lets you define smooth functions, it lets you define integration, and it lets you define vector fields, as you might have seen in 18.02. All of this is just letting us do calculus on weirder shapes: doing calculus on a sphere, doing calculus on a smooth enough surface, things like that. But really, all we need in order to start doing so is that the balls of radius epsilon are open. So sometimes, in 18.101, you won't see exactly how metrics come up, in particular, but the intuition is still there: you still want to have a notion of distance on your manifold. So, yeah, that's all I'll say about 18.101, because, of course, it takes a month to actually get into meaningful theory, since you're redefining multivariable calculus. So I'll leave that there for now. Yeah, the last example I want to talk about today is specifically the application to differential equations, and we already briefly talked about this. Last time, we talked about integral operators, specifically as an application of the Banach fixed point theorem. So let me just write this down. Last time, we considered equations of the form f of x equals g of x plus the integral from a to b of K of x, y times f of y dy-- the kind of integral equation that gives rise to this nice integral operator. If this is unfamiliar, that's totally fair; I'm just bringing up the fact that we've already seen one application of the Banach fixed point theorem to differential and integral equations. But, in fact, you can also use differential equations to motivate the notion of compact sets. I should have written 18.152-- Intro to Partial Differential Equations. So we can use ODE or PDE to motivate compact sets, and let me just briefly explain why this is the case. So picture some subset omega of R2, and picture it as your metal sheet. So I just have some blob out here, omega-- I'm going to assume it's nice and connected-- and I want to consider it as a metal sheet. Why do I do that? Because, from here, let's say I just heat up one tiny portion of it-- let's say I take a little blowtorch and I heat it up right here. We want to know how the temperature is affected by this, in particular. Let's say that u of x, y is the temperature at the point x, y in omega. What you can derive, using physics or just by looking at it from a differential viewpoint, is that the relevant operator is the second derivative of u with respect to x plus the second derivative of u with respect to y-- this is the spatial part of the heat equation.
And, in fact, if we let this metal sheet reach equilibrium-- if we let the blowtorch's heat dissipate over the whole metal sheet-- then this expression becomes equal to 0, which is the differential equation that we're particularly interested in. This operator is, in fact, so important that we just call it the Laplacian of u, if you've heard of that before, where the Laplacian is just the sum of the second derivatives in each variable. OK, so the question is: what if I only know the temperature at the boundary? So here, the boundary-- we can just draw it out pictorially-- is the set of points along the edge. Suppose that, just along the edge, I knew the temperature, and that's all I knew. So f equals u on the boundary, and we denote the boundary by partial omega-- that's just the notation that's used. The question is: given this equation is true and given that I know the temperature values along the boundary, does there exist a u satisfying this? So, specifically, does there exist a u such that the Laplacian of u is 0 and such that u restricted to the boundary is the given function f? This is what I'll call Q1. The question is quite difficult to answer, at least immediately-- in fact, that's how most PDE questions go; it's particularly difficult to show existence of a solution. But we can perhaps think about it more physically. What if, instead of viewing it in terms of the temperature, which can be a little bit weird, we try to minimize the energy? If we minimize the energy, then maybe we'll have reached thermal equilibrium. And this is exactly what mathematicians did at the time. Oh, I should note this question is known as the Dirichlet problem. You've likely seen a discrete version of this if you've done 18.701-- on the first problem set; sometimes professors include it, sometimes they don't, so it's totally fair if not. But, yeah, the first question is: does there exist a u satisfying this? And mathematicians, at the time, just tried to study the energy. So define the energy of a function u to be one half times the integral over omega of the gradient of u squared dA, where dA is the area element on omega. This is known as the energy of the function u; you can think of it as the heat energy. But now the question is: if I minimize this, is the minimizer a solution to the differential equation? And that's what mathematicians, in particular, did. They asked, question two: does there exist a function u-- specifically, u in C2, because we want to be able to differentiate it twice-- such that the energy of u is equal to E inf, where E inf is defined as the infimum of the energy over all admissible functions? So assume I know what the smallest possible value of the energy is; does there exist a u that actually achieves it? And I'll come back to these questions in a moment. But the thing I want to note is that what mathematicians showed is, if there exists a function u achieving this infimum, then it will solve the problem. The issue here is that we still have to find this function-- the question of existence is exactly the question of finding one that works.
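Here is a discrete sketch of the Dirichlet problem-- my own illustration, not the functional-analytic argument the lecture is about. On a finite grid over the unit square, repeatedly replacing each interior value by the average of its four neighbours relaxes toward a discrete harmonic function with the prescribed boundary temperatures; the particular boundary data and grid size are arbitrary choices.

```python
import numpy as np

n = 50
u = np.zeros((n, n))
x = np.linspace(0.0, 1.0, n)

# boundary temperatures: heat the top edge, keep the other three edges cold
u[0, :] = np.sin(np.pi * x)

for _ in range(5000):
    # discrete Laplace equation: each interior value becomes the neighbour average
    u[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])

gx, gy = np.gradient(u)
energy = 0.5 * np.sum(gx**2 + gy**2)     # discrete analogue of 1/2 * integral |grad u|^2
print("equilibrium temperature at the centre:", u[n // 2, n // 2])
print("discrete Dirichlet energy:", energy)
```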
And what was done is they took sequences of functions: you would construct a minimizing sequence un converging to some function u, and once this was done you would get that the energy of un converges to the energy of u-- the proof of that just follows from taking the limit inside the integral, using uniform convergence. But, yeah, this was the idea, and this idea did not work, weirdly. Why didn't it work? Because even if there exists a sequence of these functions in C2 that converges to u, how do we know that u is in C2? The answer is: not necessarily, because the C2 functions are not complete-- limits of C2 functions need not be C2-- and, in particular, the relevant sets are not compact, so we can't just extract a convergent subsequence whose limit stays in the space. The way you can see this is to consider the sequence un of x equal to x to the n on the interval from 0 to 1. Each of these is differentiable twice, but un converges pointwise to the function u with u of x equal to 0 for x in the interval from 0 up to but not including 1, and equal to 1 at x equals 1. (There's a small numerical illustration of this right after this paragraph.) And this limit is not twice differentiable-- it's not even continuous at 1. So the issue is that the limit function wasn't in C2, and the failure of compactness here is exactly the failure of sequential compactness. Now, what else could we do? Well, mathematicians then asked: what if the sequence un lies in C1 instead-- does that work? And what they were able to show is that if the un are in C1 and the limit exists in a suitable sense, then you can show that u is in C2, so retroactively we'd be done. But the issue is that, again, C1 has the same problem-- it is not compact either-- so we're still not done. Now, in the end, this problem was solved with techniques that are quite beyond the scope of this class. It's sometimes talked about in 18.102-- see 18.102; it's covered in the lecture notes that are on OCW, if you want to read more about it there. That's the very, very last lecture, so it's difficult, yes, but interesting nonetheless. But it's an interesting question: how should we be approaching these problems? Should we be approaching them physically, or should we be approaching them super rigorously? And the answer is unclear-- there's no direct answer to this question. What this episode gave us is the framework to think about compactness. Even though the naive approach didn't work, it motivated the development of compact sets for metric spaces: C1 and C2 are metric spaces, and we want to understand whether limit points stay in our set. And even though that wasn't the case for C1 and C2, it still developed all this terminology that, as you've already seen, is very, very important. So, yeah, they were able to solve this problem eventually, just using much different techniques. So that's all I've prepared for today. This was mostly just a broad overview of where the material goes from here. The intuition that comes from metric spaces shows up all the time. For instance, in Fourier analysis you want to understand whether Fourier series converge to your function, and in which sense, and in that way it's related to metric spaces. We've talked about how it applies to manifold theory. We've talked about functional analysis, topology, differential equations. These are five major applications of the material.
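Returning for a moment to the x to the n example mentioned above, here is the promised numerical illustration-- a sketch of mine. The pointwise limit is 0 on [0, 1) and 1 at x = 1, and the supremum distance from x to the n to that limit does not go to 0, so the convergence is pointwise but nowhere near uniform.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 100_001)
limit = np.where(x < 1.0, 0.0, 1.0)      # the discontinuous pointwise limit

for n in (1, 5, 25, 125, 625):
    sup_dist = np.max(np.abs(x**n - limit))
    print(n, sup_dist)                   # stays essentially 1 for every n
```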
But the intuition that you gain from metric spaces will continue to work throughout your time at MIT. So for instance, in general, proving things axiomatically are functions that satisfy certain properties, like norms, inner products and metrics, is a useful skill. But even more in particular, I think a class like this and a class like Real Analysis is interesting because of all of these different subsets and bigger subsets-- or bigger subsets-- is the terminology. We started off with studying Euclidean space, which was just a tiny little dot in the set of metric spaces. But there's obviously much more to be done here. We talked about metric spaces. But it makes sense to start right in the middle and then work your way out-- work your way out towards topological spaces and then work your-- or potentially also work your way towards norm spaces and Banach spaces in whichever order you choose. But the main important thing-- the main reason I've been teaching this class for two years-- is to highlight the fact that-- is to give you the tools to be able to move forward into studying topological spaces and norm spaces with some amount of intuition, as much as that's possible. OK, so unless you have any questions, we'll end 30 or 25 minutes early.
MIT_18S190_Introduction_To_Metric_Spaces_IAP_2023
Lecture_4_Compact_Metric_Spaces.txt
[SQUEAKING] [RUSTLING] [CLICKING] PAIGE BRIGHT: Last time we started talking about metric spaces being compact on Euclidean space in particular, where we showed on Rn that sequentially compact was the same as closed and bounded, which was the same as topologically compact. And notice that in these two theorems, we use some very important propositions that we've learned from real analysis. To prove that closed and bounded implied sequentially compact, we have to use the Bolzano-Weierstrass theorem, which essentially states exactly the condition of sequentially compactness. Bolzano-Weierstrass. And to use closed and bounded, to show topologically compact, we had to use the Heine-Borel theorem, where we trapped our compact set inside of a closed cube. And we know that that closed cube is going to be compact, but we don't always have these closed cubes on metric spaces, right? What does it mean for there to be a cube of functions? And in fact, in general, these implications are not going to be precisely true. And small error from last time-- last time, I stated that sequentially compact was the same as closed and bounded on metric spaces. This is not the case. Sorry about that. We are using Bolzano-Weierstrass to prove this theorem. And we need it to show the other implication. If it was always the same, then all three of these would always be true. And it's not. Metric spaces where closed and bounded implies topologically compact have the Heine-Borel property. And not every metric space has this property. But what we're going to show today is that if we're not on Rn-- if we're on a general metric space, we still have the following implications. We already know that sequentially compact and topologically compact implies closed and bounded, which is a very helpful thing, right? If you want to show that a metric space isn't compact, you just have to show that it's not closed or at least not bounded. But we're still going to have the following implication-- what we're going to show today. We're going to show that sequentially compact is the same as topologically compact. In fact, we're going to show two more properties that emphasize the importance of compact metric spaces in general. It's going to be a very proof-intensive day, but I hope you stick with it with me. So yeah, that's our goal for today. Before we show sequentially compact implies topologically compact, and furthermore, the opposite direction, first, we're going to start by proving some lemmas about sequential compactness to make our job just a little bit easier in the long term. And in fact, these lemmas are going to be used throughout the rest of the class, so please pay attention. So first, we have what is known as the lemma-- the big number lemma-- where here I'm just not writing out number. But the idea is given our metric space x is sequentially compact, if I have an open cover of x-- let's call this our Ui's-- then there exists an R bigger than 0 such that for all x in x, the ball of radius r around x is contained in one of the Ui's for some i. Now, let's draw a picture of what I'm actually describing here because this quantifier logic can be a little bit confusing the first time you see it. So here. Let's just draw a little blob for a metric space x. And what we're going to do is find an open cover of this by the Ui. So I'll draw these by circles. So these are our Ui's. OK, there we go. 
And now, for every x in X, we claim that there exists a ball of radius R around x that is fully contained in at least one of these open sets. And this might ring out to you as the right thing to prove if you want to show that topologically compact is the same as sequentially compact: what we're going to do in the long term is assume that we're sequentially compact, take an open cover, and use this R to construct our conclusion. But first, we have to prove this fact, and the proof is going to be pretty usual for sequential compactness. So our proof is going to be by contradiction: assume the statement is not true. What does it mean for this statement to be not true? Well, it means that for every R bigger than 0, there exists some x in X such that the ball of radius R around x is not contained in any single one of the Ui's-- it's still covered by the Ui's collectively; it's just not fully contained in one of them. And what we're going to do is use this to construct a sequence, take a convergent subsequence, and reach a contradiction. So, for each R equal to 1 over n, choose xn to satisfy the above property: the ball of radius 1 over n around xn is not contained in any of the Ui's. And we know such a point exists because each 1 over n is bigger than 0. So what are we going to do with the sequence? Well, we're going to start by taking a convergent subsequence, right? We're going to use the fact that we're sequentially compact-- and this trend will carry on throughout the day. So we have a convergent subsequence-- let's call it x sub nk-- converging to some x; that's what we usually call it. OK. So what are we going to do? We want to contradict the claim that none of these balls sits inside a single Ui. How do we do that? Well, first, we notice that x has to be in one of the Ui's-- say x is in U sub i0 for some i0-- and this is true because the Ui form an open cover of our space. And therefore, because U sub i0 is open, there exists an R0 bigger than 0 such that the ball of radius R0 around x is contained in U sub i0. Right? Now we want to trap one of our bad balls inside this one so that we reach our contradiction. So choose k sufficiently large so that 1 over nk is less than R0 over 2 and so that the distance from x to x sub nk is less than R0 over 2-- we can arrange both because the subsequence converges to x. And we're going to use this to show that the ball of radius 1 over nk around x sub nk is a subset of the ball of radius R0 around x. How? Well, for any y in the ball of radius 1 over nk around x sub nk, the distance from x to y is less than or equal to the distance from x to x sub nk plus the distance from x sub nk to y, by the triangle inequality. The first term is less than R0 over 2, and the second term is less than 1 over nk, which is less than R0 over 2, so the sum is less than R0. This tells us that the ball of radius 1 over nk around x sub nk is fully contained in the ball of radius R0 around x, which is fully contained in U sub i0-- and that is our contradiction, because we claimed at the very beginning that none of these balls is contained in any single one of the Ui's. So this proves the lemma. As a small note, the largest such R for which this property holds is known as the Lebesgue number. And this comes up in a number of other applications.
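Here is a concrete Lebesgue-number computation-- a sketch of mine for the cover of [0, 1] by the two open intervals (-0.1, 0.6) and (0.4, 1.1). For each x we take the largest r such that (x - r, x + r) fits inside some element of the cover, and then take the smallest such r over a fine grid of x's, which approximates the infimum over all of [0, 1].

```python
import numpy as np

cover = [(-0.1, 0.6), (0.4, 1.1)]        # an open cover of [0, 1]

def largest_fitting_radius(x):
    # distance from x to the nearest endpoint, maximised over cover elements
    return max(min(x - a, b - x) for (a, b) in cover)

xs = np.linspace(0.0, 1.0, 100_001)
print(min(largest_fitting_radius(x) for x in xs))   # 0.1: any ball of radius below 0.1 fits inside one interval
```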
We're just not going to be talking about them today. So now, we have a way of refining our open cover to refine our open cover to the set of open balls of all the same radius, but what we need to do is get finitely many of these open balls for our open cover. And as such, we're going to define a new term. So definition-- the term is known as totally bounded. A-- or I should say metric space-- let's call it x-- is totally bounded if, for all epsilon bigger than 0, there exists y1 through yk such that the union of balls of radius epsilon around the finitely many yi's contains x. So it's a very similar picture to this, only we have all the same radius R instead of our regular open cover. And what we're going to do is show that sequential compactness implies totally bounded. This is another lemma of ours. OK. Lemma-- sequentially compact implies totally bounded as metric spaces. So if your metric space is sequentially compact, then it's going to be totally bounded. Let's prove this. Proof-- again, our proof is going to be by contradiction. Assume that it's not totally bounded. Then for all epsilon bigger than 0, there exists-- or sorry, I should say there does not exist finitely many epsilon balls that cover our space. What does it mean to be not existent, right? What does it mean for there to not exist finitely many epsilon balls? Well, this means that there are infinitely many of them. So it takes epsilon balls to cover x. Now, what are we going to do with this? We're once again going to construct a sequence and a convergent subsequence. But we need to somehow use the fact that it takes infinitely many epsilon balls to cover x. Well, let's just start by taking any x-- x1-- be an x. And now we want to construct an x2 such that there is no Cauchy subsequence of the xn's. So let's take x2 to be an x not including the ball of radius epsilon around x1. Let's continue this process. Let xn be in x not including the union of balls of radius epsilon around xi. Now, why are we doing this? We want to create a sequence of points such that the distance between every point is bigger than epsilon, which we have immediately from this. The distance from xi to xj is always bigger than epsilon by construction because if it wasn't less than epsilon, then it would be contained in the ball of radius epsilon around one of the other xi's. So what does this tell us? This tells us that our sequence can never be Cauchy, right? If our sequence was Cauchy, or a subsequence was Cauchy, then the distance between subsequent points has to be able to get less than epsilon, which is not the case. So what we notice is that there are no Cauchy subsequences of the sequence xi. And this is going to be our contradiction. Why? Because this is a sequence in a sequentially compact metric space. So we know that there has to exist a convergent subsequence. But being convergent implies that it's going to be Cauchy. We've shown this on the second day in our general theory of metric spaces. So what we've shown is that there should be a Cauchy subsequence, but there is no Cauchy subsequence. So this is our contradiction. And that concludes the proof. And these two lemmas-- the Lebesgue number lemma and this one right here-- are going to be all we need to show that sequentially compact implies topologically compact. So let's start that proof. Theorem-- sequentially compact is the same as topologically compact as metric spaces. So let's start the proof. Proof-- assume sequentially compact, and let Ui be an open cover of the metric space x. 
What we then know by the Lebesgue number lemma is that there exists an R bigger than 0 such that balls of radius R are contained in one of the Ui's. So therefore, by the Lebesgue number lemma, there exists an R bigger than 0 such that balls of radius all around x are contained in Ui for some i. I.e. for every x, there exists some i such that this is true. And now, how do we go from this to finitely many of these Ui's? Well, here we use the following lemma. We know that it's totally bounded. So given this R bigger than 0, let R be equal to epsilon and the definition of totally bounded. And then we know that there exists finitely many y1 through yk such that x is contained in balls of radius in the union of balls of radius R around yi from i equals 1 to k. But we want to go from this to a finite subcover. But what we notice is that each of these balls of radius R is contained in one of the Ui's. So right now, this is going to be contained in the union of Uij, where ij is the one such that the balls of radius R around yi is contained in Uij. And then we're done with the forward direction, right? Because we've gone from an open cover of our sequentially compact metric space and reached a finite subcover. So that shows that sequentially compact implies topologically compact. The other direction is going to be mildly harder, but it's still going to be essentially what we've been doing this entire time. We're going to assume that it's not sequentially compact, construct a sequence, and take the convergent subsequence, and reach a contradiction somehow. OK. So suppose topologically compact-- oh, I'm holding it backwards. Let me rewrite this. Compact. And let's suppose for the sake of contradiction-- assume for the sake of contradiction-- we're not sequentially compact. What we're going to do is-- or sorry, let me first state what does it mean to be not sequentially compact? Then there exists some sequence-- let's call it xn, of course-- with no convergent subsequence. Our goal is to go from this to an open cover of our space and show that there is no finite subcover to reach our contradiction. Well, how do we do that? Well first, we need to note some facts about convergent subsequences. Firstly, none of the xi's-- actually, I guess I shouldn't say firstly. Just note that none of the xi can appear infinitely many times. Actually, yeah. I'm going to call this firstly. Why is this true? Well, if any of the xi's appeared infinitely many times, then that would mean that we could just take a very trivial subsequence of xi's, and that would converge to xi. So we know that none of them appears infinitely many times. So we can just assume that each one appears finitely many times. But furthermore, we know that for all epsilon bigger-- nope, that's not what I meant to say. We know that there exists an epsilon n bigger than 0 such that the ball of radius epsilon n around xn is simply the set xn, where here I mean the single point xn. So I guess to be extra clear, I'll write this as xj and xj. Why is this true? Well, suppose for the sake of contradiction that for every epsilon bigger than 0, there exists some xi in a ball of radius epsilon around xj. Well then what that would tell us is that we could take a convergent subsequence, which will converge to xj. We just choose closer and closer points to xj. So we know that there must exist some epsilon n such that this is the case. So we're going to use this to create our open cover. We're going to let these be the open cover of uj. 
So let's call them the Uj's, for j from 1 to infinity: Uj is the ball of radius epsilon j around xj. But this isn't quite an open cover-- we don't know that we've fully covered X yet. So what we're going to do is let U0 be equal to X minus all of the terms in our sequence. Now, why is this an open set? Well, we know that its complement, the set of sequence terms, contains all of its limit points-- it has none, since no subsequence converges-- which means that it's going to be closed, as you showed on your second problem set. So the complement of that complement is therefore going to be open. So U0 is open. Therefore, we've reached an open cover of our metric space X: X is contained in U0 union the infinite union of the Uj's for j from 1 to infinity. Now, from here, topological compactness would give us a finite subcover-- but notice that every finite subcover has to omit infinitely many terms of the sequence. Why? A finite subcover uses only finitely many of the Uj's; each Uj contains only the single sequence term xj, and U0 contains no terms of the sequence at all. So a finite subcollection covers only finitely many of the sequence terms-- but the sequence has infinitely many distinct terms, since no value repeats infinitely often, and all of them lie in X. So there is no finite subcover, which is our contradiction, because we're assuming topological compactness. So we've shown that topologically compact implies sequentially compact, and that concludes the other direction of our proof. And thus, we're done. So this is essentially what we set off to do from the very beginning, which is very much motivated by metric spaces. But, as you may guess from the fact that sequentially compact implies totally bounded, there's quite a bit more that we can actually say. So what we're going to show for the rest of today is two more equivalent characterizations of compactness on a metric space. And I'll again reiterate: these are for a metric space, not inherently for a topological space, as you might see in 18.901. But these are the equivalent definitions of compactness for a metric space. But before I do that, I want to first state some lemmas, or some corollaries, of the facts that we've shown so far, because they're deeply important-- compactness has some very deep consequences. And I know that this is going to be a very proof-intensive class, so I want to start by stating some facts that are a little bit easier to prove, but also mildly intuitive. So, first, recall a function f from X to Y, where X and Y are metric spaces, is continuous if and only if, for every U open in Y, f inverse of U is open in X. Sorry, I know I didn't fully write it out, but I at least said it verbally: f inverse of U is going to be open in X. This is the definition of continuous. And what we're going to do is therefore show the following lemma: if K is a compact subset of X-- where here the double-subset symbol means compact; recall that notation-- then f of K is compact in Y, where f is a continuous map from X to Y. So this is the first statement that we're going to prove right now. Proof: well, we want to somehow get to an open cover of K. So let the Ui be an open cover of f of K in Y. Then what do we know? Well, this implies that the f inverse of Ui will be an open cover of K. The fact that the inverse images cover K is by the very definition of f of K, but how do we know that they're open? Well, this is from the fact that f is continuous.
These are the inverse images of open sets. And so what we know is that we can reach a finite subcover-- a finite subcover given by f inverse Ui from, let's say, i equals 1 to k. And what we're going to do is map this back to f of k. Notice that then f of f inverse of Ui equals Ui, which is therefore going to give us our open-- sorry. How should I say this? How should I say this? This is not necessarily directly true. What I mean to say is that then Ui from i equals 1 to k is open cover of f of k. How do we know this? Well, we know that the Ui's are such that-- or sorry, the f inverse of Ui's cover k. So therefore, the images of them must cover f of k. And so we've gone from an open cover of f of k to a finite subcover, which shows that it's compact in y. So that is our proof. That shows that the image of compact sets is going to be compact under continuity. And using this and the fact that we have sequential compactness is going to allow us to state some very helpful lemmas. So lemma-- or sorry, not lemma-- corollary. On that f from x to, let's say, the real numbers, continuous has a max and a min achieved on compact sets, by which I mean for every compact subset of x, the image of it to R is going to have a maximum and a minimum achieved. And the proof is very short at this point. We know that f of k is going to be compact in R, which implies that it's going to be closed and bounded. Closed and bounded, which you can think of as sequentially compact in the same way, implies that we can find a maximum and a minimum of f of k. This is essentially just the extreme value theorem. But it's much more general than the extreme value theorem. Before, the extreme value theorem was a map from the real numbers, or at least a compact subset, like a to b, to the real numbers. Now, all of a sudden, we have the extreme value theorem for metric spaces. This fact is extremely helpful. I cannot iterate how important it's going to be in the long term if you keep taking analysis courses, but the fact that images of compact sets are going to be compact is very, very helpful, and helps us with finding a maximum and a minimum. In fact, one other way to think of this theorem is just to take a sequence that converges to the maximum and converges to the minimum. And there has to be a convergent subsequence of that sequence because it's sequentially compact. Besides this, I also want to note-- corollary-- maps f given x is compact, f from x to R continuous is going to be bounded. This is the second corollary, right? Before, we were using closedness. Right now, we're using boundedness. So this is very helpful just in order to be able to conceptualize what's happening. So the proof is already complete by before. So I will stop there for the moment being. Now, we have one more theorem that I want to show that will have some implications that will be conceptually helpful. But before I do that, I'm going to drink some water. The next theorem is known as Cantor's intersection theorem. It's one that tells us things about nested sequences of compact sets. And it's helpful conceptually. It's used all the time. In fact, if you were to take a complex analysis class, it's one of the first things that's used to show convergence of line integrals for holomorphic functions. So what's the statement? Theorem-- this is known as Cantor's intersection theorem. So given K1 containing K2 containing K3, so on and so forth, each compact, each nonempty and compact-- by compact, I mean topologically or sequentially, which we know. 
That's a simple fact that-- let me reiterate. By nonempty and compact, I mean sequentially or topologically, because they're equivalent on a metric space. And here I'm assuming that the compact subsets are in a metric space. Then the intersection of the Ki's from i equals 1 to infinity is nonempty. How do we prove this? Well, we prove this by taking a sequence of terms in the Ki's. So proof-- let xi be in Ki for all i. And we know that there exists such an xi because they're nonempty. So right off the bat, we're using that hypothesis. And what we want to note is then that there exists-- or sorry, let's focus on K1 for now. Notice xi is in K1 for all i. This follows from the fact that it's a nested sequence of compact sets. So therefore, there exists a convergent subsequence xnk converging to some a in K1. And what you can continue to show is that a is going to be in every Ki. So secondly, notice-- xi from i equals 2 to infinity, as opposed to including 1, is going to be a subset of K2. And therefore, there exists a convergent subsequence. And the convergent subsequence has a unique limit point, right? This is what we said on the second day. Convergent sequences have-- how do I want to say this? They have a unique limit, and every subsequence converges to that same limit. So therefore, a will be in K2. And we can continue this process over and over again, ending up with a single point that's in all of the Ki's. So this concludes our proof. And this is used in complex analysis, as I stated, to go from the sequence of triangles, let's say. So let's say we have the sequence of triangles, and we know that there exists a point in every one of them. We can keep reiterating this pattern and end up with a point in the limit. So we know that the intersection in the limit is going to be nonempty. So that's how it's used in complex analysis. Of course, we can't discuss this more fully now because it requires a discussion that takes numerous days of complex analysis to prove. But if you take complex analysis, this will be one of the first things you see. OK. So what are we going to do with this new theorem? Well, we're going to define a new property. Definition-- a collection of closed sets-- notice here that I'm assuming closed and not compact-- is or has the finite intersection property if every finite subcollection-- i.e. if I take finitely many of the closed sets-- has nonempty intersection. We know that this is already going to be true for compact sets because the infinite intersection of them is nonempty. But this statement is a little bit more general in that we're only assuming that it's going to be closed. And so we're assuming that every finite subcollection has a nonempty intersection. And so what we're going to show is that in fact, there's a relationship between the finite intersection property and topologically and sequentially compact. That's going to be our huge theorem for today. It's going to take a little bit to prove, but will be deeply powerful once we're done. Our theorem is that the following are equivalent-- the following are equivalent, which I'll write out more fully in case you haven't seen this abbreviation before-- on a metric space x. One, x is topologically compact. Topologically. Two, x is sequentially compact, which makes sense. We've already shown this implication, but we're going to show two more properties right now. Three, x is totally bounded.
We've already shown sequentially compact implies totally bounded, but there's one more thing we need to include. We need to include that it's Cauchy complete. And finally, our fourth property is going to be the following, which I'll state out clearly because it's one of the more confusing ones. Every collection of closed sets of closed subsets of x with the finite intersection property has nonempty intersection. Here the difference is that we know that every finite subcollection has nonempty intersection, but what this theorem is telling you is that the intersection of all of the closed subsets, if they have the finite intersection property, is also going to be nonempty. Now, what we're going to do is show that 1 is true if only if 4 is true. Let me write this out more fully. We've already shown that 1 is true if an only if 2 is true. What we're now going to do is show that 1 is true if and only if 4 is true, and then show that 2 is true if and only if 3 is true because we've already shown sequentially compact implies totally bounded. So we would just have to show Cauchy completeness. And then that's going to be the end of today. We're going to show these four properties of compact sets, and then we're going to end today's discussion. So let's start with showing that 1 is true implies 4 is true. So the proof is going to start with supposing, for the sake of contradiction, there exists a collection of closed sets with the finite intersection property, but with empty intersection. So suppose there exists ci-- i equals 1 to infinity-- closed with the finite intersection property, i.e. intersection of finite subcollections is going to be nonempty, and the intersection of all the ci's is empty. That's going to be our assumption for contradiction. Well, knowing that the intersection of these closed sets is going to be empty, what do we know about the complements? Well, then we know that x, which is the complement of the empty set, is going to be the complement of the intersection of ci's. But then we know by De Morgan's laws, which we introduced on the second day in general theory, it's going to be the union of the complements of ci's from i equals 1 to infinity. What do we know then? Well, the complement of a closed set is going to be open. So what we've gone from is this collection of closed sets to an open cover of x. Therefore, there exists a finite subcover given by, let's say, the union from i equals 1 to k of ci complement. And what we're going to show is then that it fails to have the finite intersection property. So notice then that the intersection of ci from i equals 1 to k is going to be the same as the union of the complements from i equals 1 to k, which we know is x. So what this tells us is that the complement of this-- sorry, that's not what I meant to say. Let me reiterate. What we know then is that x is equal to the union of ci complement from i equals 1 to k, which is, of course, equal to the intersection from n equals 1-- oh, OK, I see-- from i equals 1 to k ci complement. So we've gone from the union of the complements to the complement of the intersection. And then this implies that the intersection from i equals 1 to k of ci must be the empty set, which is a contradiction because we're assuming that it has the finite intersection property, which again means that every finite subcollection has a nonempty intersection. So this is our contradiction. 
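For reference, the chain of set identities used in this direction can be written out symbolically as follows-- this is just a summary of the argument above, with the Ci denoting the closed sets and i1 through ik indexing the finite subcover:

```latex
% Assume X is compact, the C_i are closed with the finite intersection
% property, and suppose for contradiction that their intersection is empty.
X = \Big(\bigcap_{i=1}^{\infty} C_i\Big)^{c}
  = \bigcup_{i=1}^{\infty} C_i^{c}
  \quad \text{(De Morgan; each } C_i^{c} \text{ is open)},
\qquad
X = \bigcup_{n=1}^{k} C_{i_n}^{c}
  \quad \text{(finite subcover)},
\qquad
\bigcap_{n=1}^{k} C_{i_n}
  = \Big(\bigcup_{n=1}^{k} C_{i_n}^{c}\Big)^{c}
  = \emptyset,
% which contradicts the finite intersection property.
```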
And we've shown that topologically compact implies that every collection of closed sets with the finite intersection property has nonempty intersection. The proof of the other direction is going to be very similar. We're going to show that it's topologically compact given property 4. So we're going to show that 4 implies 1. Suppose the Ui's are an open cover of x. What we want to do, then, is we want to use this to find information about closed sets so that we can apply property 4. Well, what we'll do is let the Ui complements be the Ci's, which are going to be closed. How do you know that they're closed? Because the complement is going to be the Ui's, which are open, so that implies that it's closed. And so what we're going to show is that the Ci's have the finite intersection property. Assume for the sake of contradiction that they don't have the finite intersection property-- for the sake of contradiction-- that the intersection of Ci1 through Cik is empty, i.e. it doesn't have the finite intersection property. Well, then what we'll have is that the union from n equals 1 to k of the Ui n's is equal to the complement of the intersection of the Ci n's from n equals 1 to k, which is, of course, then the complement of the empty set, which is x. Do I want to redo this proof? I will redo this proof. I want to rewrite this out more carefully because we have two nested contradictions. So firstly, assume for the sake of contradiction that there is no finite subcover of the Ui's. Finite subcover of the Ui's-- i.e. we're assuming for the sake of contradiction that x is not topologically compact. And furthermore, we're going to assume for the sake of contradiction that the Ci's don't have the finite intersection property-- because we want to show that they do have the finite intersection property. So the intersection from n equals 1 to k of Ci n is empty. That's what we're going to assume. Well, then what do we know? We know then that the union of the Ui's-- Ui n from n equals 1 to k-- is the complement of the intersection of the Ci n's from n equals 1 to k. But then this is going to be all of x, right? Because it's a complement of the empty set. And that's a contradiction because we're assuming that there's no finite subcover of the Ui's. So what we know then is that the Ci's have the finite intersection property, and so, by property 4, the intersection of all of the Ci's from i equals 1 to infinity is nonempty. So what do we know? We know then that the complement of this intersection is not going to be all of x. So we know that x, which is the complement of the empty set, is not going to be equal to the complement of the intersection of the Ci's, i equals 1 to infinity, which is, of course, the union of the Ui's. But this is a contradiction because we've assumed from the very beginning that the Ui's are an open cover of x. So what we've shown is that our open cover is not, in fact, an open cover, which is our contradiction. So we've shown so far that 1 implies 4, which implies 1, and 1 implies 2, which implies 1. So now, what we need to show is that 2 is the same as 3, i.e. sequentially compact is the same as totally bounded and Cauchy complete. Now, one of these directions has already been done for us. We've already shown that totally bounded is a consequence of sequentially compact. What we now need to show is that it's Cauchy complete. So we're going to first show that 2 implies 3. OK. So let xi be a Cauchy sequence in x, i.e.
for all epsilon bigger than 0, there exists an N in the natural numbers such that for all n and m bigger than or equal to N, the distance between xn and xm is less than epsilon. We're going to use this to our advantage. So furthermore, what we know is that by sequential compactness, there's going to exist a convergent subsequence of the xi's. And let's say it converges to x. And what we're going to show then is that in fact, xi will converge to x. We're going to show that the limit point of the Cauchy sequence is, in fact, going to be in our metric space. To do that, let me move to-- actually, I can fit this in here. Notice then, for all nk and n bigger than or equal to N, look at the distance from x to xn. And this will be less than or equal to the distance from x to xnk plus the distance from xnk to xn. And we know that this term is going to be less than epsilon over 2. Or sorry, we should assume epsilon over 2 from the beginning. But furthermore, we know the xnk converges to x. So we can assume that this one is less than epsilon over 2, and the two together give us epsilon. So we've shown that xn is-- so what we've shown, therefore, is that xi will converge to x. And so we've shown that every Cauchy sequence is convergent. So we've shown Cauchy completeness, which is what we wanted to show. We wanted to show that sequential compactness was the same as totally bounded and Cauchy complete, which concludes the first part of our proof. The second part, the second direction, is going to be quite a bit harder because we want to show that every sequence has a convergent subsequence, assuming totally bounded and Cauchy complete, which is going to require quite a bit of work, or at least quite a bit of mental work, even if on the chalkboard it's not so much. It's a little bit confusing, how we go from totally bounded and Cauchy complete to a convergent subsequence. OK. We now want to show that property 3 implies property 2. And this will conclude our proof. OK. How are we going to do this? Well, we're going to assume that x is totally bounded and Cauchy complete. And we want to show sequential compactness. How do we do that? Well, we're going to start with an arbitrary sequence in x. So let xi be an arbitrary sequence in x. And we want to show that there exists a convergent subsequence, which is going to be a little bit difficult. OK. To do so, what we're going to do is start with totally bounded. We're going to note that for all n in the natural numbers, we have that there exist finitely many points-- in fact, I'll notate these y1 superscript n through y r of n superscript n-- such that the balls of radius 1 over n around the yi superscript n cover x. This is our assumption of total boundedness. Here r of n is just a function of n. It's a function r that depends on n because the number of finitely many terms can depend on n. So in other words, we don't know exactly how many finitely many terms we have, but the more important thing to note is that we have finitely many of these for every single n, which is our superscript. OK. What are we going to do with this? Well, we're going to go from this to-- OK, we want to go from this to a Cauchy subsequence of the xi's. And once we know that it's Cauchy, we're going to know that it's convergent. So we know that there must exist one-- how do I want to say this? Yeah, let's let S1 be the set of these points-- y1 superscript n through y r of n superscript n-- sorry, I should call this Sn.
What we know is that, given that xi is an infinitely long sequence in x, one of the balls of radius 1 over n around these points has to contain infinitely many of the terms. This is by the pigeonhole principle. So therefore, there exists-- or sorry, I should say there exists a ball of radius 1 around some yi superscript 1 containing infinitely many xi. And what we're going to do is choose our first point in our sequence to be one of these points. So let z1 be one of these points, just for concreteness. And what we're going to do is continue constructing these zi's. Well, what we know then, furthermore, is that there exists a ball of radius 1-- sorry, 1/2-- around some yi superscript 2 such that the intersection of these two, the ball of radius 1/2 around yi superscript 2 intersected with the ball of radius 1 around yj superscript 1, has infinitely many xi. How do we know that this is true? Well, we know that infinitely many of them already lie in this ball. And so if we take the intersections of this ball with the finitely many balls of radius 1/2, at least one of those intersections has to contain infinitely many of them. And to see this, I'll draw a little bit more of a picture. So here is our metric space x. And we start with covering it with balls of radius 1, which I'll draw pretty big, just so that we don't have to make a bunch of small pictures. So here are the balls of radius 1. Well, what we know then is that one of these-- let's say this one-- contains infinitely many terms. So we choose our z1 to be here. Furthermore, we have the intersection of this with balls of radius 1/2. So I make these a little bit smaller. Sorry, that's not quite good. Yeah, these will be my balls of radius 1/2. What I know then is that the intersections of the balls of radius 1/2 with the ball of radius 1-- there's only going to be finitely many of them, by assumption. And the fact that there are finitely many of them implies that infinitely many of the xi have to lie in one of these intersections. One of these balls of radius 1/2 intersected with the ball of radius 1. So what we can do is choose z2 to be in this intersection. And we're going to reiterate this process. We're going to choose z3 to be one of the infinitely many points in the intersection of balls of radius 1/3, 1/2, and 1, and so on and so forth. And what we hope to show is that the sequence is going to be Cauchy. But that's not, at this point, going to be too hard to do. OK, let me write this down more fully. We're going to choose zk to be in the intersection of the balls of radius 1 over n, for n from 1 to k. Yeah, balls of radius 1 down to 1 over k. And now we want to show that this sequence is going to be Cauchy. Well, what's the distance-- or, for all epsilon bigger than 0, choose N in the natural numbers such that 2 over N is less than epsilon. Then what do we know? Well, for all n and m bigger than or equal to N, we know then that zn and zm are both going to be contained in the ball of radius 1 over N around one of the yi's. I won't describe which one because it's a little bit annoying. But we know that they both must lie in this ball of radius 1 over N because they're in this intersection of all these terms. So what we know then is that the distance from zn to zm must be less than 2 over N-- they're both within 1 over N of the same center-- which is less than epsilon, which implies that the zi's must be Cauchy. So therefore, the zi's are Cauchy-- and by Cauchy completeness, we know that there exists a limit point. zi converges to some z in x. But recall again that our zi's are constructed from our original sequence. They're a subsequence of our original sequence.
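In symbols, the construction and the Cauchy estimate just described look like this-- a sketch under the same nested-ball choices as above, where the centers are the points chosen at each stage:

```latex
% z_k is one of the infinitely many terms of (x_i) lying in the intersection
% of the first k nested balls; for n, m >= N both points sit in the same
% ball of radius 1/N, so the triangle inequality gives the Cauchy estimate.
z_k \in \bigcap_{n=1}^{k} B_{1/n}\big(y_{i_n}^{(n)}\big),
\qquad
d(z_n, z_m)
  \le d\big(z_n, y_{i_N}^{(N)}\big) + d\big(y_{i_N}^{(N)}, z_m\big)
  < \frac{2}{N} < \varepsilon
  \quad \text{for all } n, m \ge N,
% so (z_k) is Cauchy and, by completeness, converges in X.
```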
So what we've shown is that there is a convergent subsequence of our sequence in x. And this is exactly what we sought out to show. And this shows that totally bounded and Cauchy complete implies sequentially compact. And this is the end of our four-part proof. So again, to reiterate, we've shown that topologically compact is the same as sequentially compact, where here we used the Lebesgue number lemma and totally boundedness. Then from here we show that sequentially compact is the same as totally bounded and Cauchy complete. Cauchy complete didn't take too much more work. That was pretty short. The other direction required quite a bit of maneuvering to get that convergent subsequence, but again, the notes contain this more fully if you want to read through it slowly and at your own pace. And furthermore, lastly, we showed that 1 is the same as 4, where topologically compact is the same as closed subsets with the finite intersection property having nonempty intersection. And now, let's think carefully about how each of these proofs went because each have their own purpose in a proof about metric spaces. To show that 1 was the same as 4, we looked at open covers, right? We took the open cover definition and looked at complements of closed sets and being open, and complements of open sets being closed. And we used all these different properties like that. And that makes a lot of sense because both definitions are in terms of closed and open sets. But to show sequentially compact was the same as totally bounded and Cauchy complete, we used definitions of sequential compactness, right? Because it's so much easier to look at convergent subsequences and such when you're looking at sequential compactness. So we could have proven that 1 implies 3, but it would have been a little bit more difficult and a little bit harder to maneuver around. And in fact, that's why I've shown these four properties today because if you're in a class where you're proving facts about metric spaces, you need to use the property that makes the most sense to use in a given moment. For instance, on your problem set, you're going to be asked to show that the union of finitely many compact sets is going to be compact. How do you know that this is going to be true? Well, you can use sequential compactness. You can use topological compactness. You can use any of these for properties that you still wish to choose, but be sure to think carefully about which one because one of them might be a little bit easier. So I won't discredit you if you use any of the other ones. And this is true in general when you're going into proofs about metric spaces. Having the intuition to flip from one definition to the other is deeply important and very useful to know. So that will conclude today's lecture. On next Thursday, we're going to prove some new facts about compact spaces. And in particular, what we're going to show or talk about is the history of compact metric spaces and why they were developed in the first place, which comes from what is known as the Dirichlet problem, but I'll leave that discussion until next Thursday. Until then have a great weekend. Thank you.
Stanford_Computer_Vision
Lecture_6_Training_Neural_Networks_I.txt
- Okay, let's get started. Okay, so today we're going to get into some of the details about how we train neural networks. So, some administrative details first. Assignment 1 is due today, Thursday, so 11:59 p.m. tonight on Canvas. We're also going to be releasing Assignment 2 today, and then your project proposals are due Tuesday, April 25th. So you should be really starting to think about your projects now if you haven't already. How many people have decided what they want to do for their project so far? Okay, so some, some people, so yeah, everyone else, you can go to TA office hours if you want suggestions and bounce ideas off of TAs. We also have a list of projects that other people have proposed. Some people usually affiliated with Stanford, so on Piazza, so you can take a look at those for additional ideas. And we also have some notes on backprop for a linear layer and a vector and tensor derivatives that Justin's written up, so that should help with understanding how exactly backprop works and for vectors and matrices. So these are linked to lecture four on the syllabus and you can go and take a look at those. Okay, so where we are now. We've talked about how to express a function in terms of a computational graph, that we can represent any function in terms of a computational graph. And we've talked more explicitly about neural networks, which is a type of graph where we have these linear layers that we stack on top of each other with nonlinearities in between. And we've also talked last lecture about convolutional neural networks, which are a particular type of network that uses convolutional layers to preserve the spatial structure throughout all the the hierarchy of the network. And so we saw exactly how a convolution layer looked, where each activation map in the convolutional layer output is produced by sliding a filter of weights over all of the spatial locations in the input. And we also saw that usually we can have many filters per layer, each of which produces a separate activation map. And so what we can get is from an input right, with a certain depth, we'll get an activation map output, which has some spatial dimension that's preserved, as well as the depth is the total number of filters that we have in that layer. And so what we want to do is we want to learn the values of all of these weights or parameters, and we saw that we can learn our network parameters through optimization, which we talked about little bit earlier in the course, right? And so we want to get to a point in the loss landscape that produces a low loss, and we can do this by taking steps in the direction of the negative gradient. And so the whole process we actually call a Mini-batch Stochastic Gradient Descent where the steps are that we continuously, we sample a batch of data. We forward prop it through our computational graph or our neural network. We get the loss at the end. We backprop through our network to calculate the gradients. And then we update the parameters or the weights in our network using this gradient. Okay, so now for the next couple of lectures we're going to talk about some of the details involved in training neural networks. And so this involves things like how do we set up our neural network at the beginning, which activation functions that we choose, how do we preprocess the data, weight initialization, regularization, gradient checking. We'll also talk about training dynamics. So, how do we babysit the learning process? 
How do we choose how we do parameter updates, specific parameter update rules, and how do we do hyperparameter optimization to choose the best hyperparameters? And then we'll also talk about evaluation and model ensembles. So today in the first part, I will talk about activation functions, data preprocessing, weight initialization, batch normalization, babysitting the learning process, and hyperparameter optimization. Okay, so first activation functions. So, we saw earlier how out of any particular layer, we have the data coming in. We multiply by our weight in, you know, a fully connected or a convolutional layer. And then we'll pass this through an activation function or nonlinearity. And we saw some examples of this. We used sigmoid previously in some of our examples. We also saw the ReLU nonlinearity. And so today we'll talk more about different choices for these different nonlinearities and trade-offs between them. So first, the sigmoid, which we've seen before, and probably the one we're most comfortable with, right? So the sigmoid function is as we have up here, one over one plus e to the negative x. And what this does is it takes each number that's input into the sigmoid nonlinearity, so each element, and it elementwise squashes these into this range [0,1] right, using this function here. And so, if you get very high values as input, then the output is going to be something near one. If you get very low values, or, I'm sorry, very negative values, it's going to be near zero. And then we have this regime near zero where it's in a linear regime. It looks a bit like a linear function. And so this has been historically popular, because sigmoids, in a sense, you can interpret them as a kind of a saturating firing rate of a neuron, right? So if it's something between zero and one, you could think of it as a firing rate. And we'll talk later about other nonlinearities, like ReLUs that, in practice, actually turned out to be more biologically plausible, but this does have a kind of interpretation that you could make. So if we look at this nonlinearity more carefully, there's several problems that there actually are with this. So the first is that saturated neurons can kill off the gradient. And so what exactly does this mean? So if we look at a sigmoid gate right, a node in our computational graph, and we have our data X as input into it, and then we have the output of the sigmoid gate coming out of it, what does the gradient flow look like as we're coming back? We have dL over d sigma right? The upstream gradient coming down, and then we're going to multiply this by dSigma over dX. This will be the local gradient of the sigmoid function. And we're going to chain these together for our downstream gradient that we pass back. So who can tell me what happens when X is equal to -10? It's very negative. What does its gradient look like? Zero, yeah, so that's right. So the gradient becomes zero and that's because in this negative, very negative region of the sigmoid, it's essentially flat, so the gradient is zero, and we chain any upstream gradient coming down. We multiply by basically something near zero, and we're going to get a very small gradient that's flowing back downwards, right? So, in a sense, after the chain rule, this kills the gradient flow and you're going to have a zero gradient passed down to downstream nodes. And so what happens when X is equal to zero? So there it's, yeah, it's fine in this regime.
So, in this regime near zero, you're going to get a reasonable gradient here, and then it'll be fine for backprop. And then what about X equals 10? Zero, right. So again, so when X is equal to a very negative or X is equal to large positive numbers, then these are all regions where the sigmoid function is flat, and it's going to kill off the gradient and you're not going to get a gradient flow coming back. Okay, so a second problem is that the sigmoid outputs are not zero centered. And so let's take a look at why this is a problem. So, consider what happens when the input to a neuron is always positive. So in this case, all of our Xs we're going to say is positive. It's going to be multiplied by some weight, W, and then we're going to run it through our activation function. So what can we say about the gradients on W? So think about what the local gradient is going to be, right, for this linear layer. We have DL over whatever the activation function, the loss coming down, and then we have our local gradient, which is going to be basically X, right? And so what does this mean, if all of X is positive? Okay, so I heard it's always going to be positive. So that's almost right. It's always going to be either positive, or all positive or all negative, right? So, our upstream gradient coming down is DL over our loss. L is going to be DL over DF. and this is going to be either positive or negative. It's some arbitrary gradient coming down. And then our local gradient that we multiply this by is, if we're going to find the gradients on W, is going to be DF over DW, which is going to be X. And if X is always positive then the gradients on W, which is multiplying these two together, are going to always be the sign of the upstream gradient coming down. And so what this means is that all the gradients of W, since they're always either positive or negative, they're always going to move in the same direction. You're either going to increase all of the, when you do a parameter update, you're going to either increase all of the values of W by a positive amount, or differing positive amounts, or you will decrease them all. And so the problem with this is that, this gives very inefficient gradient updates. So, if you look at on the right here, we have an example of a case where, let's say W is two-dimensional, so we have our two axes for W, and if we say that we can only have all positive or all negative updates, then we have these two quadrants, and, are the two places where the axis are either all positive or negative, and these are the only directions in which we're allowed to make a gradient update. And so in the case where, let's say our hypothetical optimal W is actually this blue vector here, right, and we're starting off at you know some point, or at the top of the the the beginning of the red arrows, we can't just directly take a gradient update in this direction, because this is not in one of those two allowed gradient directions. And so what we're going to have to do, is we'll have to take a sequence of gradient updates. For example, in these red arrow directions that are each in allowed directions, in order to finally get to this optimal W. And so this is why also, in general, we want a zero mean data. So, we want our input X to be zero meaned, so that we actually have positive and negative values and we don't get into this problem of the gradient updates. They'll be all moving in the same direction. So is this clear? Any questions on this point? Okay. 
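As a quick aside, both problems just described can be checked numerically with a small NumPy sketch-- this is not from the lecture, and the specific numbers are just illustrative:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def sigmoid_local_grad(x):
    # local gradient dsigma/dx = sigma(x) * (1 - sigma(x))
    s = sigmoid(x)
    return s * (1.0 - s)

# Problem 1: saturation. At x = -10 or x = +10 the local gradient is ~0,
# so any upstream gradient chained through the sigmoid gate is killed.
for x in [-10.0, 0.0, 10.0]:
    print("x = %5.1f  local grad = %.6f" % (x, sigmoid_local_grad(x)))

# Problem 2: non-zero-centered inputs. If every input x is positive, the
# local gradient dF/dW = x is positive, so the gradients on all weights
# share the sign of the upstream gradient (all increase or all decrease).
x = np.array([0.3, 1.2, 2.0])   # hypothetical all-positive inputs
upstream = -0.7                 # some upstream gradient dL/dF
print(upstream * x)             # every entry has the same sign
```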
Okay, so we've talked about these two main problems of the sigmoid. The saturated neurons can kill the gradients if we're too positive or too negative of an input. They're also not zero-centered and so we get these, this inefficient kind of gradient update. And then a third problem, we have an exponential function in here, so this is a little bit computationally expensive. In the grand scheme of your network, this is usually not the main problem, because we have all these convolutions and dot products that are a lot more expensive, but this is just a minor point also to observe. So now we can look at a second activation function here at tanh. And so this looks very similar to the sigmoid, but the difference is that now it's squashing to the range [-1, 1]. So here, the main difference is that it's now zero-centered, so we've gotten rid of the second problem that we had. It still kills the gradients, however, when it's saturated. So, you still have these regimes where the gradient is essentially flat and you're going to kill the gradient flow. So this is a bit better than the sigmoid, but it still has some problems. Okay, so now let's look at the ReLU activation function. And this is one that we saw in our examples last lecture when we were talking about the convolutional neural network. And we saw that we interspersed ReLU nonlinearities between many of the convolutional layers. And so, this function is f of x equals max of zero and x. So it takes an elementwise operation on your input and basically if your input is negative, it's going to put it to zero. And then if it's positive, it's going to be just passed through. It's the identity. And so this is one that's pretty commonly used, and if we look at this one and look at and think about the problems that we saw earlier with the sigmoid and the tanh, we can see that it doesn't saturate in the positive region. So there's whole half of our input space where it's not going to saturate, so this is a big advantage. So this is also computationally very efficient. We saw earlier that the sigmoid has this E exponential in it. And so the ReLU is just this simple max and there's, it's extremely fast. And in practice, using this ReLU, it converges much faster than the sigmoid and the tanh, so about six times faster. And it's also turned out to be more biologically plausible than the sigmoid. So if you look at a neuron and you look at what the inputs look like, and you look at what the outputs look like, and you try to measure this in neuroscience experiments, you'll see that this one is actually a closer approximation to what's happening than sigmoids. And so ReLUs were starting to be used a lot around 2012 when we had AlexNet, the first major convolutional neural network that was able to do well on ImageNet and large-scale data. They used the ReLU in their experiments. So a problem however, with the ReLU, is that it's still, it's not not zero-centered anymore. So we saw that the sigmoid was not zero-centered. Tanh fixed this and now ReLU has this problem again. And so that's one of the issues of the ReLU. And then we also have this further annoyance of, again we saw that in the positive half of the inputs, we don't have saturation, but this is not the case of the negative half. Right, so just thinking about this a little bit more precisely. So what's happening here when X equals negative 10? So zero gradient, that's right. What happens when X is equal to positive 10? It's good, right. So, we're in the linear regime. 
And then what happens when X is equal to zero? Yes, it undefined here, but in practice, we'll say, you know, zero, right. And so basically, it's killing the gradient in half of the regime. And so we can get this phenomenon of basically dead ReLUs, when we're in this bad part of the regime. And so there's, you can look at this in, as coming from several potential reasons. And so if we look at our data cloud here, this is all of our training data, then if we look at where the ReLUs can fall, so the ReLUs can be, each of these is basically the half of the plane where it's going to activate. And so each of these is the plane that defines each of these ReLUs, and we can see that you can have these dead ReLUs that are basically off of the data cloud. And in this case, it will never activate and never update, as compared to an active ReLU where some of the data is going to be positive and passed through and some won't be. And so there's several reasons for this. The first is that it can happen when you have bad initialization. So if you have weights that happen to be unlucky and they happen to be off the data cloud, so they happen to specify this bad ReLU over here. Then they're never going to get a data input that causes it to activate, and so they're never going to get good gradient flow coming back. And so it'll just never update and never activate. What's the more common case is when your learning rate is too high. And so this case you started off with an okay ReLU, but because you're making these huge updates, the weights jump around and then your ReLU unit in a sense, gets knocked off of the data manifold. And so this happens through training. So it was fine at the beginning and then at some point, it became bad and it died. And so if in practice, if you freeze a network that you've trained and you pass the data through, you can see it actually is much as 10 to 20% of the network is these dead ReLUs. And so you know that's a problem, but also most networks do have this type of problem when you use ReLUs. Some of them will be dead, and in practice, people look into this, and it's a research problem, but it's still doing okay for training networks. Yeah, is there a question? [student speaking off mic] Right. So the question is, yeah, so the data cloud is just your training data. [student speaking off mic] Okay, so the question is when, how do you tell when the ReLU is going to be dead or not, with respect to the data cloud? And so if you look at, this is an example of like a simple two-dimensional case. And so our ReLU, we're going to get our input to the ReLU, which is going to be a basically you know, W1 X1 plus W2 X2, and it we apply this, so that that defines this this separating hyperplane here, and then we're going to take half of it that's going to be positive, and half of it's going to be killed off, and so yes, so you, you know you just, it's whatever the weights happened to be, and where the data happens to be is where these, where these hyperplanes fall, and so, so yeah so just throughout the course of training, some of your ReLUs will be in different places, with respect to the data cloud. Oh, question. [student speaking off mic] Yeah. So okay, so the question is for the sigmoid we talked about two drawbacks, and one of them was that the neurons can get saturated, so let's go back to the sigmoid here, and the question was this is not the case, when all of your inputs are positive. 
So when all of your inputs are positive, they're all going to be coming in in this zero plus region here, and so you can still get a saturating neuron, because you see up in this positive region, it also plateaus at one, and so when it's when you have large positive values as input you're also going to get the zero gradient, because you have you have a flat slope here. Okay. Okay, so in practice people also like to initialize ReLUs with slightly positive biases, in order to increase the likelihood of it being active at initialization and to get some updates. Right and so this basically just biases towards more ReLUs firing at the beginning, and in practice some say that it helps. Some say that it doesn't. Generally people don't always use this. It's yeah, a lot of times people just initialize it with zero biases still. Okay, so now we can look at some modifications on the ReLU that have come out since then, and so one example is this leaky ReLU. And so this looks very similar to the original ReLU, and the only difference is that now instead of being flat in the negative regime, we're going to give a slight negative slope here And so this solves a lot of the problems that we mentioned earlier. Right here we don't have any saturating regime, even in the negative space. It's still very computationally efficient. It still converges faster than sigmoid and tanh, very similar to a ReLU. And it doesn't have this dying problem. And there's also another example is the parametric rectifier, so PReLU. And so in this case it's just like a leaky ReLU where we again have this sloped region in the negative space, but now this slope in the negative regime is determined through this alpha parameter, so we don't specify, we don't hard-code it. but we treat it as now a parameter that we can backprop into and learn. And so this gives it a little bit more flexibility. And we also have something called an Exponential Linear Unit, an ELU, so we have all these different LUs, basically. and this one again, you know, it has all the benefits of the ReLu, but now you're, it is also closer to zero mean outputs. So, that's actually an advantage that the leaky ReLU, parametric ReLU, a lot of these they allow you to have your mean closer to zero, but compared with the leaky ReLU, instead of it being sloped in the negative regime, here you actually are building back in a negative saturation regime, and there's arguments that basically this allows you to have some more robustness to noise, and you basically get these deactivation states that can be more robust. And you can look at this paper for, there's a lot of kind of more justification for why this is the case. And in a sense this is kind of something in between the ReLUs and the leaky ReLUs, where has some of this shape, which the Leaky ReLU does, which gives it closer to zero mean output, but then it also still has some of this more saturating behavior that ReLUs have. A question? [student speaking off mic] So, whether this parameter alpha is going to be specific for each neuron. So, I believe it is often specified, but I actually can't remember exactly, so you can look in the paper for exactly, yeah, how this is defined, but yeah, so I believe this function is basically very carefully designed in order to have nice desirable properties. Okay, so there's basically all of these kinds of variants on the ReLU. And so you can see that, all of these it's kind of, you can argue that each one may have certain benefits, certain drawbacks in practice. 
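Here is a minimal sketch of the ReLU variants just discussed, assuming a fixed slope alpha for the leaky version (in the parametric ReLU, alpha would instead be a learned parameter):

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

def leaky_relu(x, alpha=0.01):
    # small fixed slope in the negative regime instead of zero
    return np.where(x > 0, x, alpha * x)

def elu(x, alpha=1.0):
    # negative regime saturates smoothly toward -alpha
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

x = np.linspace(-3, 3, 7)
print(relu(x))
print(leaky_relu(x))
print(elu(x))
```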
People just want to run experiments all of them, and see empirically what works better, try and justify it, and come up with new ones, but they're all different things that are being experimented with. And so let's just mention one more. This is Maxout Neuron. So, this one looks a little bit different in that it doesn't have the same form as the others did of taking your basic dot product, and then putting this element-wise nonlinearity in front of it. Instead, it looks like this, this max of W dot product of X plus B, and a second set of weights, W2 dot product with X plus B2. And so what does this, is this is taking the max of these two functions in a sense. And so what it does is it generalizes the ReLU and the leaky ReLu, because you're just you're taking the max over these two, two linear functions. And so what this give us, it's again you're operating in a linear regime. It doesn't saturate and it doesn't die. The problem is that here, you are doubling the number of parameters per neuron. So, each neuron now has this original set of weights, W, but it now has W1 and W2, so you have twice these. So in practice, when we look at all of these activation functions, kind of a good general rule of thumb is use ReLU. This is the most standard one that generally just works well. And you know you do want to be careful in general with your learning rates to adjust them based, see how things do. We'll talk more about adjusting learning rates later in this lecture, but you can also try out some of these fancier activation functions, the leaky ReLU, Maxout, ELU, but these are generally, they're still kind of more experimental. So, you can see how they work for your problem. You can also try out tanh, but probably some of these ReLU and ReLU variants are going to be better. And in general don't use sigmoid. This is one of the earliest original activation functions, and ReLU and these other variants have generally worked better since then. Okay, so now let's talk a little bit about data preprocessing. Right, so the activation function, we design this is part of our network. Now we want to train the network, and we have our input data that we want to start training from. So, generally we want to always preprocess the data, and this is something that you've probably seen before in machine learning classes if you taken those. And some standard types of preprocessing are, you take your original data and you want to zero mean them, and then you probably want to also normalize that, so normalized by the standard deviation, And so why do we want to do this? For zero centering, you can remember earlier that we talked about when all the inputs are positive, for example, then we get all of our gradients on the weights to be positive, and we get this basically suboptimal optimization. And in general even if it's not all zero or all negative, any sort of bias will still cause this type of problem. And so then in terms of normalizing the data, this is basically you want to normalize data typically in the machine learning problems, so that all features are in the same range, and so that they contribute equally. 
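As a rough sketch of this standard preprocessing (the statistics are computed on the training data only and then reused at test time; for images the normalization step is often skipped, as discussed next):

```python
import numpy as np

X_train = np.random.randn(1000, 3072) * 5 + 10   # hypothetical data, shape [N, D]

mean = X_train.mean(axis=0)   # per-feature mean from the training set
std = X_train.std(axis=0)     # per-feature standard deviation

X_zero_centered = X_train - mean        # zero-centering
X_normalized = X_zero_centered / std    # normalizing to comparable scale
```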
In practice, since for images, which is what we're dealing with in this course here for the most part, we do do the zero centering, but in practice we don't actually normalize the pixel value so much, because generally for images right at each location you already have relatively comparable scale and distribution, and so we don't really need to normalize so much, compared to more general machine learning problems, where you might have different features that are very different and of very different scales. And in machine learning, you might also see a more complicated things, like PCA or whitening, but again with images, we typically just stick with the zero mean, and we don't do the normalization, and we also don't do some of these more complicated pre-processing. And one reason for this is generally with images we don't really want to take all of our input, let's say pixel values and project this onto a lower dimensional space of new kinds of features that we're dealing with. We typically just want to apply convolutional networks spatially and have our spatial structure over the original image. Yeah, question. [student speaking off mic] So the question is we do this pre-processing in a training phase, do we also do the same kind of thing in the test phase, and the answer is yes. So, let me just move to the next slide here. So, in general on the training phase is where we determine our let's say, mean, and then we apply this exact same mean to the test data. So, we'll normalize by the same empirical mean from the training data. Okay, so to summarize basically for images, we typically just do the zero mean pre-processing and we can subtract either the entire mean image. So, from the training data, you compute the mean image, which will be the same size as your, as each image. So, for example 32 by 32 by three, you'll get this array of numbers, and then you subtract that from each image that you're about to pass through the network, and you'll do the same thing at test time for this array that you determined at training time. In practice, we can also for some networks, we also do this by just of subtracting a per-channel mean, and so instead of having an entire mean image that were going to zero-center by, we just take the mean by channel, and this is just because it turns out that it was similar enough across the whole image, it didn't make such a big difference to subtract the mean image versus just a per-channel value. And this is easier to just pass around and deal with. So, you'll see this as well for example, in a VGG Network, which is a network that came after AlexNet, and we'll talk about that later. Question. [student speaking off mic] Okay, so there are two questions. The first is what's a channel, in this case, when we are subtracting a per-channel mean? And this is RGB, so our array, our images are typically for example, 32 by 32 by three. So, width, height, each are 32, and our depth, we have three channels RGB, and so we'll have one mean for the red channel, one mean for a green, one for blue. And then the second, what was your second question? [student speaking off mic] Oh. Okay, so the question is when we're subtracting the mean image, what is the mean taken over? And the mean is taking over all of your training images. So, you'll take all of your training images and just compute the mean of all of those. Does that make sense? [student speaking off mic] Yeah the question is, we do this for the entire training set, once before we start training. 
We don't do this per batch, and yeah, that's exactly correct. So we just want to have a good sample, an empirical mean that we have. And so if you take it per batch, if you're sampling reasonable batches, it should be basically, you should be getting the same values anyways for the mean, and so it's more efficient and easier just do this once at the beginning. You might not even have to really take it over the entire training data. You could also just sample enough training images to get a good estimate of your mean. Okay, so any other questions about data preprocessing? Yes. [student speaking off mic] So, the question is does the data preprocessing solve the sigmoid problem? So the data preprocessing is doing zero mean right? And we talked about how sigmoid, we want to have zero mean. And so it does solve this for the first layer that we pass it through. So, now our inputs to the first layer of our network is going to be zero mean, but we'll see later on that we're actually going to have this problem come up in much worse and greater form, as we have deep networks. You're going to get a lot of nonzero mean problems later on. And so in this case, this is not going to be sufficient. So this only helps at the first layer of your network. Okay, so now let's talk about how do we want to initialize the weights of our network? So, we have let's say our standard two layer neural network and we have all of these weights that we want to learn, but we have to start them with some value, right? And then we're going to update them using our gradient updates from there. So first question. What happens when we use an initialization of W equals zero? We just set all of the parameters to be zero. What's the problem with this? [student speaking off mic] So sorry, say that again. So I heard all the neurons are going to be dead. No updates ever. So not exactly. So, part of that is correct in that all the neurons will do the same thing. So, they might not all be dead. Depending on your input value, I mean, you could be in any regime of your neurons, so they might not be dead, but the key thing is that they will all do the same thing. So, since your weights are zero, given an input, every neuron is going to be, have the same operation basically on top of your inputs. And so, since they're all going to output the same thing, they're also all going to get the same gradient. And so, because of that, they're all going to update in the same way. And now you're just going to get all neurons that are exactly the same, which is not what you want. You want the neurons to learn different things. And so, that's the problem when you initialize everything equally and there's basically no symmetry breaking here. So, what's the first, yeah question? [student speaking off mic] So the question is, because that, because the gradient also depends on our loss, won't one backprop differently compared to the other? So in the last layer, like yes, you do have basically some of this, the gradients will get the same, sorry, will get different loss for each specific neuron based on which class it was connected to, but if you look at all the neurons generally throughout your network, like you're going to, you basically have a lot of these neurons that are connected in exactly the same way. They had the same updates and it's basically going to be the problem. Okay, so the first idea that we can have to try and improve upon this is to set all of the weights to be small random numbers that we can sample from a distribution. 
So, in this case, we're going to sample from basically a standard gaussian, but we're going to scale it so that the standard deviation is actually one E negative two, 0.01. And so, just give this many small random weights. And so, this does work okay for small networks, now we've broken the symmetry, but there's going to be problems with deeper networks. And so, let's take a look at why this is the case. So, here this is basically an experiment that we can do where let's take a deeper network. So in this case, let's initialize a 10 layer neural network to have 500 neurons in each of these 10 layers. Okay, we'll use tanh nonlinearities in this case and we'll initialize it with small random numbers as we described in the last slide. So here, we're going to basically just initialize this network. We have random data that we're going to take, and now let's just pass it through the entire network, and at each layer, look at the statistics of the activations that come out of that layer. And so, what we'll see this is probably a little bit hard to read up top, but if we compute the mean and the standard deviations at each layer, well see that at the first layer this is, the means are always around zero. There's a funny sound in here. Interesting, okay well that was fixed. So, if we look at, if we look at the outputs from here, the mean is always going to be around zero, which makes sense. So, if we look here, let's see, if we take this, we looked at the dot product of X with W, and then we took the tanh on linearity, and then we store these values and so, because it tanh is centered around zero, this will make sense, and then the standard deviation however shrinks, and it quickly collapses to zero. So, if we're plotting this, here this second row of plots here is showing the mean and standard deviations over time per layer and then in the bottom, the sequence of plots is showing for each of our layers. What's the distribution of the activations that we have? And so, we can see that at the first layer, we still have a reasonable gaussian looking thing. It's a nice distribution. But the problem is that as we multiply by this W, these small numbers at each layer, this quickly shrinks and collapses all of these values, as we multiply this over and over again. And so, by the end, we get all of these zeros, which is not what we want. So we get all the activations become zero. And so now let's think about the backwards pass. So, if we do a backward pass, now assuming this was our forward pass and now we want to compute our gradients. So first, what does the gradients look like on the weights? Does anyone have a guess? So, if we think about this, we have our input values are very small at each layer right, because they've all collapsed at this near zero, and then now each layer, we have our upstream gradient flowing down, and then in order to get the gradient on the weights remember it's our upstream gradient times our local gradient, which for this this dot product were doing W times X. It's just basically going to be X, which is our inputs. So, it's again a similar kind of problem that we saw earlier, where now since, so here because X is small, our weights are getting a very small gradient, and they're basically not updating. So, this is a way that you can basically try and think about the effect of gradient flows through your networks. 
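The experiment just described can be reproduced in a few lines-- a sketch assuming the same setup of 10 layers of 500 tanh units with weights drawn as 0.01 times a standard gaussian:

```python
import numpy as np

x = np.random.randn(1000, 500)        # random input data
hidden_sizes = [500] * 10             # 10 layers of 500 units

for i, H in enumerate(hidden_sizes):
    fan_in = x.shape[1]
    W = 0.01 * np.random.randn(fan_in, H)   # small random initialization
    x = np.tanh(x.dot(W))
    print("layer %2d: mean %+.5f, std %.5f" % (i + 1, x.mean(), x.std()))
# The standard deviation shrinks layer by layer, so the activations
# (and with them the gradients on the weights) collapse toward zero.
```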
You can always think about what the forward pass is doing, and then think about what happens as the gradients flow back down for different kinds of inputs — what the effect actually is on our weights and on their gradients. Now, if we think about the gradient flowing back from each layer as we chain all these gradients, this is the flip side: the gradient flowing back is our upstream gradient times the local gradient, which with respect to the input X is W. So going backwards, at each layer we're basically multiplying the upstream gradient by our weights to get the next gradient flowing downwards. And because we're multiplying by W over and over again, you get the same phenomenon as in the forward pass, where everything gets smaller and smaller, and now the upstream gradients are collapsing to zero as well. Question? [student speaking off mic] Yes, I guess upstream and downstream can be interpreted differently depending on whether you're going forward or backward, but in this case we're going backwards — we're doing backpropagation. So upstream is the gradient flowing from your loss all the way back to your input: upstream is what came from the part you've already done, flowing down into your current node. We're flowing downwards, and what comes into the node through backprop is coming from upstream. Okay, so we saw that this was a problem when our weights were pretty small. So what if we just try to solve it by making our weights big? Let's sample from the standard gaussian, now with standard deviation one instead of 0.01. What's the problem here? Does anyone have a guess? If our weights are now all big, and we're taking these outputs of W times X and passing them through tanh nonlinearities — remember we talked about what happens at different input values to tanh — what's the problem? Okay, yeah, I heard that it's going to be saturated, and that's right. Because our weights are big, we're always going to be in the saturated regimes of the tanh, either very negative or very positive. In practice, if we look at the distribution of the activations at each of the layers here on the bottom, they're basically all negative one or plus one. And this has the problem we talked about with tanh earlier: when the units are saturated, all the gradients will be zero, and our weights are not updating. So basically it's really hard to get your weight initialization right: when it's too small everything collapses, when it's too large everything saturates. There's been some work on figuring out the proper way to initialize these weights, and one good rule of thumb you can use is Xavier initialization, from this paper by Glorot in 2010. The formula, if we look at W up here, is that we sample from our standard gaussian and then scale by dividing by the square root of the number of inputs that we have.
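A minimal sketch of this initialization, together with the ReLU-specific correction discussed just below; the layer sizes are arbitrary placeholders.

```python
import numpy as np

fan_in, fan_out = 500, 500

# Xavier initialization (Glorot 2010): divide by sqrt(fan_in) so the output
# variance roughly matches the input variance.
W_xavier = np.random.randn(fan_in, fan_out) / np.sqrt(fan_in)

# For ReLU layers, roughly half the units are zeroed out on average, so the
# common fix (discussed next) adds a factor of 2 inside the square root.
W_relu = np.random.randn(fan_in, fan_out) / np.sqrt(fan_in / 2)
```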
And you can go through the math — it's in the lecture notes as well as in this paper — to see exactly how this works out, but basically the way we do it is we require that the variance of the output be the same as the variance of the input, and if you derive what the weights should be, you get this formula. Intuitively, what this means is that if you have a small number of inputs, we divide by a smaller number and get larger weights — and we need larger weights, because with few inputs, each multiplied by a weight, you need larger weights to get the same spread at the output — and vice versa: if we have many inputs, we want smaller weights in order to get the same spread at the output. You can look at the notes for more details. So basically, if we want roughly a unit gaussian as the input to each layer, we can use this kind of initialization at the start of training so that there is approximately a unit gaussian at each layer. One thing it does assume, though, is linear activations — it assumes we're in the active, roughly linear region of the tanh, for example. Again, you can look at the notes to really understand the derivation, but the problem is that this breaks when you use something like a ReLU. With the ReLU, because it's killing half of your units — setting approximately half of them to zero each time — it's actually halving the variance that you get out. So if you make the same assumptions as in the earlier derivation, you won't actually get the right variance coming out; it will be too small. And what you see is again this collapsing phenomenon, where the distributions get more and more peaked toward zero and more units are deactivated. The way to address this, which has been pointed out in some papers, is to account for it with an extra factor: divide by two. Now you're adjusting for the fact that half the neurons get killed, so your effective number of inputs is actually half, and you just add this divide-by-two factor. This works much better, and you can see that the distributions are pretty good throughout all layers of the network. In practice this has been really important for training these types of networks — really paying attention to how your weights are initialized makes a big difference. You'll see in some papers that this is actually the difference between the network training at all and performing well, versus nothing happening. So proper initialization is still an active area of research, and if you're interested, you can look at a lot of these papers and resources. A good general rule of thumb is to use Xavier initialization to start with, and then you can also think about some of these other methods. And now we're going to talk about a related idea — this idea of wanting to keep activations in the gaussian range that we want. The idea behind what we're going to call batch normalization is: okay, we want unit gaussian activations, so let's just make them that way. Let's just force them to be that way. So how does this work?
So, let's consider a batch of activations at some layer. If we want to make these unit gaussian, we can actually just do it empirically: we take the mean of the current batch and the variance, and we normalize by them. So basically, instead of setting things up with weight initialization at the start of training and hoping training preserves unit gaussians at every layer, we're now going to explicitly make that happen on every forward pass through the network. We make it happen functionally: for each neuron, each activation, we look at all of the inputs coming into it over the batch, compute the mean and variance for that batch, and normalize by them. And the thing is, this is just a differentiable function, right? It's just a sequence of computational operations that we can differentiate and backprop through. Okay, so as I was saying: if we look at our input data, and we think of it as N training examples in the current batch, each with dimension D, we compute the empirical mean and variance independently for each dimension — each feature element — across the current mini-batch, and we normalize by them. This is usually inserted after fully connected or convolutional layers. We saw that we multiply by W in these layers over and over again, which can give this bad scaling effect at each one, and this basically undoes that effect. And since we're just normalizing the inputs connected to each neuron, each activation, we can apply this the same way to fully connected and convolutional layers. The only difference is that with convolutional layers, we want to normalize not just across all the training examples, independently for each feature dimension, but jointly across both the spatial locations in our activation map and all of the training examples. We do this because we want to obey the convolutional property: we want nearby locations to be normalized the same way. So with a convolutional layer, we have one mean and one standard deviation per activation map, and we normalize by it across all of the examples in the batch. This is something you're going to implement in your next homework, and all of the details are explained very clearly in this paper from 2015. It's a very useful technique that you want to use a lot in practice — you want to have these batch normalization layers — so you should read this paper, go through all of the derivations, and then also go through the derivation of how to compute the gradients through this normalization operation.
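Here is a minimal sketch of the training-time normalization just described for a fully connected layer, where x has shape (N, D). The learnable scale and shift come up in the next part; the running averages anticipate the test-time note later in the lecture. The function name, eps, and momentum value are illustrative, not the assignment's exact interface.

```python
import numpy as np

def batchnorm_train_sketch(x, running_mean, running_var, eps=1e-5, momentum=0.9):
    """x is (N, D): N examples in the mini-batch, D features per example."""
    mu = x.mean(axis=0)                     # per-feature empirical mean over the batch
    var = x.var(axis=0)                     # per-feature empirical variance over the batch
    x_hat = (x - mu) / np.sqrt(var + eps)   # normalize each feature dimension

    # Keep exponential running averages for use at test time,
    # where batch statistics are not recomputed.
    running_mean = momentum * running_mean + (1 - momentum) * mu
    running_var = momentum * running_var + (1 - momentum) * var
    return x_hat, running_mean, running_var
```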
Okay, so one thing that I just want to point out is that we're doing this batch normalization after every fully connected layer, and it's not clear that we necessarily want a unit gaussian input to these tanh nonlinearities, because what this does is constrain you to the linear regime of the nonlinearity. You're basically saying, let's not have any saturation — but maybe a little bit of saturation is good, right? You want to be able to control how much saturation you have. So the way we address this in batch normalization is that we have our normalization operation, but then after that we have an additional scaling and shifting operation: we do the normalization, then we scale by some constant gamma, and then shift by another factor beta. What this actually does is let you recover the identity function if you wanted to. If the network wanted to, it could learn the scaling factor gamma to be just the standard deviation and beta to be the mean, and in that case you recover the identity mapping, as if you didn't have batch normalization. So now you have the flexibility of doing everything in between, and the network learns how to make your tanh more or less saturated in order to train well. Okay, so just to summarize the batch normalization idea: given our inputs, we compute our mini-batch mean — we do this for every mini-batch coming in — we compute our variance, we normalize by the mean and variance, and we have this additional scaling and shifting factor. This improves gradient flow through the network. It's also more robust as a result: it works for a wider range of learning rates and different kinds of initialization, so people have found that once you put batch normalization in, things are just easier to train — and that's why you should do this. One more thing I want to point out is that you can also think of this as, in a way, doing some regularization. Because now, at the output of each layer, each activation is a function of both your input X and the other examples that happen to be sampled in the same batch, since you normalize each input by the empirical mean over that batch. Because of that, it's no longer producing deterministic values for a given training example; it's tying all of the inputs in a batch together. And because it's no longer deterministic, it kind of jitters your representation of X a little bit, and in a sense gives some sort of regularization effect. Yeah, question? [student speaking off camera] The question is whether gamma and beta are learned parameters, and yes, that's the case. [student speaking off mic] Yeah, so the question is why we want to learn this gamma and beta so that we can recover the identity function — and the reason is that you want to give the network the flexibility. What batch normalization is doing is forcing our inputs to become unit gaussian, but even though in general this is a good idea, it's not always exactly the best thing to do.
And we saw in particular for something like a tanh, you might want to control the degree of saturation that you have. So what this does is give the network the flexibility to do the exact unit gaussian normalization if it wants to, but also to learn that maybe in this particular part of the network that's not the best thing to do — maybe we want something in this general spirit but slightly different, slightly scaled or shifted. These parameters just give it that extra flexibility to learn that if it wants to. And if the best thing to do really is plain batch normalization, then it'll learn the right parameters for that. Yeah? [student speaking off mic] Yeah, so basically each neuron output. We have the output of a fully connected layer, W times X, so we have the values of each of these outputs, and then we apply batch normalization separately to each of these neurons. Question? [student speaking off mic] Yeah, so the question is that for things like reinforcement learning, you might have a really small batch size — how do you deal with that? In practice, batch normalization has been used a lot for standard convolutional neural networks, and there are papers on how to do normalization for recurrent networks or some of these networks that might come up in reinforcement learning, with different considerations there. This is still an active area of research; there are papers on it, and we might talk about some of this more later. But for a typical convolutional neural network, this generally works fine. If you have a smaller batch size, maybe this becomes a little less accurate, but you still get basically the same effect. It's also possible that you could design your mean and variance to be computed over more examples; in practice it's usually just fine, so you don't see this too much, but it's something that could help if that were a problem. Yeah, question? [student speaking off mic] So the question is, if we force the inputs to be gaussian, do we lose the structure? No, in the sense that if you had all your features distributed as a gaussian — even if you were just doing data preprocessing — this wouldn't lose you any structure. It's just shifting and scaling your data into a regime that works well for the operations you're going to perform on it. In convolutional layers, you do have spatial structure that you want to preserve — if you look at your activation maps, you want them all to make sense relative to each other — so in that case you take it into consideration: we find one mean for the entire activation map, computing the empirical mean and variance over the training examples and the spatial locations. That's something you'll be doing in your homework, and it's also explained in the paper, so you should refer to that. Yes. [student speaking off mic] So the question is, are we normalizing the weights so that they become gaussian? If I understand your question correctly, then the answer is no — we're normalizing the inputs to each layer, so we're not changing the weights in this process.
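A sketch of the convolutional version of this, with the learnable gamma and beta from the last couple of paragraphs; one mean and one variance per channel, taken over the batch and both spatial dimensions. This is illustrative only, not the homework's exact interface.

```python
import numpy as np

def spatial_batchnorm_sketch(x, gamma, beta, eps=1e-5):
    """x is (N, C, H, W); gamma and beta are per-channel, shape (C,)."""
    # One mean and one variance per activation map (channel), computed jointly
    # over the batch dimension and both spatial dimensions.
    mu = x.mean(axis=(0, 2, 3), keepdims=True)    # shape (1, C, 1, 1)
    var = x.var(axis=(0, 2, 3), keepdims=True)
    x_hat = (x - mu) / np.sqrt(var + eps)
    # Learnable scale and shift, broadcast across the batch and space.
    return gamma.reshape(1, -1, 1, 1) * x_hat + beta.reshape(1, -1, 1, 1)
```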
[student speaking off mic] Yeah, so the question is, once we subtract the mean and divide by the standard deviation, does this become gaussian — and the answer is yes, in the sense that we get the moments we want. If you think about the operations that are happening, you're shifting by the mean, which zero-centers the data, and then you're scaling by the standard deviation, which gives it unit variance. If you want to look more into this, there are a lot of machine learning explanations that visualize exactly what this operation is doing, but it basically takes your data and puts it into that unit gaussian kind of shape. Okay, so — yeah, question? [student speaking off mic] Uh-huh. So the question is, if we're going to do the shift and scale and learn them, is the batch normalization redundant, because you could recover the identity mapping? In the case where the network learns that the identity mapping is always the best and learns those parameters, then yeah, there would be no point to batch normalization, but in practice this doesn't happen. In practice we learn a gamma and beta that are not the same as the identity mapping — they shift and scale by some amount, but not the amount that gives you back the identity — and so you still get this batch normalization effect. I'm only bringing up the identity mapping to say that, in the extreme, the network could learn it, but in practice it doesn't. Yeah, question? [student speaking off mic] Yeah. [student speaking off mic] Oh, right, right. Yeah, sorry, I was not clear about this, but I think this is related to the other question earlier: when we do this, we're actually getting zero mean and unit variance, which puts the data into a nice shape, but it doesn't have to actually be a gaussian. Ideally, if the inputs coming in are approximately gaussian, we'd like it to have that kind of effect, but in practice it doesn't have to be. Okay, so the last thing I want to mention about this is that at test time, the batch normalization layer takes the empirical mean and variance from the training data. We don't recompute these at test time — we estimate them at training time, for example using running averages, and then we use those at test time; we just scale by that. Okay, so now I'm going to move on to babysitting the learning process. Now that we've defined our network architecture, we'll talk about how to monitor training and how to adjust hyperparameters as we go, to get good learning results. As always, the first step is to preprocess the data — we want to zero-mean the data, as we talked about earlier. Then we want to choose the architecture; here we're starting with one hidden layer of 50 neurons, for example, but we can basically pick any architecture we want to start with. The first thing we do is initialize our network, do a forward pass through it, and make sure that our loss is reasonable. We talked about this several lectures ago: say we have a Softmax classifier here. We know what our loss should be when our weights are small and we have a generally diffuse distribution.
Then the Softmax classifier loss is going to be your negative log likelihood, which with 10 classes is something like negative log of one tenth, which is around 2.3, and we want to make sure that our loss is what we expect it to be. This is a good sanity check that we always want to do. So once we've seen that our original loss is good — and we first want to do this with zero regularization, so that when we disable the regularization, our only loss term is the data loss, which gives 2.3 here — now we want to crank up the regularization, and when we do that, we want to see the loss go up, because we've added this additional regularization term. That's a good next step for your sanity checks. And then we can start trying to train. A good way to do this is to start with a very small amount of data, because with just a very small training set, you should be able to overfit it very well and get a very good training loss. In this case, we want to turn off our regularization again and just see whether we can make the loss go down to zero. We compute the loss at each epoch, and we want to see it go all the way down to zero. Here we can also see that our training accuracy is going all the way up to one, and this makes sense: with a very small amount of data, you should be able to overfit perfectly. Okay, so once you've done that — these are all sanity checks — now you can start really trying to train. Take your full training data, start with a small amount of regularization, and first figure out what a good learning rate is. The learning rate is one of the most important hyperparameters, and it's something you want to adjust first. So you try some value of learning rate — here I've tried 1e-6 — and you can see that the loss is barely changing. The reason it's barely changing is usually that the learning rate is too small: when it's too small, your gradient updates are not big enough, and your cost stays about the same. One thing I want to point out here is that even though our loss was barely changing, the training and validation accuracy jumped up to 20% very quickly. Does anyone have an idea for why that might be the case? Remember we have a Softmax function, and our loss didn't really change, but our accuracy improved a lot. The reason is that the probabilities here are still pretty diffuse, so our loss term is still pretty similar, but when we shift all of these probabilities slightly in the right direction — because we are learning, our weights are changing in the right direction — the accuracy can suddenly jump, because we take the class with the maximum score, so we get a big jump in accuracy even though the probabilities are still relatively diffuse. Okay, so now if we try another learning rate, jumping to the other extreme and picking a very big learning rate, something like 1e6, what happens is that our cost is now giving us NaNs. When you have NaNs, it usually means your cost exploded, and the reason for that is typically that your learning rate was too high.
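A tiny sketch of the initial-loss sanity check mentioned above: with small weights and a diffuse softmax over C classes, the expected data loss is -log(1/C).

```python
import numpy as np

num_classes = 10
expected_initial_loss = -np.log(1.0 / num_classes)
print(expected_initial_loss)  # ~2.30 for 10 classes -- what a diffuse softmax should give
# Adding regularization on top of this should push the measured loss slightly above 2.30.
```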
So then, coming back to the learning rate, you can adjust it down again. Here you can see we're trying 3e-3, and the cost is still exploding. Usually the rough range of learning rates we want to look at is between 1e-3 and 1e-5, and that's the rough range we want to be cross-validating in: you try out values in this range and, depending on whether your loss is changing too slowly or blowing up, adjust from there. So how exactly do we pick these hyperparameters — do hyperparameter optimization and pick the best values of all of them? The strategy we're going to use, for any hyperparameter, for example the learning rate, is cross-validation. Cross-validation is training on your training set and then evaluating on a validation set to see how well a given hyperparameter value does — something you've already done in your assignment. Typically we want to do this in stages. First a coarse stage, where we pick values pretty spread apart and train for only a few epochs. With only a few epochs, you can already get a pretty good sense of which values are good or not: you can quickly see that something is a NaN, or that nothing is happening, and adjust accordingly. Once you've done that, you can see what a pretty good range is — the range you now want to sample more finely — and that's the second stage, where you might run for a longer time and do a finer search over that region. One tip for detecting explosions like NaNs: in your training loop, you sample some hyperparameter, start training, and then look at your cost at every iteration or every epoch. If you ever get a cost that's much larger than your original cost — for example, something like three times the original cost — you know this is not heading in the right direction: it's getting very big, very quickly, so you can just break out of the loop, stop this hyperparameter choice, and pick something else. Alright, so as an example, let's say we want to run a coarse search for five epochs — this is a similar network to the one we were talking about earlier — and we can look at all of the validation accuracies that we're getting. I've highlighted in red the ones that give better values, and these are the regions we're going to look into in more detail. One thing to note is that it's usually better to optimize in log space. So instead of sampling uniformly between, say, 0.01 and 100, you actually sample 10 to the power of some uniform range. This is because the learning rate is multiplying your gradient update, so it has these multiplicative effects, and it makes more sense to consider a range of learning rates that are multiplied or divided by some value rather than uniformly sampled — you want to be dealing in orders of magnitude here. Okay, so once you find that, you can then adjust your range. In this case, we have a range of maybe 10 to the negative four to 10 to the zero power — a good range that we want to narrow down into — and we do this again, and here we can see that we're getting a relatively good accuracy of 53%.
And so this means we're headed in the right direction. One thing I want to point out, though, is that here we actually have a problem: all of our good learning rates are in this 10 to the negative four range. And since the learning rate range we specified went from 10 to the negative four to 10 to the zero, that means all the good learning rates were at the edge of the range we were sampling. That's bad, because it means we might not have explored our space sufficiently — we might actually want to go to 10 to the negative five, or 10 to the negative six; there might still be better values if we continue shifting down. So you want to make sure that your range has the good values somewhere in the middle, or somewhere where you get a sense that you've explored the range fully. Okay, another thing is that we can sample all of our different hyperparameters using a kind of grid search: we pick a fixed set of values for each hyperparameter and sample every combination in a grid over these values. But in practice it's actually better to sample from a random layout — sampling a random value of each hyperparameter within a range. So with these two hyperparameters that we want to sample, you'll get samples that look like the right side instead. The reason is that if the function is really more a function of one variable than of another — which is usually true; there's usually a lower effective dimensionality than the number of hyperparameters we have — then you're going to get many more distinct samples of the important variable. You'll be able to see the shape of this green function I've drawn on top, showing where the good values are, compared to the grid layout, where we were only able to sample three distinct values of that variable and missed where the good regions were. So you get much more useful signal overall, since you have more samples of different values of the important variable. Hyperparameters to play with: we've talked about the learning rate, and there are also things like different types of decay schedules, update types, regularization, and your network architecture — the number of hidden units, the depth — all of these are hyperparameters you can optimize over. We've talked about some of these, and we'll keep talking about more of them in the next lecture. You can think of this as tuning all the knobs of some turntable, where you're the neural networks practitioner: the music that comes out is the loss function you want, and you want to adjust everything appropriately to get the kind of output you want. So it's really kind of an art. In practice, you're going to do a lot of hyperparameter optimization, a lot of cross-validation. To get good numbers, people will run cross-validation over tons of hyperparameters, monitor all of them, see which ones are doing better and which are doing worse — here we have all these loss curves — pick the right ones, readjust, and keep going through this process.
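Here is a small sketch tying together the ideas above: random search in log space with an early break when the cost explodes. The helper train_one_epoch(lr, reg) -> (loss, val_acc) is a hypothetical stand-in for your actual training code, and the ranges are just examples.

```python
import numpy as np

def random_search_sketch(train_one_epoch, base_loss, num_trials=100, max_epochs=5):
    results = []
    for _ in range(num_trials):
        lr = 10 ** np.random.uniform(-5, -3)   # sample the learning rate in log space
        reg = 10 ** np.random.uniform(-5, 5)   # same idea for regularization strength
        for _epoch in range(max_epochs):
            loss, val_acc = train_one_epoch(lr, reg)
            if loss > 3 * base_loss:           # cost exploding -> bail out of this choice early
                break
        results.append((val_acc, lr, reg))
    return sorted(results, reverse=True)       # best validation accuracy first
```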
And so, as I mentioned earlier, as you're monitoring each of these loss curves, the learning rate is an important one, and you'll get a sense for which learning rates are good and bad. If you have a very high, exploding curve — your loss explodes — then your learning rate is too high. If it's too linear and too flat, it's too low: it's not changing enough. And if you get something that has a steep change but then a plateau, that's also an indicator of the rate maybe being too high, because you're taking too-large jumps and you're not able to settle well into your local optimum. A good learning rate usually ends up looking like a relatively steep curve that then keeps going down, and then you might keep adjusting your learning rate from there. This is something you'll get a feel for through practice. Okay, I think we're very close to the end, so just one last thing I want to point out: if you ever see loss curves that are flat for a while and then start training all of a sudden, a potential reason could be bad initialization. In that case, your gradients are not really flowing well at the beginning, so nothing is really learning, and then at some point things happen to adjust in the right way, it tips over, and things just start training. There's a lot of experience in looking at these curves and seeing what's wrong that you'll build up over time. You'll also usually want to monitor and visualize your accuracy. If you have a big gap between your training accuracy and your validation accuracy, it usually means you might be overfitting, and you might want to increase your regularization strength. If you have no gap, you might want to increase your model capacity, because you haven't overfit yet — you could potentially push it further. In general, we also want to track the ratio of our weight updates to our weight magnitudes. We can take the norm of the parameters we have to get a sense for how large they are, and take the norm of the update to get a sense for how large that is, and we want this ratio to be somewhere around 0.001. There's a lot of variance in this, so you don't have to be exactly at that value, but the idea is that you don't want your updates to be too large compared to your parameter values, or too small — you don't want them to dominate or to have no effect. This is just something that can help you debug what might be a problem. Okay, so in summary, today we've looked at activation functions, data preprocessing, weight initialization, batch norm, babysitting the learning process, and hyperparameter optimization. The takeaways for each that you should keep in mind: use ReLUs, subtract the mean, use Xavier initialization, use batch norm, and sample hyperparameters randomly. Next time we'll continue talking about training neural networks and more of these topics. Thanks.
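A tiny sketch of the update-to-weight-magnitude ratio check mentioned above; the tensors here are random stand-ins rather than a real network's parameters.

```python
import numpy as np

# Hypothetical parameter tensor W and one SGD update step for it.
W = np.random.randn(500, 500)
learning_rate = 1e-3
dW = np.random.randn(*W.shape)          # stand-in for the gradient from backprop
update = -learning_rate * dW

ratio = np.linalg.norm(update) / np.linalg.norm(W)
print(ratio)   # rule of thumb from the lecture: want this somewhere around 1e-3
```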
Stanford_Computer_Vision
Lecture_7_Training_Neural_Networks_II.txt
- Okay, it's after 12, so I think we should get started. Today we're going to pick up where we left off last time. Last time we talked about a lot of the tips and tricks involved in the nitty gritty details of training neural networks, and today we'll pick up where we left off and talk about a lot more of these nitty gritty details of training these things. As usual, a couple of administrative notes before we get into the material. As you all know, assignment one is already due. Hopefully you all turned it in. Did it go okay? Was it not okay? Rough sentiment? Mostly okay. Okay, that's good. Awesome. [laughs] We're in the process of grading those, so stay tuned. We're hoping to get grades back for those before A two is due. Another reminder: your project proposals are due tomorrow. Actually, no, today at 11:59. Make sure you send those in; details are on the website and on Piazza. Also a reminder that assignment two is already out. That'll be due a week from Thursday. Historically, assignment two has been the longest one in the class, so if you haven't started on it already, I'd recommend you take a look at it pretty soon. Another reminder is that for assignment two, I think a lot of you will be using Google Cloud. Big reminder: make sure to stop your instances when you're not using them, because whenever your instance is on, you get charged, and we only have so many coupons to distribute to you guys. Any time your instance is on — even if you're not SSH'd into it, even if you're not actively running things in your Jupyter Notebook — you're going to be charged. Just make sure that you explicitly stop your instances when you're not using them. In this example, I've got a little screenshot of my dashboard on Google Cloud: I need to go in there, explicitly go to the dropdown, and click stop. Just make sure that you do this when you're done working each day. Another thing to remember is that it's kind of up to you guys to keep track of your spending on Google Cloud. In particular, instances that use GPUs are a lot more expensive than those with CPUs only. As a rough order of magnitude, GPU instances are around 90 cents to a dollar an hour, so they're actually quite pricey; the CPU instances are much cheaper. The general strategy is that you probably want to make two instances, one with a GPU and one without, and then only use that GPU instance when you really need the GPU. For example, for most of assignment two, you should only need the CPU, so you should use your CPU instance for that. But the final question, about TensorFlow or PyTorch, will need a GPU. This'll give you a little bit of practice with switching between multiple instances and only using that GPU when it's really necessary. Again, just watch your spending; try not to go too crazy on these things. Any questions on the administrative stuff before we move on? Question. - [Student] How much RAM should we use? - The question is how much RAM should we use? I think eight or 16 gigs is probably good for everything that you need in this class. As you scale up the number of CPUs and the amount of RAM, you also end up spending more money. If you stick with two or four CPUs and eight or 16 gigs of RAM, that should be plenty for all the homework-related stuff that you need to do. As a quick recap, last time we talked about activation functions. We talked about this whole zoo of different activation functions and some of their different properties.
We saw that the sigmoid, which used to be quite popular when training neural networks maybe 10 years ago or so, has this problem with vanishing gradients near the two ends of the activation function. tanh has this similar sort of problem. Kind of the general recommendation is that you probably want to stick with ReLU for most cases as sort of a default choice 'cause it tends to work well for a lot of different architectures. We also talked about weight initialization. Remember that up on the top, we have this idea that when you initialize your weights at the start of training, if those weights are initialized to be too small, then if you look at, then the activations will vanish as you go through the network because as you multiply by these small numbers over and over again, they'll all sort of decay to zero. Then everything will be zero, learning won't happen, you'll be sad. On the other hand, if you initialize your weights too big, then as you go through the network and multiply by your weight matrix over and over again, eventually they'll explode. You'll be unhappy, there'll be no learning, it will be very bad. But if you get that initialization just right, for example, using the Xavier initialization or the MSRA initialization, then you kind of keep a nice distribution of activations as you go through the network. Remember that this kind of gets more and more important and more and more critical as your networks get deeper and deeper because as your network gets deeper, you're multiplying by those weight matrices over and over again with these more multiplicative terms. We also talked last time about data preprocessing. We talked about how it's pretty typical in conv nets to zero center and normalize your data so it has zero mean and unit variance. I wanted to provide a little bit of extra intuition about why you might actually want to do this. Imagine a simple setup where we have a binary classification problem where we want to draw a line to separate these red points from these blue points. On the left, you have this idea where if those data points are kind of not normalized and not centered and far away from the origin, then we can still use a line to separate them, but now if that line wiggles just a little bit, then our classification is going to get totally destroyed. That kind of means that in the example on the left, the loss function is now extremely sensitive to small perturbations in that linear classifier in our weight matrix. We can still represent the same functions, but that might make learning quite difficult because, again, their loss is very sensitive to our parameter vector, whereas in the situation on the right, if you take that data cloud and you move it into the origin and you make it unit variance, then now, again, we can still classify that data quite well, but now as we wiggle that line a little bit, then our loss function is less sensitive to small perturbations in the parameter values. That maybe makes optimization a little bit easier, as we'll see a little bit going forward. By the way, this situation is not only in the linear classification case. Inside a neural network, remember we kind of have these interleavings of these linear matrix multiplies, or convolutions, followed by non-linear activation functions. 
If the input to some layer in your neural network is not centered or not zero mean, not unit variance, then again, small perturbations in the weight matrix of that layer of the network could cause large perturbations in the output of that layer, which, again, might make learning difficult. This is kind of a little bit of extra intuition about why normalization might be important. Because we have this intuition that normalization is so important, we talked about batch normalization, which is where we just add this additional layer inside our networks to just force all of the intermediate activations to be zero mean and unit variance. I've sort of resummarized the batch normalization equations here with the shapes a little bit more explicitly. Hopefully this can help you out when you're implementing this thing on assignment two. But again, in batch normalization, we have this idea that in the forward pass, we use the statistics of the mini batch to compute a mean and a standard deviation, and then use those estimates to normalize our data on the forward pass. Then we also reintroduce the scale and shift parameters to increase the expressivity of the layer. You might want to refer back to this when working on assignment two. We also talked last time a little bit about babysitting the learning process, how you should probably be looking at your loss curves during training. Here's an example of some networks I was actually training over the weekend. This is usually my setup when I'm working on these things. On the left, I have some plot showing the training loss over time. You can see it's kind of going down, which means my network is reducing the loss. It's doing well. On the right, there's this plot where the X axis is, again, time, or the iteration number, and the Y axis is my performance measure both on my training set and on my validation set. You can see that as we go over time, then my training set performance goes up and up and up and up and up as my loss function goes down, but at some point, my validation set performance kind of plateaus. This kind of suggests that maybe I'm overfitting in this situation. Maybe I should have been trying to add additional regularization. We also talked a bit last time about hyperparameter search. All these networks have sort of a large zoo of hyperparameters. It's pretty important to set them correctly. We talked a little bit about grid search versus random search, and how random search is maybe a little bit nicer in theory because in the situation where your performance might be more sensitive, with respect to one hyperparameter than other, and random search lets you cover that space a little bit better. We also talked about the idea of coarse to fine search, where when you're doing this hyperparameter optimization, probably you want to start with very wide ranges for your hyperparameters, only train for a couple iterations, and then based on those results, you kind of narrow in on the range of hyperparameters that are good. Now, again, redo your search in a smaller range for more iterations. You can kind of iterate this process to kind of hone in on the right region for hyperparameters. But again, it's really important to, at the start, have a very coarse range to start with, where you want very, very wide ranges for all your hyperparameters. Ideally, those ranges should be so wide that your network is kind of blowing up at either end of the range so that you know that you've searched a wide enough range for those things. Question? 
- [Student] How many [speaks too low to hear] optimize at once? [speaks too low to hear] - The question is how many hyperparameters do we typically search at a time? Here is two, but there's a lot more than two in these typical things. It kind of depends on the exact model and the exact architecture, but because the number of possibilities is exponential in the number of hyperparameters, you can't really test too many at a time. It also kind of depends on how many machines you have available. It kind of varies from person to person and from experiment to experiment. But generally, I try not to do this over more than maybe two or three or four at a time at most because, again, this exponential search just gets out of control. Typically, learning rate is the really important one that you need to nail first. Then other things, like regularization, like learning rate decay, model size, these other types of things tend to be a little bit less sensitive than learning rate. Sometimes you might do kind of a block coordinate descent, where you go and find the good learning rate, then you go back and try to look at different model sizes. This can help you cut down on the exponential search a little bit, but it's a little bit problem dependent on exactly which ones you should be searching over in which order. More questions? - [Student] [speaks too low to hear] Another parameter, but then changing that other parameter, two or three other parameters, makes it so that your learning rate or the ideal learning rate is still [speaks too low to hear]. - Question is how often does it happen where when you change one hyperparameter, then the other, the optimal values of the other hyperparameters change? That does happen sometimes, although for learning rates, that's typically less of a problem. For learning rates, typically you want to get in a good range, and then set it maybe even a little bit lower than optimal, and let it go for a long time. Then if you do that, combined with some of the fancier optimization strategies that we'll talk about today, then a lot of models tend to be a little bit less sensitive to learning rate once you get them in a good range. Sorry, did you have a question in front, as well? - [Student] [speaks too low to hear] - The question is what's wrong with having a small learning rate and increasing the number of epochs? The answer is that it might take a very long time. [laughs] - [Student] [speaks too low to hear] - Intuitively, if you set the learning rate very low and let it go for a very long time, then this should, in theory, always work. But in practice, those factors of 10 or 100 actually matter a lot when you're training these things. Maybe if you got the right learning rate, you could train it in six hours, 12 hours or a day, but then if you just were super safe and dropped it by a factor of 10 or by a factor of 100, now that one-day training becomes 100 days of training. That's three months. That's not going to be good. When you're taking these intro computer science classes, they always kind of sweep the constants under the rug, but when you're actually thinking about training things, those constants end up mattering a lot. Another question? - [Student] If you have a low learning rate, [speaks too low to hear]. - Question is for a low learning rate, are you more likely to be stuck in local optima? I think that makes some intuitive sense, but in practice, that seems not to be much of a problem. I think we'll talk a bit more about that later today. 
Today I wanted to talk about a couple other really interesting and important topics when we're training neural networks. In particular, I wanted to talk, we've kind of alluded to this fact of fancier, more powerful optimization algorithms a couple times. I wanted to spend some time today and really dig into those and talk about what are the actual optimization algorithms that most people are using these days. We also touched on regularization in earlier lectures. This concept of making your network do additional things to reduce the gap between train and test error. I wanted to talk about some more strategies that people are using in practice of regularization, with respect to neural networks. Finally, I also wanted to talk a bit about transfer learning, where you can sometimes get away with using less data than you think by transferring from one problem to another. If you recall from a few lectures ago, the kind of core strategy in training neural networks is an optimization problem where we write down some loss function, which defines, for each value of the network weights, the loss function tells us how good or bad is that value of the weights doing on our problem. Then we imagine that this loss function gives us some nice landscape over the weights, where on the right, I've shown this maybe small, two-dimensional problem, where the X and Y axes are two values of the weights. Then the color of the plot kind of represents the value of the loss. In this kind of cartoon picture of a two-dimensional problem, we're only optimizing over these two values, W one, W two. The goal is to find the most red region in this case, which corresponds to the setting of the weights with the lowest loss. Remember, we've been working so far with this extremely simple optimization algorithm, stochastic gradient descent, where it's super simple, it's three lines. While true, we first evaluate the loss in the gradient on some mini batch of data. Then we step, updating our parameter vector in the negative direction of the gradient because this gives, again, the direction of greatest decrease of the loss function. Then we repeat this over and over again, and hopefully we converge to the red region and we get great errors and we're very happy. But unfortunately, this relatively simple optimization algorithm has quite a lot of problems that actually could come up in practice. One problem with stochastic gradient descent, imagine what happens if our objective function looks something like this, where, again, we're plotting two values, W one and W two. As we change one of those values, the loss function changes very slowly. As we change the horizontal value, then our loss changes slowly. As we go up and down in this landscape, now our loss is very sensitive to changes in the vertical direction. By the way, this is referred to as the loss having a bad condition number at this point, which is the ratio between the largest and smallest singular values of the Hessian matrix at that point. But the intuitive idea is that the loss landscape kind of looks like a taco shell. It's sort of very sensitive in one direction, not sensitive in the other direction. The question is what might SGD, stochastic gradient descent, do on a function that looks like this? If you run stochastic gradient descent on this type of function, you might get this characteristic zigzagging behavior, where because for this type of objective function, the direction of the gradient does not align with the direction towards the minima. 
When you compute the gradient and take a step, you might step sort of over this line and zigzag back and forth. In effect, you get very slow progress along the horizontal dimension, which is the less sensitive dimension, and you get this nasty zigzagging behavior across the fast-changing dimension. This is undesirable behavior. By the way, this problem actually becomes much more common in high dimensions. In this cartoon picture, we're only showing a two-dimensional optimization landscape, but in practice, our neural networks might have millions, tens of millions, hundreds of millions of parameters. That's hundreds of millions of directions along which this thing can move. Now, among those hundreds of millions of different directions to move, if the ratio between the largest one and the smallest one is bad, then SGD will not perform so nicely. You can imagine that if we have 100 million parameters, the maximum ratio between those two is probably going to be quite large. I think this is actually quite a big problem in practice for many high-dimensional problems. Another problem with SGD has to do with this idea of local minima or saddle points. Here I've swapped the graph a little bit: now the X axis is showing the value of one parameter, and the Y axis is showing the value of the loss. In this top example, we have this curvy objective function where there's a valley in the middle. What happens to SGD in this situation? - [Student] [speaks too low to hear] - In this situation, SGD will get stuck, because at this local minimum the gradient is zero — it's locally flat. Remember that with SGD, we compute the gradient and step in the direction of the negative gradient, so if at our current point the gradient is zero, then we're not going to make any progress, and we'll get stuck at this point. There's a related problem with this idea of saddle points. Rather than a local minimum, you can imagine a point where in one direction we go up, and in the other direction we go down; then at our current point, the gradient is zero. Again, in this situation, the optimization will get stuck at the saddle point because the gradient is zero. One thing I'd like to point out, though, is that in a one-dimensional problem like this, local minima seem like a big problem and saddle points seem like not something to worry about, but in fact it's the opposite once you move to very high-dimensional problems. Again, if you think about being in this 100-million-dimensional space, what does a saddle point mean? It means that at my current point, in some directions the loss goes up, and in some directions the loss goes down. If you have 100 million dimensions, that's going to happen basically almost everywhere. Whereas a local minimum says that every one of those 100 million directions I could move in causes the loss to go up — and that actually seems pretty rare when you're thinking about these very high-dimensional problems. Really, the idea that has come to light in the last few years is that when you're training these very large neural networks, the problem is more about saddle points and less about local minima. By the way, this is also a problem not just exactly at the saddle point, but also near the saddle point.
If you look at the example on the bottom, you see that in the regions around the saddle point, the gradient isn't zero, but the slope is very small. That means that if we're, again, just stepping in the direction of the gradient, and that gradient is very small, we're going to make very, very slow progress whenever our current parameter value is near a saddle point in the objective landscape. This is actually a big problem. Another problem with SGD comes from the S. Remember that SGD is stochastic gradient descent. Recall that our loss function is typically defined by computing the loss over many, many different examples. In this case, if N is your whole training set, then that could be something like a million. Each time computing the loss would be very, very expensive. In practice, remember that we often estimate the loss and estimate the gradient using a small mini batch of examples. What this means is that we're not actually getting the true information about the gradient at every time step. Instead, we're just getting some noisy estimate of the gradient at our current point. Here on the right, I've kind of faked this plot a little bit. I've just added random uniform noise to the gradient at every point, and then run SGD with these noisy, messed up gradients. This is maybe not exactly what happens with the SGD process, but it still give you the sense that if there's noise in your gradient estimates, then vanilla SGD kind of meanders around the space and might actually take a long time to get towards the minima. Now that we've talked about a lot of these problems. Sorry, was there a question? - [Student] [speaks too low to hear] - The question is do all of these just go away if we use normal gradient descent? Let's see. I think that the taco shell problem of high condition numbers is still a problem with full batch gradient descent. The noise. As we'll see, we might sometimes introduce additional noise into the network, not only due to sampling mini batches, but also due to explicit stochasticity in the network, so we'll see that later. That can still be a problem. Saddle points, that's still a problem for full batch gradient descent because there can still be saddle points in the full objective landscape. Basically, even if we go to full batch gradient descent, it doesn't really solve these problems. We kind of need to think about a slightly fancier optimization algorithm that can try to address these concerns. Thankfully, there's a really, really simple strategy that works pretty well at addressing many of these problems. That's this idea of adding a momentum term to our stochastic gradient descent. Here on the left, we have our classic old friend, SGD, where we just always step in the direction of the gradient. But now on the right, we have this minor, minor variance called SGD plus momentum, which is now two equations and five lines of code, so it's twice as complicated. But it's very simple. The idea is that we maintain a velocity over time, and we add our gradient estimates to the velocity. Then we step in the direction of the velocity, rather than stepping in the direction of the gradient. This is very, very simple. We also have this hyperparameter rho now which corresponds to friction. Now at every time step, we take our current velocity, we decay the current velocity by the friction constant, rho, which is often something high, like .9 is a common choice. We take our current velocity, we decay it by friction and we add in our gradient. 
Now we step in the direction of our velocity vector, rather than the direction of our raw gradient vector. This super, super simple strategy actually helps for all of these problems that we just talked about. If you think about what happens at local minima or saddle points, then if we're imagining velocity in this system, then you kind of have this physical interpretation of this ball kind of rolling down the hill, picking up speed as it comes down. Now once we have velocity, then even when we pass that point of local minima, the point will still have velocity, even if it doesn't have gradient. Then we can hopefully get over this local minima and continue downward. There's this similar intuition near saddle points, where even though the gradient around the saddle point is very small, we have this velocity vector that we've built up as we roll downhill. That can hopefully carry us through the saddle point and let us continue rolling all the way down. If you think about what happens in poor conditioning, now if we were to have these kind of zigzagging approximations to the gradient, then those zigzags will hopefully cancel each other out pretty fast once we're using momentum. This will effectively reduce the amount by which we step in the sensitive direction, whereas in the horizontal direction, our velocity will just keep building up, and will actually accelerate our descent across that less sensitive dimension. Adding momentum here can actually help us with this high condition number problem, as well. Finally, on the right, we've repeated the same visualization of gradient descent with noise. Here, the black is this vanilla SGD, which is sort of zigzagging all over the place, where the blue line is showing now SGD with momentum. You can see that because we're adding it, we're building up this velocity over time, the noise kind of gets averaged out in our gradient estimates. Now SGD ends up taking a much smoother path towards the minima, compared with the SGD, which is kind of meandering due to noise. Question? - [Student] [speaks too low to hear] - The question is how does SGD momentum help with the poorly conditioned coordinate? The idea is that if you go back and look at this velocity estimate and look at the velocity computation, we're adding in the gradient at every time step. It kind of depends on your setting of rho, that hyperparameter, but you can imagine that if the gradient is relatively small, and if rho is well behaved in this situation, then our velocity could actually monotonically increase up to a point where the velocity could now be larger than the actual gradient. Then we might actually make faster progress along the poorly conditioned dimension. Kind of one picture that you can have in mind when we're doing SGD plus momentum is that the red here is our current point. At our current point, we have some red vector, which is the direction of the gradient, or rather our estimate of the gradient at the current point. Green is now the direction of our velocity vector. Now when we do the momentum update, we're actually stepping according to a weighted average of these two. This helps overcome some noise in our gradient estimate. There's a slight variation of momentum that you sometimes see, called Nesterov accelerated gradient, also sometimes called Nesterov momentum. That switches up this order of things a little bit. In sort of normal SGD momentum, we imagine that we estimate the gradient at our current point, and then take a mix of our velocity and our gradient. 
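Written out, the momentum update is only a couple of lines. The sketch below reuses the same toy quadratic; the gradient function and hyperparameter values are illustrative, with rho playing the role of the friction constant mentioned above.

```python
import numpy as np

def grad(x):                               # toy gradient; stands in for a mini-batch gradient
    return np.array([1.0, 100.0]) * x

x = np.array([10.0, 1.0])
v = np.zeros_like(x)                       # velocity starts at zero
rho, learning_rate = 0.9, 1e-3             # rho is the "friction" hyperparameter

for t in range(200):
    dx = grad(x)
    v = rho * v + dx                       # decay the old velocity, add in the new gradient
    x = x - learning_rate * v              # step along the velocity, not the raw gradient
```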
With Nesterov accelerated gradient, you do something a little bit different. Here, you start at the red point. You step in the direction of where the velocity would take you. You evaluate the gradient at that point. Then you go back to your original point and kind of mix together those two. This is kind of a funny interpretation, but you can imagine that you're kind of mixing together information a little bit more. If your velocity direction was actually a little bit wrong, it lets you incorporate gradient information from a little bit larger parts of the objective landscape. This also has some really nice theoretical properties when it comes to convex optimization, but those guarantees go a little bit out the window once it comes to non-convex problems like neural networks. Writing it down in equations, Nesterov momentum looks something like this, where now to update our velocity, we take a step, according to our previous velocity, and evaluate that gradient there. Now when we take our next step, we actually step in the direction of our velocity that's incorporating information from these multiple points. Question? - [Student] [speaks too low to hear] - Oh, sorry. The question is what's a good initialization for the velocity? This is almost always zero. It's not even a hyperparameter. Just set it to zero and don't worry. Another question? - [Student] [speaks too low to hear] - Intuitively, the velocity is kind of a weighted sum of your gradients that you've seen over time. - [Student] [speaks too low to hear] - With more recent gradients being weighted heavier. At every time step, we take our old velocity, we decay by friction and we add in our current gradient. You can kind of think of this as a smooth moving average of your recent gradients with kind of a exponentially decaying weight on your gradients going back in time. This Nesterov formulation is a little bit annoying 'cause if you look at this, normally when you have your loss function, you want to evaluate your loss and your gradient at the same point. Nesterov breaks this a little bit. It's a little bit annoying to work with. Thankfully, there's a cute change of variables you can do. If you do the change of variables and reshuffle a little bit, then you can write Nesterov momentum in a slightly different way that now, again, lets you evaluate the loss and the gradient at the same point always. Once you make this change of variables, you get kind of a nice interpretation of Nesterov, which is that here in the first step, this looks exactly like updating the velocity in the vanilla SGD momentum case, where we have our current velocity, we evaluate gradient at the current point and mix these two together in a decaying way. Now in the second update, now when we're actually updating our parameter vector, if you look at the second equation, we have our current point plus our current velocity plus a weighted difference between our current velocity and our previous velocity. Here, Nesterov momentum is kind of incorporating some kind of error-correcting term between your current velocity and your previous velocity. If we look at SGD, SGD momentum and Nesterov momentum on this kind of simple problem, compared with SGD, we notice that SGD kind of takes this, SGD is in the black, kind of taking this slow progress toward the minima. The blue and the green show momentum and Nesterov. 
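For reference, the change-of-variables form of Nesterov momentum described above can be sketched as follows; again the toy gradient and constants are just placeholders. The last line is the "current velocity plus a weighted difference between the current and previous velocity" update.

```python
import numpy as np

def grad(x):                                # toy gradient for illustration
    return np.array([1.0, 100.0]) * x

x = np.array([10.0, 1.0])
v = np.zeros_like(x)
rho, learning_rate = 0.9, 1e-3

for t in range(200):
    old_v = v
    v = rho * v - learning_rate * grad(x)   # loss and gradient evaluated at the current point
    x = x + (-rho * old_v + (1 + rho) * v)  # velocity plus an error-correcting term, rho * (v - old_v)
```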
These have this behavior of kind of overshooting the minimum 'cause they're building up velocity going past the minimum, and then kind of correcting themselves and coming back towards the minima. Question? - [Student] [speaks too low to hear] - The question is this picture looks good, but what happens if your minimum actually lies in this very narrow basin? Will the velocity just cause you to skip right over that minima? That's actually a really interesting point, and the subject of some recent theoretical work, but the idea is that maybe those really sharp minima are actually bad minima. We don't want to even land in those 'cause the idea is that maybe if you have a very sharp minima, that actually could be a minima that overfits more. If you imagine that we doubled our training set, the whole optimization landscape would change, and maybe that very sensitive minima would actually disappear if we were to collect more training data. We kind of have this intuition that we maybe want to land in very flat minima because those very flat minima are probably more robust as we change the training data. Those flat minima might actually generalize better to testing data. This is again, sort of very recent theoretical work, but that's actually a really good point that you bring up. In some sense, it's actually a feature and not a bug that SGD momentum actually skips over those very sharp minima. That's actually a good thing, believe it or not. Another thing you can see is if you look at the difference between momentum and Nesterov here, you can see that because of the correction factor in Nesterov, maybe it's not overshooting quite as drastically, compared to vanilla momentum. Another kind of common optimization strategy is this algorithm called AdaGrad, which John Duchi, who's now a professor here, worked on during his Ph.D. The idea with AdaGrad is that, during the course of the optimization, you're going to keep a running estimate or a running sum of all the squared gradients that you see during training. Now rather than having a velocity term, instead we have this grad squared term. During training, we're going to just keep adding the squared gradients to this grad squared term. Now when we update our parameter vector, we'll divide by the square root of this grad squared term when we're making our update step. The question is what does this kind of scaling do in this situation where we have a very high condition number? - [Student] [speaks too low to hear] - The idea is that if we have two coordinates, one that always has a very high gradient and one that always has a very small gradient, then as we add the sum of the squares of the small gradient, we're going to be dividing by a small number, so we'll accelerate movement along the slow dimension, along the one dimension. Then along the other dimension, where the gradients tend to be very large, then we'll be dividing by a large number, so we'll kind of slow down our progress along the wiggling dimension. But there's kind of a problem here. That's the question of what happens with AdaGrad over the course of training, as t gets larger and larger and larger? - [Student] [speaks too low to hear] - With AdaGrad, the steps actually get smaller and smaller and smaller because we just continue updating this estimate of the squared gradients over time, so this estimate just grows and grows and grows monotonically over the course of training. Now this causes our step size to get smaller and smaller and smaller over time.
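A minimal AdaGrad sketch, with a made-up gradient and illustrative hyperparameters, might look like this; note that the grad_squared accumulator only ever grows, which is exactly the shrinking-step-size behavior just described.

```python
import numpy as np

def grad(x):
    return np.array([1.0, 100.0]) * x        # toy gradient with a high condition number

x = np.array([10.0, 1.0])
grad_squared = np.zeros_like(x)               # running sum of squared gradients
learning_rate, eps = 1e-1, 1e-7               # eps just avoids division by zero

for t in range(200):
    dx = grad(x)
    grad_squared += dx * dx                   # accumulate; this sum grows monotonically
    x = x - learning_rate * dx / (np.sqrt(grad_squared) + eps)
    # steep coordinates get divided by a large number (slowed down), shallow coordinates
    # by a small number (sped up), but every step gets smaller as training goes on
```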
Again, in the convex case, there's some really nice theory showing that this is actually really good 'cause in the convex case, as you approach a minimum, you kind of want to slow down so you actually converge. That's actually kind of a feature in the convex case. But in the non-convex case, that's a little bit problematic because as you come towards a saddle point, you might get stuck with AdaGrad, and then you kind of no longer make any progress. There's a slight variation of AdaGrad, called RMSProp, that actually addresses this concern a little bit. Now with RMSProp, we still keep this estimate of the squared gradients, but instead of just letting that squared estimate continually accumulate over training, instead, we let that squared estimate actually decay. This ends up looking kind of like a momentum update, except we're having kind of momentum over the squared gradients, rather than momentum over the actual gradients. Now with RMSProp, after we compute our gradient, we take our current estimate of the grad squared, we multiply it by this decay rate, which is commonly something like .9 or .99. Then we add in one minus the decay rate times our current squared gradient. Now over time, you can imagine that old gradients gradually leak out of this estimate rather than accumulating forever. Then again, when we make our step, the step looks exactly the same as AdaGrad, where we divide by the square root of the squared gradient estimate in the step to again have this nice property of accelerating movement along the one dimension, and slowing down movement along the other dimension. But now with RMSProp, because these estimates are leaky, then it kind of addresses the problem of maybe always slowing down where you might not want to. Here again, we're kind of showing our favorite toy problem with SGD in black, SGD momentum in blue and RMSProp in red. You can see that RMSProp and SGD momentum are both doing much better than SGD, but their qualitative behavior is a little bit different. With SGD momentum, it kind of overshoots the minimum and comes back, whereas with RMSProp, it's kind of adjusting its trajectory such that we're making approximately equal progress among all the dimensions. By the way, you can't actually tell, but this plot is also showing AdaGrad in green with the same learning rate, but it just gets stuck due to this problem of continually decaying learning rates. In practice, AdaGrad is maybe not so common for many of these things. That's a little bit of an unfair comparison of AdaGrad. Probably you need to increase the learning rate with AdaGrad, and then it would end up looking kind of like RMSProp in this case. But in general, we tend not to use AdaGrad so much when training neural networks. Question? - [Student] [speaks too low to hear] - The answer is yes, this problem is convex, but in this case, it's a little bit of an unfair comparison because the learning rates are not so comparable among the methods. I've been a little bit unfair to AdaGrad in this visualization by showing the same learning rate between the different algorithms, when probably you should have separately tuned the learning rates per algorithm. We saw in momentum, we had this idea of velocity, where we're building up velocity by adding in the gradients, and then stepping in the direction of the velocity. We saw with AdaGrad and RMSProp that we had this other idea of building up an estimate of the squared gradients, and then dividing by the squared gradients. Then these both seem like good ideas on their own. Why don't we just stick 'em together and use them both? Maybe that would be even better.
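RMSProp changes one line relative to the AdaGrad sketch: the accumulator becomes a leaky, exponentially decaying average instead of a raw sum. The constants below are illustrative; the decay rate of 0.99 matches the kind of value mentioned above.

```python
import numpy as np

def grad(x):
    return np.array([1.0, 100.0]) * x

x = np.array([10.0, 1.0])
grad_squared = np.zeros_like(x)
learning_rate, decay_rate, eps = 1e-2, 0.99, 1e-7

for t in range(500):
    dx = grad(x)
    # leaky running average of squared gradients, rather than a sum that only grows
    grad_squared = decay_rate * grad_squared + (1 - decay_rate) * dx * dx
    x = x - learning_rate * dx / (np.sqrt(grad_squared) + eps)
```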
That brings us to this algorithm called Adam, or rather brings us very close to Adam. We'll see in a couple slides that there's a slight correction we need to make here. Here with Adam, we maintain an estimate of the first moment and the second moment. Now in the red, we make this estimate of the first moment as a weighed sum of our gradients. We have this moving estimate of the second moment, like AdaGrad and like RMSProp, which is a moving estimate of our squared gradients. Now when we make our update step, we step using both the first moment, which is kind of our velocity, and also divide by the second moment, or rather the square root of the second moment, which is this squared gradient term. This idea of Adam ends up looking a little bit like RMSProp plus momentum, or ends up looking like momentum plus second squared gradients. It kind of incorporates the nice properties of both. But there's a little bit of a problem here. That's the question of what happens at the very first time step? At the very first time step, you can see that at the beginning, we've initialized our second moment with zero. Now after one update of the second moment, typically this beta two, second moment decay rate, is something like .9 or .99, something very close to one. After one update, our second moment is still very, very close to zero. Now when we're making our update step here and we divide by our second moment, now we're dividing by a very small number. We're making a very, very large step at the beginning. This very, very large step at the beginning is not really due to the geometry of the problem. It's kind of an artifact of the fact that we initialized our second moment estimate was zero. Question? - [Student] [speaks too low to hear] - That's true. The comment is that if your first moment is also very small, then you're multiplying by small and you're dividing by square root of small squared, so what's going to happen? They might cancel each other out, you might be okay. That's true. Sometimes these cancel each other out and you're okay, but sometimes this ends up in taking very large steps right at the beginning. That can be quite bad. Maybe you initialize a little bit poorly. You take a very large step. Now your initialization is completely messed up, and then you're in a very bad part of the objective landscape and you just can't converge from there. Question? - [Student] [speaks too low to hear] - The idea is what is this 10 to the minus seven term in the last equation? That's actually appeared in AdaGrad, RMSProp and Adam. The idea is that we're dividing by something. We want to make sure we're not dividing by zero, so we always add a small positive constant to the denominator, just to make sure we're not dividing by zero. That's technically a hyperparameter, but it tends not to matter too much, so just setting 10 to minus seven, 10 to minus eight, something like that, tends to work well. With Adam, remember we just talked about this idea of at the first couple steps, it gets very large, and we might take very large steps and mess ourselves up. Adam also adds this bias correction term to avoid this problem of taking very large steps at the beginning. You can see that after we update our first and second moments, we create an unbiased estimate of those first and second moments by incorporating the current time step, t. Now we actually make our step using these unbiased estimates, rather than the original first and second moment estimates. This gives us our full form of Adam. 
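Putting the two ideas together with the bias correction gives the full Adam update. Here is a hedged sketch on the same toy gradient; the beta values echo the defaults suggested in a moment, and everything else is an illustrative choice.

```python
import numpy as np

def grad(x):
    return np.array([1.0, 100.0]) * x

x = np.array([10.0, 1.0])
first_moment = np.zeros_like(x)                   # momentum-like term
second_moment = np.zeros_like(x)                  # AdaGrad / RMSProp-like term
learning_rate, beta1, beta2, eps = 1e-3, 0.9, 0.999, 1e-7

for t in range(1, 1001):                          # t starts at 1 so the bias correction is defined
    dx = grad(x)
    first_moment = beta1 * first_moment + (1 - beta1) * dx
    second_moment = beta2 * second_moment + (1 - beta2) * dx * dx
    first_unbias = first_moment / (1 - beta1 ** t)     # undo the bias from the zero initialization
    second_unbias = second_moment / (1 - beta2 ** t)
    x = x - learning_rate * first_unbias / (np.sqrt(second_unbias) + eps)
```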
By the way, Adam is a really, [laughs] really good optimization algorithm, and it works really well for a lot of different problems, so that's kind of my default optimization algorithm for just about any new problem that I'm tackling. In particular, if you set beta one equals .9, beta two equals .999, learning rate one e minus three or five e minus four, that's a great starting point for just about all the architectures I've ever worked with. Try that. That's a really good place to start, in general. [laughs] If we actually plot these things out and look at SGD, SGD momentum, RMSProp and Adam on the same problem, you can see that Adam, in the purple here, kind of combines elements of both SGD momentum and RMSProp. Adam kind of overshoots the minimum a little bit like SGD momentum, but it doesn't overshoot quite as much as momentum. Adam also has this similar behavior of RMSProp of kind of trying to curve to make equal progress along all dimensions. Maybe in this small two-dimensional example, Adam converged about similarly to other ones, but you can see qualitatively that it's kind of combining the behaviors of both momentum and RMSProp. Any questions about optimization algorithms? - [Student] [speaks too low to hear] They still take a very long time to train. [speaks too low to hear] - The question is what does Adam not fix? Well, these neural networks are still large; they still take a long time to train. There can still be a problem. In this picture where we have this landscape of things looking like ovals, we're kind of making estimates along each dimension independently to allow us to speed up or slow down along different coordinate axes, but one problem is that if that taco shell is kind of tilted and is not axis aligned, then we're still only making estimates along the individual axes independently. That corresponds to taking your rotated taco shell and squishing it horizontally and vertically, but you can't actually unrotate it. In cases where you have this kind of rotated picture of poor conditioning, then Adam or any of these other algorithms really can't address that concern. Another thing that we've seen in all these optimization algorithms is learning rate as a hyperparameter. We've seen this picture before a couple times, that as you use different learning rates, sometimes if it's too high, it might explode in the yellow. If it's a very low learning rate, in the blue, it might take a very long time to converge. It's kind of tricky to pick the right learning rate. This is a little bit of a trick question because we don't actually have to stick with one learning rate throughout the course of training. Sometimes you'll see people decay the learning rates over time, where we can kind of combine the effects of these different curves on the left, and get the nice properties of each. Sometimes you'll start with a higher learning rate near the start of training, and then decay the learning rate and make it smaller and smaller throughout the course of training. A couple strategies for these would be a step decay, where at the 100,000th iteration, you just decay by some factor and you keep going. You might see an exponential decay, where you continually decay during training. You might see different variations of continually decaying the learning rate during training. If you look at papers, especially the ResNet paper, you often see plots that look kind of like this, where the loss is kind of going down, then dropping, then flattening again, then dropping again.
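Before unpacking those plots, here is roughly what the two decay schedules just mentioned might look like in code; the drop factor, drop interval, and decay constant are all illustrative choices rather than values from the lecture.

```python
import numpy as np

base_lr = 1e-1

def step_decay(epoch, drop=0.1, epochs_per_drop=30):
    # multiply the learning rate by `drop` every `epochs_per_drop` epochs
    return base_lr * (drop ** (epoch // epochs_per_drop))

def exponential_decay(epoch, k=0.05):
    # continuously shrink the learning rate over the course of training
    return base_lr * np.exp(-k * epoch)

for epoch in range(90):
    lr = step_decay(epoch)   # 0.1 for epochs 0-29, 0.01 for 30-59, 0.001 for 60-89
    # ... run one epoch of (momentum) SGD updates using this learning rate ...
```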
What's going on in these plots is that they're using a step decay learning rate, where at these parts where it plateaus and then suddenly drops again, those are the iterations where they dropped the learning rate by some factor. This idea of dropping the learning rate, you might imagine that it got near some good region, but now the gradients got smaller, it's kind of bouncing around too much. Then if we drop the learning rate, it lets it slow down and continue to make progress down the landscape. This tends to help in practice sometimes. Although one thing to point out is that learning rate decay is a little bit more common with SGD momentum, and a little bit less common with something like Adam. Another thing I'd like to point out is that learning rate decay is kind of a second-order hyperparameter. You typically should not optimize over this thing from the start. Usually when you're kind of getting networks to work at the beginning, you want to pick a good learning rate with no learning rate decay from the start. Trying to cross-validate jointly over learning rate decay and initial learning rate and other things, you'll just get confused. What you do for setting learning rate decay is try with no decay, see what happens. Then kind of eyeball the loss curve and see where you think you might need decay. Another thing I wanted to mention briefly is this idea of all these algorithms that we've talked about are first-order optimization algorithms. In this picture, in this one-dimensional picture, we have this kind of curvy objective function at our current point in red. What we're basically doing is computing the gradient at that point. We're using the gradient information to compute some linear approximation to our function, which is kind of a first-order Taylor approximation to our function. Now we pretend that the first-order approximation is our actual function, and we make a step to try to minimize the approximation. But this approximation doesn't hold for very large regions, so we can't step too far in that direction. But really, the idea here is that we're only incorporating information about the first derivative of the function. You can actually go a little bit fancier. There's this idea of second-order approximation, where we take into account both first derivative and second derivative information. Now we make a second-order Taylor approximation to our function and kind of locally approximate our function with a quadratic. Now with a quadratic, you can step right to the minimum, and you're really happy. That's this idea of second-order optimization. When you generalize this to multiple dimensions, you get something called the Newton step, where you compute this Hessian matrix, which is a matrix of second derivatives, and you end up inverting this Hessian matrix in order to step directly to the minimum of this quadratic approximation to your function. Does anyone spot something that's quite different about this update rule, compared to the other ones that we've seen? - [Student] [speaks too low to hear] - This doesn't have a learning rate. That's kind of cool. We're making this quadratic approximation and we're stepping right to the minimum of the quadratic. At least in this vanilla version of Newton's method, you don't actually need a learning rate. You just always step to the minimum at every time step. 
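As a tiny illustration of the Newton step on an exactly quadratic toy loss (the matrix and starting point are made up), notice that there is no learning rate anywhere:

```python
import numpy as np

# Toy quadratic loss: L(x) = 0.5 * x^T A x, so the gradient is A x and the Hessian is A.
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])
x = np.array([5.0, -3.0])

gradient = A @ x
hessian = A
x = x - np.linalg.solve(hessian, gradient)   # Newton step: solve against the Hessian
print(x)  # lands on the minimum at (0, 0), up to floating point, in a single step
```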
However, in practice, you might end up having a learning rate anyway because, again, that quadratic approximation might not be perfect, so you might only want to step in the direction towards the minimum, rather than actually stepping to the minimum, but at least in this vanilla version, it doesn't have a learning rate. But unfortunately, this is maybe a little bit impractical for deep learning because this Hessian matrix is N by N, where N is the number of parameters in your network. If N is 100 million, then 100 million squared is way too big. You definitely can't store that in memory, and you definitely can't invert it. In practice, people sometimes use these quasi-Newton methods that, rather than working with the full Hessian and inverting the full Hessian, work with approximations. Low-rank approximations are common. You'll sometimes see these for some problems. L-BFGS is one particular second-order optimizer that keeps this kind of approximation of the Hessian, and that you'll sometimes see, but in practice, it doesn't work too well for many deep learning problems because these second-order approximations don't really handle the stochastic case very nicely. They also tend not to work so well with non-convex problems. I don't want to get into that right now too much. In practice, Adam is a really good choice for many different neural network things, but if you're in a situation where you can afford to do full batch updates, and you know that your problem doesn't have really any stochasticity, then L-BFGS is kind of a good choice. L-BFGS doesn't really get used for training neural networks too much, but as we'll see in a couple of lectures, it does sometimes get used for things like style transfer, where you actually have less stochasticity and fewer parameters, but you still want to solve an optimization problem. All of these strategies we've talked about so far are about reducing training error. All these optimization algorithms are really about driving down your training error and minimizing your objective function, but we don't really care about training error that much. Instead, we really care about our performance on unseen data. We really care about reducing this gap between train and test error. The question is once we're already good at optimizing our objective function, what can we do to try to reduce this gap and make our model perform better on unseen data? One really quick and dirty, easy thing to try is this idea of model ensembles that sometimes works across many different areas in machine learning. The idea is pretty simple. Rather than having just one model, we'll train 10 different models independently from different initial random restarts. Now at test time, we'll run our data through all of the 10 models and average the predictions of those 10 models. Adding these multiple models together tends to reduce overfitting a little bit and tends to improve performance a little bit, typically by a couple percent. This is generally not a drastic improvement, but it is a consistent improvement. You'll see that in competitions, like ImageNet and other things like that, using model ensembles is very common to get maximal performance. You can actually get a little bit creative with this. Sometimes rather than training separate models independently, you can just keep multiple snapshots of your model during the course of training, and then use these as your ensembles.
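At test time, averaging an ensemble's predictions can be as simple as the sketch below. The models list and the predict_probs method are hypothetical placeholders, whether the ensemble members are independently trained networks or snapshots of a single training run.

```python
import numpy as np

def ensemble_predict(models, X):
    # Each model is assumed to expose predict_probs(X) -> (N, num_classes) class probabilities.
    probs = np.mean([m.predict_probs(X) for m in models], axis=0)   # average the predictions
    return np.argmax(probs, axis=1)                                 # pick the consensus class
```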
Then you still, at test time, need to average the predictions of these multiple snapshots, but you can collect the snapshots during the course of training. There's actually a very nice paper being presented at ICLR this week that kind of has a fancy version of this idea, where we use a crazy learning rate schedule, where our learning rate goes very slow, then very fast, then very slow, then very fast. The idea is that with this crazy learning rate schedule, then over the course of training, the model might be able to converge to different regions in the objective landscape that all are reasonably good. If you do an ensemble over these different snapshots, then you can improve your performance quite nicely, even though you're only training the model once. Questions? - [Student] [speaks too low to hear] - The question is, it's bad when there's a large gap between error 'cause that means you're overfitting, but if there's no gap, then is that also maybe bad? Do we actually want some small, optimal gap between the two? We don't really care about the gap. What we really care about is maximizing the performance on the validation set. What tends to happen is that if you don't see a gap, then you could have improved your absolute performance, in many cases, by overfitting a little bit more. There's this weird correlation between the absolute performance on the validation set and the size of that gap. We only care about absolute performance. Question in the back? - [Student] Are hyperparameters the same for the ensemble? - Are the hyperparameters the same for the ensembles? That's a good question. Sometimes they're not. You might want to try different sizes of the model, different learning rates, different regularization strategies and ensemble across these different things. That actually does happen sometimes. Another little trick you can do sometimes is that during training, you might actually keep an exponentially decaying average of your parameter vector itself to kind of have a smooth ensemble of your own network during training. Then use this smoothly decaying average of your parameter vector, rather than the actual checkpoints themselves. This is called Polyak averaging, and it sometimes helps a little bit. It's just another one of these small tricks you can sometimes add, but it's not maybe too common in practice. Another question you might have is that how can we actually improve the performance of single models? When we have ensembles, we still need to run, like, 10 models at test time. That's not so great. We really want some strategies to improve the performance of our single models. That's really this idea of regularization, where we add something to our model to prevent it from fitting the training data too well in the attempts to make it perform better on unseen data. We've seen a couple ideas, a couple methods for regularization already, where we add some explicit extra term to the loss. Where we have this one term telling the model to fit the data, and another term that's a regularization term. You saw this in homework one, where we used L2 regularization. As we talked about in lecture a couple lectures ago, this L2 regularization doesn't really make maybe a lot of sense in the context of neural networks. Sometimes we use other things for neural networks. One regularization strategy that's super, super common for neural networks is this idea of dropout. Dropout is super simple. 
Every time we do a forward pass through the network, at every layer, we're going to randomly set some neurons to zero. Every time we do a forward pass, we'll set a different random subset of the neurons to zero. This kind of proceeds one layer at a time. We run through one layer, we compute the value of the layer, we randomly set some of them to zero, and then we continue up through the network. Now if you look at this fully connected network on the left versus a dropout version of the same network on the right, you can see that after we do dropout, it kind of looks like a smaller version of the same network, where we're only using some subset of the neurons. This subset that we use varies at each iteration, at each forward pass. Question? - [Student] [speaks too low to hear] - The question is what are we setting to zero? It's the activations. Each layer is computing previous activation times the weight matrix gives you our next activation. Then you just take that activation, set some of them to zero, and then your next layer will be partially zeroed activations times another matrix give you your next activations. Question? - [Student] [speaks too low to hear] - Question is which layers do you do this on? It's more common in fully connected layers, but you sometimes see this in convolutional layers, as well. When you're working in convolutional layers, sometimes instead of dropping each activation randomly, instead you sometimes might drop entire feature maps randomly. In convolutions, you have this channel dimension, and you might drop out entire channels, rather than random elements. Dropout is kind of super simple in practice. It only requires adding two lines, one line per dropout call. Here we have a three-layer neural network, and we've added dropout. You can see that all we needed to do was add this extra line where we randomly set some things to zero. This is super easy to implement. But the question is why is this even a good idea? We're seriously messing with the network at training time by setting a bunch of its values to zero. How can this possibly make sense? One sort of slightly hand wavy idea that people have is that dropout helps prevent co-adaptation of features. Maybe if you imagine that we're trying to classify cats, maybe in some universe, the network might learn one neuron for having an ear, one neuron for having a tail, one neuron for the input being furry. Then it kind of combines these things together to decide whether or not it's a cat. But now if we have dropout, then in making the final decision about catness, the network cannot depend too much on any of these one features. Instead, it kind of needs to distribute its idea of catness across many different features. This might help prevent overfitting somehow. Another interpretation of dropout that's come out a little bit more recently is that it's kind of like doing model ensembling within a single model. If you look at the picture on the left, after you apply dropout to the network, we're kind of computing this subnetwork using some subset of the neurons. Now every different potential dropout mask leads to a different potential subnetwork. Now dropout is kind of learning a whole ensemble of networks all at the same time that all share parameters. By the way, because of the number of potential dropout masks grows exponentially in the number of neurons, you're never going to sample all of these things. This is really a gigantic, gigantic ensemble of networks that are all being trained simultaneously. 
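To make the "just add one extra line per layer" point concrete, here is a hedged sketch of the training-time forward pass for a small fully connected network with dropout; the lack of bias terms and the keep probability of 0.5 are illustrative simplifications. What happens at test time is discussed next.

```python
import numpy as np

p = 0.5  # probability of keeping each unit active

def train_forward(X, W1, W2, W3):
    H1 = np.maximum(0, X.dot(W1))            # first hidden layer (ReLU)
    U1 = np.random.rand(*H1.shape) < p       # random binary dropout mask
    H1 = H1 * U1                             # zero out roughly half of the activations
    H2 = np.maximum(0, H1.dot(W2))           # second hidden layer
    U2 = np.random.rand(*H2.shape) < p       # a fresh mask on every forward pass
    H2 = H2 * U2
    return H2.dot(W3)                        # class scores
```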
Then the question is what happens at test time? Once we move to dropout, we've kind of fundamentally changed the operation of our neural network. Previously, we've had our neural network, f, be a function of the weights, w, and the inputs, x, and then produce the output, y. But now, our network is also taking this additional input, z, which is some random dropout mask. That z is random. Having randomness at test time is maybe bad. Imagine that you're working at Facebook, and you want to classify the images that people are uploading. Then today, your image gets classified as a cat, and tomorrow it doesn't. That would be really weird and really bad. You'd probably want to eliminate this stochasticity at test time once the network is already trained. Then we kind of want to average out this randomness. If you write this out, you can imagine actually marginalizing out this randomness with some integral, but in practice, this integral is totally intractable. We don't know how to evaluate this thing. You're in bad shape. One thing you might imagine doing is approximating this integral via sampling, where you draw multiple samples of z and then average them out at test time, but this still would introduce some randomness, which is little bit bad. Thankfully, in the case of dropout, we can actually approximate this integral in kind of a cheap way locally. If we consider a single neuron, the output is a, the inputs are x and y, with two weights, w one, w two. Then at test time, our value a is just w one x plus w two y. Now imagine that we trained to this network. During training, we used dropout with probability 1/2 of dropping our neurons. Now the expected value of a during training, we can kind of compute analytically for this small case. There's four possible dropout masks, and we're going to average out the values across these four masks. We can see that the expected value of a during training is 1/2 w one x plus w two y. There's this disconnect between this average value of w one x plus w two y at test time, and at training time, the average value is only 1/2 as much. One cheap thing we can do is that at test time, we don't have any stochasticity. Instead, we just multiply this output by the dropout probability. Now these expected values are the same. This is kind of like a local cheap approximation to this complex integral. This is what people really commonly do in practice with dropout. At dropout, we have this predict function, and we just multiply our outputs of the layer by the dropout probability. The summary of dropout is that it's really simple on the forward pass. You're just adding two lines to your implementation to randomly zero out some nodes. Then at the test time prediction function, you just added one little multiplication by your probability. Dropout is super simple. It tends to work well sometimes for regularizing neural networks. By the way, one common trick you see sometimes is this idea of inverted dropout. Maybe at test time, you care more about efficiency, so you want to eliminate that extra multiplication by p at test time. Then what you can do is, at test time, you use the entire weight matrix, but now at training time, instead you divide by p because training is probably happening on a GPU. You don't really care if you do one extra multiply at training time, but then at test time, you kind of want this thing to be as efficient as possible. Question? - [Student] [speaks too low to hear] Now the gradient [speaks too low to hear]. 
- The question is what happens to the gradient during training with dropout? You're right. We only end up propagating the gradients through the nodes that were not dropped. This has the consequence that when you're training with dropout, typically training takes longer because at each step, you're only updating some subparts of the network. When you're using dropout, it typically takes longer to train, but you might have a better generalization after it's converged. Dropout, we kind of saw is like this one concrete instantiation. There's a little bit more general strategy for regularization where during training we add some kind of randomness to the network to prevent it from fitting the training data too well. To kind of mess it up and prevent it from fitting the training data perfectly. Now at test time, we want to average out all that randomness to hopefully improve our generalization. Dropout is probably the most common example of this type of strategy, but actually batch normalization kind of fits this idea, as well. Remember in batch normalization, during training, one data point might appear in different mini batches with different other data points. There's a bit of stochasticity with respect to a single data point with how exactly that point gets normalized during training. But now at test time, we kind of average out this stochasticity by using some global estimates to normalize, rather than the per mini batch estimates. Actually batch normalization tends to have kind of a similar regularizing effect as dropout because they both introduce some kind of stochasticity or noise at training time, but then average it out at test time. Actually, when you train networks with batch normalization, sometimes you don't use dropout at all, and just the batch normalization adds enough of a regularizing effect to your network. Dropout is somewhat nice because you can actually tune the regularization strength by varying that parameter p, and there's no such control in batch normalization. Another kind of strategy that fits in this paradigm is this idea of data augmentation. During training, in a vanilla version for training, we have our data, we have our label. We use it to update our CNN at each time step. But instead, what we can do is randomly transform the image in some way during training such that the label is preserved. Now we train on these random transformations of the image rather than the original images. Sometimes you might see random horizontal flips 'cause if you take a cat and flip it horizontally, it's still a cat. You'll randomly sample crops of different sizes from the image because the random crop of the cat is still a cat. Then during testing, you kind of average out this stochasticity by evaluating with some fixed set of crops, often the four corners and the middle and their flips. What's very common is that when you read, for example, papers on ImageNet, they'll report a single crop performance of their model, which is just like the whole image, and a 10 crop performance of their model, which are these five standard crops plus their flips. Also with data augmentation, you'll sometimes use color jittering, where you might randomly vary the contrast or brightness of your image during training. You can get a little bit more complex with color jittering, as well, where you try to make color jitters that are maybe in the PCA directions of your data space or whatever, where you do some color jittering in some data-dependent way, but that's a little bit less common. 
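A minimal random flip-and-crop augmentation might look like the following sketch; the crop size and the assumption that images arrive as (H, W, 3) NumPy arrays are illustrative, not something specified in the lecture.

```python
import numpy as np

def augment(image, crop_size=224):
    # image: (H, W, 3) array with H, W >= crop_size; the label stays the same.
    if np.random.rand() < 0.5:
        image = image[:, ::-1, :]                       # random horizontal flip
    H, W, _ = image.shape
    top = np.random.randint(0, H - crop_size + 1)       # random crop location
    left = np.random.randint(0, W - crop_size + 1)
    return image[top:top + crop_size, left:left + crop_size, :]
```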
In general, data augmentation is this really general thing that you can apply to just about any problem. Whatever problem you're trying to solve, you kind of think about what are the ways that I can transform my data without changing the label? Now during training, you just apply these random transformations to your input data. This sort of has a regularizing effect on the network because you're, again, adding some kind of stochasticity during training, and then marginalizing it out at test time. Now we've seen three examples of this pattern, dropout, batch normalization, data augmentation, but there's many other examples, as well. Once you have this pattern in your mind, you'll kind of recognize this thing as you read other papers sometimes. There's another kind of related idea to dropout called DropConnect. With DropConnect, it's the same idea, but rather than zeroing out the activations at every forward pass, instead we randomly zero out some of the values of the weight matrix instead. Again, it kind of has this similar flavor. Another kind of cool idea that I like, this one's not so commonly used, but I just think it's a really cool idea, is this idea of fractional max pooling. Normally when you do two-by-two max pooling, you have these fixed two-by-two regions over which you pool in the forward pass, but now with fractional max pooling, every time we have our pooling layer, we're going to randomize exactly the regions over which we pool. Here in the example on the right, I've shown three different sets of random pooling regions that you might see during training. Now during test time, you kind of average the stochasticity out by either sticking to some fixed set of pooling regions, or drawing many samples and averaging over them. That's kind of a cool idea, even though it's not so commonly used. Another really kind of surprising paper in this paradigm that actually came out in the last year, so this is new since the last time we taught the class, is this idea of stochastic depth. Here we have a network on the left. The idea is that we have a very deep network. We're going to randomly drop layers from the network during training. During training, we're going to eliminate some layers and only use some subset of the layers during training. Now during test time, we'll use the whole network. This is kind of crazy. It's kind of amazing that this works, but this tends to have kind of a similar regularizing effect as dropout and these other strategies. But again, this is super, super cutting-edge research. This is not super commonly used in practice, but it is a cool idea. Any last minute questions about regularization? No? Use it. It's a good idea. Yeah? - [Student] [speaks too low to hear] - The question is do you usually use more than one regularization method? You should generally be using batch normalization as kind of a good thing to have in most networks nowadays because it helps you converge, especially for very deep things. In many cases, batch normalization alone tends to be enough, but then sometimes if batch normalization alone is not enough, then you can consider adding dropout or other things once you see your network overfitting. You generally don't do a blind cross-validation over these things. Instead, you add them in a targeted way once you see your network is overfitting. One quick thing is this idea of transfer learning.
We've kind of seen with regularization, we can help reduce the gap between train and test error by adding these different regularization strategies. One problem with overfitting is sometimes you overfit 'cause you don't have enough data. You want to use a big, powerful model, but that big, powerful model just is going to overfit too much on your small dataset. Regularization is one way to combat that, but another way is through using transfer learning. Transfer learning kind of busts the myth that you need a huge amount of data in order to train a CNN. The idea is really simple. You'll maybe first take some CNN. Here is kind of a VGG style architecture. You'll take your CNN, you'll train it on a very large dataset, like ImageNet, where you actually have enough data to train the whole network. Now the idea is that you want to apply the features from this dataset to some small dataset that you care about. Maybe instead of classifying the 1,000 ImageNet categories, now you want to classify 10 dog breeds or something like that. You only have a small dataset. Here, our small dataset only has C classes. Then what you'll typically do is, for this last fully connected layer that goes from the last-layer features to the final class scores, you need to reinitialize that matrix randomly. For ImageNet, it was a 4,096-by-1,000 dimensional matrix. Now for your new classes, it might be 4,096-by-C or by 10 or whatever. You reinitialize this last matrix randomly, freeze the weights of all the previous layers and now just basically train a linear classifier, and only train the parameters of this last layer and let it converge on your data. This tends to work pretty well if you only have a very small dataset to work with. Now if you have a little bit more data, another thing you can try is actually fine tuning the whole network. After that top layer converges and after you learn that last layer for your data, then you can consider actually trying to update the whole network, as well. If you have more data, then you might consider updating larger parts of the network. A general strategy here is that when you're updating the network, you want to drop the learning rate from its initial learning rate because the original parameters in this network that converged on ImageNet probably worked pretty well generally, and you just want to change them a very small amount to tune performance for your dataset. Then when you're working with transfer learning, you kind of imagine this two-by-two grid of scenarios where on the one side, you have maybe very small amounts of data for your dataset, or a very large amount of data for your dataset. Then maybe your data is very similar to ImageNet images. Like, ImageNet has a lot of pictures of animals and plants and stuff like that. If you want to just classify other types of animals and plants and other types of images like that, then you're in pretty good shape. Then generally what you do is if your data is very similar to something like ImageNet, if you have a very small amount of data, you can just basically train a linear classifier on top of features extracted using an ImageNet model. If you have a little bit more data to work with, then you might imagine fine tuning the model. However, you sometimes get in trouble if your data looks very different from ImageNet.
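Before looking at the harder cases, here is roughly what the freeze-and-retrain recipe just described might look like in code. This sketch assumes PyTorch and torchvision, which the lecture does not specify, and uses a ResNet-18 simply as a stand-in for whatever pretrained CNN you download; the class count and learning rates are illustrative.

```python
import torch
import torch.nn as nn
import torchvision

num_classes = 10                                            # your small dataset's label set

# CNN pretrained on ImageNet (newer torchvision versions use a weights=... argument instead)
model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False                             # freeze all pretrained layers

model.fc = nn.Linear(model.fc.in_features, num_classes)    # reinitialize only the last layer

# Phase 1: train just the new last layer (a linear classifier on frozen features).
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Phase 2 (if you have more data): unfreeze and fine-tune everything with a smaller learning rate.
for param in model.parameters():
    param.requires_grad = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```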
Maybe if you're working with maybe medical images that are X-rays or CAT scans or something that looks very different from images in ImageNet, in that case, you maybe need to get a little bit more creative. Sometimes it still works well here, but those last layer features might not be so informative. You might consider reinitializing larger parts of the network and getting a little bit more creative and trying more experiments here. This is somewhat mitigated if you have a large amount of data in your very different dataset 'cause then you can actually fine tune larger parts of the network. Another point I'd like to make is this idea of transfer learning is super pervasive. It's actually the norm, rather than the exception. As you read computer vision papers, you'll often see system diagrams like this for different tasks. On the left, we're working with object detection. On the right, we're working with image captioning. Both of these models have a CNN that's kind of processing the image. In almost all applications of computer vision these days, most people are not training these things from scratch. Almost always, that CNN will be pretrained on ImageNet, and then potentially fine tuned for the task at hand. Also, in the captioning sense, sometimes you can actually pretrain some word vectors relating to the language, as well. You maybe pretrain the CNN on ImageNet, pretrain some word vectors on a large text corpus, and then fine tune the whole thing for your dataset. Although in the case of captioning, I think this pretraining with word vectors tends to be a little bit less common and a little bit less critical. The takeaway for your projects, and more generally as you work on different models, is that whenever you have some large dataset, whenever you have some problem that you want to tackle, but you don't have a large dataset, then what you should generally do is download some pretrained model that's relatively close to the task you care about, and then either reinitialize parts of that model or fine tune that model for your data. That tends to work pretty well, even if you have only a modest amount of training data to work with. Because this is such a common strategy, all of the different deep learning software packages out there provide a model zoo where you can just download pretrained versions of various models. In summary today, we talked about optimization, which is about how to improve the training loss. We talked about regularization, which is improving your performance on the test data. Model ensembling kind of fit into there. We also talked about transfer learning, which is how you can actually do better with less data. These are all super useful strategies. You should use them in your projects and beyond. Next time, we'll talk more concretely about some of the different deep learning software packages out there.
Okay, so welcome to lecture two of CS231N. On Tuesday, just to recall, we, sort of, gave you the big picture view of what is computer vision, what is the history, and a little bit of the overview of the class. And today, we're really going to dive in, for the first time, into the details. And we'll start to see, in much more depth, exactly how some of these learning algorithms actually work in practice. So, the first lecture of the class is probably, sort of, the largest big picture vision. And the majority of the lectures in this class will be much more detail oriented, much more focused on the specific mechanics of these different algorithms. So, today we'll see our first learning algorithm and that'll be really exciting, I think. But, before we get to that, I wanted to talk about a couple of administrative issues. One is Piazza. So, when I checked yesterday, it seemed like we had maybe 500 students signed up on Piazza. Which means that there are several hundred of you who are not yet there. So, we really want Piazza to be the main source of communication between the students and the course staff. So, we've gotten a lot of questions to the staff list about project ideas or questions about midterm attendance or poster session attendance. And, any, sort of, questions like that should really go to Piazza. You'll probably get answers to your questions faster on Piazza, because all the TAs know to check that. And it's, sort of, easy for emails to get lost in the shuffle if you just send to the course list. It's also come to my attention that some SCPD students are having a bit of a hard time signing up for Piazza. SCPD students are supposed to receive a @stanford.edu email address. So, once you get that email address, then you can use the Stanford email to sign into Piazza. Probably that doesn't affect those of you who are sitting in the room right now, but it does matter for those students listening on SCPD. The next administrative issue is about assignment one. Assignment one will be up later today, probably sometime this afternoon, but I promise, before I go to sleep tonight, it'll be up. But, if you're getting a little bit antsy and really want to start working on it right now, then you can look at last year's version of assignment one. It'll be pretty much the same content. We're just reshuffling it a little bit, for example, upgrading it to work with Python 3, rather than Python 2.7. And some of these minor cosmetic changes, but the content of the assignment will still be the same as last year. So, in this assignment you'll be implementing your own k-nearest neighbor classifier, which we're going to talk about in this lecture. You'll also implement several different linear classifiers, including the SVM and Softmax, as well as a simple two-layer neural network. And we'll cover all this content over the next couple of lectures. So, all of our assignments are using Python and NumPy. If you aren't familiar with Python or NumPy, then we have written a tutorial that you can find on the course website to try and get you up to speed. But, this is, actually, pretty important. NumPy lets you write these very efficient vectorized operations that let you do quite a lot of computation in just a couple lines of code. So efficiently implementing these vectorized operations is super important for pretty much all aspects of numerical computing and machine learning and everything like that. And you'll get a lot of practice with this on the first assignment.
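As a small taste of what "vectorized" means here, the sketch below computes distances from one test image to a stack of training images two ways; the shapes (3,072 = 32 x 32 x 3 flattened pixels) are just an illustrative guess at the assignment's setup.

```python
import numpy as np

X_train = np.random.randn(500, 3072)     # 500 flattened 32x32x3 images (fake data)
x_test = np.random.randn(3072)           # one flattened test image

# Loop version: L2 distance from x_test to every training image, one at a time.
dists_loop = np.zeros(500)
for i in range(500):
    dists_loop[i] = np.sqrt(np.sum((X_train[i] - x_test) ** 2))

# Vectorized version: the same computation in one line, with no Python loop.
dists_vec = np.sqrt(np.sum((X_train - x_test) ** 2, axis=1))

assert np.allclose(dists_loop, dists_vec)
```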
So, for those of you who don't have a lot of experience with Matlab or NumPy or other types of vectorized tensor computation, I recommend that you start looking at this assignment pretty early and also, read carefully through the tutorial. The other thing I wanted to talk about is that we're happy to announce that we're officially supported through Google Cloud for this class. So, Google Cloud is somewhat similar to Amazon AWS. You can go and start virtual machines up in the cloud. These virtual machines can have GPUs. We're working on the tutorial for exactly how to use Google Cloud and get it to work for the assignments. But our intention is that you'll be able to just download some image, and it'll be very seamless for you to work on the assignments on one of these instances on the cloud. And because Google has, very generously, supported this course, we'll be able to distribute to each of you coupons that let you use Google Cloud credits for free for the class. So you can feel free to use these for the assignments and also for the course projects when you want to start using GPUs and larger machines and whatnot. So, we'll post more details about that, probably, on Piazza later today. But, I just wanted to mention, because I know there had been a couple of questions about, can I use my laptop? Do I have to run on corn? Do I have to, whatever? And the answer is that, you'll be able to run on Google Cloud and we'll provide you some coupons for that. Yeah, so, those are, kind of, the major administrative issues I wanted to talk about today. And then, let's dive into the content. So, the last lecture we talked a little bit about this task of image classification, which is really a core task in computer vision. And this is something that we'll really focus on throughout the course of the class. Is, exactly, how do we work on this image classification task? So, a little bit more concretely, when you're doing image classification, your system receives some input image, which is this cute cat in this example, and the system is aware of some predetermined set of categories or labels. So, these might be, like, a dog or a cat or a truck or a plane, and there's some fixed set of category labels, and the job of the computer is to look at the picture and assign it one of these fixed category labels. This seems like a really easy problem, because so much of your own visual system in your brain is hardwired to doing these, sort of, visual recognition tasks. But this is actually a really, really hard problem for a machine. So, if you dig in and think about, actually, what does a computer see when it looks at this image, it definitely doesn't get this holistic idea of a cat that you see when you look at it. And the computer really is representing the image as this gigantic grid of numbers. So, the image might be something like 800 by 600 pixels. And each pixel is represented by three numbers, giving the red, green, and blue values for that pixel. So, to the computer, this is just a gigantic grid of numbers. And it's very difficult to distill the cat-ness out of this, like, giant array of thousands, or whatever, very many different numbers. So, we refer to this problem as the semantic gap. This idea of a cat, or this label of a cat, is a semantic label that we're assigning to this image, and there's this huge gap between the semantic idea of a cat and these pixel values that the computer is actually seeing. 
And image classification is a really hard problem because you can change the picture in very small, subtle ways that will cause this pixel grid to change entirely. So, for example, if we took this same cat, and if the cat happened to sit still and not even twitch, not move a muscle, which is never going to happen, but we moved the camera to the other side, then every single pixel in this giant grid of numbers would be completely different. But, somehow, it's still representing the same cat. And our algorithms need to be robust to this. But viewpoint is only one problem; another is illumination. There can be different lighting conditions going on in the scene. Whether the cat is appearing in a very dark, moody scene, or in a very bright, sunlit scene, it's still a cat, and our algorithms need to be robust to that. Objects can also deform. I think cats are, maybe, among the more deformable of animals that you might see out there. And cats can really assume a lot of different, varied poses and positions. And our algorithms should be robust to these different kinds of transforms. There can also be problems of occlusion, where you might only see part of a cat, like just the face, or in this extreme example, just a tail peeking out from under the couch cushion. But, in these cases, it's pretty easy for you, as a person, to realize that this is probably a cat, and you still recognize these images as cats. And this is something that our algorithms also must be robust to, which is quite difficult, I think. There can also be problems of background clutter, where maybe the foreground object of the cat could actually look quite similar in appearance to the background. And this is another thing that we need to handle. There's also this problem of intraclass variation, that this one notion of cat-ness actually spans a lot of different visual appearances. And cats can come in different shapes and sizes and colors and ages. And our algorithm, again, needs to work and handle all these different variations. So, this is actually a really, really challenging problem. And it's sort of easy to forget how hard this is, because so much of your brain is specifically tuned for dealing with these things. But now if we want our computer programs to deal with all of these problems, all simultaneously, and not just for cats, by the way, but for just about any object category you can imagine, this is a fantastically challenging problem. And it's actually somewhat miraculous that this works at all, in my opinion. But, actually, not only does it work, but these things work very close to human accuracy in some limited situations, and take only hundreds of milliseconds to do so. So, this is some pretty amazing, incredible technology, in my opinion, and over the course of the rest of the class we will really see what kinds of advancements have made this possible. So now, if you think about what the API for writing an image classifier might be, you might sit down and try to write a method in Python like this, where you want to take in an image and then do some crazy magic and then, eventually, spit out this class label to say cat or dog or whatnot. And there's really no obvious way to do this, right? If you're taking an algorithms class and your task is to sort numbers or compute a convex hull or, even, do something like RSA encryption, you can write down an algorithm and enumerate all the steps that need to happen in order for these things to work.
But, when we're trying to recognize objects, or recognize cats in images, there's no really clear, explicit algorithm that makes intuitive sense for how you might go about recognizing these objects. So, this is, again, quite challenging. If you think about it, if it was your first day programming and you had to sit down and write this function, I think most people would be in trouble. That being said, people have definitely made explicit attempts to try to write these sorts of hand-coded rules for recognizing different animals. So, we touched on this a little bit in the last lecture, but maybe one idea for cats is that we know that cats have ears and eyes and mouths and noses. And we know, from Hubel and Wiesel, that edges are pretty important when it comes to visual recognition. So one thing we might try to do is compute the edges of this image and then go in and try to categorize all the different corners and boundaries, and say that, if we have maybe three lines meeting this way, then it might be a corner, and an ear has one corner here and one corner there and one corner there, and then write down this explicit set of rules for recognizing cats. But this turns out not to work very well. One, it's super brittle. And, two, say, if you want to start over for another object category, and maybe not worry about cats, but talk about trucks or dogs or fishes or something else, then you need to start all over again. So, this is really not a very scalable approach. We want to come up with some algorithm, or some method, for these recognition tasks which scales much more naturally to all the variety of objects in the world. So, the insight that makes this all work is this idea of the data-driven approach. Rather than sitting down and writing these hand-specified rules to try to craft exactly what is a cat or a fish or what have you, instead, we'll go out onto the internet and collect a large dataset of many, many cats and many, many airplanes and many, many deer and different things like this. And we can actually use tools like Google Image Search, or something like that, to go out and collect a very large number of examples of these different categories. By the way, this actually takes quite a lot of effort to go out and actually collect these datasets but, luckily, there are a lot of really good, high quality datasets out there already for you to use. Then once we get this dataset, we train this machine learning classifier that is going to ingest all of the data, summarize it in some way, and then spit out a model that summarizes the knowledge of how to recognize these different object categories. Then finally, we'll use this trained model and apply it on new images, and it will then be able to recognize cats and dogs and whatnot. So here our API has changed a little bit. Rather than a single function that just inputs an image and recognizes a cat, we have these two functions. One, called train, that's going to input images and labels and then output a model, and then, separately, another function called predict, which will input the model and then make predictions for images. And this is kind of the key insight that allowed all these things to start working really well over the last 10, 20 years or so. So, this class is primarily about neural networks and convolutional neural networks and deep learning and all that, but this idea of a data-driven approach is much more general than just deep learning.
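Just to make that two-function API concrete, here is a minimal, runnable sketch. The "model" in it is deliberately trivial (it only remembers the most common training label), and the names and toy data are mine rather than the assignment's; the only point is the split between a train function that produces a model and a predict function that uses it:

import numpy as np

def train(images, labels):
    # Ingest the training data and summarize it into a "model".
    # Here the model just remembers the most common training label.
    model = {"most_common_label": int(np.bincount(labels).argmax())}
    return model

def predict(model, test_images):
    # Use the trained model to predict a label for every test image.
    num_test = test_images.shape[0]
    return np.full(num_test, model["most_common_label"])

# Hypothetical toy data: six tiny 4x4 grayscale "images" with labels 0 or 1.
X_train = np.random.randn(6, 4, 4)
y_train = np.array([0, 1, 1, 0, 1, 1])
model = train(X_train, y_train)
print(predict(model, np.random.randn(3, 4, 4)))   # -> [1 1 1]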
And I think it's useful to step through this process for a very simple classifier first, before we get to these big, complex ones. So, probably the simplest classifier you can imagine is something we call nearest neighbor. The algorithm is pretty dumb, honestly. So, during the training step we won't do anything, we'll just memorize all of the training data. So this is very simple. And now, during the prediction step, we're going to take some new image and go and try to find the most similar image in the training data to that new image, and then predict the label of that most similar image. A very simple algorithm. But it has a lot of these nice properties with respect to data-drivenness and whatnot. So, to be a little bit more concrete, you might imagine working on this dataset called CIFAR-10, which is very commonly used in machine learning as kind of a small test case. And you'll be working with this dataset on your homework. So, the CIFAR-10 dataset gives you 10 different classes, airplanes and automobiles and birds and cats and different things like that. And it provides 50,000 training images in total, roughly evenly distributed across these 10 categories, and then 10,000 additional testing images that you're supposed to test your algorithm on. So here's an example of applying this simple nearest neighbor classifier to some of these test images on CIFAR-10. So, in this grid on the right, the left-most column gives a test image from the CIFAR-10 dataset. And then, to the right of each one, we've sorted the training images and shown the most similar training images to each of these test examples. And you can see that they look kind of visually similar to the test images, although they are not always correct, right? So, maybe on the second row, we see that the test image, and this is kind of hard to see because these images are 32 by 32 pixels, so you need to really dive in there and try to make your best guess, but this image is a dog and its nearest neighbor is also a dog, but the next one, I think, is actually a deer or a horse or something else. But, you can see that it looks quite visually similar, because there's kind of a white blob in the middle and whatnot. So, if we're applying the nearest neighbor algorithm to this image, we'll find the closest example in the training set. And now, the closest example, we know its label, because it comes from the training set. And then, we'll simply say that this testing image is also a dog. You can see from these examples that this is probably not going to work very well, but it's still kind of a nice example to work through. But then, one detail that we need to know is, given a pair of images, how can we actually compare them? Because, if we're going to take our test image and compare it to all the training images, we actually have many different choices for exactly what that comparison function should look like. So, in the example in the previous slide, we've used what's called the L1 distance, also sometimes called the Manhattan distance. So, this is a really simple, easy idea for comparing images. And that's that we're going to just compare individual pixels in these images. So, supposing that our test image is maybe just a tiny four by four image of pixel values, then we'll take the upper-left hand pixel of the test image, subtract off the value in the training image, take the absolute value, and get the difference in that pixel between the two images.
And then, sum all these up across all the pixels in the image. So, this is kind of a stupid way to compare images, but it does some reasonable things sometimes. But, this gives us a very concrete way to measure the difference between two images. And in this case, we have this difference of 456 between these two images. So, here's some full Python code for implementing this nearest neighbor classifier, and you can see it's pretty short and pretty concise because we've made use of many of these vectorized operations offered by NumPy. So, here we can see that this training function that we talked about earlier is, again, very simple: in the case of nearest neighbor, you just memorize the training data, there's not really much to do here. And now, at test time, we're going to take in our image and then go in and compare, using this L1 distance function, our test image to each of these training examples and find the most similar example in the training set. And you can see that we're actually able to do this in just one or two lines of Python code by utilizing these vectorized operations in NumPy. So, this is something that you'll get practice with on the first assignment. So now, a couple questions about this simple classifier. First, if we have N examples in our training set, then how fast can we expect training and testing to be? Well, training is basically constant time because we don't really need to do anything, we just need to memorize the data. And if you're just copying a pointer, that's going to be constant time no matter how big your dataset is. But now, at test time we need to do this comparison step and compare our test image to each of the N training examples in the dataset. And this is actually quite slow. So, this is actually somewhat backwards, if you think about it. Because, in practice, we want our classifiers to be slow at training time and then fast at testing time. You might imagine that a classifier might go and be trained in a data center somewhere, and you can afford to spend a lot of computation at training time to make the classifier really good. But then, when you go and deploy the classifier at test time, you want it to run on your mobile phone or in a browser or some other low power device, and you really want the testing time performance of your classifier to be quite fast. So, from this perspective, this nearest neighbor algorithm is actually a little bit backwards. And we'll see that once we move to convolutional neural networks, and other types of parametric models, they'll be the reverse of this. Where you'll spend a lot of compute at training time, but then they'll be quite fast at testing time. So then, the question is, what exactly does this nearest neighbor algorithm look like when you apply it in practice? So, here we've drawn what we call the decision regions of a nearest neighbor classifier. Here our training set consists of these points in the two dimensional plane, where the color of the point represents the category, or the class label, of that point. So, here we see we have five classes, and some blue ones up in the corner here, some purple ones in the upper-right hand corner. And now for each pixel in this entire plane, we've gone and computed what the nearest example in this training data is, and then colored that point of the background according to the class label of the nearest training example. So, you can see that this nearest neighbor classifier is just sort of carving up the space and coloring the space according to the nearby points.
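The slide with the full Python code isn't reproduced in this transcript, so here is a hedged reconstruction in the same spirit; the class and variable names are mine, and it is a sketch of the idea rather than the assignment's reference code:

import numpy as np

class NearestNeighbor:
    def train(self, X, y):
        # X is N x D, where each row is a flattened training image; y holds the N labels.
        # "Training" for nearest neighbor is just memorizing the data.
        self.X_train = X
        self.y_train = y

    def predict(self, X):
        num_test = X.shape[0]
        y_pred = np.zeros(num_test, dtype=self.y_train.dtype)
        for i in range(num_test):
            # Vectorized L1 distance from test image i to every training image.
            distances = np.sum(np.abs(self.X_train - X[i, :]), axis=1)
            y_pred[i] = self.y_train[np.argmin(distances)]
        return y_pred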
But this classifier is maybe not so great. And by looking at this picture we can start to see some of the problems that might come up with a nearest neighbor classifier. For one, this central region actually contains mostly green points, but one little yellow point in the middle. But because we're just looking at the single nearest neighbor, this causes a little yellow island to appear in the middle of this green cluster. And that's maybe not so great. Maybe those points actually should have been green. And then, similarly, we also see these sort of fingers, like the green region pushing into the blue region, again due to the presence of one point, which may have been noisy or spurious. So, this kind of motivates a slight generalization of this algorithm called k-nearest neighbors. So rather than just looking for the single nearest neighbor, instead we'll do something a little bit fancier and find K of our nearest neighbors, according to our distance metric, and then take a vote among each of our neighbors. And then predict the majority vote among our neighbors. You can imagine slightly more complex ways of doing this. Maybe you'd vote weighted by the distance, or something like that, but the simplest thing that tends to work pretty well is just taking a majority vote. So here we've shown the exact same set of points using this K=1 nearest neighbor classifier, as well as K=3 and K=5 in the middle and on the right. And once we move to K=3, you can see that that spurious yellow point in the middle of the green cluster is no longer causing the points near that region to be classified as yellow. Now this entire green portion in the middle is all being classified as green. You can also see that these fingers of the red and blue regions are starting to get smoothed out due to this majority voting. And then, once we move to the K=5 case, then these decision boundaries between the blue and red regions have become quite smooth and quite nice. So, generally when you're using nearest neighbor classifiers, you almost always want to use some value of K which is larger than one, because this tends to smooth out your decision boundaries and lead to better results. Question? [student asking a question] Yes, so the question is, what is the deal with these white regions? The white regions are where there was no majority among the k-nearest neighbors. You could imagine maybe doing something slightly fancier and maybe taking a guess or randomly selecting among the majority winners, but for this simple example we're just coloring it white to indicate there was no majority at those points. Whenever we're thinking about computer vision, I think it's really useful to kind of flip back and forth between several different viewpoints. One is this idea of images as points in a high dimensional space, and the other is actually looking at concrete images. Because the pixels of the image actually allow us to think of these images as high dimensional vectors. And it's sort of useful to ping pong back and forth between these two different viewpoints. So then, taking this k-nearest neighbor idea and going back to the images, you can see that it's actually not very good. Here I've colored in red and green which images would actually be classified correctly or incorrectly according to their nearest neighbor. And you can see that it's really not very good. But maybe if we used a larger value of K, then this would involve actually voting among maybe the top three or the top five or maybe even the whole row.
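A minimal sketch of that k-nearest-neighbors prediction for a single query point might look like the following; the names are my own, and it assumes the labels are non-negative integers so a simple count can serve as the vote:

import numpy as np

def knn_predict_one(X_train, y_train, x, k=3):
    # L1 distances from the query point x to every training example.
    distances = np.sum(np.abs(X_train - x), axis=1)
    # Indices of the k closest training examples.
    nearest = np.argsort(distances)[:k]
    # Majority vote among their labels (ties go to the smallest label here).
    return int(np.bincount(y_train[nearest]).argmax())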
And you could imagine that voting over more of the neighbors like that would end up being a lot more robust to some of the noise that we see when retrieving neighbors in this way. So another choice we have when we're working with the k-nearest neighbor algorithm is determining exactly how we should be comparing our different points. For the examples we've shown so far, we've talked about this L1 distance, which takes the sum of the absolute values of the differences between the pixels. But another common choice is the L2, or Euclidean, distance, where you take the square root of the sum of the squares and take this as your distance. Choosing different distance metrics is actually a pretty interesting topic, because different distance metrics make different assumptions about the underlying geometry or topology that you'd expect in the space. So, under the L1 distance, a circle actually forms this square shape around the origin, where each of the points on the square is equidistant from the origin according to L1, whereas with the L2, or Euclidean, distance, the circle is a familiar circle, it looks like what you'd expect. So one interesting thing to point out between these two metrics in particular is that the L1 distance depends on your choice of coordinate system. So if you were to rotate the coordinate frame, that would actually change the L1 distance between the points, whereas changing the coordinate frame with the L2 distance doesn't matter, it's the same thing no matter what your coordinate frame is. Maybe if your input features, if the individual entries in your vector, have some important meaning for your task, then maybe somehow L1 might be a more natural fit. But if it's just a generic vector in some space and you don't know what the different elements actually mean, then maybe L2 is slightly more natural. And another point here is that by using different distance metrics we can actually generalize the k-nearest neighbor classifier to many, many different types of data, not just vectors, not just images. So, for example, imagine you wanted to classify pieces of text, then the only thing you need to do to use k-nearest neighbors is to specify some distance function that can measure distances between maybe two paragraphs or two sentences or something like that. So, simply by specifying different distance metrics we can actually apply this algorithm very generally to basically any type of data. Even though it's a kind of simple algorithm, in general, it's a very good thing to try first when you're looking at a new problem. So then, it's also kind of interesting to think about what is actually happening geometrically if we choose different distance metrics. So here we see the same set of points on the left using the L1, or Manhattan, distance, and then, on the right, using the familiar L2, or Euclidean, distance. And you can see that the shapes of these decision boundaries actually change quite a bit between the two metrics. So when you're looking at L1, these decision boundaries tend to follow the coordinate axes. And this is again because the L1 depends on our choice of coordinate system. Whereas the L2 doesn't really care about the coordinate axes, it just puts the boundaries where they should fall naturally. My confession is that each of these examples that I've shown you is actually from this interactive web demo that I built, where you can go and play with this k-nearest neighbor classifier on your own.
And this is really hard to work on a projector screen. So maybe we'll do that on your own time. So, let's just go back to here. Man, this is kind of embarrassing. Okay, that was way more trouble than it was worth. So, let's skip this, but I encourage you to go play with this in your browser. It's actually pretty fun and kind of nice to build intuition about how the decision boundary changes as you change the K and change your distance metric and all those sorts of things. Okay, so then the question is once you're actually trying to use this algorithm in practice, there's several choices you need to make. We talked about choosing different values of K. We talked about choosing different distance metrics. And the question becomes how do you actually make these choices for your problem and for your data? So, these choices, of things like K and the distance metric, we call hyperparameters, because they are not necessarily learned from the training data, instead these are choices about your algorithm that you make ahead of time and there's no way to learn them directly from the data. So, the question is how do you set these things in practice? And they turn out to be very problem-dependent. And the simple thing that most people do is simply try different values of hyperparameters for your data and for your problem, and figure out which one works best. There's a question? [student asking a question] So, the question is, where L1 distance might be preferable to using L2 distance? I think it's mainly problem-dependent, it's sort of difficult to say in which cases you think one might be better than the other. but I think that because L1 has this sort of coordinate dependency, it actually depends on the coordinate system of your data, if you know that you have a vector, and maybe the individual elements of the vector have meaning. Like maybe you're classifying employees for some reason and then the different elements of that vector correspond to different features or aspects of an employee. Like their salary or the number of years they've been working at the company or something like that. So I think when your individual elements actually have some meaning, is where I think maybe using L1 might make a little bit more sense. But in general, again, this is a hyperparameter and it really depends on your problem and your data so the best answer is just to try them both and see what works better. Even this idea of trying out different values of hyperparameters and seeing what works best, there are many different choices here. What exactly does it mean to try hyperparameters and see what works best? Well, the first idea you might think of is simply choosing the hyperparameters that give you the best accuracy or best performance on your training data. This is actually a really terrible idea. You should never do this. In the concrete case of the nearest neighbor classifier, for example, if we set K=1, we will always classify the training data perfectly. So if we use this strategy we'll always pick K=1, but, as we saw from the examples earlier, in practice it seems that setting K equals to larger values might cause us to misclassify some of the training data, but, in fact, lead to better performance on points that were not in the training data. And ultimately in machine learning we don't care about fitting the training data, we really care about how our classifier, or how our method, will perform on unseen data after training. So, this is a terrible idea, don't do this. 
So, another idea that you might think of is, maybe we'll take our full dataset and we'll split it into some training data and some test data. And now I'll try training my algorithm with different choices of hyperparameters on the training data, and then I'll go and apply that trained classifier on the test data, and now I will pick the set of hyperparameters that cause me to perform best on the test data. This seems like maybe a more reasonable strategy, but, in fact, this is also a terrible idea and you should never do this. Because, again, the point of machine learning systems is that we want to know how our algorithm will perform. So, the point of the test set is to give us some estimate of how our method will do on unseen data that's coming out from the wild. And if we use this strategy of training many different algorithms with different hyperparameters, and then selecting the one which does the best on the test data, then it's possible that we may have just picked the right set of hyperparameters that caused our algorithm to work quite well on this testing set, but now our performance on this test set will no longer be representative of our performance on new, unseen data. So, again, you should not do this, this is a bad idea, you'll get in trouble if you do this. What is much more common is to actually split your data into three different sets. You'll partition most of your data into a training set, and then you'll create a validation set and a test set. And now what we typically do is go and train our algorithm with many different choices of hyperparameters on the training set, evaluate on the validation set, and then pick the set of hyperparameters which performs best on the validation set. And now, after you've done all your development, you've done all your debugging, after you've done everything, then you take that best performing classifier on the validation set and run it once on the test set. And now that's the number that goes into your paper, that's the number that goes into your report, that's the number that is actually telling you how your algorithm is doing on unseen data. And it is actually really, really important that you keep a very strict separation between the validation data and the test data. So, for example, when we're working on research papers, we typically only touch the test set at the very last minute. So, when I'm writing papers, I tend to only touch the test set for my problem in maybe the week before the deadline or so, to really ensure that we're not being dishonest here and we're not reporting a number which is unfair. So, this is actually super important and you want to make sure to keep your test data quite under control. So another strategy for setting hyperparameters is called cross validation. And this is used a little bit more commonly for small datasets, not used so much in deep learning. So here the idea is we're going to take our dataset, as usual, hold out some test set to use at the very end, and now, for the rest of the data, rather than splitting it into a single training and validation partition, instead we can split our training data into many different folds. And now, in this way, we cycle through choosing which fold is going to be the validation set.
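As a rough sketch of that cross-validation loop (this reuses the knn_predict_one function sketched earlier, and the fold handling and the list of K values to try are illustrative assumptions, not assignment code):

import numpy as np

def cross_validate_k(X, y, k_choices, num_folds=5):
    # Split the training data into num_folds roughly equal folds.
    X_folds = np.array_split(X, num_folds)
    y_folds = np.array_split(y, num_folds)

    accuracies = {k: [] for k in k_choices}
    for k in k_choices:
        for i in range(num_folds):
            # Fold i plays the role of the validation set; the rest is training data.
            X_val, y_val = X_folds[i], y_folds[i]
            X_tr = np.concatenate(X_folds[:i] + X_folds[i + 1:])
            y_tr = np.concatenate(y_folds[:i] + y_folds[i + 1:])

            # knn_predict_one is the sketch from the k-nearest neighbor discussion above.
            y_pred = np.array([knn_predict_one(X_tr, y_tr, x, k=k) for x in X_val])
            accuracies[k].append(np.mean(y_pred == y_val))
    return accuracies   # e.g. pick the k whose mean validation accuracy is highest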
So now, in the example on the slide, we're using five fold cross validation, so you would train your algorithm with one set of hyperparameters on the first four folds, evaluate the performance on fold five, and then go and retrain your algorithm on folds one, two, three, and five, evaluate on fold four, and cycle through all the different folds. And, when you do it this way, you get much higher confidence about which hyperparameters are going to perform more robustly. So this is kind of the gold standard to use, but, in practice in deep learning, when we're training large models and training is very computationally expensive, this doesn't get used too much in practice. Question? [student asking a question] Yeah, so the question is, a little bit more concretely, what's the difference between the training and the validation set? So, if you think about the k-nearest neighbor classifier, then the training set is this set of images with labels, where we memorize the labels. And now, to classify an image, we're going to take the image and compare it to each element in the training data, and then transfer the label from the nearest training point. So our algorithm will memorize everything in the training set, and now we'll take each element of the validation set and compare it to each element in the training data, and then use this to determine what the accuracy of our classifier is when it's applied to the validation set. So this is the distinction between training and validation. Your algorithm is able to see the labels of the training set, but for the validation set, your algorithm doesn't have direct access to the labels. We only use the labels of the validation set to check how well our algorithm is doing. A question? [student asking a question] The question is, whether the test set, is it possible that the test set might not be representative of data out there in the wild? This definitely can be a problem in practice. The underlying statistical assumption here is that your data are all independently and identically distributed, so that all of your data points should be drawn from the same underlying probability distribution. Of course, in practice, this might not always be the case, and you definitely can run into cases where the test set might not be super representative of what you see in the wild. So this is kind of a problem that dataset creators and dataset curators need to think about. But when I'm creating datasets, for example, one thing I do is I'll go and collect a whole bunch of data all at once, using the exact same methodology for collecting the data, and then afterwards go and partition it randomly between train and test. One thing that can screw you up here is, maybe if you're collecting data over time and you make the earlier data that you collect first be the training data, and the later data that you collect be the test data, then you actually might run into this shift that could cause problems. But as long as this partition is random among your entire set of data points, then that's how we try to alleviate this problem in practice. So then, once you've gone through this cross validation procedure, then you end up with graphs that look something like this. So here, on the X axis, we are showing the value of K for a k-nearest neighbor classifier on some problem, and now on the Y axis, we are showing what the accuracy of our classifier is on some dataset for different values of K.
And you can see that, in this case, we've done five fold cross validation over the data, so, for each value of K we have five different examples of how well this algorithm is doing. And, actually, going back to the question about having some test sets that are better or worse for your algorithm, using K fold cross validation is maybe one way to help quantify that a little bit, in that we can see the variance of how this algorithm performs on the different validation folds. And that gives you some sense of, not just what is the best, but also what is the distribution of that performance. So, whenever you're training machine learning models you end up making plots like this, where they show you what your accuracy, or your performance, is as a function of your hyperparameters, and then you want to go and pick the model, or the set of hyperparameters, at the end of the day, that performs the best on the validation set. So, here we see that maybe about K=7 probably works best for this problem. So, k-nearest neighbor classifiers on images are actually almost never used in practice, because of all of these problems that we've talked about. So, one problem is that it's very slow at test time, which is the reverse of what we want, which we talked about earlier. Another problem is that these things like Euclidean distance, or L1 distance, are really not a very good way to measure distances between images. These sort of vectorial distance functions do not correspond very well to perceptual similarity between images, to how you perceive differences between images. So, in this example we've constructed, there's this image on the left of a girl, and then three different distorted images on the right, where we've blocked out her mouth, shifted the image down by a couple pixels, or tinted the entire image blue. And, actually, if you compute the Euclidean distance between the original and the boxed, the original and the shifted, and the original and the tinted, they all have the same L2 distance. Which is maybe not so good, because it sort of gives you the sense that the L2 distance is really not doing a very good job at capturing these perceptual differences between images. Another sort of problem with the k-nearest neighbor classifier has to do with something we call the curse of dimensionality. So, if you recall back to this viewpoint we had of the k-nearest neighbor classifier, it's sort of dropping paint around each of the training data points and using that to partition the space. So that means that if we expect the k-nearest neighbor classifier to work well, we kind of need our training examples to cover the space quite densely. Otherwise our nearest neighbors could actually be quite far away and might not actually be very similar to our testing points. And the problem is that actually densely covering the space means that we need a number of training examples which is exponential in the dimension of the problem. So this is very bad, exponential growth is always bad, and, basically, you're never going to get enough images to densely cover this space of pixels in this high dimensional space. So that's maybe another thing to keep in mind when you're thinking about using k-nearest neighbor. So, kind of the summary is that we're using k-nearest neighbor to introduce this idea of image classification. We have a training set of images and labels and then we use that to predict these labels on the test set. Question?
[student asking a question] Oh, sorry, the question is, what was going on with this picture? What are the green and the blue dots? So here, we have some training samples, which are represented by points, and the color of the dot represents the category of that training sample. So, if we're in one dimension, then maybe you only need four training samples to densely cover the space, but if we move to two dimensions, then we now need four times four, so 16, training examples to densely cover the space. And if we move to three, four, five, many more dimensions, the number of training examples that we need to densely cover the space grows exponentially with the dimension. So, this is kind of giving you the sense that maybe in two dimensions we might have this kind of funny curved shape, or you might have sort of arbitrary manifolds of labels in different dimensional spaces. Because the k-nearest neighbor algorithm doesn't really make any assumptions about these underlying manifolds, the only way it can perform properly is if it has quite a dense sample of training points to work with. So, this is kind of the overview of k-nearest neighbors, and you'll get a chance to actually implement this and try it out on images in the first assignment. So, if there are any last minute questions about k-nearest neighbors, I'm going to move on to the next topic. Question? [student is asking a question] Sorry, say that again. [student is asking a question] Yeah, so the question is, why do these images have the same L2 distance? And the answer is that I carefully constructed them to have the same L2 distance. [laughing] But it's just giving you the sense that the L2 distance is not a very good measure of similarity between images, and these images are actually all different from each other in quite disparate ways. If you're using k-nearest neighbors, then the only thing you have to measure distance between images is this single distance metric. And this kind of gives you an example where that distance metric is actually not capturing the full description of distance or difference between images. So, in this case, I just sort of carefully constructed these translations and these offsets to match exactly. Question? [student asking a question] So, the question is, maybe this is actually good, because all of these things actually have the same distance to the image. That's maybe true for this example, but I think you could also construct examples where maybe we have two original images, and then by putting the boxes in the right places or tinting them, we could cause it to be nearer to pretty much anything that you want, right? Because in this example, we can kind of do arbitrary shifting and tinting to change these distances nearly arbitrarily, without changing the perceptual nature of these images. So, I think that this can actually screw you up if you have many different original images. Question? [student is asking a question] The question is, whether or not it's common in real-world cases to go back and retrain on the entire dataset once you've found the best hyperparameters? So, people do sometimes do this in practice, but it's somewhat a matter of taste. If you're really rushing for that deadline and you've really got to get this model out the door, then, if it takes a long time to retrain the model on the whole dataset, then maybe you won't do it.
But if you have a little bit more time to spare and a little bit more compute to spare, and you want to squeeze out that maybe that extra 1% of performance, then that is a trick you can use. So we kind of saw that the k-nearest neighbor has a lot of the nice properties of machine learning algorithms, but in practice it's not so great, and really not used very much in images. So the next thing I'd like to talk about is linear classification. And linear classification is, again, quite a simple learning algorithm, but this will become super important and help us build up to whole neural networks and whole convolutional networks. So, one analogy people often talk about when working with neural networks is we think of them as being kind of like Lego blocks. That you can have different kinds of components of neural networks and you can stick these components together to build these large different towers of convolutional networks. One of the most basic building blocks that we'll see in different types of deep learning applications is this linear classifier. So, I think it's actually really important to have a good understanding of what's happening with linear classification. Because these will end up generalizing quite nicely to whole neural networks. So another example of kind of this modular nature of neural networks comes from some research in our own lab on image captioning, just as a little bit of a preview. So here the setup is that we want to input an image and then output a descriptive sentence describing the image. And the way this kind of works is that we have one convolutional neural network that's looking at the image, and a recurrent neural network that knows about language. And we can kind of just stick these two pieces together like Lego blocks and train the whole thing together and end up with a pretty cool system that can do some non-trivial things. And we'll work through the details of this model as we go forward in the class, but this just gives you the sense that, these deep neural networks are kind of like Legos and this linear classifier is kind of like the most basic building blocks of these giant networks. But that's a little bit too exciting for lecture two, so we have to go back to CIFAR-10 for the moment. [laughing] So, recall that CIFAR-10 has these 50,000 training examples, each image is 32 by 32 pixels and three color channels. In linear classification, we're going to take a bit of a different approach from k-nearest neighbor. So, the linear classifier is one of the simplest examples of what we call a parametric model. So now, our parametric model actually has two different components. It's going to take in this image, maybe, of a cat on the left, and this, that we usually write as X for our input data, and also a set of parameters, or weights, which is usually called W, also sometimes theta, depending on the literature. And now we're going to write down some function which takes in both the data, X, and the parameters, W, and this'll spit out now 10 numbers describing what are the scores corresponding to each of those 10 categories in CIFAR-10. With the interpretation that, like the larger score for cat, indicates a larger probability of that input X being cat. And now, a question? [student asking a question] Sorry, can you repeat that? [student asking a question] Oh, so the question is what is the three? The three, in this example, corresponds to the three color channels, red, green, and blue. 
Because we typically work on color images, that's nice information that you don't want to throw away. So, in the k-nearest neighbor setup there were no parameters; instead, we just kind of keep around the whole training data, the whole training set, and use that at test time. But now, in a parametric approach, we're going to summarize our knowledge of the training data and stick all that knowledge into these parameters, W. And now, at test time, we no longer need the actual training data, we can throw it away. We only need these parameters, W, at test time. So this allows our models to be more efficient and actually run on maybe small devices like phones. So, kind of the whole story in deep learning is coming up with the right structure for this function, F. You can imagine writing down different functional forms for how to combine weights and data in different complex ways, and these could correspond to different network architectures. But the simplest possible example of combining these two things is just, maybe, to multiply them. And this is a linear classifier. So here our F of X, W is just equal to W times X. Probably the simplest equation you can imagine. So here, if you kind of unpack the dimensions of these things, we recall that our image was maybe 32 by 32 by 3 values. So then, we're going to take those values and stretch them out into a long column vector that has 3,072 entries. And now we want to end up with 10 class scores. We want to end up with 10 numbers for this image giving us the scores for each of the 10 categories. Which means that now our matrix, W, needs to be 10 by 3,072, so that once we multiply these two things out, we'll end up with a single column vector, 10 by one, giving us our 10 class scores. Also, you'll typically see this, we'll often add a bias term, which will be a constant vector of 10 elements that does not interact with the training data, and instead just gives us some sort of data independent preferences for some classes over another. So you might imagine that if your dataset was unbalanced and had many more cats than dogs, for example, then the bias element corresponding to cat would be higher than the other ones. So if you think about pictorially what this function is doing, in this figure we have an example on the left of a simple two by two image, so it has four pixels total. So the way that the linear classifier works is that we take this two by two image, we stretch it out into a column vector with four elements, and now, in this example, we are just restricting to three classes, cat, dog, and ship, because you can't fit 10 on a slide, and now our weight matrix is going to be three by four, so we have three classes and four pixels. And now, again, we have a three element bias vector that gives us data independent bias terms for each category. Now we see that the cat score is going to be the inner product between the pixels of our image and this row in the weight matrix, added together with this bias term. So, when you look at it this way, you can kind of understand linear classification as almost a template matching approach, where each of the rows in this matrix corresponds to some template of the image. And now the inner product, or dot product, between the row of the matrix and the column giving the pixels of the image, computing this dot product, kind of gives us a similarity between this template for the class and the pixels of our image.
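In code, that forward pass is just a matrix-vector multiply plus a bias. Here is a sketch where random numbers stand in for a real image and a real trained W; the shapes match the CIFAR-10 example, everything else is made up:

import numpy as np

num_classes, num_pixels = 10, 32 * 32 * 3       # 10 classes, 3,072 pixel values

x = np.random.randn(num_pixels)                 # a flattened image (stand-in)
W = np.random.randn(num_classes, num_pixels)    # one template (row) per class
b = np.random.randn(num_classes)                # data-independent bias per class

scores = W.dot(x) + b                           # f(x, W) = Wx + b, the 10 class scores
print(scores.shape)                             # (10,)
print(int(scores.argmax()))                     # index of the highest-scoring class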
And the bias just, again, gives you this data-independent scaling offset for each of the classes. If we think about linear classification from this viewpoint of template matching, we can actually take the rows of that weight matrix and unravel them back into images and actually visualize those templates as images. And this gives us some sense of what a linear classifier might actually be doing to try to understand our data. So, in this example, we've gone ahead and trained a linear classifier on our images. And now on the bottom we're visualizing the rows of that learned weight matrix corresponding to each of the 10 categories in CIFAR-10. And in this way we kind of get a sense for what's going on in these images. So, for example, on the bottom left, we see that the template for the plane class kind of consists of this blue blob, this kind of blobby thing in the middle and maybe blue in the background, which gives you the sense that this linear classifier for plane is maybe looking for blue stuff and blobby stuff, and those features are going to cause the classifier to like planes more. Or if we look at this car example, we kind of see that there's a red blobby thing through the middle and a blue blobby thing at the top that maybe is kind of a blurry windshield. But this is a little bit weird, this doesn't really look like a car. No individual car actually looks like this. So the problem is that the linear classifier is only learning one template for each class. So if there are variations in how that class might appear, it's trying to average out all those different variations, all those different appearances, and use just one single template to recognize each of those categories. We can also see this pretty explicitly in the horse classifier. So in the horse classifier we see green stuff on the bottom, because horses are usually on grass. And then, if you look carefully, the horse actually seems to have maybe two heads, one head on each side. And I've never seen a horse with two heads. But the linear classifier is just doing the best that it can, because it's only allowed to learn one template per category. And as we move forward into neural networks and more complex models, we'll be able to achieve much better accuracy because they no longer have this restriction of just learning a single template per category. Another viewpoint of the linear classifier is to go back to this idea of images as points in a high dimensional space. And you can imagine that each of our images is something like a point in this high dimensional space. And now the linear classifier is putting in these linear decision boundaries to try to draw linear separation between one category and the rest of the categories. So maybe up on the upper-left hand side we see these training examples of airplanes, and throughout the process of training, the linear classifier will go and try to draw this blue line to separate out, with a single line, the airplane class from all the rest of the classes. And it's actually kind of fun, if you watch during the training process, these lines will start out randomly and then go and snap into place to try to separate the data properly. But when you think about linear classification in this way, from this high dimensional point of view, you can start to see again what are some of the problems that might come up with linear classification. And it's not too hard to construct examples of datasets where a linear classifier will totally fail.
So, one example, on the left here, is that, suppose we have a dataset of two categories, and these are all maybe somewhat artificial, but maybe our dataset has two categories, blue and red. And the blue category is anything where the number of pixels in the image which are greater than zero is odd. And anything where the number of pixels greater than zero is even, we want to classify as the red category. So if you actually go and draw what these different decision regions look like in the plane, you can see that our blue class, with an odd number of pixels, is going to be these two quadrants in the plane, and even will be the opposite two quadrants. So now, there's no way that we can draw a single line to separate the blue from the red. So this would be an example where a linear classifier would really struggle. And this is maybe not such an artificial thing after all. Instead of counting pixels, maybe we're actually trying to count whether the number of animals or people in an image is odd or even. So this kind of parity problem of separating odds from evens is something that linear classification traditionally really struggles with. Other situations where a linear classifier really struggles are multimodal situations. So here on the right, maybe our blue category has these three different islands of where the blue category lives, and then everything else is some other category. So something like the horses we saw in the previous example is a case where this actually might be happening in practice, where there's maybe one island in pixel space of horses looking to the left, and another island of horses looking to the right. And now there's no good way to draw a single linear boundary between these two isolated islands of data. So anytime you have multimodal data, like one class that can appear in different regions of space, that is another place where linear classifiers might struggle. So there are kind of a lot of problems with linear classifiers, but it is a super simple algorithm, super nice and easy to interpret and easy to understand. So you'll actually be implementing these things on your first homework assignment. At this point, we've talked about what the functional form corresponding to a linear classifier is. And we've seen that this functional form of matrix vector multiply corresponds to this idea of template matching and learning a single template for each category in your data. And then once we have this trained matrix, you can use it to actually go and get your scores for any new example. But what we have not told you is how you actually go about choosing the right W for your dataset. We've just talked about what the functional form is and what is going on with this thing. So that's something we'll really focus on next time. And next lecture we'll talk about what are the strategies and algorithms for choosing the right W. And this will lead us to questions of loss functions and optimization and eventually ConvNets. So, that's a bit of a preview for next week. And that's all we have for today.
- Okay so welcome to CS 231N Lecture three. Today we're going to talk about loss functions and optimization, but as usual, before we get to the main content of the lecture, there's a couple administrative things to talk about. So the first thing is that assignment one has been released. You can find the link up on the website. And since we were a little bit late in getting this assignment out to you guys, we've decided to change the due date to Thursday, April 20th at 11:59 p.m. This will give you a full two weeks from the assignment release date to go and actually work on and finish it, so we'll update the syllabus for this new due date a little bit later today. And as a reminder, when you complete the assignment, you should go turn in the final zip file on Canvas so we can grade it and get your grades back as quickly as possible. So the next thing is to always check out Piazza for interesting administrative stuff. So this week I wanted to highlight that we have several example project ideas as a pinned post on Piazza. So we went out and solicited example project ideas from various people in the Stanford community or affiliated with Stanford, and they came up with some interesting suggestions for projects that they might want students in the class to work on. So check out this pinned post on Piazza, and if you want to work on any of these projects, then feel free to contact the project mentors directly about these things. Additionally, we posted office hours on the course website; this is a Google calendar, so this is something that people have been asking about and now it's up there. The final administrative note is about Google Cloud. As a reminder, because we're supported by Google Cloud in this class, we're able to give each of you an additional $100 credit for Google Cloud to work on your assignments and projects, and the exact details of how to redeem that credit will go out later today, most likely on Piazza. So if there's, I guess if there's no questions about administrative stuff, then we'll move on to course content. Okay cool. So recall from last time, in lecture two, we were really talking about the challenges of recognition and trying to hone in on this idea of a data-driven approach. We talked about this idea of image classification, talked about why it's hard: there's this semantic gap between the giant grid of numbers that the computer sees and the actual image that you see. We talked about various challenges around this, regarding illumination, deformation, et cetera, and why this is actually a really, really hard problem even though it's super easy for people to do with their human eyes and human visual system. Then also recall last time we talked about the k-nearest neighbor classifier as kind of a simple introduction to this whole data-driven mindset. We talked about the CIFAR-10 dataset, where you can see an example of these images on the upper left here, where CIFAR-10 gives you these 10 different categories, airplane, automobile, whatnot, and we talked about how the k-nearest neighbor classifier can be used to learn decision boundaries to separate these data points into classes based on the training data. This also led us to a discussion of the idea of cross validation and setting hyperparameters by dividing your data into train, validation and test sets. Then also recall last time we talked about linear classification as the first sort of building block as we move toward neural networks.
Recall that the linear classifier is an example of a parametric classifier, where all of our knowledge about the training data gets summarized into this parameter matrix W that is set during the process of training. And this linear classifier, recall, is super simple, where we're going to take the image and stretch it out into a long vector. So here the image is x, and then we take that image, which might be 32 by 32 by 3 pixels, and stretch it out into a long column vector of 32 times 32 times 3 entries, where the 32 and 32 are the height and width, and the 3 gives you the three color channels, red, green, blue. Then there exists some parameter matrix, W, which will take this long column vector representing the image pixels and convert it into 10 numbers giving scores for each of the 10 classes, in the case of CIFAR-10. Where we kind of had this interpretation that larger values of those scores, so a larger value for the cat class, means the classifier thinks that the cat is more likely for that image, and lower values for maybe the dog or car class indicate lower probabilities of those classes being present in the image. Also, I think this point was a little bit unclear last time: linear classification has this interpretation as learning templates per class. If you look at the diagram on the lower left, then for every pixel in the image, and for every one of our 10 classes, there exists some entry in this matrix W telling us how much that pixel influences that class. So that means that each of these rows in the matrix W ends up corresponding to a template for the class. Each of those rows again corresponds to a weighting between the pixel values of the image and that class, so if we take that row and unravel it back into an image, then we can visualize the learned template for each of these classes. We also had this interpretation of linear classification as learning linear decision boundaries between pixels in some high dimensional space, where the dimensions of the space correspond to the pixel intensity values of the image. So this is kind of where we left off last time. And so where we kind of stopped, where we ended up last time, is that we got this idea of a linear classifier, and we didn't talk about how to actually choose the W, how to actually use the training data to determine which value of W would be best. So kind of where we stopped off is that for some setting of W, we can use this W to come up with our 10 class scores for any image. And some of these class scores might be better or worse. So here in this simple example, we've shown maybe just a training dataset of three images, along with the 10 class scores predicted for some value of W for those images. And you can see that some of these scores are better or worse than others. So for example, the image on the left, if you look at it, is actually a cat, because you're a human and you can tell these things, but if we look at the assigned probabilities, well, not probabilities but scores, then the classifier, maybe for this setting of W, gave the cat class a score of 2.9 for this image, whereas the frog class gave 3.78.
- So maybe the classifier is not doing so well on this image, that's bad, we wanted the true class to actually have the highest class score, whereas for some of these other examples, like the car for example, you see that the automobile class has a score of six which is much higher than any of the others, so that's good. And the frog, the predicted scores are maybe negative four, which is much lower than all the other ones, so that's actually bad. So this is kind of a hand wavy approach, just kind of looking at the scores and eyeballing which ones are good and which ones are bad. But to actually write algorithms for these things and to automatically determine which W will be best, we need some way to quantify the badness of any particular W. And that's this function that takes in a W, looks at the scores and then tells us how bad quantitatively is that W, is something that we'll call a loss function. And in this lecture we'll see a couple examples of different loss functions that you can use for this image classification problem. So then once we've got this idea of a loss function, this allows us to quantify for any given value of W, how good or bad is it? But then we actually need to find and come up with an efficient procedure for searching through the space of all possible Ws and actually come up with what is the correct value of W that is the least bad, and this process will be an optimization procedure and we'll talk more about that in this lecture. So I'm going to shrink this example a little bit because 10 classes is a little bit unwieldy. So we'll kind of work with this tiny toy data set of three examples and three classes going forward in this lecture. So again, in this example, the cat is maybe not so correctly classified, the car is correctly classified, and the frog, this setting of W got this frog image totally wrong, because the frog score is much lower than others. So to formalize this a little bit, usually when we talk about a loss function, we imagine that we have some training data set of xs and ys, usually N examples of these, where the xs are the inputs to the algorithm, in the image classification case the xs would be the actual pixel values of your images, and the ys will be the things you want your algorithm to predict, we usually call these the labels or the targets. So in the case of image classification, remember we're trying to categorize each image for CIFAR-10 to one of 10 categories, so the label y here will be an integer between one and 10 or maybe between zero and nine depending on what programming language you're using, but it'll be an integer telling you what is the correct category for each one of those images x. And now we'll write L_i to denote our per-example loss. So then we have this prediction function f, which takes in our example x and our weight matrix W and makes some prediction for y, in the case of image classification these will be our 10 numbers. Then we'll define some loss function L_i which will take in the predicted scores coming out of the function f together with the true target or label Y and give us some quantitative value for how bad those predictions are for that training example. And now the final loss L will be the average of these losses summed over the entire data set, over each of the N examples in our data set. So this is actually a very general formulation, and actually extends even beyond image classification.
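As a rough sketch of that general setup (all the names here are placeholders), the full loss is just the average of a per-example loss over the N training pairs:

    def average_loss(per_example_loss, f, W, xs, ys):
        # f(x, W) produces the class scores for one example;
        # per_example_loss(scores, y) is L_i for that example;
        # the full loss L is the mean of L_i over all N training pairs
        return sum(per_example_loss(f(x, W), y) for x, y in zip(xs, ys)) / len(xs)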
Kind of as we move forward and see other tasks, other examples of tasks and deep learning, the kind of generic setup is that for any task you have some xs and ys and you want to write down some loss function that quantifies exactly how happy you are with your particular parameter settings W and then you'll eventually search over the space of W to find the W that minimizes the loss on your training data. So as a first example of a concrete loss function that is a nice thing to work with in image classification, we'll talk about the multi-class SVM loss. You may have seen the binary SVM, our support vector machine in CS 229 and the multiclass SVM is a generalization of that to handle multiple classes. In the binary SVM case as you may have seen in 229, you only had two classes, each example x was going to be classified as either positive or negative example, but now we have 10 categories, so we need to generalize this notion to handle multiple classes. So this loss function has kind of a funny functional form, so we'll walk through it in a bit more, in quite a bit of detail over the next couple of slides. But what this is saying is that the loss L_i for any individual example, the way we'll compute it is we're going to perform a sum over all of the categories, Y, except for the true category, Y_i, so we're going to sum over all the incorrect categories, and then we're going to compare the score of the correct category, and the score of the incorrect category, and now if the score for the correct category is greater than the score of the incorrect category, greater than the incorrect score by some safety margin that we set to one, if that's the case that means that the true score is much, or the score for the true category is if it's much larger than any of the false categories, then we'll get a loss of zero. And we'll sum this up over all of the incorrect categories for our image and this will give us our final loss for this one example in the data set. And again we'll take the average of this loss over the whole training data set. So this kind of like if then statement, like if the true class score is much larger than the others, this kind of if then formulation we often compactify into this single max of zero S_j minus S_Yi plus one thing, but I always find that notation a little bit confusing, and it always helps me to write it out in this sort of case based notation to figure out exactly what the two cases are and what's going on. And by the way, this style of loss function where we take max of zero and some other quantity is often referred to as some type of a hinge loss, and this name comes from the shape of the graph when you go and plot it, so here the x axis corresponds to the S_Yi, that is the score of the true class for some training example, and now the y axis is the loss, and you can see that as the score for the true category for this example increases, then the loss will go down linearly until we get to above this safety margin, after which the loss will be zero because we've already correctly classified this example. So let's, oh, question? - [Student] Sorry, in terms of notation what is S underscore Yi? Is that your right score? - Yeah, so the question is in terms of notation, what is S and what is SYI in particular, so the Ss are the predicted scores for the classes that are coming out of the classifier. So if one is the cat class and two is the dog class then S1 and S2 would be the cat and dog scores respectively. 
And remember we said that Yi was the category of the ground truth label for the example which is some integer. So then S sub Y sub i, sorry for the double subscript, that corresponds to the score of the true class for the i-th example in the training set. Question? - [Student] So what exactly is this computing? - Yeah the question is what exactly is this computing here? It's a little bit funny, I think it will become more clear when we walk through an explicit example, but in some sense what this loss is saying is that we are happy if the true score is much higher than all the other scores. It needs to be higher than all the other scores by some safety margin, and if the true score is not high enough, greater than any of the other scores, then we will incur some loss and that would be bad. So this might make a little bit more sense if we walk through an explicit example for this tiny three example data set. So here remember I've sort of removed the case space notation and just switching back to the zero one notation, and now if we look at, if we think about computing this multi-class SVM loss for just this first training example on the left, then remember we're going to loop over all of the incorrect classes, so for this example, cat is the correct class, so we're going to loop over the car and frog classes, and now for car, we're going to compare the, we're going to look at the car score, 5.1, minus the cat score, 3.2 plus one, when we're comparing cat and car we expect to incur some loss here because the car score is greater than the cat score which is bad. So for this one class, for this one example, we'll incur a loss of 2.9, and then when we go and compare the cat score and the frog score we see that cat is 3.2, frog is minus 1.7, so cat is more than one greater than frog, which means that between these two classes we incur zero loss. So then the multiclass SVM loss for this training example will be the sum of the losses across each of these pairs of classes, which will be 2.9 plus zero which is 2.9. Which is sort of saying that 2.9 is a quantitative measure of how much our classifier screwed up on this one training example. And then if we repeat this procedure for this next car image, then again the true class is car, so we're going to iterate over all the other categories when we compare the car and the cat score, we see that car is more than one greater than cat so we get no loss here. When we compare car and frog, we again see that the car score is more than one greater than frog, so we get again no loss here, and our total loss for this training example is zero. And now I think you hopefully get the picture by now, but, if you go look at frog, now frog, we again compare frog and cat, incur quite a lot of loss because the frog score is very low, compare frog and car, incur a lot of loss because the score is very low, and then our loss for this example is 12.9. And then our final loss for the entire data set is the average of these losses across the different examples, so when you sum those out it comes to about 5.3. So then it's sort of, this is our quantitative measure that our classifier is 5.3 bad on this data set. Is there a question? - [Student] How do you choose the plus one? - Yeah, the question is how do you choose the plus one? That's actually a really great question, it seems like kind of an arbitrary choice here, it's the only constant that appears in the loss function and that seems to offend your aesthetic sensibilities a bit maybe. 
But it turns out that this is somewhat of an arbitrary choice, because we don't actually care about the absolute values of the scores in this loss function, we only care about the relative differences between the scores. We only care that the correct score is much greater than the incorrect scores. So in fact if you imagine scaling up your whole W up or down, then it kind of rescales all the scores correspondingly and if you kind of work through the details and there's a detailed derivation of this in the course notes online, you find this choice of one actually doesn't matter. That this free parameter of one kind of washes out and is canceled with this scale, like the overall setting of the scale in W. And again, check the course notes for a bit more detail on that. So then I think it's kind of useful to think about a couple different questions to try to understand intuitively what this loss is doing. So the first question is what's going to happen to the loss if we change the scores of the car image just a little bit? Any ideas? Everyone's too scared to ask a question? Answer? [student speaking faintly] - Yeah, so the answer is that if we jiggle the scores for this car image a little bit, the loss will not change. So the SVM loss, remember, the only thing it cares about is getting the correct score to be greater than one more than the incorrect scores, but in this case, the car score is already quite a bit larger than the others, so if the scores for this class changed for this example changed just a little bit, this margin of one will still be retained and the loss will not change, we'll still get zero loss. The next question, what's the min and max possible loss for SVM? [student speaking faintly] Oh I hear some murmurs. So the minimum loss is zero, because if you can imagine that across all the classes, if our correct score was much larger then we'll incur zero loss across all the classes and it will be zero, and if you think back to this hinge loss plot that we had, then you can see that if the correct score goes very, very negative, then we could incur potentially infinite loss. So the min is zero and the max is infinity. Another question, sort of when you initialize these things and start training from scratch, usually you kind of initialize W with some small random values, as a result your scores tend to be sort of small uniform random values at the beginning of training. And then the question is that if all of your Ss, if all of the scores are approximately zero and approximately equal, then what kind of loss do you expect when you're using multiclass SVM? - [Student] Number of classes minus one. - Yeah, so the answer is number of classes minus one, because remember that if we're looping over all of the incorrect classes, so we're looping over C minus one classes, within each of those classes the two Ss will be about the same, so we'll get a loss of one because of the margin and we'll get C minus one. So this is actually kind of useful because when you, this is a useful debugging strategy when you're using these things, that when you start off training, you should think about what you expect your loss to be, and if the loss you actually see at the start of training at that first iteration is not equal to C minus one in this case, that means you probably have a bug and you should go check your code, so this is actually kind of a useful thing to be checking in practice. 
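That sanity check is easy to see numerically; here's a tiny, purely illustrative snippet showing that with small, roughly equal scores the per-example SVM loss comes out to C minus one:

    import numpy as np

    C = 3                                  # number of classes
    scores = 1e-6 * np.random.randn(C)     # small, roughly equal scores at initialization
    y = 0                                  # whichever class happens to be correct
    margins = np.maximum(0, scores - scores[y] + 1.0)
    margins[y] = 0                         # don't count the correct class itself
    print(margins.sum())                   # about 2.0, i.e. C - 1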
Another question, what happens if, so I said we're summing an SVM over the incorrect classes, what happens if the sum is also over the correct class if we just go over everything? - [Student] The loss increases by one. - Yeah, so the answer is that the loss increases by one. And I think the reason that we do this in practice is because normally loss of zero is kind of, has this nice interpretation that you're not losing at all, so that's nice, so I think your answers wouldn't really change, you would end up finding the same classifier if you actually looped over all the categories, but if just by conventions we omit the correct class so that our minimum loss is zero. So another question, what if we used mean instead of sum here? - [Student] Doesn't change. - Yeah, the answer is that it doesn't change. So the number of classes is going to be fixed ahead of time when we select our data set, so that's just rescaling the whole loss function by a constant, so it doesn't really matter, it'll sort of wash out with all the other scale things because we don't actually care about the true values of the scores, or the true value of the loss for that matter. So now here's another example, what if we change this loss formulation and we actually added a square term on top of this max? Would this end up being the same problem or would this be a different classification algorithm? - [Student] Different. - Yes, this would be different. So here the idea is that we're kind of changing the trade-offs between good and badness in kind of a nonlinear way, so this would end up actually computing a different loss function. This idea of a squared hinge loss actually does get used sometimes in practice, so that's kind of another trick to have in your bag when you're making up your own loss functions for your own problems. So now you'll end up, oh, was there a question? - [Student] Why would you use a squared loss instead of a non-squared loss? - Yeah, so the question is why would you ever consider using a squared loss instead of a non-squared loss? And the whole point of a loss function is to kind of quantify how bad are different mistakes. And if the classifier is making different sorts of mistakes, how do we weigh off the different trade-offs between different types of mistakes the classifier might make? So if you're using a squared loss, that sort of says that things that are very, very bad are now going to be squared bad so that's like really, really bad, like we don't want anything that's totally catastrophically misclassified, whereas if you're using this hinge loss, we don't actually care between being a little bit wrong and being a lot wrong, being a lot wrong kind of like, if an example is a lot wrong, and we increase it and make it a little bit less wrong, that's kind of the same goodness as an example which was only a little bit wrong and then increasing it to be a little bit more right. So that's a little bit hand wavy, but this idea of using a linear versus a square is a way to quantify how much we care about different categories of errors. And this is definitely something that you should think about when you're actually applying these things in practice, because the loss function is the way that you tell your algorithm what types of errors you care about and what types of errors it should trade off against. So that's actually super important in practice depending on your application. 
So here's just a little snippet of sort of vectorized code in numpy, and you'll end up implementing something like this for the first assignment, but this kind of gives you the sense that this sum is actually like pretty easy to implement in numpy, it only takes a couple lines of vectorized code. And you can see in practice, like one nice trick is that we can actually go in here and zero out the margins corresponding to the correct class, and that makes it easy to then just, that's sort of one nice vectorized trick to skip, iterate over all but one class. You just kind of zero out the one you want to skip and then compute the sum anyway, so that's a nice trick you might consider using on the assignment. So now, another question about this loss function. Suppose that you were lucky enough to find a W that has loss of zero, you're not losing at all, you're totally winning, this loss function is crushing it, but then there's a question, is this W unique or were there other Ws that could also have achieved zero loss? - [Student] There are other Ws. - Answer, yeah, so there are definitely other Ws. And in particular, because we talked a little bit about this thing of scaling the whole problem up or down depending on W, so you could actually take W multiplied by two and this doubled W (Is it quad U now? I don't know.) [laughing] This would also achieve zero loss. So as a concrete example of this, you can go back to your favorite example and maybe work through the numbers a little bit later, but if you're taking W and we double W, then the margins between the correct and incorrect scores will also double. So that means that if all these margins were already greater than one, and we doubled them, they're still going to be greater than one, so you'll still have zero loss. And this is kind of interesting, because if our loss function is the way that we tell our classifier which W we want and which W we care about, this is a little bit weird, now there's this inconsistency and how is the classifier to choose between these different versions of W that all achieve zero loss? And that's because what we've done here is written down only a loss in terms of the data, and we've only told our classifier that it should try to find the W that fits the training data. But really in practice, we don't actually care that much about fitting the training data, the whole point of machine learning is that we use the training data to find some classifier and then we'll apply that thing on test data. So we don't really care about the training data performance, we really care about the performance of this classifier on test data. So as a result, if the only thing we're telling our classifier to do is fit the training data, then we can lead ourselves into some of these weird situations sometimes, where the classifier might have unintuitive behavior. So a concrete, canonical example of this sort of thing, by the way, this is not linear classification anymore, this is a little bit of a more general machine learning concept, is that suppose we have this data set of blue points, and we're going to fit some curve to the training data, the blue points, then if the only thing we've told our classifier to do is to try and fit the training data, it might go in and have very wiggly curves to try to perfectly classify all of the training data points. But this is bad, because we don't actually care about this performance, we care about the performance on the test data. 
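Going back to that vectorized snippet for a second, a reconstruction of it might look roughly like the following, simplified to take the scores directly and run on the three examples from before. The cat column is the one quoted in lecture; the other two columns are filled in so that the quoted losses of 0 and 12.9 come out, and may not match the slide exactly:

    import numpy as np

    def svm_loss_vectorized(scores, y, margin=1.0):
        # scores: class scores for one example; y: index of the correct class
        margins = np.maximum(0, scores - scores[y] + margin)
        margins[y] = 0          # zero out the correct class so it doesn't count in the sum
        return margins.sum()

    # classes: 0 = cat, 1 = car, 2 = frog
    examples = [(np.array([3.2, 5.1, -1.7]), 0),
                (np.array([1.3, 4.9,  2.0]), 1),
                (np.array([2.2, 2.5, -3.1]), 2)]

    losses = [svm_loss_vectorized(s, y) for s, y in examples]
    print(losses)           # roughly [2.9, 0.0, 12.9]
    print(np.mean(losses))  # about 5.27, the "5.3 bad" from the lecture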
So now if we have some new data come in that sort of follows the same trend, then this very wiggly blue line is going to be totally wrong. And in fact, what we probably would have preferred the classifier to do was maybe predict this straight green line, rather than this very complex wiggly line to perfectly fit all the training data. And this is a core fundamental problem in machine learning, and the way we usually solve it is with this concept of regularization. So here we're going to add an additional term to the loss function. In addition to the data loss, which will tell our classifier that it should fit the training data, we'll also typically add another term to the loss function called a regularization term, which encourages the model to somehow pick a simpler W, where the concept of simple kind of depends on the task and the model. There's this whole idea of Occam's Razor, which is this fundamental idea in scientific discovery more broadly, which is that if you have many different competing hypotheses that could explain your observations, you should generally prefer the simpler one, because that's the explanation that is more likely to generalize to new observations in the future. And the way we operationalize this intuition in machine learning is typically through some explicit regularization penalty that's often written down as R. So then your standard loss function usually has these two terms, a data loss and a regularization loss, and there's some hyper-parameter here, lambda, that trades off between the two. And we talked about hyper-parameters and cross-validation in the last lecture, so this regularization hyper-parameter lambda will be one of the more important ones that you'll need to tune when training these models in practice. Question? - [Student] What does that lambda R W term have to do with [speaking faintly]. - Yeah, so the question is, what's the connection between this lambda R W term and actually forcing this wiggly line to become a straight green line? I didn't want to go through the derivation on this because I thought it would lead us too far astray, but you can imagine, maybe you're doing a regression problem in terms of different polynomial basis functions, and if you're adding this regularization penalty, maybe the model has access to polynomials of very high degree, but through this regularization term you could encourage the model to prefer polynomials of lower degree, if they fit the data properly, or if they fit the data relatively well. So you could imagine there's two ways to do this, either you can constrain your model class to just not contain the more powerful, more complex models, or you can add this soft penalty where the model still has access to more complex models, maybe high degree polynomials in this case, but you add this soft constraint saying that if you want to use these more complex models, you need to overcome this penalty for using their complexity. So that's the connection here, that is not quite linear classification, this is the picture that many people have in mind when they think about regularization at least. So there's actually a lot of different types of regularization that get used in practice. The most common one is probably L2 regularization, or weight decay. But there's a lot of other ones that you might see. This L2 regularization is just the Euclidean norm of this weight vector W, or sometimes the squared norm. Or sometimes half the squared norm because it makes your derivatives work out a little bit nicer.
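Putting the two pieces together, a minimal sketch of that full loss with an L2 penalty might look like this (the hinge-style per-example loss, the shapes, and lam for the regularization strength lambda are all just illustrative):

    import numpy as np

    def svm_per_example(scores, y):
        # per-example multi-class SVM (hinge) loss, as before
        m = np.maximum(0, scores - scores[y] + 1.0)
        m[y] = 0
        return m.sum()

    def full_loss(W, X, y, lam):
        # data loss: average per-example loss over the training set
        data_loss = np.mean([svm_per_example(W.dot(x), yi) for x, yi in zip(X, y)])
        # regularization loss: L2 penalty on the weights, which ignores the data entirely
        reg_loss = lam * np.sum(W * W)
        return data_loss + reg_loss

    # toy usage with made-up shapes; lam is what you'd tune by cross-validation
    W = 0.001 * np.random.randn(3, 4)
    X = np.random.randn(5, 4)
    y = np.random.randint(3, size=5)
    print(full_loss(W, X, y, lam=0.1))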
But the idea of L2 regularization is you're just penalizing the Euclidean norm of this weight vector. You might also sometimes see L1 regularization, where we're penalizing the L1 norm of the weight vector, and the L1 regularization has some nice properties like encouraging sparsity in this matrix W. Some other things you might see would be this elastic net regularization, which is some combination of L1 and L2. You sometimes see max norm regularization, penalizing the max norm rather than the L1 or L2 norm. But these sorts of regularizations are things that you see not just in deep learning, but across many areas of machine learning and even optimization more broadly. In some later lectures, we'll also see some types of regularization that are more specific to deep learning. For example dropout, we'll see in a couple lectures, or batch normalization, stochastic depth, these things get kind of crazy in recent years. But the whole idea of regularization is just anything that you do to your model that sort of penalizes somehow the complexity of the model, rather than explicitly trying to fit the training data. Question? [student speaking faintly] Yeah, so the question is, how does the L2 regularization measure the complexity of the model? Thankfully we have an example of that right here, maybe we can walk through. So here we maybe have some training example, x, and there's two different Ws that we're considering. So x is just this vector of four ones, and we're considering these two different possibilities for W. One has a single one in the first entry and three zeros, and the other has this 0.25 spread across the four different entries. And now, when we're doing linear classification, we're really taking dot products between our x and our W. So in terms of linear classification, these two Ws are the same, because they give the same result when dot producted with x. But now the question is, if you look at these two examples, which one would L2 regularization prefer? Yeah, so L2 regularization would prefer W2, because it has a smaller norm. So the answer is that the L2 regularization measures complexity of the classifier in this relatively coarse way, where the idea is that, remember the Ws in linear classification had this interpretation of how much does this value of the vector x correspond to this output class? So L2 regularization is saying that it prefers to spread that influence across all the different values in x. Maybe this might be more robust, in case you come up with xs that vary, then our decisions are spread out and depend on the entire x vector, rather than depending only on certain elements of the x vector. And by the way, L1 regularization has this opposite interpretation. So actually if we were using L1 regularization, then we would actually prefer W1 over W2, because L1 regularization has this different notion of complexity, saying that maybe the model is less complex, maybe we measure model complexity by the number of zeros in the weight vector, so the question of how do we measure complexity and how does L2 measure complexity? They're kind of problem dependent. And you have to think about for your particular setup, for your particular model and data, how do you think that complexity should be measured on this task? Question? - [Student] So why would L1 prefer W1? Don't they sum to the same one? - Oh yes, you're right. So in this case, L1 is actually the same between these two. But you could construct a similar example to this where W1 would be preferred by L1 regularization.
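Here are the actual numbers for that example; both Ws give the same score on x, but their squared L2 norms differ, while the L1 norms happen to tie, as the student pointed out:

    import numpy as np

    x  = np.array([1.0, 1.0, 1.0, 1.0])
    w1 = np.array([1.0, 0.0, 0.0, 0.0])
    w2 = np.array([0.25, 0.25, 0.25, 0.25])

    print(w1.dot(x), w2.dot(x))                    # 1.0 and 1.0 -> same score, same classifier
    print(np.sum(w1 ** 2), np.sum(w2 ** 2))        # 1.0 vs 0.25 -> L2 prefers the spread-out w2
    print(np.sum(np.abs(w1)), np.sum(np.abs(w2)))  # 1.0 vs 1.0  -> L1 happens to tie here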
I guess the general intuition behind L1 is that it generally prefers sparse solutions, that it drives most of the entries of W to zero, except for a couple where it's allowed to deviate from zero. The way of measuring complexity for L1 is maybe the number of non-zero entries, and then for L2, it thinks that things that spread the W across all the values are less complex. So it depends on your data, depends on your problem. Oh and by the way, if you're a hardcore Bayesian, then using L2 regularization has this nice interpretation of MAP inference under a Gaussian prior on the parameter vector. I think there was a homework problem about that in 229, but we won't talk about that for the rest of the quarter. That's sort of my long, deep dive into the multi-class SVM loss. Question? - [Student] Yeah, so I'm still confused about what the kind of stuff I need to do when the linear versus polynomial thing, because the use of this loss function isn't going to change the fact that you're just doing, you're looking at a linear classifier, right? - Yeah, so the question is that adding a regularization is not going to change the hypothesis class. This is not going to change us away from a linear classifier. The idea is that this example of polynomial regression is not plain linear regression. That could be seen as linear regression on top of a polynomial expansion of the input, in which case this regularization sort of says that you're not allowed to use as many polynomial coefficients as you otherwise might have. Right, so you can imagine this is like, when you're doing polynomial regression, you can write out a polynomial as f of x equals A zero plus A one x plus A two x squared plus A three x whatever, in that case your parameters, your Ws, would be these As, in which case, penalizing the W could force it towards lower degree polynomials. Except in the case of polynomial regression, you don't actually want to parameterize in terms of As, there's some other parameterization that you want to use, but that's the general idea, that you're sort of penalizing the parameters of the model to force it towards the simpler hypotheses within your hypothesis class. And maybe we can take this offline if that's still a bit confusing. So then we've sort of seen this multi-class SVM loss, and just by the way as a side note, this is one extension or generalization of the SVM loss to multiple classes, there's actually a couple different formulations that you can see around in the literature, but I mean, my intuition is that they all tend to work similarly in practice, at least in the context of deep learning. So we'll stick with this one particular formulation of the multi-class SVM loss in this class. But of course there's many different loss functions you might imagine. And another really popular choice, in addition to the multi-class SVM loss, another really popular choice in deep learning is this multinomial logistic regression, or a softmax loss. And this one is probably actually a bit more common in the context of deep learning, but I decided to present this second for some reason. So remember in the context of the multi-class SVM loss, we didn't actually have an interpretation for those scores. Remember, when we're doing some classification, our model f spits out these 10 numbers, which are our scores for the classes, and for the multi-class SVM, we didn't actually give much interpretation to those scores.
We just said that we want the true score, the score of the correct class, to be greater than the incorrect classes, and beyond that we don't really say what those scores mean. But now, for the multinomial logistic regression loss function, we actually will endow those scores with some additional meaning. And in particular we're going to use those scores to compute a probability distribution over our classes. So we use this so-called softmax function where we take all of our scores, we exponentiate them so that now they become positive, then we re-normalize them by the sum of those exponentiated values, so after we send our scores through this softmax function, we end up with this probability distribution, where now we have probabilities over our classes, where each probability is between zero and one, and the probabilities across all classes sum to one. And now the interpretation is that there's this computed probability distribution that's implied by our scores, and we want to compare this with the target or true probability distribution. So if we know that the thing is a cat, then the target probability distribution would put all of the probability mass on cat, so we would have probability of cat equals one, and zero probability for all the other classes. So now what we want to do is encourage our computed probability distribution that's coming out of this softmax function to match this target probability distribution that has all the mass on the correct class. And the way that we do this, I mean, you can set this up in many ways, you can do this as a KL divergence between the target and the computed probability distribution, you can do this as a maximum likelihood estimate, but at the end of the day, what we really want is that the probability of the true class is high and as close to one as possible. So then our loss will now be the negative log of the probability of the true class. This is confusing 'cause we're putting this through multiple different things, but remember we wanted the probability to be close to one, so now log is a monotonic function, it goes like this, and it turns out mathematically, it's easier to maximize log than it is to maximize the raw probability, so we stick with log. And now log is monotonic, so if we maximize log P of correct class, that means we want that to be high, but loss functions measure badness not goodness, so we need to put in the minus sign to make it go the right way. So now our loss function for softmax is going to be the minus log of the probability of the true class. Yeah, so that's the summary here, is that we take our scores, we run them through the softmax, and now our loss is this minus log of the probability of the true class. Okay, so then if you look at what this looks like on a concrete example, then we go back to our favorite beautiful cat with our three examples and we've got these three scores that are coming out of our linear classifier, and these scores are exactly the way that they were in the context of the SVM loss. But now, rather than taking these scores and putting them directly into our loss function, we're going to take them all and exponentiate them so that they're all positive, and then we'll normalize them to make sure that they all sum to one. And now our loss will be the minus log of the probability of the true class. So that's the softmax loss, also called multinomial logistic regression.
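Numerically, for the cat scores from before, that computation looks like this (a rough sketch; with the natural log the loss comes out around 2.04):

    import numpy as np

    scores = np.array([3.2, 5.1, -1.7])   # cat, car, frog scores for the cat image
    y = 0                                  # the correct class is cat

    # softmax: exponentiate, then normalize to get a probability distribution
    exp_scores = np.exp(scores)            # roughly [24.5, 164.0, 0.18]
    probs = exp_scores / exp_scores.sum()  # roughly [0.13, 0.87, 0.00]

    loss = -np.log(probs[y])               # about 2.04: the softmax / cross-entropy loss
    print(loss)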
So now we asked several questions to try to gain intuition about the multi-class SVM loss, and it's useful to think about some of the same questions to contrast with the softmax loss. So then the question is, what's the min and max value of the softmax loss? Okay, maybe not so sure, there's too many logs and sums and stuff going on in here. So the answer is that the min loss is zero and the max loss is infinity. And the way that you can see this, the probability distribution that we want is one on the correct class, zero on the incorrect classes, and if that were the case, then this thing inside the log would end up being one, because it's the probability of the true class, and log of one is zero, minus log of one is still zero. So that means that if we got the thing totally right, then our loss would be zero. But by the way, in order to get the thing totally right, what would our scores have to look like? Murmuring, murmuring. So the scores would actually have to go quite extreme, like towards infinity. So because we actually have this exponentiation, this normalization, the only way we can actually get a probability distribution of one and zero, is actually putting an infinite score for the correct class, and minus infinity score for all the incorrect classes. And computers don't do so well with infinities, so in practice, you'll never get to zero loss on this thing with finite precision. But you still have this interpretation that zero is the theoretical minimum loss here. And the maximum loss is unbounded. So suppose that we had zero probability mass on the correct class, then you would have minus log of zero, log of zero is minus infinity, so minus log of zero would be plus infinity, so that's really bad. But again, you'll never really get here because the only way you can actually get this probability to be zero is if e to the correct class score is zero, and that can only happen if that correct class score is minus infinity. So again, you'll never actually get to these minimum, maximum values with finite precision. So then, remember we had this debugging, sanity check question in the context of the multi-class SVM, and we can ask the same for the softmax. If all the Ss are small and about zero, then what is the loss here? Yeah, answer? - [Student] Minus log one over C. - So minus log of one over C? I think that's, yeah, so then it'd be minus log of one over C, and the minus log flips the fraction so then it's just log of C. Yeah, so it's just log of C. And again, this is a nice debugging thing, if you're training a model with this softmax loss, you should check at the first iteration. If it's not log C, then something's gone wrong. So then we can compare and contrast these two loss functions a bit. In terms of linear classification, this setup looks the same. We've got this W matrix that gets multiplied against our input to produce this vector of scores, and now the difference between the two loss functions is how we choose to interpret those scores to quantitatively measure the badness afterwards. So for SVM, we were going to go in and look at the margins between the scores of the correct class and the scores of the incorrect class, whereas for this softmax or cross-entropy loss, we're going to go and compute a probability distribution and then look at the minus log probability of the correct class. So sometimes if you look at, in terms of, never mind, I'll skip that point.
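And that log-of-C sanity check is only a couple of lines (illustrative, natural log again):

    import numpy as np

    C = 10
    scores = 1e-6 * np.random.randn(C)          # small, roughly equal scores at initialization
    probs = np.exp(scores) / np.exp(scores).sum()
    print(-np.log(probs[0]))                    # about log(10) = 2.302...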
[laughing] So another question that's interesting when contrasting these two loss functions is thinking, suppose that I've got this example point, and if you change its scores, so assume that we've got three scores for this, ignore the part on the bottom. But remember if we go back to this example where in the multi-class SVM loss, when we had the car, and the car score was much better than all the incorrect classes, then jiggling the scores for that car image didn't change the multi-class SVM loss at all, because the only thing that the SVM loss cared about was getting that correct score to be greater than a margin above the incorrect scores. But now the softmax loss is actually quite different in this respect. The softmax loss actually always wants to drive that probability mass all the way to one. So even if you're giving very high score to the correct class, and very low score to all the incorrect classes, softmax will want you to pile more and more probability mass on the correct class, and continue to push the score of that correct class up towards infinity, and the score of the incorrect classes down towards minus infinity. So that's the interesting difference between these two loss functions in practice. That SVM, it'll get this data point over the bar to be correctly classified and then just give up, it doesn't care about that data point any more. Whereas softmax will just always try to continually improve every single data point to get better and better and better and better. So that's an interesting difference between these two functions. In practice, I think it tends not to make a huge difference which one you choose, they tend to perform pretty similarly across, at least a lot of deep learning applications. But it is very useful to keep some of these differences in mind. Yeah, so to recap where we've come to from here, is that we've got some data set of xs and ys, we use our linear classifier to get some score function, to compute our scores S, from our inputs, x, and then we'll use a loss function, maybe softmax or SVM or some other loss function to compute how quantitatively bad were our predictions compared to this ground true targets, y. And then we'll often augment this loss function with a regularization term, that tries to trade off between fitting the training data and preferring simpler models. So this is a pretty generic overview of a lot of what we call supervised learning, and what we'll see in deep learning as we move forward, is that generally you'll want to specify some function, f, that could be very complex in structure, specify some loss function that determines how well your algorithm is doing, given any value of the parameters, some regularization term for how to penalize model complexity and then you combine these things together and try to find the W that minimizes this final loss function. But then the question is, how do we actually go about doing that? How do we actually find this W that minimizes the loss? And that leads us to the topic of optimization. So when we're doing optimization, I usually think of things in terms of walking around some large valley. So the idea is that you're walking around this large valley with different mountains and valleys and streams and stuff, and every point on this landscape corresponds to some setting of the parameters W. And you're this little guy who's walking around this valley, and you're trying to find, and the height of each of these points, sorry, is equal to the loss incurred by that setting of W. 
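Before we dive into optimization, here's a tiny numerical illustration of that difference, using made-up scores for an example that's already correctly classified by a wide margin; the SVM loss sits at zero while the softmax loss keeps shrinking as the correct score grows:

    import numpy as np

    def svm_loss(scores, y):
        m = np.maximum(0, scores - scores[y] + 1)
        m[y] = 0
        return m.sum()

    def softmax_loss(scores, y):
        p = np.exp(scores) / np.exp(scores).sum()
        return -np.log(p[y])

    y = 0
    for correct_score in [5.0, 10.0, 20.0]:
        scores = np.array([correct_score, -2.0, -3.0])
        print(svm_loss(scores, y), softmax_loss(scores, y))
        # SVM loss is 0.0 every time; the softmax loss keeps getting smaller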
And now your job as this little man walking around this landscape, you need to somehow find the bottom of this valley. And this is kind of a hard problem in general. You might think, maybe I'm really smart and I can think really hard about the analytic properties of my loss function, my regularization all that, maybe I can just write down the minimizer, and that would sort of correspond to magically teleporting all the way to the bottom of this valley. But in practice, once your prediction function, f, and your loss function and your regularizer, once these things get big and complex and using neural networks, there's really not much hope in trying to write down an explicit analytic solution that takes you directly to the minima. So in practice we tend to use various types of iterative methods where we start with some solution and then gradually improve it over time. So the very first, stupidest thing that you might imagine is random search, that will just take a bunch of Ws, sampled randomly, and throw them into our loss function and see how well they do. So spoiler alert, this is a really bad algorithm, you probably shouldn't use this, but at least it's one thing you might imagine trying. And we can actually do this, we can actually try to train a linear classifier via random search, for CIFAR-10 and for this there's 10 classes, so random chance is 10%, and if we did some number of random trials, we eventually found just through sheer dumb luck, some setting of W that got maybe 15% accuracy. So it's better than random, but state of the art is maybe 95% so we've got a little bit of gap to close here. So again, probably don't use this in practice, but you might imagine that this is something you could potentially do. So in practice, maybe a better strategy is actually using some of the local geometry of this landscape. So if you're this little guy who's walking around this landscape, maybe you can't see directly the path down to the bottom of the valley, but what you can do is feel with your foot and figure out what is the local geometry, if I'm standing right here, which way will take me a little bit downhill? So you can feel with your feet and feel where is the slope of the ground taking me down a little bit in this direction? And you can take a step in that direction, and then you'll go down a little bit, feel again with your feet to figure out which way is down, and then repeat over and over again and hope that you'll end up at the bottom of the valley eventually. So this also seems like a relatively simple algorithm, but actually this one tends to work really well in practice if you get all the details right. So this is generally the strategy that we'll follow when training these large neural networks and linear classifiers and other things. So then, that was a little hand wavy, so what is slope? If you remember back to your calculus class, then at least in one dimension, the slope is the derivative of this function. So if we've got some one-dimensional function, f, that takes in a scalar x, and then outputs the height of some curve, then we can compute the slope or derivative at any point by imagining, if we take a small step, h, in any direction, take a small step, h, and compare the difference in the function value over that step and then drag the step size to zero, that will give us the slope of that function at that point. And this generalizes quite naturally to multi-variable functions as well. 
So in practice, our x is maybe not a scalar but a whole vector, 'cause remember, x might be a whole vector, so we need to generalize this notion to multi-variable things. And the generalization that we use of the derivative in the multi-variable setting is the gradient, so the gradient is a vector of partial derivatives. So the gradient will have the same shape as x, and each element of the gradient will tell us what is the slope of the function f, if we move in that coordinate direction. And the gradient turns out to have these very nice properties, so the gradient is now a vector of partial derivatives, but it points in the direction of greatest increase of the function and correspondingly, if you look at the negative gradient direction, that gives you the direction of greatest decrease of the function. And more generally, if you want to know, what is the slope of my landscape in any direction? Then that's equal to the dot product of the gradient with the unit vector describing that direction. So this gradient is super important, because it gives you this linear, first-order approximation to your function at your current point. So in practice, a lot of deep learning is about computing gradients of your functions and then using those gradients to iteratively update your parameter vector. So one naive way that you might imagine actually evaluating this gradient on a computer, is using the method of finite differences, going back to the limit definition of gradient. So here on the left, we imagine that our current W is this parameter vector that maybe gives us some current loss of maybe 1.25 and our goal is to compute the gradient, dW, which will be a vector of the same shape as W, and each slot in that gradient will tell us how much will the loss change is we move a tiny, infinitesimal amount in that coordinate direction. So one thing you might imagine is just computing these finite differences, that we have our W, we might try to increment the first element of W by a small value, h, and then re-compute the loss using our loss function and our classifier and all that. And maybe in this setting, if we move a little bit in the first dimension, then our loss will decrease a little bit from 1.2534 to 1.25322. And then we can use this limit definition to come up with this finite differences approximation to the gradient in this first dimension. And now you can imagine repeating this procedure in the second dimension, where now we take the first dimension, set it back to the original value, and now increment the second direction by a small step. And again, we compute the loss and use this finite differences approximation to compute an approximation to the gradient in the second slot. And now repeat this again for the third, and on and on and on. So this is actually a terrible idea because it's super slow. So you might imagine that computing this function, f, might actually be super slow if it's a large, convolutional neural network. And this parameter vector, W, probably will not have 10 entries like it does here, it might have tens of millions or even hundreds of millions for some of these large, complex deep learning models. So in practice, you'll never want to compute your gradients for your finite differences, 'cause you'd have to wait for hundreds of millions potentially of function evaluations to get a single gradient, and that would be super slow and super bad. But thankfully we don't have to do that. 
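Here's a minimal sketch of that finite-difference procedure, the kind of thing you'd only use for checking rather than training (the function and array names are illustrative):

    import numpy as np

    def numerical_gradient(f, W, h=1e-5):
        # f: function mapping a weight array W to a scalar loss
        # returns an array of W's shape with one finite-difference slope per entry
        grad = np.zeros_like(W)
        base = f(W)                        # loss at the current W
        it = np.nditer(W, flags=['multi_index'])
        while not it.finished:
            idx = it.multi_index
            old = W[idx]
            W[idx] = old + h               # nudge this one coordinate by a small h
            grad[idx] = (f(W) - base) / h  # finite-difference approximation to the slope
            W[idx] = old                   # restore the original value
            it.iternext()
        return grad

    # toy usage: check against a function whose gradient we know analytically
    W = np.random.randn(3, 4)
    g = numerical_gradient(lambda w: np.sum(w ** 2), W)   # true gradient is 2 * W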
Hopefully you took a calculus course at some point in your lives, so you know that thanks to these guys, we can just write down the expression for our loss and then use the magical hammer of calculus to just write down an expression for what this gradient should be. And this'll be way more efficient than trying to compute it numerically via finite differences. One, it'll be exact, and two, it'll be much faster since we just need to compute this single expression. So what this would look like is now, if we go back to this picture of our current W, rather than iterating over all the dimensions of W, we'll figure out ahead of time what is the analytic expression for the gradient, and then just write it down and go directly from the W and compute the dW or the gradient in one step. And that will be much better in practice. So in summary, this numerical gradient is something that's simple and makes sense, but you won't really use it in practice. In practice, you'll always take an analytic gradient and use that when actually performing these gradient computations. However, one interesting note is that these numeric gradients are actually a very useful debugging tool. Say you've written some code that computes the loss and the gradient of the loss, then how do you debug this thing? How do you make sure that this analytic expression that you derived and wrote down in code is actually correct? So a really common debugging strategy for these things is to use the numeric gradient as sort of a unit test to make sure that your analytic gradient was correct. Again, because this is super slow and inexact, then when doing this numeric gradient checking, as it's called, you'll tend to scale down the size of the problem so that it actually runs in a reasonable amount of time. But this ends up being a super useful debugging strategy when you're writing your own gradient computations. So this is actually very commonly used in practice, and you'll do this on your assignments as well. So then once we know how to compute the gradient, it leads us to this super simple algorithm that's like three lines, but turns out to be at the heart of how we train even the very biggest, most complex deep learning algorithms, and that's gradient descent. So gradient descent is: first we initialize our W as some random thing, then while true, we'll compute our loss and our gradient and then we'll update our weights in the opposite of the gradient direction, 'cause remember that the gradient was pointing in the direction of greatest increase of the function, so minus gradient points in the direction of greatest decrease, so we'll take a small step in the direction of minus gradient, and just repeat this forever and eventually your network will converge and you'll be very happy, hopefully. But this step size is actually a hyper-parameter, and it tells us, every time we compute the gradient, how far we step in that direction. And this step size, also sometimes called a learning rate, is probably one of the single most important hyper-parameters that you need to set when you're actually training these things in practice. Actually for me when I'm training these things, trying to figure out this step size or this learning rate is the first hyper-parameter that I always check. Things like model size or regularization strength I leave until a little bit later, and getting the learning rate or the step size correct is the first thing that I try to set at the beginning.
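Written out as a sketch, that loop looks like this; the quadratic loss here is just a fully worked stand-in for whatever loss-and-gradient computation you'd really be using:

    import numpy as np

    # stand-in loss: half the squared distance from W to some fixed target matrix;
    # in the real setting this would be your SVM or softmax loss over the training data
    target = np.random.randn(10, 3072)

    def loss_and_gradient(W):
        diff = W - target
        return 0.5 * np.sum(diff ** 2), diff   # loss and its analytic gradient

    step_size = 0.1                            # the learning rate hyper-parameter
    W = 0.001 * np.random.randn(10, 3072)      # random initialization
    for it in range(1000):                     # the lecture says "while true"; a fixed count here
        loss, grad = loss_and_gradient(W)
        W -= step_size * grad                  # step in the direction of the negative gradient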
So pictorially what this looks like here's a simple example in two dimensions. So here we've got maybe this bowl that's showing our loss function where this red region in the center is this region of low loss we want to get to and these blue and green regions towards the edge are higher loss that we want to avoid. So now we're going to start of our W at some random point in the space, and then we'll compute the negative gradient direction, which will hopefully point us in the direction of the minima eventually. And if we repeat this over and over again, we'll hopefully eventually get to the exact minima. And what this looks like in practice is, oh man, we've got this mouse problem again. So what this looks like in practice is that if we repeat this thing over and over again, then we will start off at some point and eventually, taking tiny gradient steps each time, you'll see that the parameter will arc in toward the center, this region of minima, and that's really what you want, because you want to get to low loss. And by the way, as a bit of a teaser, we saw in the previous slide, this example of very simple gradient descent, where at every step, we're just stepping in the direction of the gradient. But in practice, over the next couple of lectures, we'll see that there are slightly fancier step, what they call these update rules, where you can take slightly fancier things to incorporate gradients across multiple time steps and stuff like that, that tend to work a little bit better in practice and are used much more commonly than this vanilla gradient descent when training these things in practice. And then, as a bit of a preview, we can look at some of these slightly fancier methods on optimizing the same problem. So again, the black will be this same gradient computation, and these, I forgot which color they are, but these two other curves are using slightly fancier update rules to decide exactly how to use the gradient information to make our next step. So one of these is gradient descent with momentum, the other is this Adam optimizer, and we'll see more details about those later in the course. But the idea is that we have this very basic algorithm called gradient descent, where we use the gradient at every time step to determine where to step next, and there exist different update rules which tell us how exactly do we use that gradient information. But it's all the same basic algorithm of trying to go downhill at every time step. But there's actually one more little wrinkle that we should talk about. So remember that we defined our loss function, we defined a loss that computes how bad is our classifier doing at any single training example, and then we said that our full loss over the data set was going to be the average loss across the entire training set. But in practice, this N could be very very large. If we're using the image net data set for example, that we talked about in the first lecture, then N could be like 1.3 million, so actually computing this loss could be actually very expensive and require computing perhaps millions of evaluations of this function. So that could be really slow. And actually, because the gradient is a linear operator, when you actually try to compute the gradient of this expression, you see that the gradient of our loss is now the sum of the gradient of the losses for each of the individual terms. So now if we want to compute the gradient again, it sort of requires us to iterate over the entire training data set all N of these examples. 
So if our N was like a million, this would be super super slow, and we would have to wait a very very long time before we make any individual update to W. So in practice, we tend to use what is called stochastic gradient descent, where rather than computing the loss and gradient over the entire training set, instead at every iteration we sample some small set of training examples, called a minibatch. Usually this is a power of two by convention, like 32, 64, 128 are common numbers, and then we'll use this small minibatch to compute an estimate of the full sum, and an estimate of the true gradient. And now this is stochastic because you can view this as a Monte Carlo estimate of some expectation of the true value. So now this makes our algorithm slightly fancier, but it's still only four lines. So now it's: while true, sample some random minibatch of data, evaluate your loss and gradient on the minibatch, and now make an update on your parameters based on this estimate of the loss, and this estimate of the gradient. And again, we'll see slightly fancier update rules for exactly how to integrate multiple gradients over time, but this is the basic training algorithm that we use for pretty much all deep neural networks in practice. So we have another interactive web demo actually playing around with linear classifiers, and training these things via stochastic gradient descent, but given how miserable the web demo was last time, I'm not actually going to open the link. Instead, I'll just play this video. [laughing] But I encourage you to go check this out and play with it online, because it actually helps to build some intuition about linear classifiers and training them via gradient descent. So here you can see on the left, we've got this problem where we're categorizing three different classes, and we've got these green, blue and red points that are our training samples from these three classes. And now we've drawn the decision boundaries for these classes, which are the colored background regions, as well as these directions giving you the direction of increase for the class scores for each of these three classes. And if you actually go and play with this thing online, you can see that we can go in and adjust the Ws, and changing the values of the Ws will cause these decision boundaries to rotate. If you change the biases, then the decision boundaries will not rotate, but will instead move side to side or up and down. Then we can actually take steps that try to decrease this loss, or you can change the step size with this slider. You can hit this button to actually run the thing. So now with a big step size, we're running gradient descent right now, and these decision boundaries are flipping around and trying to fit the data. So it's doing okay now, but we can actually change the loss function in real time between these different SVM formulations and the softmax. And you can see that as you flip between these different formulations of loss functions, it's generally doing the same thing. Our decision regions are mostly in the same place, but exactly how they end up relative to each other and exactly what the trade-offs are between categorizing these different things changes a little bit. So I really encourage you to go online and play with this thing to try to get some intuition for what it actually looks like to try to train these linear classifiers via gradient descent. Now as an aside, I'd like to talk about another idea, which is that of image features.
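Before we move on to features, here's a rough, self-contained sketch of that minibatch loop, using the softmax loss and its analytic gradient on some made-up data (all the sizes and names are illustrative):

    import numpy as np

    # toy data standing in for a real training set
    N, D, C = 500, 20, 3
    X_train = np.random.randn(N, D)
    y_train = np.random.randint(C, size=N)

    def softmax_loss_and_grad(W, X, y):
        # W: (C, D) weights; X: (B, D) minibatch; y: (B,) integer labels
        scores = X.dot(W.T)                                   # (B, C)
        scores -= scores.max(axis=1, keepdims=True)           # shift for numerical stability
        probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
        loss = -np.log(probs[np.arange(len(y)), y]).mean()
        dscores = probs.copy()
        dscores[np.arange(len(y)), y] -= 1                    # (probs - one-hot target)
        grad = dscores.T.dot(X) / len(y)                      # (C, D), averaged over the batch
        return loss, grad

    W = 0.001 * np.random.randn(C, D)
    batch_size, step_size = 64, 1e-1
    for it in range(200):                      # the lecture's version is "while true"
        idx = np.random.choice(N, batch_size, replace=False)  # sample a random minibatch
        loss, grad = softmax_loss_and_grad(W, X_train[idx], y_train[idx])
        W -= step_size * grad                  # update from the noisy minibatch estimate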
So so far we've talked about linear classifiers, which is just maybe taking our raw image pixels and then feeding the raw pixels themselves into our linear classifier. But as we talked about in the last lecture, this is maybe not such a great thing to do, because of things like multi-modality and whatnot. So in practice, actually feeding raw pixel values into linear classifiers tends to not work so well. So what was actually common before the dominance of deep neural networks was instead to have this two-stage approach, where first, you would take your image and then compute various feature representations of that image, that are maybe computing different kinds of quantities relating to the appearance of the image, and then concatenate these different feature vectors to give you some feature representation of the image, and now this feature representation of the image would be fed into a linear classifier, rather than feeding the raw pixels themselves into the classifier. And the motivation here is that, imagine we have a training data set like the one on the left, with red points in the middle and blue points around them. And for this kind of data set, there's no way that we can draw a linear decision boundary to separate the red points from the blue points. And we saw more examples of this in the last lecture. But if we use a clever feature transform, in this case transforming to polar coordinates, then after we do the feature transform, this complex data set actually might become linearly separable, and actually could be classified correctly by a linear classifier. And the whole trick here now is to figure out what is the right feature transform that is computing the right quantities for the problem that you care about. So for images, maybe converting your pixels to polar coordinates doesn't make sense, but you actually can try to write down feature representations of images that might make sense, and actually might help you out and might do better than putting raw pixels into the classifier. So one example of this kind of feature representation that's super simple is this idea of a color histogram. So you'll take this hue color spectrum and divide it into buckets, and then for every pixel, you'll map it into one of those color buckets and then count up how many pixels fall into each of these different buckets. So this tells you globally what colors are in the image. Maybe for this example of a frog, this feature vector would tell us there's a lot of green stuff, and maybe not a lot of purple or red stuff. And this is kind of a simple feature vector that you might see in practice. Another common feature vector that we saw before the rise of neural networks, or before the dominance of neural networks, was this histogram of oriented gradients. So remember from the first lecture that Hubel and Wiesel found these oriented edges are really important in the human visual system, and this histogram of oriented gradients feature representation tries to capture the same intuition and measure the local orientation of edges in the image. So what this thing is going to do is take our image and then divide it into these little eight by eight pixel regions. And then within each of those eight by eight pixel regions, we'll compute what is the dominant edge direction of each pixel, quantize those edge directions into several buckets and then within each of those regions, compute a histogram over these different edge orientations.
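Before continuing with the histogram of oriented gradients, here is a rough sketch of the color histogram feature just described. The use of matplotlib's RGB-to-HSV conversion and the choice of 16 hue buckets are my own illustrative assumptions:

```python
import numpy as np
from matplotlib.colors import rgb_to_hsv

def color_histogram_feature(img, n_bins=16):
    """Global hue histogram: bucket every pixel's hue and count how many pixels
    fall into each bucket. `img` is assumed to be an (H, W, 3) RGB array in [0, 1]."""
    hue = rgb_to_hsv(img)[..., 0]                          # hue in [0, 1] per pixel
    hist, _ = np.histogram(hue, bins=n_bins, range=(0.0, 1.0))
    return hist.astype(np.float32) / hue.size              # normalize by pixel count

# Example: a mostly-green image puts almost all of its mass in the green hue bucket.
img = np.zeros((64, 64, 3))
img[..., 1] = 0.8
print(color_histogram_feature(img, n_bins=8))
```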
And now your full feature vector will be these different bucketed histograms of edge orientations across all the different eight by eight regions in the image. So this is in some sense dual to the color histogram feature that we saw before. The color histogram is saying, globally, what colors exist in the image, and this is saying, overall, what types of edge information exist in the image, and even localized to different parts of the image, what types of edges exist in different regions. So maybe for this frog on the left, you can see he's sitting on a leaf, and these leaves have these dominant diagonal edges, and if you visualize the histogram of oriented gradients features, then you can see that in this region, we've got a lot of diagonal edges that this histogram of oriented gradients feature representation is capturing. So this was a super common feature representation and was used a lot for object recognition actually not too long ago. Another feature representation that you might see out there is this idea of bag of words. So this is taking inspiration from natural language processing. So if you've got a paragraph, then a way that you might represent a paragraph by a feature vector is counting up the occurrences of different words in that paragraph. So we want to take that intuition and apply it to images in some way. But the problem is that there's no really simple, straightforward analogy of words to images, so we need to define our own vocabulary of visual words. So we take this two-stage approach, where first we'll get a bunch of images, sample a whole bunch of tiny random crops from those images and then cluster them using something like K-means to come up with these different cluster centers that are maybe representing different types of visual words in the images. So if you look at this example on the right here, this is a real example of clustering different image patches from images, and you can see that after this clustering step, our visual words capture these different colors, like red and blue and yellow, as well as these different types of oriented edges in different directions, which is interesting, because now we're starting to see these oriented edges come out from the data in a data-driven way. And now, once we've got this set of visual words, also called a codebook, then we can encode our image by trying to say, for each of these visual words, how often does this visual word occur in the image? And now this gives us, again, some slightly different information about the visual appearance of this image. And actually this is a type of feature representation that Fei-Fei worked on when she was a grad student, so this is something that you saw in practice not too long ago. So then, as a bit of a teaser, tying this all back together, the way that this image classification pipeline might have looked, maybe about five to 10 years ago, would be that you would take your image, and then compute these different feature representations of your image, things like bag of words, or histogram of oriented gradients, concatenate a whole bunch of features together, and then feed these extracted features into some linear classifier. I'm simplifying a little bit, the pipelines were a little bit more complex than that, but this is the general intuition. And then the idea here was that after you extracted these features, this feature extractor would be a fixed block that would not be updated during training.
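As a rough sketch of the bag of visual words idea above, here is a minimal two-stage version: first cluster random image patches into a codebook with K-means, then encode an image as a histogram of how often each visual word occurs. The patch size, number of words, and the use of scikit-learn's KMeans are illustrative assumptions:

```python
import numpy as np
from sklearn.cluster import KMeans

def sample_patches(images, patch_size=8, patches_per_image=50, seed=0):
    """Sample small random crops from a list of (H, W, 3) images, flattened to vectors."""
    rng = np.random.default_rng(seed)
    patches = []
    for img in images:
        H, W = img.shape[:2]
        for _ in range(patches_per_image):
            y = rng.integers(0, H - patch_size + 1)
            x = rng.integers(0, W - patch_size + 1)
            patches.append(img[y:y + patch_size, x:x + patch_size].ravel())
    return np.stack(patches)

def build_codebook(images, n_words=128):
    """Step 1: cluster random patches; the cluster centers are the 'visual words'."""
    return KMeans(n_clusters=n_words, n_init=10).fit(sample_patches(images))

def bag_of_words_feature(img, codebook, patch_size=8, n_patches=200):
    """Step 2: encode an image as a normalized histogram of visual-word occurrences."""
    patches = sample_patches([img], patch_size, n_patches)
    words = codebook.predict(patches)
    counts = np.bincount(words, minlength=codebook.n_clusters)
    return counts / counts.sum()
```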
And during training, you would only update the linear classifier that's working on top of the features. And actually, I would argue that once we move to convolutional neural networks, and these deep neural networks, it actually doesn't look that different. The only difference is that rather than writing down the features ahead of time, we're going to learn the features directly from the data. So we'll take our raw pixels and feed them into this convolutional network, which will end up computing, through many different layers, some type of feature representation driven by the data, and then we'll actually train the weights for this entire network, rather than just the weights of the linear classifier on top. So, next time we'll really start diving into this idea in more detail, and we'll introduce some neural networks, and start talking about backpropagation as well.
Stanford_Computer_Vision
Lecture_14_Deep_Reinforcement_Learning.txt
- Okay let's get started. Alright, so welcome to lecture 14, and today we'll be talking about reinforcement learning. So some administrative details first, an update on grades. Midterm grades were released last night, so see Piazza for more information and statistics about that. And we also have A2 and milestone grades scheduled for later this week. Also, about your projects, all teams must register your projects. So on Piazza we have a form posted, so you should go there, and this is required, every team should go and fill out this form with information about your project, that we'll use for final grading and the poster session. And the Tiny ImageNet evaluation servers are also now online for those of you who are doing the Tiny ImageNet challenge. We also have a link to a course survey on Piazza that was released a few days ago, so please fill it out if you guys haven't already. We'd love to have your feedback and know how we can improve this class. Okay, so the topic of today, reinforcement learning. Alright, so so far we've talked about supervised learning, which is a type of problem where we have data x and then we have labels y, and our goal is to learn a function that is mapping from x to y. So, for example, the classification problem that we've been working with. We also talked last lecture about unsupervised learning, which is the problem where we have just data and no labels, and our goal is to learn some underlying, hidden structure of the data. So, an example of this is the generative models that we talked about last lecture. And so today we're going to talk about a different kind of problem set-up, the reinforcement learning problem. And so here we have an agent that can take actions in its environment, and it can receive rewards for its actions. And its goal is going to be to learn how to take actions in a way that can maximize its reward. And so we'll talk about this in a lot more detail today. So, the outline for today: we're going to first talk about the reinforcement learning problem, and then we'll talk about Markov decision processes, which is a formalism of the reinforcement learning problem, and then we'll talk about two major classes of RL algorithms, Q-learning and policy gradients. So, in the reinforcement learning set-up, what we have is we have an agent and we have an environment. And so the environment gives the agent a state. In turn, the agent is going to take an action, and then the environment is going to give back a reward, as well as the next state. And so this is going to keep going on in this loop, on and on, until the environment gives back a terminal state, which then ends the episode. So, let's see some examples of this. First we have here the cart-pole problem, which is a classic problem that some of you may have seen, in, for example, 229 before. And so the objective here is that you want to balance a pole on top of a movable cart. Alright, so the state that you have here is your current description of the system. So, for example, the angle and angular speed of your pole, and the position and horizontal velocity of your cart. And the actions you can take are horizontal forces that you apply onto the cart, right? So you're basically trying to move this cart around to try and balance this pole on top of it. And the reward that you're getting from this environment is one at each time step if your pole is upright. So you basically want to keep this pole balanced for as long as you can. Okay, so here's another example of a classic RL problem.
Here is robot locomotion. So we have here an example of a humanoid robot, as well as an ant robot model. And our objective here is to make the robot move forward. And so the state that we have describing our system is the angle and the positions of all the joints of our robots. And then the actions that we can take are the torques applied onto these joints, right, and so these are trying to make the robot move forward, and then the reward that we get is our forward movement, as well as, I think, in the case of the humanoid, also something like a reward of one for each time step that this robot is upright. So, games are also a big class of problems that can be formulated with RL. So, for example, here we have Atari games, which are a classic success of deep reinforcement learning, and so here the objective is to complete these games with the highest possible score, right. So, your agent is basically a player that's trying to play these games. And the state that you have is going to be the raw pixels of the game state. Right, so these are just the pixels on the screen that you would see as you're playing the game. And then the actions that you have are your game controls, so for example, in some games maybe moving left or right, up or down. And then the reward that you have is your score increase or decrease at each time step, and your goal is going to be to maximize your total score over the course of the game. And, finally, here we have another example of a game. It's Go, which was a huge achievement of deep reinforcement learning last year, when DeepMind's AlphaGo beat Lee Sedol, who is one of the best Go players of the last few years, and this is actually in the news again, as some of you may have seen; there's another Go competition going on now with AlphaGo versus a top-ranked Go player. And so the objective here is to win the game, and our state is the position of all the pieces, the action is where to put the next piece down, and the reward is one if you win at the end of the game, and zero otherwise. And we'll also talk about this one in a little bit more detail later. Okay, so how can we mathematically formalize the RL problem, right? This loop that we talked about earlier, of environments giving agents states, and then agents taking actions. So, a Markov decision process is the mathematical formulation of the RL problem, and an MDP satisfies the Markov property, which is that the current state completely characterizes the state of the world. And an MDP here is defined by a tuple of objects, consisting of S, which is the set of possible states. We have A, our set of possible actions. We also have R, our distribution of reward given a state, action pair, so it's a function mapping from state, action to your reward. You also have P, which is a transition probability distribution over the next state that you're going to transition to, given your state, action pair. And then finally we have Gamma, a discount factor, which is basically saying how much we value rewards coming up soon versus later on. So, the way the Markov decision process works is that at our initial time step t equals zero, the environment is going to sample some initial state s-zero from the initial state distribution, p of s-zero. And then, once it has that, then from time t equals zero until it's done, we're going to iterate through this loop where the agent is going to select an action, a sub t.
The environment is going to sample a reward from here, so a reward given your state and the action that you just took. It's also going to sample the next state, at time t plus one, given your transition probability distribution, and then the agent is going to receive the reward, as well as the next state, and then we're going to go through this process again, and keep looping; the agent will select the next action, and so on, until the episode is over. Okay, so now based on this, we can define a policy pi, which is a function from your states to your actions that specifies what action to take in each state. And this can be either deterministic or stochastic. And our objective now is going to be to find your optimal policy pi star, that maximizes your cumulative discounted reward. So we can see here we have our sum of future rewards, which can also be discounted by your discount factor. So, let's look at an example of a simple MDP. And here we have Grid World, which is this task where we have this grid of states. So you can be in any of these cells of your grid, which are your states. And you can take actions from your states, and so these actions are going to be simple movements, moving to your right, to your left, up or down. And you're going to get a negative reward for each transition, basically each time step or movement that you take, and this can be something like R equals negative one. And so your objective is going to be to reach one of the terminal states, which are the gray states shown here, in the least number of actions. Right, so the longer that you take to reach your terminal state, the more you're going to keep accumulating these negative rewards. Okay, so if you look at a random policy here, a random policy would consist of, basically, at any given state or cell that you're in, just sampling randomly which direction you're going to move in next. Right, so all of these have equal probability. On the other hand, an optimal policy that we would like to have is basically taking the action, the direction, that will move us closest to a terminal state. So you can see here, if we're right next to one of the terminal states we should always move in the direction that gets us to this terminal state. And otherwise, if you're in one of these other states, you want to take the direction that will take you closest to one of these states. Okay, so now given this description of our MDP, what we want to do is we want to find our optimal policy pi star. Right, our policy that's maximizing the sum of the rewards. And so this optimal policy is going to tell us, given any state that we're in, what is the action that we should take in order to maximize the sum of the rewards that we'll get. And so one question is how do we handle the randomness in the MDP, right? We have randomness in terms of our initial state that we're sampling, in terms of this transition probability distribution that will give us a distribution over our next states, and so on. So what we'll do is we'll work with maximizing our expected sum of rewards. So, formally, we can write our optimal policy pi star as maximizing this expected sum of future rewards over policies pi, where we have our initial state sampled from our state distribution. We have our actions sampled from our policy, given the state. And then we have our next states sampled from our transition probability distributions.
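As a minimal, illustrative sketch of this sampling loop, here is a toy one-dimensional version of a grid-world MDP (this exact environment is made up for illustration; the lecture's Grid World is a 2-D grid): the agent picks an action, the environment returns a reward and a next state, and we accumulate the discounted reward until a terminal state is reached:

```python
import numpy as np

# Toy 1-D "grid world": states 0..4, state 4 is terminal, actions are move
# left (-1) or right (+1), reward is -1 per step.
def step(state, action):
    next_state = int(np.clip(state + action, 0, 4))
    reward = -1.0
    done = next_state == 4
    return next_state, reward, done

def run_episode(policy, gamma=0.9, max_steps=100, seed=0):
    rng = np.random.default_rng(seed)
    state, total, discount = 0, 0.0, 1.0
    for _ in range(max_steps):
        action = policy(state, rng)                    # agent selects a_t
        state, reward, done = step(state, action)      # environment samples r_t, s_{t+1}
        total += discount * reward                     # accumulate discounted reward
        discount *= gamma
        if done:
            break
    return total

random_policy = lambda s, rng: rng.choice([-1, +1])    # uniform over the two actions
optimal_policy = lambda s, rng: +1                     # always move toward the terminal state
print(run_episode(random_policy), run_episode(optimal_policy))
```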
Okay, so before we talk about exactly how we're going to find this policy, let's first talk about a few definitions that are going to be helpful for us in doing so. So, specifically, the value function and the Q-value function. So, as we follow the policy, we're going to sample trajectories or paths, right, for every episode. And we're going to have our initial state s-zero, a-zero, r-zero, s-one, a-one, r-one, and so on. We're going to have this trajectory of states, actions, and rewards that we get. And so, how good is a state that we're currently in? Well, the value function at any state s is the expected cumulative reward following the policy from state s, from here on out. Right, so it's going to be the expected cumulative reward, starting from our current state. And then how good is a state, action pair? So how good is taking action a in state s? And we define this using a Q-value function, which is the expected cumulative reward from taking action a in state s and then following the policy. Right, so then the optimal Q-value function that we can get is going to be Q star, which is the maximum expected cumulative reward that we can get from a given state, action pair, defined here. So now we're going to see one important thing in reinforcement learning, which is called the Bellman equation. So let's consider the Q-value function for the optimal policy, Q star, which is then going to satisfy this Bellman equation, which is this identity shown here, and what this means is that given any state, action pair, s and a, the value of this pair is going to be the reward that you're going to get, r, plus the value of whatever state you end up in. So, let's say, s prime. And since we know that we have the optimal policy, then we also know that we're going to play the best action that we can, right, at our state s prime. And so then, the value at state s prime is just going to be the maximum over our actions a prime of Q star at s prime, a prime. And so then we get this identity here for the optimal Q-value. Right, and then also, as always, we have this expectation here, because we have randomness over what state we're going to end up in. And then we can also infer from here that our optimal policy, right, is going to consist of taking the best action in any state, as specified by Q star. Q star is going to tell us the maximum future reward that we can get from any of our actions, so we should just take a policy that's following this and just taking the action that's going to lead to the best reward. Okay, so how can we solve for this optimal policy? So, one way we can solve for this is something called a value iteration algorithm, where we're going to use this Bellman equation as an iterative update. So at each step, we're going to refine our approximation of Q star by trying to enforce the Bellman equation. And so, under some mathematical conditions, we also know that this sequence Q_i of our Q-functions is going to converge to our optimal Q star as i approaches infinity. And so this works well, but what's the problem with this? Well, an important problem is that this is not scalable. Right? We have to compute Q of s, a here for every state, action pair in order to make our iterative updates. Right, but then this is a problem if, for example, we look at the state of, say, an Atari game like we had earlier; it's going to be your screen of pixels.
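As a quick aside, on the same toy one-dimensional grid world as in the sketch above, the value iteration idea (repeatedly applying the Bellman backup to a Q-table for every state, action pair) might look like the following; this per-entry table update is exactly what stops being feasible once the state is, say, a whole screen of pixels:

```python
import numpy as np

# Tabular Bellman backup on the toy 1-D grid world: 5 states, actions
# move left/right, reward -1 per step, state 4 terminal.
n_states, actions, gamma = 5, [-1, +1], 0.9
Q = np.zeros((n_states, len(actions)))

for _ in range(100):                          # iterate the Bellman update toward Q*
    Q_new = np.zeros_like(Q)
    for s in range(n_states - 1):             # state 4 is terminal, its Q stays 0
        for i, a in enumerate(actions):
            s_next = int(np.clip(s + a, 0, n_states - 1))
            r = -1.0
            # Bellman backup: value of (s, a) = reward + gamma * best value at s'
            Q_new[s, i] = r + gamma * Q[s_next].max()
    Q = Q_new

print(Q.argmax(axis=1))                       # best action index per state (1 = move right)
```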
And this is a huge state space, and it's basically computationally infeasible to compute this for the entire state space. Okay, so what's the solution to this? Well, we can use a function approximator to estimate Q of s, a so, for example, a neural network, right. So, we've seen before that any time, if we have some really complex function that don't know, that we want to estimate, a neural network is a good way to estimate this. Okay, so this is going to take us to our formulation of Q-learning that we're going to look at. And so, what we're going to do is we're going to use a function approximator in order to estimate our action value function. Right? And if this function approximator is a deep neural network, which is what's been used recently, then this is going to be called deep Q-learning. And so this is something that you'll hear around as one of the common approaches to deep reinforcement learning that's in use. Right, and so in this case, we also have our function parameters theta here, so our Q-value function is determined by these weights, theta, of our neural network. Okay, so given this function approximation, how do we solve for our optimal policy? So remember that we want to find a Q-function that's satisfying the Bellman equation. Right, and so we want to enforce this Bellman equation to happen, so what we can do when we have this neural network approximating our Q-function is that we can train this where our loss function is going to try and minimize the error of our Bellman equation, right? Or how far q of s, a is from its target, which is the Y_i here, the right hand side of the Bellman equation that we saw earlier. So, we're basically going to take these forward passes of our loss function, trying to minimize this error and then our backward pass, our gradient update, is just going to be you just take the gradient of this loss, with respect to our network parameter's theta. Right, and so our goal is again to have this effect as we're taking gradient steps of iteratively trying to make our Q-function closer to our target value. So, any questions about this? Okay. So let's look at a case study of an example where one of the classic examples of deep reinforcement learning where this approach was applied. And so we're going to look at this problem that we saw earlier of playing Atari games, where our objective was to complete the game with the highest score and remember our state is going to be the raw pixel inputs of the game state, and we can take these actions of moving left, right, up, down, or whatever actions of the particular game. And our reward at each time step, we're going to get a reward of our score increase or decrease that we got at this time step, and so our cumulative total reward is this total reward that we'll usually see at the top of the screen. Okay, so the network that we're going to use for our Q-function is going to look something like this, right, where we have our Q-network, with weight's theta. And then our input, our state s, is going to be our current game screen. And in practice we're going to take a stack of the last four frames, so we have some history. And so we'll take these raw pixel values, we'll do some, you know, RGB to gray-scale conversions, some down-sampling, some cropping, so, some pre-processing. And what we'll get out of this is this 84 by 84 by four stack of the last four frames. Yeah, question. 
[inaudible question from audience] Okay, so the question is, are we saying here that our network is going to approximate our Q-value function for different state, action pairs, for example, four of these? Yeah, that's correct. We'll talk about that in a few slides. [inaudible question from audience] So, no. So, we don't have a Softmax layer after the fully-connected layer, because here our goal is to directly predict our Q-value functions. [inaudible question from audience] Q-values. [inaudible question from audience] Yes, so it's more doing regression to our Q-values. Okay, so we have our input to this network and then on top of this, we're going to have a couple of familiar convolutional layers, and a fully-connected layer, so here we have some eight-by-eight convolutions and we have some four-by-four convolutions. Then we have an FC-256 layer, so this is just a standard kind of network that you've seen before. And then, finally, our last fully-connected layer has a vector of outputs, which is corresponding to your Q-value for each action, right, given the state that you've input. And so, for example, if you have four actions, then here we have this four-dimensional output corresponding to Q of the current s with a-one, and then a-two, a-three, and a-four. Right, so this is going to be one scalar value for each of our actions. And then the number of actions that we have can vary from, for example, 4 to 18, depending on the Atari game. And one nice thing here is that using this network structure, a single feedforward pass is able to compute the Q-values for all actions from the current state. And so this is really efficient. Right, so basically we take our current state in and then, because we have one Q-value for each action in our output layer, we're able to do one pass and get all of these values out. And then in order to train this, we're just going to use our loss function from before. Remember, we're trying to enforce this Bellman equation, and so, on our forward pass, in our loss function we're going to try and iteratively make our Q-value close to the target value that it should have. And then our backward pass is just directly taking the gradient of this loss function that we have and then taking a gradient step based on that. So one other thing that's used here that I want to mention is something called experience replay. And so this addresses a problem with just using the plain Q-network that I just described, which is that learning from batches of consecutive samples is bad. And the reason for this is that if we're just playing the game, taking consecutive samples of the state, action, reward transitions that we get and training with these, well, all of these samples are correlated, and so this leads to inefficient learning, first of all. And also, because of this, our current Q-network parameters determine the policy that we're going to follow, so they determine the next samples that we're going to get and use for training. And so this leads to problems where you can have bad feedback loops. So, for example, if currently the maximizing action is to move left, well, this is going to bias all of my upcoming training examples to be dominated by samples from the left-hand side. And so this is a problem, right?
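Before getting to the fix for this, here is a rough PyTorch-style sketch of the Q-network and Bellman-regression loss just described. The lecture specifies the 84 by 84 by 4 input, 8x8 and 4x4 convolutions, an FC-256 layer, and one Q-value per action; the filter counts (16 and 32) and the use of a mean-squared-error loss are assumptions on my part:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class QNetwork(nn.Module):
    def __init__(self, n_actions):
        super().__init__()
        self.conv1 = nn.Conv2d(4, 16, kernel_size=8, stride=4)   # input: 4 stacked 84x84 frames
        self.conv2 = nn.Conv2d(16, 32, kernel_size=4, stride=2)
        self.fc1 = nn.Linear(32 * 9 * 9, 256)                    # FC-256 layer
        self.fc2 = nn.Linear(256, n_actions)                     # one Q-value per action

    def forward(self, x):                      # x: (batch, 4, 84, 84)
        x = F.relu(self.conv1(x))
        x = F.relu(self.conv2(x))
        x = F.relu(self.fc1(x.flatten(start_dim=1)))
        return self.fc2(x)                     # (batch, n_actions), all Q-values in one pass

def q_learning_loss(q_net, batch, gamma=0.99):
    """Regress Q(s, a) toward the Bellman target r + gamma * max_a' Q(s', a')."""
    s, a, r, s_next, done = batch              # tensors: states, actions, rewards, next states, done flags
    q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():                      # the target is treated as a fixed value
        target = r + gamma * (1 - done) * q_net(s_next).max(dim=1).values
    return F.mse_loss(q_sa, target)
```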
And so the way that we are going to address these problems is by using something called experience replay, where we're going to keep this replay memory table of state, action, reward, next state transitions that we have, and we're going to continuously update this table with new transitions that we're getting as game episodes are played, as we're getting more experience. Right, and so now what we can do is that we can train our Q-network on random minibatches of transitions from the replay memory. Right, so instead of using consecutive samples, we're now going to sample randomly from across these transitions that we've accumulated, and this breaks the correlation problems that we had earlier. And then also, as another side benefit, each of these transitions can also contribute to potentially multiple weight updates. We're just sampling from this table, and so we could sample one multiple times. And so this is also going to lead to greater data efficiency. Okay, so let's put this all together and let's look at the full algorithm for deep Q-learning with experience replay. So we're going to start off by initializing our replay memory to some capacity that we choose, N, and then we're also going to initialize our Q-network, just with our random weights or initial weights. And then we're going to play M episodes, or full games. These are going to be our training episodes. And then what we're going to do is we're going to initialize our state, using the starting game screen pixels at the beginning of each episode. And remember, we go through the pre-processing step to get to our actual input state. And then for each time step of the game that we're currently playing, we're going to, with a small probability, select a random action, so one thing that's important in these algorithms is to have sufficient exploration, so we want to make sure that we are sampling different parts of the state space. And otherwise, we're going to select the greedy action from the current policy. Right, so most of the time we'll take the greedy action that we think is a good policy of the type of actions that we want to take and states that we want to see, and with a small probability we'll sample something random. Okay, so then we'll take this action a_t, and we'll observe the next reward and the next state. So r_t and s_t plus one. And then we'll take this and we'll store this transition in our replay memory that we're building up. And then we're going to train the network a little bit. So we're going to do experience replay and we'll sample a random minibatch of transitions from the replay memory, and then we'll perform a gradient descent step on this. Right, so this is going to be our full training loop. We're going to be continuously playing this game and then also sampling minibatches, using experience replay to update the weights of our Q-network, and then continuing in this fashion. Okay, so let's see. Let's see if I can, is this playing? Okay, so let's take a look at this deep Q-learning algorithm from Google DeepMind, trained on the Atari game Breakout. Alright, so it's saying here that our input is just going to be our state, the raw game pixels. And so here we're looking at what's happening at the beginning of training. So we've just started training a bit. And right, so it looks like it's learned to kind of hit the ball, but it's not doing a very good job of sustaining it.
But it is looking for the ball. Okay, so now after some more training, it looks like a couple hours. Okay, so now it's learning to do a pretty good job here. So it's able to continuously follow this ball and be able to remove most of the blocks. Right, so after 240 minutes. Okay, so here it's found the pro strategy, right? You want to get all the way to the top and then have it go by itself. Okay, so this is an example of using deep Q-learning in order to train an agent to be able to play Atari games. It's able to do this on many Atari games, and so you can check out some more of this online. Okay, so we've talked about Q-learning. But there is a problem with Q-learning, right? It can be challenging, and what's the problem? Well, the problem can be that the Q-function is very complicated. Right, so we're saying that we want to learn the value of every state, action pair. So, let's say you have something like, for example, a robot wanting to grasp an object. Right, you're going to have a really high dimensional state. Let's say you have even just all of your joint positions and angles. Right, and so learning the exact value of every state, action pair that you have can be really, really hard to do. But on the other hand, your policy can be much simpler. Right, like what you want this robot to do is maybe just this simple motion of closing your hand, right? Just move your fingers in this particular direction and keep going. And so, that leads to the question of can we just learn this policy directly? Right, is it possible, maybe, to just find the best policy from a collection of policies, without trying to go through this process of estimating your Q-value and then using that to infer your policy? So, this is an approach that we're going to call policy gradients. And so, formally, let's define a class of parametrized policies, parametrized by weights theta, and for each policy let's define the value of the policy. So, J, our value given parameters theta, is going to be our expected cumulative sum of future rewards that we care about. So, the same reward that we've been using. And so our goal then, under this set-up, is that we want to find an optimal policy theta star, which is the arg max over theta of J of theta. So we want to find the policy parameters that give our best expected reward. So, how can we do this? Any ideas? Okay, well, what we can do is just gradient ascent on our policy parameters, right? We've learned that given some objective and some parameters, we can just use gradient ascent in order to continuously improve our parameters. And so let's talk more specifically about how we can do this, with what we're going to call here the REINFORCE algorithm. So, mathematically, we can write out our expected future reward over trajectories, and so we're going to sample these trajectories of experience, right, like for example the episodes of game play that we talked about earlier: s-zero, a-zero, r-zero, s-one, a-one, r-one, and so on, using some policy pi of theta. Right, and then, for each trajectory we can compute a reward for that trajectory. It's the cumulative reward that we got from following this trajectory. And then the value of a policy, pi sub theta, is going to be just the expected reward of these trajectories that we can get from following pi sub theta.
So that's here, this expectation over trajectories that we can get, sampling trajectories from our policy. Okay. So, we want to do gradient ascent, right? So let's differentiate this. Once we differentiate this, then we can just take gradient steps, like normal. So, the problem is that now if we try and just differentiate this exactly, this is intractable, right? So, the gradient of an expectation is problematic when p is dependent on theta here, because here we want to take this gradient of p of tau given theta, but this sits inside an integral over tau. Right, so this is intractable. However, we can use a trick here to get around this. And this trick is taking this gradient that we want, of p, and rewriting it by just multiplying by one, multiplying top and bottom both by p of tau given theta. Right, and then if we look at these terms that we have now, in the way that I've written this, on the left and the right, this is actually going to be equivalent to p of tau times our gradient with respect to theta of log of p. Right, because the gradient of the log of p is just going to be one over p times the gradient of p. Okay, so if we then inject this back into our expression that we had earlier for this gradient, we can see what this will actually look like, right, because now we have a gradient of log p times our probabilities of all of these trajectories, and then we take this integral here over tau. This is now going to be an expectation over our trajectories tau, and so what we've done here is that we've taken a gradient of an expectation and we've transformed it into an expectation of gradients. Right, and so now we can use sample trajectories that we can get in order to estimate our gradient. And so we do this using Monte Carlo sampling, and this is one of the core ideas of REINFORCE. Okay, so looking at this expression that we want to compute, can we compute these quantities that we had here without knowing the transition probabilities? Alright, so we have that p of tau is going to be the probability of a trajectory. It's going to be the product of all of our transition probabilities of the next state that we get, given our current state and action, as well as the probabilities of the actions that we've taken under our policy pi. Right, so we're going to multiply all of these together and get our probability of our trajectory. So for this log of p of tau that we want to compute, we just take this log and it will separate out into a sum by pushing the logs inside. And then here, when we differentiate this, we can see we want to differentiate with respect to theta, but this first term that we have here, log p of the state transition probabilities, has no theta term in it, and so the only place where we have theta is the second term that we have, log of pi sub theta of our action given our state, and so this is the only term that we keep in our gradient estimate. And so we can see here that this doesn't depend on the transition probabilities, right, so we actually don't need to know our transition probabilities in order to compute our gradient estimate. And therefore, when we're sampling these, for any given trajectory tau, we can estimate J of theta using this gradient estimate. This is shown here for a single trajectory from what we had earlier, and then we can also sample over multiple trajectories to get the expectation.
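As a rough sketch of this REINFORCE estimator, here is how the "expectation of gradients" form is typically turned into code with automatic differentiation: build a surrogate objective of log-probabilities weighted by the trajectory reward and differentiate that. The tiny policy network, its sizes, and the optimizer settings are illustrative assumptions, not the lecture's setup:

```python
import torch
import torch.nn as nn

# Small stochastic policy: state (4-dim here) -> action logits over 2 actions.
policy = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

def reinforce_update(trajectories):
    """trajectories: list of (states, actions, rewards) tensors, one per sampled episode."""
    loss = 0.0
    for states, actions, rewards in trajectories:
        dist = torch.distributions.Categorical(logits=policy(states))
        log_probs = dist.log_prob(actions)     # log pi_theta(a_t | s_t) at every step
        R = rewards.sum()                      # total reward of the whole trajectory
        # Maximizing E[sum_t log pi * R] by gradient ascent == minimizing its negative.
        loss = loss - log_probs.sum() * R
    loss = loss / len(trajectories)            # Monte Carlo average over sampled trajectories
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```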
Okay, so given this gradient estimator that we've derived, the interpretation that we can make from this here is that if our reward for a trajectory is high, if the reward that we got from taking this sequence of actions was good, then let's push up the probabilities of all the actions that we've seen. Right, we're just going to say that these were good actions that we took. And then if the reward is low, we want to push down these probabilities. We want to say these were bad actions, let's try and not sample these so much. Right, and so we can see that's what's happening here, where we have pi of a given s. This is the likelihood of the actions that we've taken, and then we're going to scale this: we're going to take the gradient, and the gradient is going to tell us how much we should change the parameters in order to increase the likelihood of our action a, right? And then we're going to take this and scale it by how much reward we actually got from it, so how good these actions were in reality. Okay, so this might seem simplistic, to say that if a trajectory is good, then we're saying here that all of its actions were good. Right? But, in expectation, this actually averages out. So we have an unbiased estimator here, and so if you have many samples of this, then we will get an accurate estimate of our gradient. And this is nice because we can just take gradient steps and we know that we're going to be improving our objective and getting closer to at least some local optimum of our policy parameters theta. Alright, but there is a problem with this, and the problem is that this also suffers from high variance, because this credit assignment is really hard. Right, we're saying that given a reward that we got, we're going to say all of the actions were good, and we're just going to hope that this assignment of which actions were actually the best actions, that mattered, is going to average out over time. And so this is really hard and we need a lot of samples in order to have a good estimate. Alright, so this leads to the question of, is there anything that we can do to reduce the variance and improve the estimator? And so variance reduction is an important area of research in policy gradients, in coming up with ways to improve the estimator and require fewer samples. Alright, so let's look at a couple of ideas of how we can do this. So given our gradient estimator, the first idea is that we can push up the probabilities of an action only by its effect on future rewards from that state, right? So now, instead of scaling the likelihood of this action by the total reward of its trajectory, let's look more specifically at just the sum of rewards coming from this time step on to the end, right? And so, this is basically saying that how good an action is, is only specified by how much future reward it generates. Which makes sense. Okay, so a second idea that we can also use is using a discount factor in order to ignore delayed effects. Alright, so here we've added back in this discount factor that we've seen before, which is saying that our discount factor is going to tell us how much we care about just the rewards that are coming up soon, versus rewards that come much later on. Right, so we're now going to say how good or bad an action is by looking more at the rewards it generates in the immediate near future and down-weighting the ones that come later on.
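A minimal sketch of these first two variance-reduction ideas (rewards-to-go plus a discount factor) might look like this; it just swaps the single whole-trajectory reward for a per-time-step discounted future return:

```python
import torch

def reward_to_go(rewards, gamma=0.99):
    """Discounted sum of future rewards from each time step onward: scale
    log pi(a_t | s_t) only by rewards that come after time t, down-weighted
    by the discount factor."""
    returns = torch.zeros_like(rewards)
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma * running
        returns[t] = running
    return returns

def reinforce_loss_with_reward_to_go(log_probs, rewards, gamma=0.99):
    # Each action's log-probability is now weighted by its own future return,
    # rather than by the total reward of the whole trajectory.
    return -(log_probs * reward_to_go(rewards, gamma)).sum()
```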
Okay, so these are some straightforward ideas that are generally used in practice. So, a third idea is this idea of using a baseline in order to reduce your variance. And so, a problem with just using the raw value of your trajectories is that this isn't necessarily meaningful, right? So, for example, if your rewards are all positive, then you're just going to keep pushing up the probabilities of all your actions. And of course, you'll push them up to various degrees, but what's really important is whether a reward is better or worse than what you're expecting to be getting. Alright, so in order to address this, we can introduce a baseline function that's dependent on the state. Right, so this baseline function tells us what our guess is for what we expect to get from this state, and then our scaling factor that we're going to use to push up or down our probabilities can now just be our expected sum of future rewards minus this baseline, so now it's relative: how much better or worse is the reward that we got compared to what we expected. And so how can we choose this baseline? Well, a very simple baseline, the simplest you can use, is just taking a moving average of rewards that you've experienced so far. So you can even do this over all trajectories, and this is just an average of what rewards I have been seeing as I've been training and playing these episodes. Right, and so this gives some idea of whether the reward that I currently got was relatively better or worse. And so there are some variants of this that you can use, but the variance reduction ideas that we've seen so far are all used in what's typically called the "vanilla REINFORCE" algorithm. Right, so looking at the cumulative future reward, having a discount factor, and some simple baselines. Now let's talk about how we can think about this idea of a baseline and potentially choose better baselines. Right, so if we're going to think about what's a better baseline that we can choose, what we want to do is we want to push up the probability of an action from a state if the action was better than the expected value of what we should get from that state. So, thinking about the value of what we're going to expect from the state, what does this remind you of? Does this remind you of anything that we talked about earlier in this lecture? Yes. [inaudible from audience] Yeah, so the value functions, right? The value functions that we talked about with Q-learning. So, exactly. So Q-functions and value functions, and so the intuition is that, well, we're happy with taking an action in a state s if our Q-value of taking that specific action from this state is larger than the value function, the expected value of the cumulative future reward that we can get from this state. Right, so this means that this action was better than other actions that we could've taken. And on the contrary, we're unhappy with this action if this difference is negative or small. Right, so now if we plug this in as our scaling factor for how much we want to push up or down the probabilities of our actions, then we can get this estimator here.
Right, so, it's going to be exactly the same as before, but now where we've had before our cumulative expected reward, with our various reduction, variance reduction techniques and baselines in, here we can just plug in now this difference of how much better our current action was, based on our Q-function minus our value function from that state. Right, but what we talked about so far with our REINFORCE algorithm, we don't know what Q and V actually are. So can we learn these? And the answer is yes, using Q-learning. What we've already talked about before. So we can combine policy gradients while we've just been talking about, with Q-learning, by training both an actor, which is the policy, as well as a critic, right, a Q-function, which is going to tell us how good we think a state is, and an action in a state. Right, so using this in approach, an actor is going to decide which action to take and then the critic, or Q-function, is going to tell the actor how good its action was and how it should adjust. And so, and this also alleviates a little bit of the task of this critic compared to the Q-learning problems that we talked about earlier of having to have this learning a Q-value for every state, action pair, because here it only has to learn this for the state-action pairs that are generated by the policy. It only needs to know this where it matters for computing this scaling factor. Right, and then we can also, as we're learning this, incorporate all of the Q-learning tricks that we saw earlier, such as experience replay. And so, now I'm also going to just define this term that we saw earlier, Q of s of a, how much, how good was an action in a given state, minus V of s? Our expected value of how good the state is by this term advantage function. Right, so the advantage function is how much advantage did we get from playing this action? How much better the action was than expected. So, using this, we can put together our full actor-critic algorithm. And so what this looks like, is that we're going to start off with by initializing our policy parameters theta, and our critic parameters that we'll call phi. And then for each, for iterations of training, we're going to sample M trajectories, under the current policy. Right, we're going to play our policy and get these trajectories as s-zero, a-zero, r-zero, s-one and so on. Okay, and then we're going to compute the gradients that we want. Right, so for each of these trajectories and in each time step, we're going to compute this advantage function, and then we're going to use this advantage function, right? And then we're going to use that in our gradient estimator that we showed earlier, and accumulate our gradient estimate that we have for here. And then we're also going to train our critic parameters phi by exactly the same way, so as we saw earlier, basically trying to enforce this value function, right, to learn our value function, which is going to be pulled into, just minimizing this advantage function and this will encourage it to be closer to this Bellman equation that we saw earlier, right? And so, this is basically just iterating between learning and optimizing our policy function, as well as our critic function. And so then we're going to update the gradients and then we're going to go through and just continuously repeat this process. 
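As a rough sketch of one actor-critic update in this spirit: here the critic estimates V(s) and the advantage is approximated as the observed discounted return minus V(s), which is a common simplified variant rather than exactly the Q-function critic described above; the network sizes and optimizer are also illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Actor outputs action logits; critic outputs a scalar value estimate V(s).
actor = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 2))
critic = nn.Sequential(nn.Linear(4, 32), nn.Tanh(), nn.Linear(32, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)

def actor_critic_update(states, actions, returns):
    """states: (T, 4), actions: (T,), returns: (T,) discounted rewards-to-go."""
    values = critic(states).squeeze(-1)                       # V(s_t)
    advantages = returns - values.detach()                    # how much better than expected
    dist = torch.distributions.Categorical(logits=actor(states))
    actor_loss = -(dist.log_prob(actions) * advantages).mean()  # policy gradient term
    critic_loss = F.mse_loss(values, returns)                 # move V(s) toward observed returns
    opt.zero_grad()
    (actor_loss + critic_loss).backward()
    opt.step()
```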
Okay, so now let's look at some examples of REINFORCE in action, and let's look first here at something called the Recurrent Attention Model, which is something that, which is a model also referred to as hard attention, but you'll see a lot in, recently, in computer vision tasks for various purposes. Right, and so the idea behind this is here, I've talked about the original work on hard attention, which is on image classification, and your goal is to still predict the image class, but now you're going to do this by taking a sequence of glimpses around the image. You're going to look at local regions around the image and you're basically going to selectively focus on these parts and build up information as you're looking around. Right, and so the reason that we want to do this is, well, first of all it has some nice inspiration from human perception in eye movement. Let's say we're looking at a complex image and we want to determine what's in the image. Well, you know, we might, maybe look at a low-resolution of it first, and then look specifically at parts of the image that will give us clues about what's in this image. And then, this approach of just looking at, looking around at an image at local regions, is also going to help you save computational resources, right? You don't need to process the full image. In practice, what usually happens is you look at a low-resolution image first, of a full image, to decide how to get started, and then you look at high-res portions of the image after that. And so this saves a lot of computational resources and you can think about, then, benefits of this to scalability, right, being able to, let's say process larger images more efficiently. And then, finally, this could also actually help with actual classification performance, because now you're able to ignore clutter and irrelevant parts of the image. Right? Like, you know, instead of always putting through your ConvNet, all the parts of your image, you can use this to, maybe, first prune out what are the relevant parts that I actually want to process, using my ConvNet. Okay, so what's the reinforcement learning formulation of this problem? Well, our state is going to be the glimpses that we've seen so far, right? Our what's the information that we've seen? Our action is then going to be where to look next in the image. Right, so in practice, this can be something like the x, y-coordinates, maybe centered around some fixed-sized glimpse that you want to look at next. And then the reward for the classification problem is going to be one, at the final time step, if our image is correctly classified, and zero otherwise. And so, because this glimpsing, taking these glimpses around the image is a non-differentiable operation, this is why we need to use reinforcement learning formulation, and learn policies for how to take these glimpse actions and we can train this using REINFORCE. So, given the state of glimpses so far, the core of our model is going to be this RNN that we're going to use to model the state, and then we're going to use our policy parameters in order to output the next action. Okay, so what this model looks like is we're going to take an input image. Right, and then we're going to take a glimpse at this image. So here, this glimpse is the red box here, and this is all blank, zeroes. And so we'll pass what we see so far into some neural network, and this can be any kind of network depending on your task. 
In the original experiments that I'm showing here, on MNIST, this is very simple, so you can just use a couple of small, fully-connected layers, but you can imagine for more complex images and other tasks you may want to use fancier ConvNets. Right, so you've passed this into some neural network, and then, remember I said we're also going to be integrating our state of, glimpses that we've seen so far, using a recurrent network. So, I'm just going to we'll see that later on, but this is going to go through that, and then it's going to output my x, y-coordinates, of where I'm going to see next. And in practice, this is going to be We want to output a distribution over actions, right, and so, what this is going to be it's going to be a gaussian distribution and we're going to output the mean. You can also output a mean and variance of this distribution in practice. The variance can also be fixed. Okay, so we're going to take this action that we're now going to sample a specific x, y location from our action distribution and then we're going to put this in to get the next, extract the next glimpse from our image. Right, so here we've moved to the end of the two, this tail part of the two. And so now we're actually starting to get some signal of what we want to see, right? Like, what we want to do is we want to look at the relevant parts of the image that are useful for classification. So we pass this through, again, our neural network layers, and then also through our recurrent network, right, that's taking this input as well as this previous hidden state, and we're going to use this to get a, so this is representing our policy, and then we're going to use this to output our distribution for the next location that we want to glimpse at. So we can continue doing this, you can see in this next glimpse here, we've moved a little bit more toward the center of the two. Alright, so it's probably learning that, you know, once I've seen this tail part of the two, that looks like this, maybe moving in this upper left-hand direction will get you more towards a center, which will also have a value, valuable information. And then we can keep doing this. And then finally, at the end, at our last time step, so we can have a fixed number of time steps here, in practice something like six or eight. And then at the final time step, since we want to do classification, we'll have our standard Softmax layer that will produce a distribution of probabilities for each class. And then here the max class was a two, so we can predict that this was a two. Right, and so this is going to be the set up of our model and our policy, and then we have our estimate for the gradient of this policy that we've said earlier we could compute by taking trajectories from here and using those to do back prop. And so we can just do this in order to train this model and learn the parameters of our policy, right? All of the weights that you can see here. Okay, so here's an example of a policies trained on MNIST, and so you can see that, in general, from wherever it's starting, usually learns to go closer to where the digit is, and then looking at the relevant parts of the digit, right? So this is pretty cool and this you know, follows kind of what you would expect, right, if you were to choose places to look next in order to most efficiently determine what digit this is. 
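A very rough sketch of this glimpse loop is below. Everything here, including the glimpse size, hidden size, number of glimpses, the fixed Gaussian variance, and the crude extract_glimpse helper, is a hypothetical stand-in just to show the structure: a glimpse network, an RNN that carries the state of what has been seen so far, a Gaussian over the next (x, y) location whose sampling step is where REINFORCE comes in, and a classifier at the final step:

```python
import torch
import torch.nn as nn

def extract_glimpse(img, loc, size=8):
    """Crude stand-in: crop a size x size patch around loc in [-1, 1]^2. img: (B, H, W)."""
    B, H, W = img.shape
    out = []
    for b in range(B):
        cy = max(0, min(H - size, int((loc[b, 1].item() + 1) / 2 * (H - size))))
        cx = max(0, min(W - size, int((loc[b, 0].item() + 1) / 2 * (W - size))))
        out.append(img[b, cy:cy + size, cx:cx + size].reshape(-1))
    return torch.stack(out)

class RecurrentAttention(nn.Module):
    def __init__(self, n_classes=10, hidden=128):
        super().__init__()
        self.hidden = hidden
        self.glimpse_net = nn.Sequential(nn.Linear(8 * 8 + 2, hidden), nn.ReLU())
        self.rnn = nn.GRUCell(hidden, hidden)        # integrates the glimpses seen so far
        self.loc_head = nn.Linear(hidden, 2)         # mean of a Gaussian over the next (x, y)
        self.cls_head = nn.Linear(hidden, n_classes)

    def forward(self, img, n_glimpses=6, sigma=0.1):
        B = img.size(0)
        h = img.new_zeros(B, self.hidden)
        loc = img.new_zeros(B, 2)                    # start glimpsing at the image center
        log_probs = []
        for _ in range(n_glimpses):
            g = extract_glimpse(img, loc)            # non-differentiable crop around loc
            h = self.rnn(self.glimpse_net(torch.cat([g, loc], dim=1)), h)
            dist = torch.distributions.Normal(self.loc_head(h), sigma)
            loc = dist.sample()                      # sampling step: trained with REINFORCE
            log_probs.append(dist.log_prob(loc).sum(dim=1))
        return self.cls_head(h), torch.stack(log_probs, dim=1)
```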
Right, and so this idea of hard attention, of recurrent attention models, has also been used in a lot of tasks in computer vision in the last couple of years, so you'll see this used, for example, in fine-grained image recognition. So, I mentioned earlier that one of the useful benefits of this can be both to save on computation as well as to ignore clutter and irrelevant parts of the image, and when you have fine-grained image classification problems, you usually want both of these. You want to keep high resolution, so that you can look at, you know, important differences. And then you also want to focus on these differences and ignore irrelevant parts. Yeah, question. [inaudible question from audience] Okay, so yeah, so the question is how is there computational efficiency, because we also have this recurrent neural network in place. So that's true, it depends on exactly what your problem is, what your network is, and so on, but you can imagine that if you had some really high-resolution image and you don't want to process the entire image with some huge ConvNet or some huge network, now you can get some savings by just focusing on specific smaller parts of the image. You only process those parts of the image. But, you're right that it depends on exactly what problem set-up you have. This has also been used in image captioning, so if we're going to produce a caption for an image, we can have the model use this attention mechanism to generate the caption, and what it usually ends up learning is these policies where it'll focus on specific parts of the image, in sequence, and as it focuses on each part, it'll generate some words, or the part of the caption, referring to that part of the image. And it's also been used in tasks such as visual question answering, where we ask a question about the image and we want the model to output some answer to the question, for example, I don't know, how many chairs are around the table? And so you can see how this attention mechanism might be a good type of model for learning how to answer these questions. Okay, so that was an example of policy gradients in these hard attention models. And so, now I'm going to talk about one more example that also uses policy gradients, which is learning how to play Go. Right, so DeepMind had this agent for playing Go, called AlphaGo, that's been in the news a lot over the past year and this year. So, sorry? [inaudible comment from audience] And yesterday, yes, that's correct. So this is very exciting, recent news as well. So last year, a first version of AlphaGo was put into a competition against one of the best Go players of recent years, Lee Sedol, and the agent was able to beat him four to one in a five-game match. And actually, right now, there's another match with Ke Jie, who is the current world number one, and it's a best-of-three in China right now. And so the first game was yesterday. AlphaGo won. I think it was by just half a point, and so there's two more games to watch. These are all live-streamed, so you guys should also go online and watch these games. It's pretty interesting to hear the commentary. But, so what is this AlphaGo agent, right, from DeepMind? And it's based on a lot of what we've talked about so far in this lecture.
And what it is, is a mix of supervised learning and reinforcement learning, as well as a mix of an older method for Go, Monte Carlo Tree Search, with recent deep RL approaches. So, how does AlphaGo beat the Go world champion? To train AlphaGo, what it takes as input is a featurization of the board. Basically, your board and the positions of the stones on it are the natural state representation, and what they do to improve performance a little bit is featurize this into more channels: channels for all the different stone colors — that's basically the configuration of the board — plus channels for, for example, which moves are legal, some bias channels, various things like that. Then, given this state, it first trains a network initialized with supervised training from professional Go games: given the current board configuration, or its featurization, what's the correct next action to take? So, given examples of professional games, just collected over time, we can take all of these professional Go moves and train a standard supervised mapping from board state to the action to take. They take this, which is a pretty good start, and then use it to initialize a policy network. The policy network has exactly the same structure: the input is your board state and the output is the action you're going to take. And this is the setup for the policy gradients that we just saw, so now we just continue training this using policy gradients. It does this reinforcement learning training by self-play against random previous iterations of itself, and the reward it gets is one if it wins and negative one if it loses. What we're also going to do is learn a value network — something like a critic. And then the final AlphaGo combines all of these together: the policy and value networks, along with a Monte Carlo Tree Search algorithm, in order to select actions by lookahead search. So, after putting all of this together, the value of a node — of where you are in play and what you do next — is a combination of your value function and the rollout outcome that you compute from standard Monte Carlo Tree Search rollouts. Okay, so these are basically the components of AlphaGo. If you're interested in reading more about this, there's a Nature paper about it from 2016, and the version of AlphaGo being used in these matches was trained with, I think, a couple thousand CPUs plus a couple hundred GPUs, so it's a huge amount of training that's going on. And yeah, you should follow the games this week; it's pretty exciting. Okay, so in summary, today we've talked about policy gradients, which are very general: you're just directly doing gradient ascent on your policy parameters, so this works for a large class of problems, but it also suffers from high variance, so it requires a lot of samples, and the challenge here is sample efficiency.
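Going back to the leaf evaluation mentioned a moment ago, here is a toy sketch (not DeepMind's code) of how an MCTS leaf can mix the value network's estimate with a rollout outcome. The function names, the stub implementations, and the mixing weight lam are all hypothetical stand-ins; the real details are in the 2016 Nature paper.

```python
import random

def fast_rollout(state):
    # Stand-in: play quick random legal moves to the end, return +1 (win) or -1 (loss).
    return random.choice([+1, -1])

def value_net(state):
    # Stand-in for the learned value network's scalar output in [-1, 1].
    return 0.0

def leaf_value(state, lam=0.5):
    # Mix the learned value estimate with the rollout outcome.
    return (1 - lam) * value_net(state) + lam * fast_rollout(state)

print(leaf_value(state="empty board"))
```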
We also talked about Q-learning, which doesn't always work — it's sometimes harder to get it to work, because of the problem we talked about earlier, where you're trying to compute exact state-action values in very high dimensions — but when it does work, for problems like the Atari games we saw earlier, it's usually more sample efficient than policy gradients. Right, and one of the challenges in Q-learning is making sure that you're doing sufficient exploration. Yeah? [inaudible question from audience] Oh, so for Q-learning, can you do this process where you start off with some supervised training? I guess the direct approach for Q-learning doesn't do that, because you're trying to regress to Q-values rather than doing policy gradients over a distribution, but I think there are ways in which you can massage this type of thing to also bootstrap, because bootstrapping in general, or behavior cloning, is a good way to warm-start these policies. Okay, so we've talked about policy gradients and Q-learning, and here's another look at some of the guarantees that you have. One thing we do know that's really nice about policy gradients is that they will always converge to a local optimum of J of theta — a local maximum, since we're just directly doing gradient ascent — and this local optimum is often pretty good. In Q-learning, on the other hand, we don't have any such guarantees, because here we're trying to approximate the Bellman equation with a complicated function approximator, and this is why Q-learning is a little bit trickier to train and to apply to a wide range of problems. Alright, so today you got a very brief, high-level overview of reinforcement learning and some major classes of algorithms in RL. And next time we're going to have a guest lecture from Song Han, who's done a lot of pioneering work in model compression and energy-efficient deep learning, and he's going to talk about some of this. Thank you.
- Okay. Can everyone hear me? Okay. Sorry for the delay. I had a bit of technical difficulty. Today was the first time I was trying to use my new Touch Bar MacBook Pro for presenting, and none of the adapters were working, so I had to switch laptops at the last minute. So, thanks, and sorry about that. So, today is lecture 10. We're talking about recurrent neural networks. As usual, a couple of administrative notes. We're working hard on assignment one grading; those grades will probably be out sometime later today. Hopefully they can get out before the A2 deadline — that's what I'm hoping for. On a related note, assignment two is due today at 11:59 p.m. So, who's done with that already? About half of you. So, you'll remember I did warn you when the assignment went out that it was quite long, and to start early, so you were warned about that. But hopefully you have some late days left. Also, as another reminder, the midterm will be in class on Tuesday. If you look around the lecture hall, there are not enough seats in this room to seat all the enrolled students in the class, so we'll actually be having the midterm in several other lecture halls across campus, and we'll be sending out more details on exactly where to go in the next couple of days. Another announcement: we've been working on this fun bit of extra credit for you to play with that we're calling the training game. This is a cool browser-based experience where you can go in and interactively train neural networks and tweak the hyperparameters during training. This should be a really cool interactive way for you to practice some of the hyperparameter tuning skills that we've been talking about in the last couple of lectures. This is not required, but I think it will be a really useful way to gain a little more intuition into how some of these hyperparameters work for different types of datasets in practice. We're still working on getting all the bugs worked out of this setup, and we'll probably send out more instructions on exactly how it will work in the next couple of days. Again, not required, but please do check it out; I think it'll be really fun and a really cool thing for you to play with, and we'll give you a bit of extra credit if you end up working with it and doing a couple of runs. We'll send out more details about this soon once we get all the bugs worked out. As a reminder, last time we were talking about CNN architectures. We walked through the timeline of some of the various winners of the ImageNet classification challenge. The breakthrough result, as we saw, was the AlexNet architecture in 2012, which was an eight layer convolutional network. It did amazingly well, and it sort of kick-started this whole deep learning revolution in computer vision and brought a lot of these models into the mainstream. Then we skipped ahead a couple of years and saw that in the 2014 ImageNet challenge we had these two really interesting models, VGG and GoogLeNet, which were much deeper. VGG had a 16 and a 19 layer model, and GoogLeNet was, I believe, a 22 layer model. Although one thing that is kind of interesting about these models is that the 2014 ImageNet challenge was right before batch normalization was invented.
So at this time, before the invention of batch normalization, training these relatively deep models of roughly twenty layers was very challenging. In fact, both of these models had to resort to a little bit of hackery to get their deep models to converge. For VGG, they had the 16 and 19 layer models, but they actually first trained an 11 layer model, because that was what they could get to converge, and then added some extra random layers in the middle and continued training, to actually train the 16 and 19 layer models. So managing this training process was very challenging in 2014, before the invention of batch normalization. Similarly, for GoogLeNet, we saw that GoogLeNet has these auxiliary classifiers stuck into lower layers of the network. These were not really needed to get good classification performance; they were just a way to cause extra gradient to be injected directly into the lower layers of the network. This, again, was before the invention of batch normalization, and once you have these networks with batch normalization, you no longer need these slightly ugly hacks to get deeper models to converge. Then we also saw, in the 2015 ImageNet challenge, this really cool model called ResNet — residual networks that have shortcut connections. They have these little residual blocks where we take our input, pass it through the residual block, and then add the input to the output of the convolutional layers inside the block. This is kind of a funny architecture, but it actually has two really nice properties. One is that if we just set all the weights in the residual block to zero, then the block is computing the identity, so in some sense it's relatively easy for this model to learn not to use layers that it doesn't need. In addition, it adds a nice interpretation to L2 regularization in the context of these neural networks: once you put L2 regularization on the weights of your network, that drives all the parameters towards zero. In a standard convolutional architecture, driving the weights towards zero maybe doesn't make much sense, but in the context of a residual network, if you drive all the parameters towards zero, that's kind of encouraging the model to not use layers that it doesn't need, because it will just drive those residual blocks towards the identity where they're not needed for classification. The other really useful property of these residual networks has to do with the gradient flow in the backward pass. Remember what happens at these addition gates in the backward pass: when upstream gradient comes in through an addition gate, it splits and forks along the two different paths. So when upstream gradient comes in, it takes one path through the convolutional blocks, but there is also a direct connection for the gradient through the residual connection. So when you imagine stacking many of these residual blocks on top of each other, and our network ends up with potentially hundreds of layers, these residual connections give a sort of gradient superhighway for gradients to flow backward through the entire network. And this allows it to train much easier and much faster.
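Here is a minimal numpy sketch of both points: with zero weights the block computes the identity, and the addition splits the upstream gradient along both paths. This is an illustrative toy, with F standing in for the block's conv layers as two linear layers and a ReLU, not the exact ResNet block.

```python
import numpy as np

def residual_forward(x, W1, W2):
    a = np.maximum(0, x @ W1)       # first "conv" layer + ReLU
    f = a @ W2                      # second "conv" layer
    return f + x                    # shortcut: output = F(x) + x  (W1 = W2 = 0 gives the identity)

def residual_backward(dout, x, W1, W2):
    # Gradient through the addition gate flows both into F(x) and directly to x.
    a = np.maximum(0, x @ W1)
    da = dout @ W2.T                # backprop through the second layer
    da[a <= 0] = 0                  # backprop through the ReLU
    dx_through_F = da @ W1.T        # backprop through the first layer
    return dout + dx_through_F      # direct shortcut path + path through the block

x = np.random.randn(8, 32)
W1 = np.random.randn(32, 32) * 0.01
W2 = np.random.randn(32, 32) * 0.01
out = residual_forward(x, W1, W2)
dx = residual_backward(np.ones_like(out), x, W1, W2)
```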
And it actually allows these things to converge reasonably well, even when the model is potentially hundreds of layers deep. This idea of managing gradient flow in your models is actually super important everywhere in machine learning, and super prevalent in recurrent networks as well, so we'll definitely revisit this idea of gradient flow later in today's lecture. We also saw a couple of other more exotic, more recent CNN architectures last time, including DenseNet and FractalNet, and once you think about these architectures in terms of gradient flow, they make a little bit more sense. Things like DenseNet and FractalNet are adding additional shortcut or identity connections inside the model, and if you think about what happens in the backward pass for these models, these additional funny topologies are basically providing direct paths for gradients to flow from the loss at the end of the network more easily into all the different layers of the network. So again, this idea of managing gradient flow properly in your CNN architectures is something that we've really seen a lot more of in the last couple of years, and will probably see more of moving forward as more exotic architectures are invented. We also saw this kind of nice plot of the number of flops versus the number of parameters versus the run time of these various models, and there are some interesting characteristics that you can dive in and see from it. One is that VGG and AlexNet have a huge number of parameters, and these parameters come almost entirely from the fully connected layers of the models. AlexNet has something like roughly 62 million parameters, and if you look at the first fully connected layer, it goes from an activation volume of six by six by 256 into a fully connected vector of 4096. So if you imagine what the weight matrix needs to look like at that layer, the weight matrix is gigantic: its number of entries is six times six times 256 times 4096, and if you multiply that out, you see that that single layer has about 38 million parameters. So more than half of the parameters of the entire AlexNet model are sitting in that one fully connected layer, and if you add up all the parameters in the fully connected layers of AlexNet, including the other fully connected layers, you see that something like 59 of the 62 million parameters in AlexNet are sitting in these fully connected layers. So when we move to other architectures, like GoogLeNet and ResNet, they do away with a lot of these large fully connected layers in favor of global average pooling at the end of the network, and this allows these nicer architectures to really cut down the parameter count. So that was our brief recap of the CNN architectures that we saw last lecture, and today we're going to move to one of my favorite topics to talk about, which is recurrent neural networks. So far in this class we've seen what I like to think of as a vanilla feed-forward network: all of our network architectures have this flavor, where we receive some input, and that input is a fixed-size object like an image or a vector. That input is fed through some set of hidden layers and produces a single output, like a set of classification scores over a set of categories.
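As a quick aside, the arithmetic behind the AlexNet parameter counts quoted above is easy to check; the snippet below just multiplies out the weight-matrix shapes (biases ignored) and is only meant to ground those numbers.

```python
# First fully connected layer: 6x6x256 conv activation volume -> 4096 units.
fc6_weights = 6 * 6 * 256 * 4096
print(fc6_weights)                # 37,748,736  (~38 million)

# All three fully connected layers: fc6 (9216 -> 4096), fc7 (4096 -> 4096), fc8 (4096 -> 1000).
fc_total = 6 * 6 * 256 * 4096 + 4096 * 4096 + 4096 * 1000
print(fc_total)                   # 58,621,952  (~59 of AlexNet's ~62 million parameters)
```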
But in some contexts in machine learning, we want to have more flexibility in the types of data that our models can process. Once we move to this idea of recurrent neural networks, we have a lot more opportunities to play around with the types of input and output data that our networks can handle. So once we have recurrent neural networks, we can do what we call one-to-many models, where maybe our input is some object of fixed size, like an image, but now our output is a sequence of variable length, such as a caption. Different captions might have different numbers of words, so our output needs to be variable in length. We also might have many-to-one models, where our input could be variably sized. This might be something like a piece of text, where we want to say what the sentiment of that text is, whether it's positive or negative. Or, in a computer vision context, you might imagine taking as input a video, which might have a variable number of frames; now we want to read this entire video of potentially variable length, and then, at the end, make a classification decision about what kind of activity or action is going on in that video. We might also have problems where we want both the input and the output to be variable in length. We see something like this in machine translation, where our input is maybe a sentence in English, which could have a variable length, and our output is maybe a sentence in French, which also could have a variable length — and, crucially, the length of the English sentence might be different from the length of the French sentence. So we need models that have the capacity to accept variable-length sequences on both the input and the output. Finally, we might also consider problems where our input is variable in length, like a video sequence with a variable number of frames, and we want to make a decision for each element of that input sequence. In the context of videos, that might mean making a classification decision for every frame of the video. Recurrent neural networks are this kind of general paradigm for handling variable-sized sequence data that allows us to pretty naturally capture all of these different types of setups in our models. Recurrent neural networks are actually useful even for some problems that have a fixed-size input and a fixed-size output. In this example, we might want to do sequential processing of our input: we're receiving a fixed-size input like an image, and we want to make a classification decision about what number is being shown in the image. But now, rather than just doing a single feed-forward pass and making the decision all at once, this network actually looks around the image, taking various glimpses of different parts of it, and after making some series of glimpses, it makes its final decision as to what number is present. So here, even though our input was an image and our output was a classification decision, even in this context, this idea of being able to handle variable-length processing with recurrent neural networks can lead to some really interesting types of models. There's a really cool paper that I like that applied this same type of idea to generating new images.
Here, we want the model to synthesize brand new images that look kind of like the images it saw in training, and we can use a recurrent neural network architecture to actually paint these output images one piece at a time. You can see that, even though our output is a fixed-size image, we can have models that work over time to compute parts of the output one at a time, sequentially, and we can use recurrent neural networks for that type of setup as well. So, after this sort of cool pitch about all the cool things that RNNs can do, you might wonder: what exactly are these things? In general, a recurrent neural network has this little recurrent core cell, and it takes some input x, feeds that input into the RNN, and the RNN has some internal hidden state, and that internal hidden state is updated every time the RNN reads a new input. That internal hidden state is then fed back to the model the next time it reads an input. And frequently we will want our RNNs to also produce some output at every time step, so we'll have this pattern where it reads an input, updates its hidden state, and then produces an output. So then the question is: what is the functional form of this recurrence relation that we're computing? Inside this little green RNN block, we're computing some recurrence relation with a function f. This function f will depend on some weights, W. It accepts the previous hidden state, h t minus one, as well as the input at the current time step, x t, and it outputs the next hidden state — the updated hidden state — that we call h t. And as we read the next input, this new hidden state, h t, will just be passed into the same function as we read the next input, x t plus one. Now, if we wanted to produce some output at every time step of this network, we might attach some additional fully connected layers that read in this h t at every time step and make that decision based on the hidden state at every time step. And one thing to note is that we use the same function, f W, and the same weights, W, at every time step of the computation. So the simplest functional form that you can imagine is what we call the vanilla recurrent neural network. Here, we have the same functional form from the previous slide, where we're taking in our previous hidden state and our current input, and we need to produce the next hidden state. The simplest thing you might imagine is that we have some weight matrix, W x h, that we multiply against the input, x t, as well as another weight matrix, W h h, that we multiply against the previous hidden state. We make these two multiplications against our two states, add them together, and squash them through a tanh, so we get some kind of nonlinearity in the system. You might be wondering why we use a tanh here and not some other type of nonlinearity, after all the negative things we've said about tanhs in previous lectures, and I think we'll return a little bit to that later on when we talk about more advanced architectures, like the LSTM. Then, in addition, in this architecture, if we wanted to produce some y t at every time step, we might have another weight matrix that accepts this hidden state and transforms it to some y, to produce maybe some class score predictions at every time step.
And when I think about recurrent neural networks, you can kind of think of them in two ways. One is this concept of having a hidden state that feeds back into itself, recurrently. But I find that picture a little bit confusing, and sometimes I find it clearer to think about unrolling this computational graph over multiple time steps. This makes the data flow of the hidden states, the inputs, the outputs, and the weights a little bit more clear. So at the first time step, we'll have some initial hidden state, h zero. This is usually initialized to zeros in most contexts. And then we'll have some input, x one. This initial hidden state, h zero, and our current input will go into our f W function, and this will produce our next hidden state, h one. Then we repeat this process when we receive the next input: our current h one and our x two go into that same f W to produce the next hidden state, h two. And this process repeats over and over again as we consume all of the inputs x t in our sequence of inputs. Now, one thing to note is that we can actually make this even more explicit and write the W matrix in our computational graph. And here you can see that we're reusing the same W matrix at every time step of the computation: every time we have this little f W block, it's receiving a unique h and a unique x, but all of these blocks are taking the same W. And if you remember, when we talked about how gradients flow in backpropagation, when you reuse the same node multiple times in a computational graph, then during the backward pass you end up summing the gradients into the W matrix when you're computing dL/dW. So if you think about the backpropagation for this model, you'll have a separate gradient for W flowing from each of those time steps, and the final gradient for W will be the sum of all of those individual per-time-step gradients. We can also write this y t explicitly in the computational graph: the output h t at every time step might feed into some other little neural network that can produce a y t, which might be some class scores, or something like that, at every time step. We can also make the loss more explicit. In many cases, you might imagine that you have some ground truth label at every time step of your sequence, and then you'll compute some individual loss at every time step on these outputs y t. That loss will frequently be something like a softmax loss, in the case where you have a ground truth label at every time step of the sequence. And the final loss for this entire training example will be the sum of these individual losses. So we have a scalar loss at every time step, and we just sum them up to get our final scalar loss at the top of the network. And now, if you think again about backpropagation through this thing, in order to train the model we need to compute the gradient of the loss with respect to W. So we'll have loss flowing from that final loss into each of these time steps, and then each of those time steps will compute a local gradient on the weights W, which will all be summed to give us our final gradient for the weights W.
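A minimal numpy sketch of this vanilla RNN step, using the lecture's Wxh, Whh, Why naming (the sizes and initialization are just illustrative):

```python
import numpy as np

H, D, V = 128, 64, 10                        # hidden size, input size, output size
Wxh = np.random.randn(H, D) * 0.01
Whh = np.random.randn(H, H) * 0.01
Why = np.random.randn(V, H) * 0.01

def rnn_step(x_t, h_prev):
    h_t = np.tanh(Wxh @ x_t + Whh @ h_prev)  # next hidden state
    y_t = Why @ h_t                          # per-step output, e.g. class scores
    return h_t, y_t

# Unrolling over a toy sequence: the same weights are reused at every step,
# so during backprop the per-step gradients on Wxh, Whh, Why get summed.
h = np.zeros(H)
for x_t in np.random.randn(5, D):
    h, y = rnn_step(x_t, h)
```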
Now, if we have a many-to-one situation, where maybe we want to do something like sentiment analysis, then we would typically make that decision based on the final hidden state of the network, because this final hidden state kind of summarizes all of the context from the entire sequence. Also, if we have a one-to-many situation, where we want to receive a fixed-size input and then produce a variably sized output, then you'll commonly use that fixed-size input to somehow initialize the initial hidden state of the model, and now the recurrent network ticks once for each element in the output; as you produce your variably sized output, you unroll the graph for each element of the output. When we talk about sequence-to-sequence models, where you might do something like machine translation — taking a variably sized input to a variably sized output — you can think of this as a combination of the many-to-one and the one-to-many cases. We proceed in two stages, what we call an encoder and a decoder. For the encoder, we receive the variably sized input, which might be your sentence in English, and then summarize that entire sentence using the final hidden state of the encoder network. Now we're in this many-to-one situation where we've summarized the entire variably sized input in a single vector, and we have a second decoder network, which is a one-to-many setup, that takes in that single vector summarizing the input sentence and produces a variably sized output, which might be your sentence in another language. In this variably sized output, we might make predictions at every time step, maybe about what word to use. And you can imagine training this entire thing by unrolling the computational graph, summing the losses over the output sequence, and just performing backpropagation as usual. As a bit of a concrete example, one thing that we frequently use recurrent neural networks for is this problem called language modeling. In the language modeling problem, we want our network to understand, sort of, how to produce natural language. This might happen at the character level, where our model produces characters one at a time, or at the word level, where our model produces words one at a time. But as a very simple example, you can imagine a character-level language model where the network reads a sequence of characters and then needs to predict what the next character in the stream of text will be. In this example, we have this very small vocabulary of four letters — h, e, l, and o — and we have this example training sequence of the word hello: h, e, l, l, o. During training, we'll feed the characters of this training sequence as the inputs x t to our recurrent neural network. Each of these inputs is a letter, and we need to figure out a way to represent letters in our network. What we'll typically do is figure out what our total vocabulary is — in this case, our vocabulary has four elements — and each letter will be represented by a vector that has zeros in every slot but one, and a one in the slot of the vocabulary corresponding to that letter.
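A short sketch of that one-hot encoding for the toy vocabulary; the char_to_ix lookup is just a hypothetical helper built from the training data:

```python
import numpy as np

vocab = ['h', 'e', 'l', 'o']
char_to_ix = {ch: i for i, ch in enumerate(vocab)}

def one_hot(ch):
    # Vector of zeros with a one in the slot corresponding to this character.
    v = np.zeros(len(vocab))
    v[char_to_ix[ch]] = 1.0
    return v

print(one_hot('h'))   # [1. 0. 0. 0.]
print(one_hot('e'))   # [0. 1. 0. 0.]
```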
In this little example, since our vocab has the four letters h, e, l, o, the h in our input sequence is represented by a four-element vector with a one in the first slot and zeros in the other three slots, and we use the same pattern to represent all the different letters in the input sequence. Now, during the forward pass, at the first time step the network receives the input letter h. That goes into the RNN cell, and it produces this output y t, which is the network making predictions about which letter in the vocabulary it thinks is most likely to come next. In this example, the correct next letter is e, because our training sequence was hello, but the model is actually predicting, I think, o as the most likely letter. So in this case this prediction was wrong, and we would use a softmax loss to quantify our unhappiness with these predictions. At the next time step, we feed in the second letter of the training sequence, e, and this process repeats: we represent e as a vector, use that input vector together with the previous hidden state to produce a new hidden state, and use the second hidden state to again make predictions over every letter in the vocabulary. In this case, because our training sequence was hello, after the letter e we want our model to predict l. Here, our model might give very low scores to the letter l, so we would incur high loss. You repeat this process over and over, and if you train this model with many different sequences, then eventually it should learn how to predict the next character in a sequence based on the context of all the previous characters that it has seen before. Now, if you think about what happens at test time: after we train this model, one thing we might want to do with it is sample from it, and actually use this trained neural network model to synthesize new text that looks similar in spirit to the text it was trained on. The way this works is that we'll typically seed the model with some input prefix of text. In this case, the prefix is just the single letter h, and we feed that letter h through the first time step of our recurrent neural network. It produces this distribution of scores over all the characters in the vocabulary. Now, at test time, we'll use these scores to actually sample from the model: we use a softmax function to convert the scores into a probability distribution, and then we sample from that probability distribution to actually synthesize the second letter in the sequence. In this case, even though the scores were pretty bad, maybe we got lucky and sampled the letter e from this probability distribution. Now we take this letter e that was sampled from the distribution and feed it back as input into the network at the next time step: we pull the e down from the top, feed it back into the network as one of these one-hot vector representations, and then repeat the process to synthesize the next letter of the output. And we can repeat this process over and over again to synthesize a new sequence using this trained model, where we're synthesizing the sequence one character at a time using these predicted probability distributions at each time step. Question? Yeah, that's a great question.
So the question is: why might we sample instead of just taking the character with the largest score? In this case, because of the probability distribution that we had, it was impossible to get the right character by taking the max, so we had to sample so the example could work out and make sense. But in practice, you'll sometimes see both. Sometimes you'll just take the argmax probability, and that will sometimes be a little bit more stable, but one advantage of sampling in general is that it lets you get diversity from your models. Sometimes you might have the same input — maybe the same prefix, or, in the case of image captioning, maybe the same image — and if you sample rather than taking the argmax, you'll see that these trained models are sometimes able to produce multiple different reasonable output sequences, depending on which samples they take at the first time steps. So it's actually kind of a benefit, because we can get more diversity in our outputs. Another question? Could we feed in the softmax vector instead of the one-hot vector? You mean at test time? Yeah, so the question is: at test time, could we feed in the whole softmax vector rather than a one-hot vector? There are kind of two problems with that. One is that that's very different from the data the model saw at training time, and in general, if you ask your model to do something at test time that is different from training time, then it'll usually blow up: it'll usually give you garbage and you'll usually be sad. The other problem is that in practice our vocabularies might be very large. In this simple example, our vocabulary has only four elements, so it's not a big problem, but if you're thinking about generating words one at a time, now your vocabulary is every word in the English language, which could be something like tens of thousands of elements. So in practice, this first operation that takes in the one-hot vector is often performed using sparse vector operations rather than dense vectors. It would be computationally really bad if you wanted to work with a dense 10,000-element softmax vector, so that's usually why we use a one-hot vector instead, even at test time. This idea that we have a sequence, we produce an output at every time step of the sequence, and then finally compute some loss, is sometimes called backpropagation through time, because you're imagining that in the forward pass you're stepping forward through time, and then during the backward pass you're going backwards through time to compute all your gradients. This can actually be kind of problematic if you want to train on sequences that are very, very long. If you imagine that we were trying to train a neural network language model on maybe the entire text of Wikipedia — which is, by the way, something people do pretty frequently — this would be super slow: every time we made a gradient step, we would have to make a forward pass through the entire text of all of Wikipedia, then make a backward pass through all of Wikipedia, and then make a single gradient update. That would be super slow, your model would never converge, and it would also take a ridiculous amount of memory, so this would just be really bad. In practice, what people do is this approximation called truncated backpropagation through time.
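Before getting into truncated backpropagation through time, here is a minimal sketch of the test-time sampling loop from the last few slides. It assumes the one_hot and rnn_step helpers sketched earlier, with the output size matching the vocabulary size; swapping np.random.choice for np.argmax gives the "take the max score" alternative discussed above, at the cost of diversity.

```python
import numpy as np

def softmax(scores):
    e = np.exp(scores - scores.max())       # shift for numerical stability
    return e / e.sum()

def sample_text(seed_char, h, num_chars=20):
    chars, x = [seed_char], one_hot(seed_char)
    for _ in range(num_chars):
        h, scores = rnn_step(x, h)           # scores over the vocabulary
        probs = softmax(scores)
        ix = np.random.choice(len(vocab), p=probs)  # sample, rather than argmax
        chars.append(vocab[ix])
        x = one_hot(vocab[ix])                # feed the sampled char back as a one-hot vector
    return ''.join(chars)
```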
Here, the idea is that, even though our input sequence is very, very long, and potentially even infinite, what we'll do when we're training the model is step forward for some number of steps — maybe a hundred is a ballpark number that people frequently use — compute a loss only over that sub-sequence of the data, backpropagate through that sub-sequence, and then make a gradient step. Now, when we repeat, we still have the hidden states that we computed from the first batch, and when we process the next batch of data, we carry those hidden states forward in time, so the forward pass is exactly the same. But when we compute a gradient step for this next batch of data, we only backpropagate through this second batch, and then make a gradient step based on this truncated backpropagation through time. This process continues: when we take the next batch, we again copy the hidden states forward, then step forward and step backward, but only for some small number of time steps. You can kind of think of this as being analogous to stochastic gradient descent, but for sequences. Remember, when we talked about training our models on large datasets, it would be super expensive to compute the gradients over every element in the dataset, so instead we take small mini-batches and use mini-batches of data to compute gradient steps; this is the same idea as in the image classification case. Question? The question is: is this kind of making the Markov assumption? No, not really, because we're carrying this hidden state forward in time forever. It makes a Markovian assumption in the sense that, conditioned on the hidden state, the hidden state is all that we need to predict the entire future of the sequence, but that assumption is built into the recurrent neural network formula from the start, and it's not really particular to backpropagation through time. Truncated backpropagation through time is just a way to approximate these gradients without making a backward pass through your potentially very large sequence of data. This all sounds very complicated and confusing, and it sounds like a lot of code to write, but in fact this can actually be pretty concise. Andrej has this example that he calls min-char-rnn, which does all of this stuff in just about 112 lines of Python. It handles building the vocabulary, it trains the model with truncated backpropagation through time, and then it can actually sample from that model, in really not too much code. So even though this sounds like kind of a big, scary process, it's actually not too difficult. I'd encourage you, if you're confused, to go check this out and step through the code on your own time, and see all of these concrete steps happening in code. This is all in just a single file, all using numpy with no dependencies, and it's relatively easy to read. Then, once we have this idea of training a recurrent neural network language model, we can actually have a lot of fun with it. We can take in, sort of, any text that we want — whatever random text you can think of from the internet — train our recurrent neural network language model on that text, and then generate new text.
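As a brief aside before looking at some samples, here is the structure of the truncated backprop loop just described, similar in spirit to what min-char-rnn does. The forward_chunk and backward_chunk functions are hypothetical stand-ins for unrolling the RNN over one chunk and backpropagating through that chunk only; the key point is that the hidden state h is carried across chunks while gradients are not.

```python
import numpy as np

def forward_chunk(chunk, h):
    # Stand-in: unroll the RNN over this chunk, return loss, a cache for backprop,
    # and the last hidden state (to be carried into the next chunk).
    loss, cache = float(len(chunk)), (chunk, h)
    return loss, cache, h

def backward_chunk(cache):
    # Stand-in: backprop only through the time steps inside this chunk.
    return {'W': np.zeros((4, 4))}

params = {'W': np.random.randn(4, 4)}
data = list(range(1000))
h = np.zeros(4)

for i in range(0, len(data), 100):                      # chunks of ~100 steps
    loss, cache, h = forward_chunk(data[i:i + 100], h)  # hidden state carried forward
    grads = backward_chunk(cache)                       # truncated backward pass
    for k in params:
        params[k] -= 1e-3 * grads[k]                    # one gradient step per chunk
```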
So in this example, we took the entire text of all of Shakespeare's works and used it to train a recurrent neural network language model on all of Shakespeare. You can see that at the beginning of training it produces more or less random gibberish, but throughout the course of training it ends up producing things that seem relatively reasonable, and after the model has been trained pretty well, it produces text that seems kind of Shakespeare-esque to me. "Why do what that day," replied — whatever, you can read this; it kind of looks like Shakespeare. And if you train this model even more, let it converge even further, and then sample even longer sequences, you can see that it learns all kinds of crazy cool stuff that really looks like a Shakespeare play. It knows to use these headings to say who's speaking, then it produces these bits of dialogue that sound kind of Shakespeare-esque, and it knows to put line breaks between these different things. And this is all really cool, all just learned from the structure of the data. We can actually get even crazier than this. This was one of my favorite examples, which I found online. Is anyone a mathematician in this room? Has anyone taken an algebraic topology course by any chance? Wow, a couple, that's impressive. So you probably know more algebraic topology than me, but I found this open-source algebraic topology textbook online. It's just a whole bunch of TeX files of super dense mathematics — LaTeX, because LaTeX lets you write equations and diagrams and everything just using plain text. We can actually train our recurrent neural network language model on the raw LaTeX source code of this algebraic topology textbook, and if we do that, then when we sample from the model, we get something that seems kind of like algebraic topology. It knows to put in equations; it puts in all kinds of crazy stuff. It's like: to prove this, we see that F sub U is a covering of x prime, blah, blah, blah. It knows where to put unions, it knows to put squares at the end of proofs, it makes lemmas, it makes references to previous lemmas — it's, namely, by a lemma, we see that R is geometrically something. So it's actually pretty crazy. It also sometimes tries to make diagrams. For those of you that have taken algebraic topology, you know that these commutative diagrams are kind of a thing that you work with a lot, so it kind of got the general gist of how to make those diagrams, but they actually don't make any sense. And one of my favorite examples here is that it sometimes omits proofs: it'll sometimes say something like, theorem, blah, blah, blah, proof omitted. This thing has kind of gotten the gist of what some of these math textbooks look like. We can have a lot of fun with this. So we also tried training one of these models on the entire source code of the Linux kernel, because, again, this is character-level data that we can train on, and when we sample from this, it again looks like C source code. It knows how to write if statements, it has pretty good code formatting skills, it knows to indent after these if statements, it knows to put curly braces, and it even makes comments about some things, which are usually nonsense.
One problem with this model is that it knows how to declare variables, but it doesn't always use the variables that it declares, and sometimes it tries to use variables that haven't been declared, so this wouldn't compile. I would not recommend sending this as a pull request to Linux. This thing also figures out how to recite the GNU license character by character: it kind of knows that you need to recite the GNU license, and after the license come some includes, then some other includes, then source code. So this thing has actually learned quite a lot about the general structure of the data, where, again, during training all we asked this model to do was try to predict the next character in the sequence. We didn't tell it any of this structure, but somehow, just through the course of this training process, it learned a lot about the latent structure in the sequential data. Yeah, so it knows how to write code; it does a lot of cool stuff. I had this paper with Andrej a couple of years ago where we trained a bunch of these models, and then we wanted to try to poke into the brains of these models and figure out what they are doing and why they are working. So we saw that these recurrent neural networks have this hidden vector that is updated at every time step, and what we wanted to figure out is whether we could find some elements of this vector that have some semantically interpretable meaning. So what we did is we trained a neural network language model — one of these character-level models — on one of these datasets, and then we picked one element of that hidden vector and looked at its value over the course of a sequence, to try to get some sense of what these different hidden states are looking for. When you do this, a lot of them end up looking like random gibberish. So here, again, what we've done is we've picked one element of that vector, and now we run the sequence forward through the trained model, and the color of each character corresponds to the magnitude of that single scalar element of the hidden vector at each time step as it reads the sequence. You can see that a lot of the elements of these hidden states are not very interpretable; it seems like they're doing some of this low-level language modeling to figure out which character should come next. But some of them end up quite nice. Here we found this cell that is looking for quotes. You can see that there's this one element in the vector that is off — blue — and then once it hits a quote, it turns on and remains on for the duration of the quote, and when we hit the second quotation mark, that cell turns off. So somehow, even though this model was only trained to predict the next character in a sequence, it learned that a useful thing for doing this might be to have some cell that detects quotes. We also found this other cell that looks like it's counting the number of characters since a line break. You can see that at the beginning of each line this element starts off at zero, and throughout the course of the line it gradually gets more red, so that value increases, and then after the new line character it resets to zero. So you can imagine that maybe this cell is letting the network keep track of when it needs to produce these new line characters.
We also found some cells that, when we trained on the Linux source code, turn on inside the conditions of if statements. So this maybe allows the network to differentiate whether it's outside an if statement or inside that condition, which might help it model these sequences better. We also found some that turn on in comments, or some that seem to be counting the number of indentation levels. This is all really cool stuff, because it's saying that even though we were only trying to train this model to predict the next character, it somehow ends up learning a lot of useful structure about the input data. Okay, so this has not really been computer vision so far, and we need to pull this back to computer vision since this is a vision class. We've alluded many times to this image captioning model, where we want to build models that can take an image as input and then output a caption in natural language. There were a bunch of papers a couple of years ago that all had relatively similar approaches, but I'm showing the figure from the paper from our lab, in a totally unbiased way. The idea here is that the caption is a variable-length sequence — different captions might have different numbers of words — so this is a totally natural fit for a recurrent neural network language model. What this model looks like is: we have some convolutional network that takes the image as input — and we've seen a lot about how convolutional networks work at this point — and that convolutional network produces a summary vector of the image, which then feeds into the first time step of one of these recurrent neural network language models, which then produces the words of the caption one at a time. The way this works at test time, after the model is trained, looks almost exactly the same as the character-level language models we saw a little bit ago. We take our input image and feed it through our convolutional network, but now, instead of taking the softmax scores from an ImageNet model, we take this 4,096-dimensional vector from the end of the model and use it to summarize the whole content of the image. Now, remember when we talked about RNN language models, we said that we need to seed the language model with a first initial input to tell it to start generating text. In this case, we give it some special start token, which is just saying: hey, this is the start of a sentence, please start generating some text conditioned on this image information. Previously, we saw that in this RNN language model we had these matrices that take the input at the current time step and the hidden state at the previous time step, and combine those to get the next hidden state. Well, now we also need to add in this image information. People play around with exactly how to incorporate this image information, but one simple way is just to add a third weight matrix that adds in the image information at every time step when computing the next hidden state. So now we compute this distribution over all the words in our vocabulary — and here our vocabulary is something like all English words, so it could be pretty large — we sample from that distribution, and then we pass that word back as input at the next time step.
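Here is a minimal sketch of that captioning recurrence, with a third weight matrix Wih injecting the CNN's image vector v at every step. The names and sizes are illustrative assumptions, not the exact model from the paper.

```python
import numpy as np

H, D, V_img, V_words = 512, 300, 4096, 10000     # hidden, word-embedding, image-feature, vocab sizes
Wxh = np.random.randn(H, D) * 0.01               # word input  -> hidden
Whh = np.random.randn(H, H) * 0.01               # hidden      -> hidden
Wih = np.random.randn(H, V_img) * 0.01           # image vector -> hidden (the extra term)
Why = np.random.randn(V_words, H) * 0.01         # hidden      -> vocabulary scores

def caption_step(x_t, h_prev, v):
    # v is the fixed 4096-dim image summary from the CNN, added at every time step.
    h_t = np.tanh(Wxh @ x_t + Whh @ h_prev + Wih @ v)
    scores = Why @ h_t                           # scores over the word vocabulary
    return h_t, scores
```

At test time you would feed the start token as x at the first step, sample a word from the softmax of the scores, and feed its embedding back in as the next x, stopping when the end token is sampled.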
And that will feed the sampled word back in, again get a distribution over all words in the vocab, and again sample to produce the next word. After this process is done, we'll have generated a complete sentence. We stop generation once we sample the special end token, which kind of corresponds to the period at the end of the sentence; once the network samples this end token, we stop generation, we're done, and we've got our caption for this image. During training, we put an end token at the end of every caption, so the network learned during training that end tokens come at the end of sequences, and then at test time it tends to sample these end tokens once it's done generating. So we train this model in a completely supervised way. You can find datasets that have images together with natural language captions — Microsoft COCO is probably the biggest and most widely used for this task — and you can just train this model in a purely supervised way, and then backpropagate to jointly train both the recurrent neural network language model and also pass gradients back into the final layers of the CNN, additionally updating the weights of the CNN, to jointly tune all parts of the model to perform this task. Once you train these models, they actually do some pretty reasonable things. These are some real results from one of these trained models, and it says things like: a cat sitting on a suitcase on the floor, which is pretty impressive. It knows about a cat sitting on a tree branch, which is also pretty cool. It knows about two people walking on the beach with surfboards. So these models are actually pretty powerful and can produce relatively complex captions to describe the image. That being said, these models are really not perfect; they're not magical. Just like any machine learning model, if you try to run them on data that is very different from the training data, they don't work very well. For example, in this one it says a woman is holding a cat in her hand — there's clearly no cat in the image, but she is wearing a fur coat, and maybe the texture of that coat looked like a cat to the model. Over here, we see a woman standing on a beach holding a surfboard; well, she's definitely not holding a surfboard, and she's doing a handstand, which is maybe the interesting part of that image, and the model totally missed that. Also, over here we see this example where there's a picture of a spider web on a tree branch, and the model says something like: a bird sitting on a tree branch. So it totally missed the spider, but during training it never really saw examples of spiders; it just knows that birds sit on tree branches. So it makes these kinds of reasonable mistakes. Or here at the bottom, it can't really tell the difference between this guy throwing and catching the ball, but it does know that it's a baseball player and there are balls and things involved. So, again, I just want to say that these models are not perfect. They work pretty well when you ask them to caption images that are similar to the training data, but they definitely have a hard time generalizing far beyond that.
So another thing you'll sometimes see is this slightly more advanced model with attention, where now, when we're generating the words of the caption, we allow the model to steer its attention to different parts of the image. I don't want to spend too much time on this, but the general way it works is that now our convolutional network, rather than producing a single vector summarizing the entire image, produces a grid of vectors — one vector for each spatial location in the image. Now, when this model runs forward, in addition to sampling from the vocabulary at every time step, it also produces a distribution over the locations in the image where it wants to look, and this distribution over image locations can be seen as a kind of attention telling the model where to look. So the first hidden state computes this distribution over image locations, which then goes back to the set of vectors to give a single summary vector that maybe focuses the attention on one part of the image, and that summary vector gets fed as an additional input at the next time step of the network. Again, it will produce two outputs: one is our distribution over vocabulary words, and the other is a distribution over image locations. This whole process continues, and it does these two different things at every time step. After you train the model, you can see that it kind of shifts its attention around the image for every word that it generates in the caption. Here you can see that it produced the caption, a bird is flying over — I can't see that far — but you can see that its attention is shifting around different parts of the image for each word in the caption that it generates. There's this notion of hard attention versus soft attention, which I don't really want to get into too much, but with soft attention we're taking a weighted combination of the features from all image locations, whereas in the hard attention case we force the model to select exactly one location to look at in the image at each time step. The hard attention case, where we're selecting exactly one image location, is a little bit tricky because that is not really a differentiable function, so you need to do something slightly fancier than vanilla backpropagation in order to train the model in that scenario, and I think we'll talk about that a little bit later in the lecture on reinforcement learning. Now, when you look at one of these attention models after training and run it to generate captions, you can see that it tends to focus its attention on the salient or semantically meaningful parts of the image when generating captions. You can see that the caption was: a woman is throwing a frisbee in a park, and you can see from the attention mask that when the model generated the word frisbee, it was focusing its attention on the image region that actually contains the frisbee. This is actually really cool: we did not tell the model where it should be looking at every time step; it figured all that out for itself during the training process, because somehow it figured out that looking at that image region was the right thing to do for this image.
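A minimal sketch of the soft attention step over a grid of CNN features: one scalar score per spatial location, a softmax to turn the scores into attention weights, and a weighted sum to get the summary vector. The scoring function here (a hypothetical bilinear form Wa between features and the hidden state) is just one reasonable choice; every operation is differentiable, which is why plain backprop suffices for soft attention.

```python
import numpy as np

def soft_attention(features, h, Wa):
    # features: (L, D) grid of L spatial locations, each a D-dim vector; h: RNN hidden state
    scores = features @ (Wa @ h)              # (L,) one score per location
    a = np.exp(scores - scores.max())
    a = a / a.sum()                           # attention weights, sum to 1
    context = a @ features                    # (D,) weighted combination of the features
    return context, a

L, D, H = 49, 512, 256                        # e.g. a 7x7 grid of 512-dim feature vectors
feats = np.random.randn(L, D)
h = np.random.randn(H)
Wa = np.random.randn(D, H) * 0.01
ctx, attn = soft_attention(feats, h, Wa)      # ctx is fed back in at the next time step
```

Hard attention would instead sample one location from the weights a, which is not differentiable and needs something like the policy gradient methods from the reinforcement learning lecture.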
And because everything in this model is differentiable, because we can backpropagate through all these soft attention steps, all of this soft attention behavior just comes out through the training process. So that's really, really cool. By the way, this idea of recurrent neural networks and attention actually gets used in other tasks beyond image captioning. One recent example is this idea of visual question answering. Here, our model is going to take two things as input. It's going to take an image, and it will also take a natural language question asking something about the image. Here, we might see this image on the left and ask the question, what endangered animal is featured on the truck? And now the model needs to select which of these four natural language answers correctly answers that question in the context of the image. You can imagine stitching this model together using CNNs and RNNs in a natural way. Now we're in this many to one scenario, where our model needs to take as input this natural language sequence, so we can imagine running a recurrent neural network over each element of that input question to summarize the question in a single vector. Then we can have a CNN to again summarize the image, and now combine both the vector from the CNN and the vector from the question-encoding RNN to predict a distribution over answers. You'll also sometimes see this idea of soft spatial attention being incorporated into things like visual question answering, so you can see that here, this model is also using spatial attention over the image when it's trying to determine answers to the questions. Yeah, question? So the question is, how are the different inputs combined, meaning the encoded question vector and the encoded image vector? The simplest thing to do is just to concatenate them and stick them into fully connected layers. That's probably the most common approach and probably the first thing to try. Sometimes people do slightly fancier things, where they might have multiplicative interactions between those two vectors to allow a more powerful function, but generally, concatenation is a good first thing to try. Okay, so now we've talked about a bunch of scenarios where RNNs are used for different kinds of problems. And I think it's super cool, because it allows you to start tackling really complicated problems combining images and computer vision with natural language processing. And you can see that we can stitch together these models like Lego blocks and attack really complicated things, like image captioning or visual question answering, just by stitching together these relatively simple types of neural network modules. But I'd also like to mention that so far we've talked about this idea of a single recurrent network layer, where we have one hidden state, and another thing that you'll see pretty commonly is this idea of a multilayer recurrent neural network. Here, this is a three layer recurrent neural network, so now our input goes in and produces a sequence of hidden states from the first recurrent neural network layer. And after we run that one recurrent neural network layer, we have this whole sequence of hidden states.
And now we can use that sequence of hidden states as the input sequence to another recurrent neural network layer, which will then produce another sequence of hidden states from the second RNN layer. Then you can just imagine stacking these things on top of each other, because we've seen in other contexts that deeper models tend to perform better for various problems, and the same kind of thing holds in RNNs as well. For many problems you'll see maybe a two or three layer recurrent neural network model being pretty commonly used. You typically don't see super deep models in RNNs, so generally two, three, or four layer RNNs is about as deep as you'll typically go. Then I think it's also really interesting and important to think about, now that we've seen what kinds of problems these RNNs can be used for, exactly what happens to these models when we try to train them. So here I've drawn the little vanilla RNN cell that we've talked about so far. We take our current input, x t, and our previous hidden state, h t minus one, and those are two vectors, so we can just stack them together, then perform a matrix multiplication with our weight matrix, squash that output through a tanh, and that gives us our next hidden state. That's the basic functional form of this vanilla recurrent neural network. But then we need to think about what happens in this architecture during the backward pass when we try to compute gradients. During the backward pass, we'll receive the derivative of the loss with respect to h t, and during the backward pass through the cell, we'll need to compute the derivative of the loss with respect to h t minus one. When we compute this backward pass, we see that the gradient flows backward through this red path. First that gradient flows backward through this tanh gate, and then it flows backward through this matrix multiplication gate. And as we've seen in the homework when implementing these matrix multiplication layers, when you backpropagate through a matrix multiplication gate, you end up multiplying by the transpose of that weight matrix. So that means that every time we backpropagate through one of these vanilla RNN cells, we end up multiplying by some part of the weight matrix. Now imagine that we are sticking many of these recurrent neural network cells in sequence, because again, this is an RNN and we want to model sequences. If you imagine what happens to the gradient flow through a sequence of these layers, then something fishy starts to happen, because when we want to compute the gradient of the loss with respect to h zero, we need to backpropagate through every one of these RNN cells, and every time you backpropagate through one cell, you pick up one of these W transpose factors. That means the final expression for the gradient on h zero will involve many, many factors of this weight matrix, which could be kind of bad. Don't think about the matrix case for a moment; imagine the scalar case.
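Here is a minimal sketch of that vanilla RNN step, with made up sizes, just to make the repeated multiplication by W concrete.

```python
# Minimal sketch of the vanilla RNN step just written down: h_t = tanh(W [x_t; h_{t-1}]).
# Sizes and initialization are made up for illustration.
import torch

D, H, T = 64, 128, 100
W = torch.randn(H, D + H) * 0.01
h = torch.zeros(H)
for t in range(T):
    x = torch.randn(D)
    h = torch.tanh(W @ torch.cat([x, h]))   # every step multiplies by (part of) W;
                                            # the backward pass picks up a factor of W transpose per step
```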
If we have some scalar and we multiply by that same number over and over and over again, maybe not for four time steps but for something like a hundred or several hundred time steps, then multiplying by the same number over and over again is really bad. In the scalar case, it's either going to explode, in the case where that number is greater than one, or it's going to vanish toward zero, in the case where that number is less than one in absolute value. The only way this doesn't happen is if that number is exactly one, which is very rare in practice. That same intuition extends to the matrix case, but now, rather than the absolute value of a scalar, you instead need to look at the largest singular value of the weight matrix. If that largest singular value is greater than one, then during this backward pass, when we multiply by the weight matrix over and over, the gradient on h zero will become very, very large. That's something we call the exploding gradient problem, where the gradient explodes exponentially with the number of time steps that we backpropagate through. And if the largest singular value is less than one, then we get the opposite problem, where our gradients shrink and shrink exponentially as we backpropagate and pick up more and more factors of this weight matrix. That's called the vanishing gradient problem. There's a bit of a hack that people sometimes do to fix the exploding gradient problem, called gradient clipping, which is a simple heuristic saying that after we compute our gradient, if its L2 norm is above some threshold, then just clamp it down so it has that maximum norm. This is kind of a nasty hack, but it actually gets used in practice quite a lot when training recurrent neural networks, and it's a relatively useful tool for attacking the exploding gradient problem. But for the vanishing gradient problem, what we typically do is move to a more complicated RNN architecture. So that motivates this idea of an LSTM. An LSTM, which stands for Long Short Term Memory, is a slightly fancier recurrence relation for these recurrent neural networks. It's really designed to help alleviate this problem of vanishing and exploding gradients, so that rather than hacking on top of it, we just design the architecture to have better gradient flow properties, in analogy to those fancier CNN architectures that we saw at the top of the lecture. Another thing to point out is that the LSTM cell actually comes from 1997, so this idea has been around for quite a while, and these folks working on these ideas way back in the 90s were definitely ahead of the curve, because these models are used everywhere now, 20 years later. LSTMs have this funny functional form. Remember that the vanilla recurrent neural network had this hidden state, and we used a recurrence relation to update the hidden state at every time step. Well, in an LSTM we actually maintain two hidden states at every time step. One is h t, which is called the hidden state, in analogy to the hidden state that we had in the vanilla RNN. But an LSTM also maintains a second vector, c t, called the cell state.
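Before getting into the LSTM equations, here is a hand rolled sketch of the gradient clipping heuristic mentioned a moment ago. The threshold is an arbitrary assumption, and in practice you would typically just call torch.nn.utils.clip_grad_norm_.

```python
# Minimal sketch of the gradient clipping hack (hand-rolled; torch.nn.utils.clip_grad_norm_ does the same job).
import torch

def clip_gradients(params, max_norm=5.0):
    # compute the total L2 norm over all parameter gradients
    total_norm = torch.sqrt(sum((p.grad ** 2).sum() for p in params if p.grad is not None))
    if total_norm > max_norm:
        scale = max_norm / (total_norm + 1e-6)
        for p in params:
            if p.grad is not None:
                p.grad.mul_(scale)      # clamp the gradient down to the threshold norm
```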
Coming back to the LSTM: the cell state is a vector that is kept internal to the LSTM, and it does not really get exposed to the outside world. You can see that through this update equation: first we take our two inputs and use them to compute these four gates, called i, f, o, and g. We use those gates to update our cell state, c t, and then we expose part of our cell state as the hidden state at the next time step. This is kind of a funny functional form, and I want to walk through, for a couple of slides, exactly why we use this architecture and why it makes sense, especially in the context of vanishing or exploding gradients. The first thing that we do in an LSTM is that we're given the previous hidden state, h t minus one, and we're given our current input vector, x t, just like in the vanilla RNN. In the vanilla RNN, remember, we took those two input vectors, concatenated them, and then did a matrix multiply to directly compute the next hidden state. The LSTM does something a little bit different. We're going to take our previous hidden state and our current input, stack them, and now multiply by a very big weight matrix, W, to compute four different gates, which all have the same size as the hidden state. Sometimes you'll see this written in different ways. Some authors will write a different weight matrix for each gate; some authors will combine them all into one big weight matrix. But it's all really the same thing: we take our hidden state and our current input, and use those to compute these four gates. You'll often see this written as i, f, o, g, ifog, which makes it pretty easy to remember what they are. I is the input gate. It says how much we want to input into our cell. F is the forget gate: how much we want to forget the cell memory from the previous time step. O is the output gate, which is how much we want to reveal ourselves to the outside world. And g really doesn't have a nice name, so I usually call it the gate gate. It tells us how much we want to write into our cell. You'll notice that each of these four gates uses a different non linearity. The input, forget and output gates all use sigmoids, which means that their values will be between zero and one, whereas the gate gate uses a tanh, which means its output will be between minus one and one. These are kind of weird, but it makes a little bit more sense if you imagine them all as binary values, that is, what happens at the extremes of those two ranges. If you look at the next equation after we compute these gates, you can see that our cell state from the previous time step is being multiplied element wise by this forget gate. If you think of the forget gate as a vector of zeros and ones, it's telling us, for each element in the cell state, do we want to forget that element of the cell, in the case where the forget gate is zero, or do we want to remember that element of the cell, in the case where the forget gate is one. Once we've used the forget gate to gate off part of the cell state, we have the second term, which is the element wise product of i and g.
So now, i is this vector of zeros and ones, because it's coming through a sigmoid, telling us, for each element of the cell state, do we want to write to that element of the cell state, in the case where i is one, or do we not want to write to that element at this time step, in the case where i is zero. And the gate gate, because it's coming through a tanh, will be between minus one and one, so that is the candidate value that we might consider writing to each element of the cell state at this time step. Then if you look at the cell state equation, you can see that at every time step the cell state has these different, independent scalar values, and they're all being incremented or decremented by up to one. So inside the cell state, we can either remember or forget our previous state, and then we can either increment or decrement each element of that cell state by up to one at each time step. You can think of the elements of the cell state as little scalar counters that can be incremented and decremented at each time step. Now, after we've computed our cell state, we use the updated cell state to compute a hidden state, which we will reveal to the outside world. Because this cell state has this interpretation of being counters, counting up or down by one at each time step, we want to squash that counter value into a nice minus one to one range using a tanh. And then we multiply element wise by this output gate. The output gate is again coming through a sigmoid, so you can think of it as being mostly zeros and ones, and it tells us, for each element of our cell state, do we want to reveal or not reveal that element when we're computing the external hidden state for this time step. And then I think there's a tradition among people trying to explain LSTMs that everyone needs to come up with their own potentially confusing LSTM diagram, so here's my attempt. Here you can see what's going on inside this LSTM cell: we're taking as input on the left our previous cell state and the previous hidden state, as well as our current input, x t. Now we take our previous hidden state and our current input, stack them, and then multiply with this weight matrix, W, to produce our four gates, and here I've left out the non linearities because we saw those on a previous slide. The forget gate multiplies element wise with the cell state. The input and gate gates are multiplied element wise and added to the cell state, and that gives us our next cell state. The next cell state gets squashed through a tanh and multiplied element wise with the output gate to produce our next hidden state. Question? No, they're coming from different parts of this weight matrix. If our x and our h both have dimension h, then after we stack them, they'll be a vector of size two h, and now our weight matrix will be a matrix of size four h by two h. You can think of that as having four chunks of the weight matrix, and each of those four chunks is going to compute a different one of these gates. You'll often see this written, for notational convenience, by combining all four of those different weight matrices into a single large matrix W. But they're all computed using different parts of the weight matrix.
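Here is a minimal sketch of that LSTM update, with the four gates sliced out of one big four h by two h weight matrix. The sizes and the random initialization are assumptions, and in practice you would use something like torch.nn.LSTMCell.

```python
# Minimal sketch of the LSTM update just described: one big weight matrix of size 4H x 2H,
# chunked into the i, f, o, g gates (sizes and initialization are made up for illustration).
import torch

H = 128
W = torch.randn(4 * H, 2 * H) * 0.01
b = torch.zeros(4 * H)

def lstm_step(x, h_prev, c_prev):
    gates = W @ torch.cat([x, h_prev]) + b
    i = torch.sigmoid(gates[0:H])           # input gate: write to the cell or not
    f = torch.sigmoid(gates[H:2*H])         # forget gate: keep or erase the previous cell
    o = torch.sigmoid(gates[2*H:3*H])       # output gate: how much of the cell to reveal
    g = torch.tanh(gates[3*H:4*H])          # candidate values to write, in [-1, 1]
    c = f * c_prev + i * g                  # element-wise gating, no full matrix multiply
    h = o * torch.tanh(c)
    return h, c

h = c = torch.zeros(H)
h, c = lstm_step(torch.randn(H), h, c)
```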
But you're correct in that they're all computed using the same functional form, just stacking the two vectors and taking the matrix multiplication. Now that we have this picture, we can think about what happens to an LSTM cell during the backward pass. We saw, in the context of the vanilla recurrent neural network, that some bad things happened during the backward pass, where we were continually multiplying by that weight matrix W. But the situation looks quite a bit different in the LSTM. If you imagine this backward path of computing the gradients of the cell state, we get quite a nice picture. When we have our upstream gradient from the cell coming in, once we backpropagate backward through this addition operation, remember that addition just copies the upstream gradient into the two branches, so our upstream gradient gets copied and passed directly to backpropagating through this element wise multiply. So our upstream gradient ends up getting multiplied element wise by the forget gate. As we backpropagate backward through this cell state, the only thing that happens to our upstream cell state gradient is that it gets multiplied element wise by the forget gate. This is really a lot nicer than the vanilla RNN, for two reasons. One is that this forget gate is now an element wise multiplication rather than a full matrix multiplication, and element wise multiplication is going to be a little bit nicer than a full matrix multiplication. Second is that this element wise multiplication will potentially be multiplying by a different forget gate at every time step. Remember, in the vanilla RNN we were continually multiplying by that same weight matrix over and over again, which led very explicitly to these exploding or vanishing gradients. But in the LSTM case, this forget gate can vary at each time step, so it's much easier for the model to avoid these problems of exploding and vanishing gradients. Finally, because this forget gate is coming out of a sigmoid, this element wise multiply is guaranteed to be between zero and one, which again leads to nicer numerical properties if you imagine multiplying by these things over and over again. Another thing to notice is that in the vanilla recurrent neural network, we saw that during the backward pass our gradients were also flowing through a tanh at every time step. But in an LSTM, the hidden state is used to compute the outputs, y t, so if you imagine backpropagating from the final hidden state back to the first cell state, then through that backward path we only backpropagate through a single tanh non linearity, rather than through a separate tanh at every time step. When you put all these things together, you can see that this backward pass through the cell state is kind of a gradient super highway that lets gradients pass relatively unimpeded from the loss at the very end of the model all the way back to the initial cell state at the beginning of the model. Was there a question? Yeah, what about the gradient with respect to W, because that's ultimately the thing that we care about. So the gradient with respect to W will come through at every time step: we take our current cell state as well as our current hidden state, and that gives us our local gradient on W for that time step.
So, just as in the vanilla RNN case, we'll end up adding up those per time step W gradients to compute our final gradient on W. But now imagine the situation where we have a very long sequence, and we're only getting gradients at the very end of the sequence. As we backpropagate through, we'll get a local gradient on W for each time step, and that local gradient on W will be coming through these gradients on c and h. Because we're maintaining the gradients on c much more nicely in the LSTM case, those local gradients on W at each time step will also be carried backward through time much more cleanly. Another question? Yeah, so the question is, due to the non linearities, could this still be susceptible to vanishing gradients? And that could be the case. One problem you might imagine is that if these forget gates are always less than one, you might get vanishing gradients as you continually go through these forget gates. One trick that people do in practice is that they will sometimes initialize the biases of the forget gate to be somewhat positive, so that at the beginning of training those forget gates are always very close to one. Then, at least at the beginning of training, we have relatively clean gradient flow through these forget gates, since they're all initialized to be near one, and throughout the course of training the model can learn those biases and learn to forget where it needs to. You're right that there still could be some potential for vanishing gradients here, but it's much less extreme than the vanilla RNN case, both because those f's can vary at each time step, and also because we're doing this element wise multiplication rather than a full matrix multiplication. You can see that this LSTM actually looks quite similar to ResNet. In a residual network, we had this path of identity connections going backward through the network, and that gave a sort of gradient super highway for gradients to flow backward in ResNet. It's kind of the same intuition in the LSTM, where these additive and element wise multiplicative interactions of the cell state give a similar gradient super highway for gradients to flow backward through the cell state. And by the way, there's this other nice paper called highway networks, which is kind of in between this idea of the LSTM cell and these residual networks. These highway networks actually came before residual networks, and they had this idea where, at every layer of the highway network, we compute a candidate activation, as well as a gating function that interpolates between our previous input at that layer and the candidate activation that came through our convolutions or whatnot. So there are actually a lot of architectural similarities between these things, and people take a lot of inspiration from training very deep CNNs and very deep RNNs, and there's a lot of crossover here. Very briefly, you'll see a lot of other variants of recurrent neural network architectures out there in the wild. Probably the most common, apart from the LSTM, is the GRU, the gated recurrent unit. You can see its update equations here, and it has a similar flavor to the LSTM, where it uses these multiplicative element wise gates together with these additive interactions to avoid this vanishing gradient problem.
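Roughly, a GRU step looks like the following; this is a hand written sketch with made up weight shapes, and in practice you would reach for torch.nn.GRUCell.

```python
# Rough sketch of a GRU step, just to show the same flavor of gates plus interpolation
# (weight shapes and initialization are assumptions, not the slide's exact notation).
import torch

H = 128
Wz, Wr, Wh = (torch.randn(H, 2 * H) * 0.01 for _ in range(3))

def gru_step(x, h_prev):
    z = torch.sigmoid(Wz @ torch.cat([x, h_prev]))           # update gate
    r = torch.sigmoid(Wr @ torch.cat([x, h_prev]))           # reset gate
    h_tilde = torch.tanh(Wh @ torch.cat([x, r * h_prev]))    # candidate hidden state
    return (1 - z) * h_prev + z * h_tilde                    # interpolate old and new state

h = gru_step(torch.randn(H), torch.zeros(H))
```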
There's also this cool paper called LSTM: A Search Space Odyssey, very inventive title, where they tried playing around with the LSTM equations, swapping out the non linearities at various points, asking things like, do we really need that tanh when exposing the output gate, and they tried to answer a lot of these questions about each of the pieces of the LSTM update equations: what happens if we change the model and tweak those LSTM equations a little bit. The conclusion is that they all work about the same. Some of them work a little bit better than others for one problem or another, but generally none of the tweaks of the LSTM that they tried were significantly better than the original LSTM across all problems. So that gives you a little bit more faith that the LSTM update equations, which seem kind of magical, are useful anyway, and you should probably consider them for your problem. There's also this cool paper from Google a couple of years ago where they did kind of an evolutionary search over a very large number of random RNN architectures. They randomly permuted these update equations, trying the additions and the multiplications and the gates and the non linearities in different kinds of combinations. They blasted this out over their huge Google cluster and just tried a whole bunch of these different update rules in various flavors. And again, it was the same story: they didn't really find anything that was significantly better than the existing GRU or LSTM styles, although there were some variations that worked maybe slightly better or worse for certain problems. The takeaway is that there's probably not so much magic in the exact LSTM or GRU equations, but this idea of managing gradient flow properly through these additive connections and these multiplicative gates is super useful. So yeah, the summary is that RNNs are super cool. They allow you to attack tons of new types of problems. They are sometimes susceptible to vanishing or exploding gradients, but we can address that with gradient clipping and with fancier architectures. And there's a lot of cool overlap between CNN architectures and RNN architectures. So next time, you'll be taking the midterm. But after that, we'll have a... sorry, a question? The midterm is after this lecture, so anything up to this point is fair game. And so, good luck on the midterm on Tuesday.
- All right, welcome to lecture nine. So today we will be talking about CNN architectures. And just a few administrative points before we get started: assignment two is due Thursday. The midterm will be in class on Tuesday, May ninth, so next week, and it will cover material through this coming Thursday, May fourth. So everything up through recurrent neural networks is going to be fair game. For the poster session we've decided on a time: it's going to be Tuesday, June sixth, from twelve to three p.m. This is the last week of classes, so we have our poster session a little bit early, during the last week, so that after you guys get feedback you still have some time to work on your final report, which will be due finals week. Okay, so just a quick review of last time. Last time we talked about different kinds of deep learning frameworks. We talked about PyTorch, TensorFlow, Caffe2, and we saw that using these kinds of frameworks we were able to easily build big computational graphs, for example very large neural networks and conv nets, and to really easily compute gradients in these graphs, so to compute all of the gradients for all the intermediate variables, weights, and inputs, use those to train our models, and run all of this efficiently on GPUs. And we saw that for a lot of these frameworks, the way this works is by working with these modularized layers that you've been writing in your homeworks as well, where we have a forward pass, we have a backward pass, and then in our final model architecture all we need to do is just define the sequence of layers. So using that, we're able to very easily build up very complex network architectures. Today we're going to talk about some specific kinds of CNN architectures that are used today in cutting edge applications and research. We'll go into depth on some of the most commonly used architectures, the winners of the ImageNet classification benchmarks: in chronological order, AlexNet, VGGNet, GoogLeNet, and ResNet. We'll go into a lot of depth on these, and then after that I'll briefly go through some other architectures that are not as prominently used these days, but are interesting either from a historical perspective or as recent areas of research. Okay, so just a quick review. We talked a long time ago about LeNet, which was one of the first instantiations of a ConvNet that was successfully used in practice. This was the ConvNet that took an input image, used five by five conv filters applied at stride one, and had a couple of conv layers, a few pooling layers, and then some fully connected layers at the end. And this fairly simple ConvNet was very successfully applied to digit recognition. So AlexNet, from 2012, which you've also heard about in previous classes, was the first large scale convolutional neural network that was able to do well on the ImageNet classification task. In 2012, AlexNet was entered in the competition and was able to outperform all previous non deep learning based models by a significant margin, and so this was the ConvNet that started the spree of ConvNet research and usage afterwards. The basic AlexNet architecture is a conv layer followed by a pooling layer and normalization, so conv pool norm, and then a few more conv layers, a pooling layer, and then several fully connected layers afterwards. So this actually looks very similar to the LeNet network that we just saw.
There are just more layers in total. There are five of these conv layers and two fully connected layers before the final fully connected layer going to the output classes. So let's first get a sense of the sizes involved in AlexNet. If we look at the input to AlexNet, this was trained on ImageNet, with inputs of size 227 by 227 by 3. And if we look at the first layer of AlexNet, which is a conv layer, it's 11 by 11 filters, 96 of these, applied at stride 4. So let's just think about this for a moment. What's the output volume size of this first layer? And there's a hint. Remember we have our input size, we have our convolutional filters, and we have this formula, which is the hint over here, that gives you the size of the output dimensions after applying the conv: it's the full image size, minus the filter size, divided by the stride, plus one. So given that that's written up here for you, does anyone have a guess at what the output size is after this conv layer? [student speaks off mic] - So I heard 55 by 55 by 96, yep, that's correct. Our spatial dimensions at the output are going to be 55 in each dimension, and then we have 96 total filters, so the depth after our conv layer is going to be 96. So that's the output volume. And what's the total number of parameters in this layer? Remember we have 96 11 by 11 filters. [student speaks off mic] - [Lecturer] 96 by 11 by 11, almost. So yes, I heard another by three, and that's correct. Each of the filters is going to see a local region of 11 by 11 by 3, because the input depth is three. So that's each filter's size, and we have 96 of these in total, so there are about 35,000 parameters in this first layer. Okay, so now if we look at the second layer, this is a pooling layer, and in this case we have three by three filters applied at stride two. So what's the output volume of this layer after pooling? And again we have a hint, very similar to the last question. Okay, 27 by 27 by 96. Yes, that's correct. The pooling layer is basically going to use the same formula that we had here, because the pooling is applied at a stride of two, so we use the same formula to determine the spatial dimensions, which are going to be 27 by 27, and pooling preserves the depth. We had 96 depth as input, and it's still going to be 96 depth at output. And next question: what's the number of parameters in this layer? I hear some muttering. [student answers off mic] - Nothing. Okay. Yes, the pooling layer has no parameters, so, kind of a trick question. Okay, so we can basically, yes, question? [student speaks off mic] - The question is, why are there no parameters in the pooling layer? The parameters are the weights that we're trying to learn, and convolutional layers have weights that we learn, but for pooling all we do is have a rule: we look at the pooling region and we take the max. So there are no parameters to be learned. So we can keep on doing this, and you can just repeat the process; it's a good exercise to go through this and figure out the sizes and the parameters at every layer. And if you do this all the way, you can look at the final architecture that you end up with. There are 11 by 11 filters at the beginning, then five by five and some three by three filters.
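Just to sanity check those numbers, here is a tiny helper that applies the (input size minus filter size) over stride, plus one formula to AlexNet's CONV1 and POOL1; the function and variable names are just for this example.

```python
# Quick check of the sizes worked out above for AlexNet's first two layers
# (the formula is (W - F) / stride + 1; padding is left out since it's zero here).
def conv_out(size, filt, stride):
    return (size - filt) // stride + 1

# CONV1: 96 filters, 11x11, stride 4, on a 227x227x3 input
out1 = conv_out(227, 11, 4)
print(out1, 96)                     # 55 x 55 x 96 output volume
print(96 * (11 * 11 * 3))           # ~35K parameters

# POOL1: 3x3 filters, stride 2 -> 27 x 27 x 96, and no parameters to learn
print(conv_out(55, 3, 2), 96)
```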
And so these are generally pretty familiar looking sizes that you've seen before, and then at the end we have a couple of fully connected layers of size 4096, and finally the last layer is FC8, going to the softmax over the 1000 ImageNet classes. Just a couple of details about this: it was the first use of the ReLU non-linearity, which we've talked about, the most commonly used non-linearity. They used local response normalization layers, basically trying to normalize the response across neighboring channels, but this is something that's not really used anymore; other people showed it didn't have much of an effect. There's a lot of heavy data augmentation, and you can look in the paper for more details, but things like flipping, jittering, cropping, color normalization, all of these things, which you'll probably find useful when you're working on your projects, for example, so a lot of data augmentation here. They also used dropout, a batch size of 128, and learned with SGD with momentum, which we talked about in an earlier lecture, basically starting with a base learning rate of 1e-2, and every time it plateaus, reducing by a factor of 10 and then just keeping going until they finish training, plus a little bit of weight decay. And in the end, in order to get the best numbers, they also did an ensembling of models, so training multiple of these and averaging them together, which also gives an improvement in performance. One other thing I want to point out is that if you look at this AlexNet diagram up here, it looks kind of like the normal ConvNet diagrams that we've been seeing, except for one difference, which is that you can see it's split into these two different rows or columns going across. The reason for this is mostly a historical note: AlexNet was trained on GTX 580 GPUs, older GPUs that only had three gigs of memory, so they couldn't actually fit the entire network on there, and what they ended up doing was spreading the network across two GPUs. On each GPU you would have half of the neurons, or half of the feature maps. For example, if you look at this first conv layer, we have a 55 by 55 by 96 output, but if you look at this diagram carefully, and you can zoom in on the actual paper, you can see that it's actually only 48 depth-wise on each GPU, so they just split the feature maps directly in half. And so what happens is that for most of these layers, for example conv 1, 2, 4 and 5, the connections are only with feature maps on the same GPU, so you would take as input half of the feature maps that were on the same GPU as before, and you don't look at the full 96 feature maps; you just take as input the 48 on that first layer. And then there are a few layers, conv 3, as well as FC 6, 7 and 8, where the GPUs do talk to each other, so there are connections with all feature maps in the preceding layer, there's communication across the GPUs, and each of these neurons is then connected to the full depth of the previous input layer. Question.
It just says that because I didn't put all the details on here. For example, this is the full set of layers in the architecture, and the strides and so on, but details like the normalization layers are not written on here. And then just one little note: if you look at the paper and try to write out the math and the architecture, there's a little bit of an issue with the very first layer. In the figure they say 224 by 224, but there's some funny arithmetic going on, and the numbers actually work out if you treat it as 227. AlexNet was the winner of the ImageNet classification benchmark in 2012. You can see that it cut the error rate by quite a large margin. It was the first CNN based winner, and it was widely used as a base architecture almost ubiquitously from then until a couple of years ago. It's still used quite a bit; it's used in transfer learning for lots of different tasks, so it was used for basically a long time and it was very famous. Now, though, there have been some more recent architectures that have generally just had better performance, and we'll talk about these next; these are going to be the more common architectures that you'll want to use in practice. So just quickly, first, in 2013 the ImageNet challenge was won by something called ZFNet. Yes, question. [student speaks off mic] - So the question is, is there an intuition for why AlexNet was so much better than the ones that came before? Deep learning ConvNets are just a very different kind of approach and architecture, and this was the first deep learning based approach, the first ConvNet, that was used here. So in 2013 the challenge was won by something called ZFNet, the Zeiler-Fergus Net, named after its creators. This mostly improved hyperparameters over AlexNet. It had the same number of layers and the same general structure, and they made a few changes, things like changing the stride size and different numbers of filters, and after playing around with these hyperparameters more, they were able to improve the error rate. But it's still basically the same idea. Then in 2014 there were a couple of architectures that were more significantly different and made another jump in performance, and the main difference with these networks, first of all, was much deeper networks. From the eight layer networks in 2012 and 2013, in 2014 we had two very close winners that were around 19 layers and 22 layers, so significantly deeper. The winner was GoogLeNet, from Google, but very close behind was something called VGGNet from Oxford, and VGG actually got first place on the localization challenge and in some of the other tracks. So these were both very, very strong networks. So let's first look at VGG in a little bit more detail. The VGG network is the idea of much deeper networks with much smaller filters. They increased the number of layers from eight in AlexNet to models with 16 to 19 layers in VGGNet. And one key thing that they did was they kept very small filters, only three by three conv all the way through, which is basically the smallest conv filter size that is looking at a little bit of the neighboring pixels. They just kept this very simple structure of three by three convs with periodic pooling all the way through the network, and this very simple, elegant architecture was able to get 7.3% top five error on the ImageNet challenge.
So first, the question of why use smaller filters. When we take these small filters, we have fewer parameters, and we stack more of them instead of having larger filters, so smaller filters with more depth. What happens is that you end up having the same effective receptive field as if you only had one seven by seven convolutional layer. So here's a question: what is the effective receptive field of three of these three by three conv layers with stride one? If you were to stack three three by three conv layers with stride one, what's the effective receptive field, the total spatial area of the input that a neuron at the top of the three layers is looking at? So I heard fifteen pixels, why fifteen pixels? - [Student] Okay, so the reason given was because they overlap-- - Okay, so the reason given was because they overlap, which is on the right track. What actually happens, though, is that at the first layer the receptive field is going to be three by three. Then, at the second layer, each neuron is going to look at three by three of the first layer's outputs, but the corners of that three by three each have an additional pixel on each side that they're looking at in the original input, so the second layer is actually looking at a five by five receptive field. And if you do this again, the third layer is looking at three by three in the second layer, but if you draw out this pyramid, that's looking at seven by seven in the input layer. So the effective receptive field here is going to be seven by seven, which is the same as one seven by seven conv layer. What happens is that this has the same effective receptive field as a seven by seven conv layer, but it's deeper, it's able to have more non-linearities in there, and it also has fewer parameters. If you look at the total number of parameters, each of these three by three conv filters has three times three times C parameters, where C is the input depth, and then we multiply by the total number of output feature maps, which is again C, since we preserve the total number of channels. So you get three times three times C times C for each of these layers, and we have three layers, so it's three times that number, compared with a single seven by seven layer, where by the same reasoning you get seven squared times C squared. So you have fewer parameters total, which is nice. Now if we look at the full network, there are a lot of numbers up here that you can go back and look at more carefully, but if we work out the sizes and number of parameters the same way that we calculated in the AlexNet example, and this is a good exercise to go through, we can see that, going the same way, we have a couple of these conv layers and a pooling layer, a couple more conv layers, a pooling layer, several more conv layers, and so on, and this just keeps going up. If you count the total number of convolutional and fully connected layers, we have 16 in this case for VGG16, and then VGG19 is just a very similar architecture but with a few more conv layers in there.
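Written out as arithmetic, that parameter comparison looks like this, where C is just an example channel count and biases are ignored.

```python
# Three stacked 3x3 conv layers vs. one 7x7 conv layer, both with C input and C output channels.
C = 256
three_3x3 = 3 * (3 * 3 * C * C)
one_7x7 = 7 * 7 * C * C
print(three_3x3, one_7x7)   # 1,769,472 vs 3,211,264 -- same 7x7 receptive field, fewer parameters
```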
And so the total memory usage of this network: just making a forward pass through and counting up all of these numbers, with the memory numbers here written in terms of the total number of values, like we calculated earlier, and with four bytes per number, this is going to be about 100 megs per image. So this is the scale of the memory usage that's happening, and this is only for a forward pass; when you do a backward pass you're going to have to store more, so this is pretty heavy memory-wise. At 100 megs per image, if you have only five gigs of total memory, then you're only going to be able to fit about 50 of these. And the total number of parameters here is 138 million parameters in this network, which compares with 60 million for AlexNet. Question? [student speaks off mic] - So the question is, what do we mean by deeper, is it the number of filters or the number of layers? Deeper in this case is always referring to layers. There are two usages of the word depth, which is confusing: one is the depth of a volume, as in width by height by depth, but in general when we talk about the depth of a network, this is the total number of layers in the network, and usually in particular we're counting the total number of weight layers, so the total number of layers with trainable weights, convolutional layers and fully connected layers. [student mumbles off mic] - Okay, so the question is, within each layer, what do the different filters mean? We talked about this back in the ConvNet lecture, so you can also go back and refer to that, but each filter is a set of weights, let's say three by three, looking at a three by three region across the full input depth, and this produces one feature map, one activation map of all the responses at the different spatial locations. And then we can have as many filters as we want, so for example 96, and each of these is going to produce a feature map. Each filter corresponds to a different pattern that we're looking for in the input: we convolve it around, we see the responses everywhere in the input, we create a map of these, and then another filter gets convolved over the image and creates another map. Question. [student speaks off mic] - So the question is, is there an intuition behind why, as you go deeper into the network, we have more channel depth, so a larger number of filters? You can have any design that you want, so you don't have to do this. In practice you will see this happen a lot of the time, and one of the reasons is that people try to maintain a relatively constant level of compute: as you go deeper into your network, you're usually also down sampling, so you have a smaller total spatial area, and then it's not as expensive to increase the depth, because it's spatially smaller. So that's just one reason. Question. [student speaks off mic] - So, performance-wise, is there any reason to use an SVM loss instead of a softmax? No, for a classifier you can use either one, and you did that earlier in the class as well, but in general softmax losses have worked well and are in standard use for classification here. Okay, yeah, one more question.
[student mumbles off mic] - Yes, so the question is, do we have to store all of the memory, or can we throw away the parts that we don't need? And yes, some of this you don't need to keep, but you're also going to be doing a backward pass, where for the most part, when you're doing the chain rule and so on, you need a lot of these activations, so in large part a lot of this does need to be kept. If we look at the distribution of where the memory is used and where the parameters are, you can see that a lot of the memory is in these early layers, where you still have large spatial dimensions, so you have more memory usage there, and then a lot of the parameters are actually in the last layers: the fully connected layers have a huge number of parameters because of all of these dense connections. So that's something to keep in mind, and later on we'll see some networks actually get rid of these fully connected layers and save a lot on the number of parameters. Then just one last thing to point out: you'll also see different ways of naming all of these layers. Here I've written out exactly what the layers are; conv3-64 means three by three convs with 64 total filters. But for VGGNet, on this diagram on the right, there are also common ways that people will refer to each group of filters, so each orange block here, as conv1-1, conv1-2, and so on. So just something to keep in mind. VGGNet ended up getting second place in the ImageNet 2014 classification challenge and first in localization. They followed a very similar training procedure to Alex Krizhevsky's for AlexNet. They didn't use local response normalization; as I mentioned earlier, they found it didn't really help, so they took it out. You'll see VGG16 and VGG19 as the common variants used, and this is just the number of layers: 19 is slightly deeper than 16. In practice VGG19 works a tiny bit better, with a little bit more memory usage, so you can use either, but 16 is very commonly used. For best results, like AlexNet, they did ensembling in order to average several models, and you get better results that way. They also showed in their work that the FC7 features, the last fully connected layer before going to the 1000 ImageNet classes, the 4096 sized layer just before that, are a good feature representation that can be used as is, to extract features from other data and generalize to other tasks as well. So FC7 is a good feature representation. Yeah, question. [student speaks off mic] - Sorry, what was the question? Okay, so the question is, what is localization here? This is a task, and we'll talk about it a little bit more in a later lecture on detection and localization, so I don't want to go into detail here, but it's basically, given an image, not just classifying what the class of the image is, but also drawing a bounding box around where that object is in the image. And the difference from detection, which is a very related task, is that in detection there can be multiple instances of the object in the image; in localization we're assuming there's just one, so it's classification but we just have this additional bounding box.
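Coming back to the memory versus parameter breakdown from a moment ago, here is a rough, abbreviated way to tally it for a few VGG style layers; the layer list and numbers are approximate and just for illustration.

```python
# Rough tally of activation memory vs. parameters, in the spirit of the VGG walkthrough above
# (only a few layers shown; all 3x3 convs with padding 1, so spatial size is preserved).
layers = [                       # (name, H, W, C_in, C_out)
    ("conv1-1", 224, 224, 3,   64),
    ("conv1-2", 224, 224, 64,  64),
    ("conv3-1", 56,  56,  128, 256),
    ("fc6",     1,   1,   7 * 7 * 512, 4096),   # fully connected, treated as a 1x1 "layer"
]
for name, h, w, cin, cout in layers:
    activ  = h * w * cout                                    # numbers stored in the output activation
    params = 3 * 3 * cin * cout if h > 1 else cin * cout     # conv weights vs. FC weights
    print(name, activ, params)
# early conv layers dominate the memory; the FC layers dominate the parameter count
```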
So we looked at VGG, which was one of the deep networks from 2014, and now we'll talk about GoogLeNet, which was the other one that won the classification challenge. GoogLeNet, again, was a much deeper network, with 22 layers, but one of the main insights and special things about GoogLeNet is that it really looked at this problem of computational efficiency and tried to design a network architecture that was very efficient in the amount of compute. They did this using this inception module, which we'll go into in more detail, basically stacking a lot of these inception modules on top of each other. There are also no fully connected layers in this network, so they got rid of those and were able to save a lot of parameters; in total there are only five million parameters, which is twelve times less than AlexNet, which had 60 million, even though this network is much deeper. It got 6.7% top five error. So what's the inception module? The idea behind the inception module is that they wanted to design a good local network topology, and you can think of this local topology as a network within a network, and then stack a lot of these on top of each other. In this local network that they're calling an inception module, what they're doing is applying several different kinds of filter operations in parallel on top of the same input coming into the layer. We have our input coming in from the previous layer, and then we do different kinds of convolutions: a one by one conv, a three by three conv, a five by five conv, and then they also have a pooling operation, in this case three by three pooling. You get all of these different outputs from these different operations, and then what they do is concatenate all these filter outputs together depth wise, and this creates one tensor output at the end that is going to pass on to the next layer. So if we look at just a naive way of doing this, we do exactly that: we have all of these different operations, we get the outputs, and we concatenate them together. What's the problem with this? It turns out that computational complexity is going to be a problem here. Let's look more carefully at an example. Here, just as an example, I've put a one by one conv with 128 filters, a three by three conv with 192 filters, and a five by five conv with 96 filters. Assume everything has the stride and padding that maintain the spatial dimensions, and that we have this input coming in. So what is the output size of the one by one conv with 128 filters? Who has a guess? OK, so I heard 28 by 28 by 128, which is correct. With a one by one conv we maintain the spatial dimensions, and each conv filter looks through the entire 256 depth of the input, but the output is going to be a 28 by 28 feature map for each of the 128 filters that we have in this conv layer, so we get 28 by 28 by 128. And then if we do the same thing and look at the output sizes of all of the different filters here: after the three by three conv we have a volume of 28 by 28 by 192; the five by five conv has 96 filters, so 28 by 28 by 96; and then our pooling layer preserves the depth, and here, because of our stride, we also preserve the spatial dimensions.
And so now, if we look at the output size after filter concatenation, what we're going to get is 28 by 28, since these are all 28 by 28 and we're concatenating depth wise, so we get 28 by 28 by all of these depths added together, and the total output size is going to be 28 by 28 by 672. So the input to our inception module was 28 by 28 by 256, and the output from this module is 28 by 28 by 672: we kept the same spatial dimensions and we blew up the depth. Question. [student speaks off mic] - OK, so the question is, how are we getting 28 by 28 for everything? Here we're doing all the zero padding needed to maintain the spatial dimensions, and that way we can do this filter concatenation depth wise. Question in the back. [student speaks off mic] - OK, the question is, what's the 256 depth at the input? This is not the input to the network, this is the input just to this local module that I'm looking at, so in this case 256 is the depth of the previous inception module that came just before this one. And coming out we have 28 by 28 by 672, and that's going to be the input to the next inception module. Question. [student speaks off mic] - Okay, the question is, how did we get 28 by 28 by 128 for the first conv? This is a one by one convolution, so we take this one by one convolution and slide it across our 28 by 28 by 256 input spatially, where at each location it does a dot product through the entire 256 depth. So we do this one by one conv, slide it over spatially, and we get a feature map out that's 28 by 28 by one: there's one number at each spatial location coming out, and each filter produces one of these 28 by 28 by one maps, and we have a total of 128 filters here, so that produces 28 by 28 by 128. OK, so if you look at the number of operations happening in the convolutional layers, let's look at the first one, this one by one conv. As I was just saying, at each location we're doing a one by one by 256 dot product, so there are 256 multiply operations happening there, and then for each filter map we have 28 by 28 spatial locations, so that's the 28 times 28, the first two numbers that are multiplied here; these are the spatial locations for each filter map, and we have to do those 256 multiplications at each one of them. Then we have 128 total filters at this layer, so we're producing 128 total feature maps. The total number of operations is therefore 28 times 28 times 128 times 256. It's the same idea for the three by three conv and the five by five conv, exactly the same principle, and in total we get 854 million operations happening here. - [Student] And the 128, 192, and 96 are just values [mumbles] - The question is whether the 128, 192 and 96 are just values that I picked. Yes, but these are not values that I just came up with; they are similar to the ones that you will see in a particular layer of the inception net, so in GoogLeNet each module has a different set of these kinds of parameters, and I picked ones similar to one of those. So these operations are very expensive computationally. And the other thing that I also want to note is that the pooling layer also adds to this problem, because it preserves the full feature depth.
So at every layer your total depth can only grow, right: you're going to take the full feature depth from your pooling layer, as well as all the additional feature maps from the conv layers, and add these up together. So here our input was 256 deep and our output is 672 deep, and you're just going to keep increasing this as you go up. So how do we deal with this, and how do we keep this more manageable? And so one of the key insights that GoogleNet used was that we can address this by using bottleneck layers, and try to project these feature maps to a lower dimension before our convolutional operations, so before our expensive layers. And so what exactly does that mean? So as a reminder, a one by one convolution — we were just going through this — takes your input volume and performs a dot product at each spatial location, and what it does is it preserves the spatial dimension but it reduces the depth, and it reduces that by projecting your input depth to a lower dimension. It's basically a linear combination of your input feature maps. And so the main idea is that it's projecting your depth down, and so the inception module takes these one by one convs and adds them at a bunch of places in these modules, in order to alleviate this expensive compute. So before the three by three and five by five conv layers, it puts in one of these one by one convolutions. And then after the pooling layer it also puts in an additional one by one convolution. So these are the one by one bottleneck layers that are added in. And so how does this change the math that we were looking at earlier? So now basically what's happening is that we still have the same input here, 28 by 28 by 256, but these one by one convs are going to reduce the depth dimension, and so you can see before the three by three convs, if I put a one by one conv with 64 filters, my output from that is going to be 28 by 28 by 64. So instead of 28 by 28 by 256 going into the three by three convs afterwards, we only have a 28 by 28 by 64 block coming in. And so this is now a much smaller input going into these conv layers; the same thing happens for the five by five conv, and then for the pooling layer, after the pooling comes out, we're going to reduce the depth after it. And so, if you work out the math the same way for all of the convolutional ops here, adding in now all these one by one convs on top of the three by threes and five by fives, the total number of operations is 358 million operations, so it's much less than the 854 million that we had in the naive version, and so you can see how you can use this one by one conv, and the filter size for it, to control your computation. Yes, question in the back. [student speaks off mic] - Yes, so the question is, have you looked into what information might be lost by doing this one by one conv at the beginning. And so there might be some information loss, but at the same time, if you're doing these projections you're taking a linear combination of the input feature maps, which have redundancy in them; you're taking combinations of them, and you're also introducing an additional non-linearity after the one by one conv, so it actually helps in that way too, with adding a little bit more depth, and so, I don't think there's a rigorous analysis of this, but basically in general this works better and there are reasons why it helps as well.
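Here is a minimal sketch of how such a module could look in PyTorch. The filter counts are the illustrative values used above (not the exact values from the GoogleNet paper), and the ReLU non-linearities that would normally follow each conv are omitted for brevity:

    import torch
    import torch.nn as nn

    class InceptionModule(nn.Module):
        # Parallel branches with 1x1 bottlenecks placed before the expensive
        # 3x3 and 5x5 convs, and after the pooling branch.
        def __init__(self, in_depth=256):
            super().__init__()
            self.branch1 = nn.Conv2d(in_depth, 128, kernel_size=1)
            self.branch3 = nn.Sequential(
                nn.Conv2d(in_depth, 64, kernel_size=1),            # bottleneck
                nn.Conv2d(64, 192, kernel_size=3, padding=1))
            self.branch5 = nn.Sequential(
                nn.Conv2d(in_depth, 64, kernel_size=1),            # bottleneck
                nn.Conv2d(64, 96, kernel_size=5, padding=2))
            self.branch_pool = nn.Sequential(
                nn.MaxPool2d(kernel_size=3, stride=1, padding=1),
                nn.Conv2d(in_depth, 64, kernel_size=1))            # shrink depth

        def forward(self, x):
            # every branch preserves the spatial size, so the outputs can be
            # concatenated along the depth dimension
            outs = [self.branch1(x), self.branch3(x),
                    self.branch5(x), self.branch_pool(x)]
            return torch.cat(outs, dim=1)

    # e.g. a 28x28x256 input produces a 28x28x(128+192+96+64) output
    y = InceptionModule()(torch.randn(1, 256, 28, 28))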
OK so here we have, we're basically using these one by one convs to help manage our computational complexity, and then what GoogleNet does is it takes these inception modules and stacks them all together. So this is the full inception architecture. And if we look at this in a little bit more detail — here I've flipped it, because it's so big it's not going to fit vertically on the slide any more — what we start with is this stem network, so this is more the kind of vanilla, plain conv net that we've seen earlier, a sequence of layers: conv, pool, a couple of convs and another pool, just to get started, and then after that we have all of our multiple inception modules stacked on top of each other, and then on top we have our classifier output. And notice here that they've really removed the expensive fully connected layers; it turns out that the model works great without them, and you reduce a lot of parameters. And then what they also have here — you can see these couple of extra stems coming out — are auxiliary classification outputs, and so these are just little mini networks with an average pooling, a one by one conv, a couple of fully connected layers going to a softmax, and also a 1000-way softmax with the ImageNet classes. And so you're actually using your ImageNet training classification loss in three separate places here: the standard end of the network, as well as in these two places earlier on in the network, and the reason they do that is just that this is a deep network, and they found that with these additional auxiliary classification outputs, you get more gradient signal injected at the earlier layers, and so more helpful signal flowing in, because these intermediate layers should also be helpful; you should be able to do classification based off some of these as well. And so this is the full architecture; there's 22 total layers with weights, and within each of these modules each of those one by one, three by three, five by five convs is a weight layer, including all of these parallel layers, and in general it's a relatively more carefully designed architecture, and part of this is based on some of the intuitions that we were talking about, and part of it also is just that the authors at Google had huge clusters and were cross-validating across all kinds of design choices, and this is what ended up working well. Question? [student speaks off mic] - Yeah, so the question is, are the auxiliary outputs actually useful for the final classification, to use these as well? I think when they're training them they do average all of these for the losses coming out. I think they are helpful. I can't remember whether in the final architecture they average all of these or just take one; it seems very possible that they would use all of them, but you'll need to check on that. [student speaks off mic] - So the question is, for the bottleneck layers, is it possible to use some other types of dimensionality reduction, and yes, you can use other kinds of dimensionality reduction. The benefit here of this one by one conv is that you're getting this effect, but it's a conv layer just like any other. You have the whole network; you just train this full network with backprop through everything, and it's learning how to combine the previous feature maps. Okay yeah, question in the back.
[student speaks off mic] - Yes, so the question is, are any weights shared or are they all separate, and yeah, all of these layers have separate weights. Question. [student speaks off mic] - Yes, so the question is why do we have to inject gradients at earlier layers? So our classification output at the very end, where we get a gradient on this, it's passed all the way back through the chain rule, but the problem is, when you have very deep networks and you're going all the way back through these, some of this gradient signal can become diminished and lost closer to the beginning, and so that's why having these additional ones in earlier parts can help provide some additional signal. [student mumbles off mic] - So the question is, are you doing backprop every time for each output. No, it's just one backprop all the way through, and you can think of there being kind of like an addition at the end of these three if you were to draw out your computational graph, and so you get your final signal and you can just take all of these gradients and backprop them all the way through. So it's as if they were added together at the end in a computational graph. OK, so in the interest of time, because we still have a lot to get through, I can take other questions offline. Okay, so GoogleNet: basically 22 layers. It has an efficient inception module, there's no fully connected layers, 12 times fewer parameters than AlexNet, and it's the ILSVRC 2014 classification winner. And so now let's look at the 2015 winner, which is the ResNet network, and here this idea is really this revolution of depth, right. We were starting to increase depth in 2014, and here we have this hugely deeper model, the ResNet architecture at 152 layers. And so now let's look at that in a little bit more detail. So the ResNet architecture is using extremely deep networks, much deeper than any other networks before, and it's doing this using this idea of residual connections, which we'll talk about. And so, they had a 152 layer model for ImageNet. They were able to get 3.57% top 5 error with this, and the really special thing is that they swept all classification and detection contests in the ImageNet benchmark and this other benchmark called COCO. It just basically won everything. So it was just clearly better than everything else. And so now let's go into a little bit of the motivation behind ResNet and the residual connections that we'll talk about. And the question that they started off by trying to answer is, what happens when we try and stack deeper and deeper layers on a plain convolutional neural network? So if we take something like VGG or some normal network that's just stacks of conv and pool layers on top of each other, can we just continuously extend these, get deeper layers, and just do better? And the answer is no. So if you look at what happens when you get deeper — here I'm comparing a 20 layer network and a 56 layer network, and this is just a plain kind of network — you'll see that in the test error here on the right, the 56 layer network is doing worse than the 20 layer network. So the deeper network was not able to do better. But then the really weird thing is if you now look at the training error: here we have again the 20 layer network and the 56 layer network. For the 56 layer network, one of the obvious problems you might think of is, I have a really deep network, I have tons of parameters, maybe it's probably starting to overfit at some point.
But what actually happens is that when you're overfitting you would expect to have very good, very low training error, and just bad test error, but what's happening here is that in the training error the 56 layer network is also doing worse than the 20 layer network. And so even though the deeper model performs worse, this is not caused by overfitting. And so the hypothesis of the ResNet creators is that the problem is actually an optimization problem: deeper models are just harder to optimize than more shallow networks. And the reasoning was that, well, a deeper model should be able to perform at least as well as a shallower model. You can actually have a solution by construction where you just take the learned layers from your shallower model, you copy these over, and then for the remaining additional deeper layers you just add identity mappings. So by construction this should work just as well as the shallower model, and so a deeper model should be able to learn at least this. And so motivated by this, their solution was, how can we make it easier for our architecture, our model, to learn these kinds of solutions, or at least something like this? And so their idea is, well, instead of just stacking all these layers on top of each other and having every layer try and learn some underlying mapping of a desired function, let's instead have these blocks where we try and fit a residual mapping instead of a direct mapping. And so what this looks like is here on the right, where the input to the block is just the input coming in, and we're going to use our weight layers to try and fit some residual, H of X minus X, instead of the desired function H of X directly. And so basically at the end of this block we have this skip connection on the right here, this loop, where we just take our input and pass it through as an identity, and so if we had no weight layers in between, the output would just be the identity, the same thing as the input, but now we use our additional weight layers to learn some delta, some residual from our X. And so now the output of this is going to be just our original X plus some residual, which we're going to call F of X; it's basically a delta, and the idea is that now it should be easy, for example in the case where the identity is ideal, to just squash all of these weights of F of X in our weight layers, set them all to zero for example, and then we're just going to get the identity as the output, and we can get something close, for example, to this solution by construction that we had earlier. Right, so this is just a network architecture that says okay, let's have our weight layers learn a residual, so that the output is more likely to be something close to X — it's just modifying X — rather than having to learn exactly the full mapping of what it should be. Okay, any questions about this? [student speaks off mic] - The question is, are these the same dimension? So yes, these two paths are the same dimension. In general either it's the same dimension, or what they actually do is they have these projections and shortcuts, and they have different ways of padding to make things work out to be the same dimension depth-wise. Yes. - [Student] When you use the word residual you were talking about [mumbles off mic] - So the question is, what exactly do we mean by residual — is the output of this transformation the residual?
So we can think of our output here as this F of X plus X, where F of X is the output of our transformation and X is our input, just passed through by the identity. So with a plain layer, what we're trying to do is learn something like H of X directly, but what we saw earlier is that it's hard to learn a good H of X as we get very deep networks. And so here the idea is, let's break it down as H of X equals F of X plus X, and let's just try and learn F of X. So instead of learning this H of X directly, we just want to learn what it is that we need to add or subtract to our input as we move on to the next layer. So you can think of it as kind of modifying this input, in place in a sense. We have-- [interrupted by student mumbling off mic] - The question is, when we're saying the word residual are we talking about F of X? Yeah. So F of X is what we're calling the residual. And it just has that meaning. Yes, another question. [student mumbles off mic] - So the question is, in practice do we just sum F of X and X together, or do we learn some weighted combination, and you just do a direct sum. Because when you do a direct sum, this is the idea of: let me just learn what it is I have to add or subtract onto X. Is this clear to everybody, the main intuition? Question. [student speaks off mic] - Yeah, so the question is, it's not clear why learning the residual should be easier than learning the direct mapping? And so this is just their hypothesis, and the hypothesis is that if we're learning the residual you just have to learn what's the delta to X, right? And generally, even something like our solution by construction, where we had some number of these shallow layers that were learned and we had all these identity mappings on top, was a solution that should have been good, and so that implies that for a lot of these layers, something just close to the identity would be a good layer. And so because of that, we now formulate this as learning the identity plus just a little delta. And if really the identity is best, we just squash this transformation F of X to be zero, which is something that might seem relatively easier to learn, and we're also able to get things that are close to identity mappings. And so again, this is not something that's necessarily proven or anything, it's just the intuition and hypothesis, and we'll also see later some works where people are actually trying to challenge this and say, oh, maybe it's not actually the residuals that are so necessary, but at least this is the hypothesis for this paper, and in practice, using this model, it was able to do very well. Question. [student speaks off mic] - Yes, so the question is, have people tried other ways of combining the inputs from previous layers, and yes, this is basically a very active area of research, on how we formulate all these connections, and what's connected to what in all of these structures. So we'll see a few more examples of different network architectures briefly later, but this is an active area of research. OK, so we basically have all of these residual blocks that are stacked on top of each other. We can see the full ResNet architecture. Each of these residual blocks has two three by three conv layers as part of the block, and there's also been work just saying that this happens to be a good configuration that works well. We stack all these blocks together very deeply.
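As a rough illustration of the residual idea just described, here is a minimal PyTorch-style sketch of a basic two-layer residual block, assuming the input and output have the same shape so the identity shortcut can be added directly:

    import torch.nn as nn
    import torch.nn.functional as F

    class BasicResidualBlock(nn.Module):
        # Two 3x3 conv layers that learn a residual F(x); the output is F(x) + x.
        def __init__(self, channels):
            super().__init__()
            self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x):
            out = self.bn2(self.conv2(F.relu(self.bn1(self.conv1(x)))))
            # skip connection: if the conv weights were all zero, the block
            # would simply pass x through unchanged
            return F.relu(out + x)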
Another thing with this very deep architecture is that it's basically enabling up to around 150 layers of this, and what we do is we stack all of these, and periodically we also double the number of filters and downsample spatially using stride two when we do that. And then we have an additional conv layer at the very beginning of our network, and at the end we also don't have any fully connected layers here; we just have a global average pooling layer that's going to average over everything spatially, and then that's input into the last 1000-way classification layer. So this is the full ResNet architecture, and it's very simple and elegant, just stacking up all of these ResNet blocks on top of each other, and they have total depths of 34, 50, 101, and up to 152 for ImageNet. OK, so one additional thing just to know is that for the very deep networks, the ones that are 50 or more layers deep, they also use bottleneck layers similar to what GoogleNet did, in order to improve efficiency, and so within each block now, what they did is have this one by one conv filter that first projects it down to a smaller depth. So again, if we are looking at, let's say, a 28 by 28 by 256 input, we do this one by one conv and it projects the depth down; we get 28 by 28 by 64. Now your three by three conv — and in here they only have one — is operating over this reduced depth, so it's going to be less expensive, and then afterwards they have another one by one conv that projects the depth back up to 256, and so this is the actual block that you'll see in the deeper networks. So in practice ResNet also uses batch normalization after every conv layer; they use Xavier initialization with an extra scaling factor that they introduced to improve the initialization, and they train with SGD plus momentum. For the learning rate they use a similar type of schedule where you decay your learning rate when your validation error plateaus. Mini-batch size 256, a little bit of weight decay, and no dropout. And so experimentally they were able to show that they could train these very deep networks without degrading; they were able to have basically good gradient flow coming all the way back down through the network. They tried up to 152 layers on ImageNet and 1,200 on CIFAR, which is a smaller dataset that you have played with, and they also saw that now your deeper networks are able to achieve lower training errors, as expected. So you don't have the same strange plots that we saw earlier where the behavior was in the wrong direction. And so from here they were able to sweep first place at all of the ILSVRC competitions and all of the COCO competitions in 2015 by significant margins. Their total top five error was 3.6% for classification, and this is actually better than human performance on ImageNet. There was a human metric that actually came from our lab: Andrej Karpathy spent about a week training himself and then did this task himself, and he was I think somewhere around 5%, and so the network was basically able to do better than that human at least. Okay, so these are kind of the main networks that have been used recently. We had AlexNet starting things off first, VGG and GoogleNet are still very popular, but ResNet is the most recent best-performing model, and if you're training a new network and ResNet is available, you should try working with it.
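For the deeper variant just described, here is a minimal PyTorch-style sketch of the bottleneck residual block (the 256 and 64 are the example depths used above; batch normalization is omitted for brevity):

    import torch.nn as nn
    import torch.nn.functional as F

    class BottleneckResidualBlock(nn.Module):
        # 1x1 conv shrinks the depth (256 -> 64), a 3x3 conv operates on the
        # reduced depth, and a final 1x1 conv projects back up (64 -> 256).
        def __init__(self, channels=256, bottleneck=64):
            super().__init__()
            self.reduce = nn.Conv2d(channels, bottleneck, kernel_size=1)
            self.conv = nn.Conv2d(bottleneck, bottleneck, kernel_size=3, padding=1)
            self.expand = nn.Conv2d(bottleneck, channels, kernel_size=1)

        def forward(self, x):
            out = F.relu(self.reduce(x))
            out = F.relu(self.conv(out))
            out = self.expand(out)
            return F.relu(out + x)   # residual connection, as before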
So just quickly looking at some of these comparisons to get a better sense of the complexity involved. So here we have some plots that are sorted by performance — this is top-one accuracy here, and higher is better — and you'll see a lot of these models that we talked about, as well as some different versions of them. So for this GoogleNet inception family, I think there's a V2, V3, and the best one here is V4, which is actually a ResNet plus inception combination, so these are just more incremental, smaller changes that they've built on top of them, and that's the best performing model here. And if we look on the right, these are plots of computational complexity. The Y axis is your top-one accuracy, so higher is better. The X axis is your number of operations, so the further to the right, the more ops you're doing and the more computationally expensive it is, and then the size of the circle is your memory usage — the gray circles are a reference here — so the bigger the circle, the more memory usage. And here we can see that VGG, these green ones, is kind of the least efficient: it has the biggest memory and the most operations, but it does pretty well. GoogleNet is the most efficient here. It's way down on the operations side, as well as a small little circle for memory usage. AlexNet, our earlier model, has the lowest accuracy. It's relatively small compute, because it's a smaller network, but it's also not particularly memory efficient. And then ResNet here has moderate efficiency — it's kind of in the middle, both in terms of memory and operations — and it has the highest accuracy. And here also are some additional plots. You can look at these more on your own time, but this plot on the left is showing the forward pass time, in milliseconds, and you can see up at the top that a VGG forward pass is about 200 milliseconds, so you can get about five frames per second with it, and this is sorted in order. There's also this plot on the right looking at power consumption, and if you look more at this paper, there's further analysis of these kinds of computational comparisons. So these were the main architectures that you should really know in depth and be familiar with, and be thinking about actively using. But now I'm just going to go briefly through some other architectures that are good to know, either as historical inspirations or more recent areas of research. So the first one is Network in Network, from 2014, and the idea behind this is that on top of these vanilla convolutional layers it introduces what they call MLP conv layers, which are micro networks, basically a network within a network, hence the name of the paper. Within each conv layer they stack an MLP with a couple of fully connected layers on top of just the standard conv, to be able to compute more abstract features for these local patches. So instead of sliding just a conv filter around, it's sliding a slightly more complex hierarchical set of filters around and using that to get the activation maps. And so it uses these fully connected, or basically one by one conv, kind of layers, and it stacks them all up like the bottom diagram here, where we just have these networks within networks stacked in each of the layers. And the main reason to know this is that it was kind of a precursor to GoogleNet and ResNet, with this idea of bottleneck layers that you saw used very heavily there.
And it also had a little bit of philosophical inspiration for GoogleNet, with this idea of a local network topology, a network within a network, that GoogleNet also used with a different kind of structure. Now I'm going to talk about a series of works since ResNet that are mostly geared towards improving ResNet, so this is more recent research that has been done since then. I'm going to go over these pretty fast, just at a very high level; if you're interested in any of these you should look at the papers for more details. So the authors of ResNet, a little bit later on in 2016, also had this paper where they improved the ResNet block design. And so they basically adjusted which layers were in the ResNet block path, and showed that this new structure creates a more direct path for propagating information throughout the network — you want a good path to propagate information all the way forward and then gradients all the way back down again — and they showed that this new block was better for that and was able to give better performance. There's also Wide Residual Networks: while ResNets made networks much deeper as well as adding these residual connections, this paper argued that the residuals are really the important factor — having this residual construction, and not necessarily having extremely deep networks. And so what they did was use wider residual blocks, and what this means is just more filters in every conv layer. So before we might have F filters per layer, and they use this factor of k and say, well, every layer is going to have F times k filters instead. And so, using these wider layers, they showed that their 50-layer wide ResNet was able to outperform the 152-layer original ResNet, and increasing the width has the additional advantage that, even with the same number of parameters, it's more computationally efficient, because you can parallelize these wide operations more easily. Width is just convolutions with more filters that can be spread across more parallel compute, as opposed to depth, which is more sequential, so it's more computationally efficient to increase your width. So here you can see this work is starting to try to understand the contributions of width and depth and residual connections, and making some arguments for one way versus the other. And this other paper around the same time, I think maybe a little bit later, is ResNeXt, and this is again the creators of ResNet continuing to work on pushing the architecture. And here they also had this idea of, okay, let's indeed tackle this width thing more, but instead of just increasing the width of the residual block through more filters, they add structure. And so within each residual block there are multiple parallel pathways, and they call the total number of these pathways the cardinality. So it's basically taking the one ResNet block with the bottlenecks, making it relatively thinner, and having multiple of these in parallel. And so here you can see that this has some relation both to the idea of wide networks, as well as some connection to the inception module, where we have these layers operating in parallel; ResNeXt has some flavor of that as well. So another approach towards improving ResNets is this idea called stochastic depth, and in this work the motivation is, well, let's look more at this depth problem.
Once you get deeper and deeper, the typical problem that you're going to have is vanishing gradients, right: your gradients will get smaller and eventually vanish as you're trying to backpropagate them over a very large number of layers. And so their motivation is, well, let's try to have short networks during training, and they use this idea of dropping out a subset of the layers during training. So for a subset of the layers they just drop out the weights and set them to an identity connection, and now what you get is these shorter networks during training, so you can pass your gradients back better. It's also a little more efficient, and it's kind of like dropout, right — it has this sort of flavor that you've seen before. And then at test time you want to use the full deep network that you've trained. So these are some of the works looking at the ResNet architecture, trying to understand different aspects of it and trying to improve ResNet training. And there are also some works now that are going beyond ResNet, asking what are some non-ResNet architectures that can maybe work comparably to or better than ResNets. And so one idea is FractalNet, which came out pretty recently, and the argument in FractalNet is that maybe residual representations are not actually necessary — this goes back to what we were talking about earlier, what's the motivation of residual networks; it seems to make sense and there are good reasons for why this should help, but in this paper they're saying, well, here is a different architecture that we're introducing, with no residual representations. They think that the key is more about transitioning effectively from shallow to deep networks, and so they have this fractal architecture where, if you look on the right here, the layers are composed in this fractal fashion. And so there are both shallow and deep pathways to your output. They have these different-length pathways, they train with dropping out sub-paths — so again it has this dropout kind of flavor — and then at test time they use the entire fractal network, and they show that this was able to get very good performance. There's another idea called Densely Connected Convolutional Networks, DenseNet, and the idea here is that we have these blocks called dense blocks, and within each block each layer is going to be connected to every other layer after it, in this feed-forward fashion. So within this block, your input to the block is also an input to every other conv layer, and as you compute each conv output, those outputs are connected to every layer after them, and these are all concatenated as input to each conv layer; they also have some other processes for reducing the dimensions and keeping things efficient. And their main takeaway is that they argue this alleviates the vanishing gradient problem, because you have all of these very dense connections; it strengthens feature propagation and also encourages feature reuse, because with so many of these connections, each feature map that you're learning is an input to multiple later layers and gets used multiple times. So these are just a couple of ideas for alternatives, for what we can do that's not ResNets and yet still performs comparably to or better than ResNets, and this is another very active area of current research.
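To make the DenseNet idea concrete, here is a minimal PyTorch-style sketch of a dense block in which every layer receives the concatenation of the block input and all earlier layers' outputs (the growth rate and layer count here are illustrative, not the paper's values):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class DenseBlock(nn.Module):
        # Each layer adds `growth` new feature maps and takes as input the
        # concatenation of the block input plus all previous layers' outputs.
        def __init__(self, in_channels, growth=32, num_layers=4):
            super().__init__()
            self.layers = nn.ModuleList([
                nn.Conv2d(in_channels + i * growth, growth,
                          kernel_size=3, padding=1)
                for i in range(num_layers)])

        def forward(self, x):
            features = [x]
            for layer in self.layers:
                out = F.relu(layer(torch.cat(features, dim=1)))
                features.append(out)          # reused by every later layer
            return torch.cat(features, dim=1)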
You can see that a lot of this is looking at the way different layers are connected to each other and how depth is managed in these networks. And so one last thing that I wanted to mention quickly is efficient networks. You saw that GoogleNet was a work looking into this direction of how we can have efficient networks, which are important for a lot of practical usage, both training and especially deployment, and so this is another recent network, called SqueezeNet, which is looking at very efficient networks. They have these things called fire modules, which consist of a squeeze layer with a lot of one by one filters that then feeds into an expand layer with one by one and three by three filters, and they show that with this kind of architecture they're able to get AlexNet-level accuracy on ImageNet but with 50 times fewer parameters, and then you can further do network compression on this to get up to 500 times smaller than AlexNet and have the whole network be just 0.5 megabytes. And so this is a direction of efficient networks and model compression that we'll cover more in a later lecture, but I'm just giving you a hint of that here. OK, so today in summary we've talked about different kinds of CNN architectures. We looked in depth at four of the main architectures that you'll see in wide usage: AlexNet, one of the early, very popular networks; VGG and GoogleNet, which are still widely used; but ResNet is kind of taking over as the thing that you should be looking at most when you can. We also looked at these other networks in a brief, high-level overview. And so the takeaway is that these models are widely available, so you can use them when you need them. There's a trend toward extremely deep networks, but there's also significant research now around the design of how we connect layers, skip connections, what is connected to what, and also using these design choices to improve gradient flow. There's an even more recent trend towards examining the necessity of depth versus width and residual connections, the trade-offs, and what's actually helping, and there are a lot of recent works in this direction; you can look into some of the ones I pointed out if you are interested. And next time we'll talk about recurrent neural networks. Thanks.
Lecture 16: Adversarial Examples and Adversarial Training
- Okay, sounds like it is. I'll be telling you about adversarial examples and adversarial training today. Thank you. As an overview, I will start off by telling you what adversarial examples are, and then I'll explain why they happen, why it's possible for them to exist. I'll talk a little bit about how adversarial examples pose real world security threats, that they can actually be used to compromise systems built on machine learning. I'll tell you what the defenses are so far, but mostly defenses are an open research problem that I hope some of you will move on to tackle. And then finally I'll tell you how to use adversarial examples to improve other machine learning algorithms, even if you want to build a machine learning algorithm that won't face a real world adversary. Looking at the big picture and the context for this lecture, I think most of you are probably here because you've heard how incredibly powerful and successful machine learning is, that very many different tasks that could not be solved with software before are now solvable thanks to deep learning and convolutional networks and gradient descent. All of these technologies are working really well; until just a few years ago, these technologies didn't really work. In about 2013, we started to see that deep learning achieved human level performance at a lot of different tasks. We saw that convolutional nets could recognize objects in images and score about the same as people on those benchmarks, with the caveat that part of the reason the algorithms score as well as people is that people can't tell Alaskan Huskies from Siberian Huskies very well, but modulo the strangeness of the benchmarks, deep learning caught up to about human level performance for object recognition in about 2013. That same year, we also saw that object recognition applied to human faces caught up to about human level. That suddenly we had computers that could recognize faces about as well as you or I could recognize faces of strangers. You can recognize the faces of your friends and family better than a computer, but when you're dealing with people that you haven't had a lot of experience with, the computer caught up to us in about 2013. We also saw that computers caught up to humans for reading typewritten fonts in photos in about 2013. It even got to the point that we could no longer use CAPTCHAs to tell whether a user of a webpage is human or not, because the convolutional network is better at reading obfuscated text than a human is. So with this context today of deep learning working really well, especially for computer vision, it's a little bit unusual to think about the computer making a mistake. Before about 2013, nobody was ever surprised if the computer made a mistake. That was the rule, not the exception, and so today's topic, which is all about unusual mistakes that deep learning algorithms make, wasn't really a serious avenue of study until the algorithms started to work well most of the time, and now people study the way that they break, now that that's actually the exception rather than the rule. An adversarial example is an example that has been carefully computed to be misclassified. In a lot of cases we're able to make the new image indistinguishable to a human observer from the original image. Here, I show you one where we start with a panda. On the left, this is a panda that has not been modified in any way, and the convolutional network, trained on the ImageNet dataset, is able to recognize it as being a panda.
One interesting thing is that the model doesn't have a whole lot of confidence in that decision. It assigns about 60% probability to this image being a panda. If we then compute exactly the way that we could modify the image to cause the convolutional network to make a mistake we find that the optimal direction to move all the pixels is given by this image in the middle. To a human it looks a lot like noise. It's not actually noise. It's carefully computed as a function of the parameters of the network. There's actually a lot of structure there. If we multiply that image of the structured attack by a very small coefficient and add it to the original panda we get an image that a human can't tell from the original panda. In fact, on this slide there is no difference between the panda on the left and the panda on the right. When we present the image to convolutional network we use 32-bit floating point values. The monitor here can only display eight bits of color resolution, and we have made a change that's just barely too small to affect the smallest of those eight bits, but it effects the other 24 of the 32-bit floating point representation, and that little tiny change is enough to fool the convolutional network into recognizing this image of a panda as being a gibbon. Another interesting thing is that it doesn't just change the class. It's not that we just barely found the decision boundary and just barely stepped across it. The convolutional network actually has much more confidence in its incorrect prediction, that the image on the right is a gibbon, than it had for the original being a panda. On the right, it believes that the image is a gibbon with 99.9% probability, so before it thought that there was about 1/3 chance that it was something other than a panda, and now it's about as certain as it can possibly be that it's a gibbon. As a little bit of history, people have studied ways of computing attacks to fool different machine learning models since at least about 2004, and maybe earlier. For a long time this was done in the context of fooling spam detectors. In about 2013, Battista Biggio found that you could fool neural networks in this way, and around the same time my colleague, Christian Szegedy, found that you could make this kind of attack against deep neural networks just by using an optimization algorithm to search on the input of the image. A lot of what I'll be telling you about today is my own follow-up work on this topic, but I've spent a lot of my career over the past few years understanding why these attacks are possible and why it's so easy to fool these convolutional networks. When my colleague, Christian, first discovered this phenomenon independently from Battista Biggio but around the same time, he found that it was actually a result of a visualization he was trying to make. He wasn't studying security. He wasn't studying how to fool a neural network. Instead, he had a convolutional network that could recognize objects very well, and he wants to understand how it worked, so he thought that maybe he could take an image of a scene, for example a picture of a ship, and he could gradually transform that image into something that the network would recognize as being an airplane. Over the course of that transformation, he could see how the features of the input change. 
You might expect that maybe the background would turn blue to look like the sky behind an airplane, or you might expect that the ship would grow wings to look more like an airplane. You could conclude from that that the convolutional network uses the blue sky or uses the wings to recognize airplanes. That's actually not really what happened at all. Each of these panels here shows an animation that you read left to right, top to bottom. Each frame is another step of gradient ascent on the log probability that the input is an airplane according to a convolutional net model, where we follow the gradient on the input image. You're probably used to following the gradient on the parameters of a model. You can use the back propagation algorithm to compute the gradient on the input image using exactly the same procedure that you would use to compute the gradient on the parameters. In this animation of the ship in the upper left, we see five panels that all look basically the same. Gradient descent doesn't seem to have moved the image at all, but by the last panel the network is completely confident that this is an airplane. When you first code up this kind of experiment, especially if you don't know what's going to happen, it feels a little bit like you have a bug in your script and you're just displaying the same image over and over again. The first time I did it, I couldn't believe it was happening, and I had to open up the images in NumPy, and take the difference of them, and make sure that there was actually a non-zero difference in there, but there is. I show several different animations here of a ship, a car, a cat, and a truck. The only one where I actually see any change at all is the image of the cat. The color of the cat's face changes a little bit, and maybe it becomes a little bit more like the color of a metal airplane. Other than that, I don't see any changes in any of these animations, and I don't see anything very suggestive of an airplane. So gradient descent, rather than turning the input into an example of an airplane, has found an image that fools the network into thinking that the input is an airplane. And if we were malicious attackers we didn't even have to work very hard to figure out how to fool the network. We just asked the network to give us an image of an airplane, and it gave us something that fools it into thinking that the input is an airplane. When Christian first published this work, a lot of articles came out with titles like, The Flaw Lurking in Every Deep Neural Network, or Deep Learning has Deep Flaws. It's important to remember that these vulnerabilities apply to essentially every machine learning algorithm that we've studied so far. Some of them, like RBF networks and Parzen density estimators, are able to resist this effect somewhat, but even very simple machine learning algorithms are highly vulnerable to adversarial examples. In this image, I show an animation of what happens when we attack a linear model, so it's not a deep algorithm at all. It's just a shallow softmax model. You multiply by a matrix, you add a vector of bias terms, you apply the softmax function, and you've got your probability distribution over the 10 MNIST classes. At the upper left, I start with an image of a nine, and then as we move left to right, top to bottom, I gradually transform it to be a zero. Where I've drawn the yellow box, the model assigns high probability to it being a zero.
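Both the airplane visualization and this digit sweep rest on the same basic procedure: take gradient steps on the input pixels rather than on the parameters. Here is a minimal PyTorch-style sketch of that idea, where `model` and `target_class` are assumed to be given (an illustration, not the original code):

    import torch
    import torch.nn.functional as F

    def ascend_toward_class(model, image, target_class, steps=100, lr=0.01):
        # Gradient ascent on the input pixels, not on the model parameters.
        x = image.clone().detach().requires_grad_(True)
        for _ in range(steps):
            log_prob = F.log_softmax(model(x), dim=1)[0, target_class]
            log_prob.backward()              # gradient with respect to the input
            with torch.no_grad():
                x += lr * x.grad             # step uphill on log P(target | x)
                x.grad.zero_()
        return x.detach()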
I forget exactly what my threshold was for high probability, but I think it was around 0.9 or so. Then as we move to the second row, I transform it into a one, and the second yellow box indicates where we've successfully fooled the model into thinking it's a one with high probability. And then as you read the rest of the yellow boxes left to right, top to bottom, we go through the twos, threes, fours, and so on, until finally at the lower right we have a nine that has a yellow box around it, and it actually looks like a nine, but in this case the only reason it actually looks like a nine is that we started the whole process with a nine. We successfully swept through all 10 classes of MNIST without substantially changing the image of the digit in any way that would interfere with human recognition. This linear model was actually extremely easy to fool. Besides linear models, we've also seen that we can fool many different kinds of linear models including logistic regression and SVMs. We've also found that we can fool decision trees, and to a lesser extent, nearest neighbors classifiers. We wanted to explain exactly why this happens. Back in about 2014, after we'd published the original paper where we'd said that these problems exist, we were trying to figure out why they happen. When we wrote our first paper, we thought that basically this is a form of overfitting, that you have a very complicated deep neural network, it learns to fit the training set, its behavior on the test set is somewhat undefined, and then it makes random mistakes that an attacker can exploit. Let's walk through what that story looks like somewhat concretely. I have here a training set of three blue X's and three green O's. We want to make a classifier that can recognize X's and recognize O's. We have a very complicated classifier that can easily fit the training set, so we represent everywhere it believes X's should be with blobs of blue color, and I've drawn a blob of blue around all of the training set X's, so it correctly classifies the training set. It also has a blob of green mass showing where the O's are, and it successfully fits all of the green training set O's, but then because this is a very complicated function and it has just way more parameters than it actually needs to represent the training task, it throws little blobs of probability mass around the rest of space randomly. On the left there's a blob of green space that's kind of near the training set X's, and I've drawn a red X there to show that maybe this would be an adversarial example where we expect the classification to be X, but the model assigns O. On the right, I've shown that there's a red O where we have another adversarial example. We're very near the other O's. We might expect the model to assign this class to be an O, and yet because it's drawn blue mass there it's actually assigning it to be an X. If overfitting is really the story then each adversarial example is more or less the result of bad luck and also more or less unique. If we fit the model again or we fit a slightly different model we would expect to make different random mistakes on this points that are off the training set, but that was actually not what we found at all. We found that many different models would misclassify the same adversarial examples, and they would assign the same class to them. 
We also found that if we took the difference between an original example and an adversarial example then we had a direction in input space and we could add that same offset vector to any clean example, and we would almost always get an adversarial example as a result. So we started to realize that there was systematic effect going on here, not just a random effect. That led us to another idea which is that adversarial examples might actually be more like underfitting rather than overfitting. They might actually come from the model being too linear. Here I draw the same task again where we have the same manifold of O's and the same line of X's, and this time I fit a linear model to the data set rather than fitting a high capacity, non-linear model to it. We see that we get a dividing hyperplane running in between the two classes. This hyperplane doesn't really capture the true structure of the classes. The O's are clearly arranged in a C-shaped manifold. If we keep walking past the end of the O's, we've crossed the decision boundary and we've drawn a red O where even though we're very near the decision boundary and near other O's we believe that it is now an X. Similarly we can take steps that go from near X's to just over the line that are classified as O's. Another thing that's somewhat unusual about this plot is that if we look at the lower left or upper right corners these corners are very confidently classified as being X's on the lower left or O's on the upper right even though we've never seen any data over there at all. The linear model family forces the model to have very high confidence in these regions that are very far from the decision boundary. We've seen that linear models can actually assign really unusual confidence as you move very far from the decision boundary, even if there isn't any data there, but are deep neural networks actually anything like linear models? Could linear models actually explain anything about how it is that deep neural nets fail? It turns out that modern deep neural nets are actually very piecewise linear, so rather than being a single linear function they are piecewise linear with maybe not that many linear pieces. If we use rectified linear units then the mapping from the input image to the output logits is literally a piecewise linear function. By the logits I mean the un-normalized log probabilities before we apply the softmax op at the output of the model. There are other neural networks like maxout networks that are also literally piecewise linear. And then there are several that become very close to it. Before rectified linear units became popular most people used to use sigmoid units of one form or another either logistic sigmoid or hyperbolic tangent units. These sigmoidal units have to be carefully tuned, especially at initialization so that you spend most of your time near the center of the sigmoid where the sigmoid is approximately linear. Then finally, the LSTM, a kind of recurrent network that is one of the most popular recurrent networks today, uses addition from one time step to the next in order to accumulate and remember information over time. Addition is a particularly simple form of linearity, so we can see that the interaction from a very distant time step in the past and the present is highly linear within an LSTM. Now to be clear, I'm speaking about the mapping from the input of the model to the output of the model. That's what I'm saying is close to being linear or is piecewise linear with relatively few pieces. 
The mapping from the parameters of the network to the output of the network is non-linear because the weight matrices at each layer of the network are multiplied together. So we actually get extremely non-linear reactions between parameters and the output. That's what makes training a neural network so difficult. But the mapping from the input to the output is much more linear and predictable, and it means that optimization problems that aim to optimize the input to the model are much easier than optimization problems that aim to optimize the parameters. If we go and look for this happening in practice we can take a convolutional network and trace out a one-dimensional path through its input space. So what we're doing here is we're choosing a clean example. It's an image of a white car on a red background, and we are choosing a direction that will travel through space. We are going to have a coefficient epsilon that we multiply by this direction. When epsilon is negative 30, like at the left end of the plot, we're subtracting off a lot of this unit vector direction. When epsilon is zero, like in the middle of the plot, we're visiting the original image from the data set, and when epsilon is positive 30, like at the right end of the plot, we're adding this direction onto the input. In the panel on the left, I show you an animation where we move from epsilon equals negative 30 as up to epsilon equals positive 30. You read the animation left to right, top to bottom, and everywhere that there's a yellow box the input has correctly recognized as being a car. On the upper left, you see that it looks mostly blue. On the lower right, it's hard to tell what's going on. It's kind of reddish and so on. In the middle row, just after where the yellow boxes end you can see pretty clearly that it's a car on a red background, though the image is small on these slides. What's interesting to look at here is the logits that the model outputs. This is a deep convolutional rectified linear unit network. Because it uses rectified linear units, we know that the output is a piecewise linear function of the input to the model. The main question we're asking by making this plot is how many different pieces does this piecewise linear function have if we look at one particular cross section. You might think that maybe a deep net is going to represent some extremely wiggly complicated function with lots and lots of linear pieces no matter which cross section you look in. Or we might find that it has more or less two pieces for each function we look at. Each of the different curves on this plot is the logits for a different class. We see that out at the tails of the plot that the frog class is the most likely, and the frog class basically looks like a big v-shaped function. The logits for the frog class become very high when epsilon is negative 30 or positive 30, and they drop down and become a little bit negative when epsilon is zero. The car class, listed as automobile here, it's actually high in the middle, and the car is correctly recognized. As we sweep out to very negative epsilon, the logits for the car class do increase, but they don't increase nearly as quickly as the logits for the frog class. 
So, we've found a direction that's associated with the frog class and as we follow it out to a relatively large perturbation, we find that the model extrapolates linearly and begins to make a very unreasonable prediction that the frog class is extremely likely just because we've moved for a long time in this direction that was locally associated with the frog class being more likely. When we actually go and construct adversarial examples, we need to remember that we're able to get quite a large perturbation without changing the image very much as far as a human being is concerned. So here I show you a handwritten digit three, and I'm going to change it in several different ways, and all of these changes have the same L2 norm perturbation. In the top row, I'm going to change the three into a seven just by looking for the nearest seven in the training set. The difference between those two is this image that looks a little bit like the seven wrapped in some black lines. So here white pixels in the middle image in the perturbation column, the white pixels represent adding something and black pixels represent subtracting something as you move from the left column to the right column. So when we take the three and we apply this perturbation that transforms it into a seven, we can measure the L2 norm of that perturbation. And it turns out to have an L2 norm of 3.96. That gives you kind of a reference for how big these perturbations can be. In the middle row, we apply a perturbation of exactly the same size, but with the direction chosen randomly. In this case we don't actually change the class of the three at all, we just get some random noise that didn't really change the class. A human could still easily read it as being a three. And then finally at the very bottom row, we take the three and we just erase a piece of it with a perturbation of the same norm and we turn it into something that doesn't have any class at all. It's not a three, it's not a seven, it's just a defective input. All of these changes can happen with the same L2 norm perturbation. And actually a lot of the time with adversarial examples, you make perturbations that have an even larger L2 norm. What's going on is that there are several different pixels in the image, and so small changes to individual pixels can add up to relatively large vectors. For larger datasets like ImageNet, where there's even more pixels, you can make very small changes to each pixel that travel very far in vector space as measured by the L2 norm. That means that you can actually make changes that are almost imperceptible but actually move you really far and get a large dot product with the coefficients of the linear function that the model represents. It also means that when we're constructing adversarial examples, we need to make sure that the adversarial example procedure isn't able to do what happened in the top row of this slide here. So in the top row of this slide, we took the three and we actually just changed it into a seven. So when the model says that the image in the upper right is a seven, it's not a mistake. We actually just changed the input class. When we build adversarial examples, we want to make sure that we're measuring real mistakes. If we're experimenters studying how easy a network is to fool, we want to make sure that we're actually fooling it and not just changing the input class. And if we're an attacker, we actually want to make sure that we're causing misbehavior in the system. 
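To make the point about norms concrete, here is a tiny calculation (my own illustration, assuming every pixel is changed by the same amount epsilon) showing how imperceptible per-pixel changes add up to a sizable L2 norm once there are many pixels:

    import math

    def l2_of_uniform_perturbation(num_pixels, epsilon):
        # changing every pixel by +/- epsilon gives an L2 norm of epsilon * sqrt(n)
        return epsilon * math.sqrt(num_pixels)

    print(l2_of_uniform_perturbation(28 * 28, 0.25))         # MNIST-sized: 7.0
    print(l2_of_uniform_perturbation(224 * 224 * 3, 0.007))  # ImageNet-sized: ~2.7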
To do that, when we build adversarial examples, we use the max norm to constrain the perturbation. Basically this says that no pixel can change by more than some amount epsilon. So the L2 norm can get really big, but you can't concentrate all the changes for that L2 norm to erase pieces of the digit, like in the bottom row here where we erased the top of a three. One very fast way to build an adversarial example is just to take the gradient of the cost that you used to train the network with respect to the input, and then take the sign of that gradient. The sign is essentially enforcing the max norm constraint. You're only allowed to change the input by up to epsilon at each pixel, so if you just take the sign it tells you whether you want to add epsilon or subtract epsilon in order to hurt the network. You can view this as taking the observation that the network is more or less linear, as we showed on this slide, and using that to motivate building a first order Taylor series approximation of the neural network's cost. And then, subject to that Taylor series approximation, we want to maximize the cost while obeying this max norm constraint. And that gives us this technique that we call the fast gradient sign method. If you want to just get your hands dirty and start making adversarial examples really quickly, or if you have an algorithm where you want to train on adversarial examples in the inner loop of learning, this method will make adversarial examples for you very, very quickly. In practice you should also use other methods, like Nicholas Carlini's attack based on multiple steps of the Adam optimizer, to make sure that you have a very strong attack to bring out when you think you have a model that might be more robust. A lot of the time people find that they can defeat the fast gradient sign method and think that they've built a successful defense, but then when you bring out a more powerful method that takes longer to evaluate, they find that their defense can't withstand the more computationally expensive attack. I've told you that adversarial examples happen because the model is very linear. And then I told you that we could use this linearity assumption to build this attack, the fast gradient sign method. This method, when applied to a regular neural network that doesn't have any special defenses, will get over a 99% attack success rate. So that seems to confirm, somewhat, this hypothesis that adversarial examples come from the model being far too linear and extrapolating in linear fashions when it shouldn't. Well, we can actually go looking for some more evidence. My friend David Warde-Farley and I built these maps of the decision boundaries of neural networks. And we found that they are consistent with the linearity hypothesis. So the FGSM is that attack method that I described in the previous slide, where we take the sign of the gradient. We'd like to build a map of a two-dimensional cross section of input space and show which classes are assigned to the data at each point. In the grid on the right, each different cell, each little square within the grid, is a map of a CIFAR-10 classifier's decision boundary, with each cell corresponding to a different CIFAR-10 test example. On the left I show you a little legend where you can understand what each cell means. The very center of each cell corresponds to the original example from the CIFAR-10 dataset with no modification. As we move left to right in the cell, we're moving in the direction of the fast gradient sign method attack.
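Here is a minimal sketch of that fast gradient sign method, assuming PyTorch, a differentiable classifier model, a batch of inputs x with integer labels y, and pixel values in the range [0, 1]; those names and that pixel range are assumptions for illustration.

import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """One max-norm-constrained step uphill on the training loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # The sign enforces the max norm constraint: each pixel moves by exactly +/- eps.
    x_adv = x_adv + eps * x_adv.grad.sign()
    return x_adv.detach().clamp(0.0, 1.0)  # keep pixels in the assumed valid range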
So just the sign of the gradient. As we move up and down within the cell, we're moving in a random direction that's orthogonal to the fast gradient sign method direction. So we get to see a cross section, a 2D cross section of CIFAR-10 decision space. At each pixel within this map, we plot a color that tells us which class is assigned there. We use white pixels to indicate that the correct class was chosen, and then we used different colors to represent all of the other incorrect classes. You can see that in nearly all of the grid cells on the right, roughly the left half of the image is white. So roughly the left half of the image has been correctly classified. As we move to the right, we see that there is usually a different color on the right half. And the boundaries between these regions are approximately linear. What's going on here is that the fast gradient sign method has identified a direction where if we get a large dot product with that direction we can get an adversarial example. And from this we can see that adversarial examples live more or less in linear subspaces. When we first discovered adversarial examples, we thought that they might live in little tiny pockets. In the first paper we actually speculated that maybe they're a little bit like the rational numbers, hiding out finely tiled among the real numbers, with nearly every real number being near a rational number. We thought that because we were able to find an adversarial example corresponding to every clean example that we loaded into the network. After doing this further analysis, we found that what's happening is that every real example is near one of these linear decision boundaries where you cross over into an adversarial subspace. And once you're in that adversarial subspace, all the other points nearby are also adversarial examples that will be misclassified. This has security implications because it means you only need to get the direction right. You don't need to find an exact coordinate in space. You just need to find a direction that has a large dot product with the sign of the gradient. And once you move more or less approximately in that direction, you can fool the model. We also made another cross section where after using the left-right axis as the fast gradient sign method, we looked for a second direction that has high dot product with the gradient so we could make both axes adversarial. And in this case you see that we get linear decision boundaries. They're now oriented diagonally rather than vertically, but you can see that there's actually this two-dimensional subspace of adversarial examples that we can cross into. Finally it's important to remember that adversarial examples are not noise. You can add a lot of noise to an adversarial example and it will stay adversarial. You can add a lot of noise to a clean example and it will stay clean. Here we make random cross sections where both axes are randomly chosen directions. And you see that on CIFAR-10, most of the cells are completely white, meaning that they're correctly classified to start with, and when you add noise they stay correctly classified. We also see that the model makes some mistakes because this is the test set. And generally if a test example starts out misclassified, adding the noise doesn't change it. There are a few exceptions where, if you look in the third row, third column, noise actually can make the model misclassify the example for especially large noise values. 
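As an aside, here is a minimal sketch of how one of these cross-section maps could be generated, assuming a trained PyTorch classifier model, one clean image x, and two direction tensors, for example the FGSM sign direction paired with a random orthogonal direction, or two random directions; all of these names and the grid extent are placeholders rather than the exact procedure behind the figures.

import torch

def decision_map(model, x, dir_horizontal, dir_vertical, extent=30.0, resolution=41):
    """Predicted class at every point of a 2-D grid around a clean example."""
    model.eval()
    coords = torch.linspace(-extent, extent, resolution)
    labels = torch.zeros(resolution, resolution, dtype=torch.long)
    with torch.no_grad():
        for i, a in enumerate(coords):        # horizontal axis (e.g., the FGSM direction)
            for j, b in enumerate(coords):    # vertical axis (e.g., a random orthogonal direction)
                point = (x + a * dir_horizontal + b * dir_vertical).unsqueeze(0)
                labels[j, i] = model(point).argmax(dim=1)
    return labels  # coloring each entry by its class reproduces this kind of map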
And there's even one case where, in the top row, you can see an example where the model is misclassifying the test example to start with but then noise can change it to be correctly classified. For the most part, noise has very little effect on the classification decision compared to adversarial examples. What's going on here is that in high dimensional spaces, if you choose some reference vector and then you choose a random vector in that high dimensional space, the random vector will, on average, have zero dot product with the reference vector. So if you make a first order Taylor series approximation of your cost and think about how that approximation predicts random vectors will change your cost, you see that random vectors on average have no effect on the cost. But adversarial examples are chosen to maximize it. In these plots we looked in two dimensions. More recently, Florian Tramer here at Stanford got interested in finding out just how many dimensions there are to these subspaces where the adversarial examples lie in a thick contiguous region. And we came up with an algorithm together where you actually look for several different orthogonal vectors that all have a large dot product with the gradient. By looking in several different orthogonal directions simultaneously, we can map out this kind of polytope where many different adversarial examples live. We found out that this adversarial region has on average about 25 dimensions. If you look at different examples you'll find different numbers of adversarial dimensions. But on average on MNIST we found it was about 25. So what's interesting here is the dimensionality actually tells you something about how likely you are to find an adversarial example by generating random noise. If every direction were adversarial, then any change would cause a misclassification. If most of the directions were adversarial, then random directions would end up being adversarial just by accident most of the time. And if there were only one adversarial direction, you'd almost never find that direction just by adding random noise. When there are 25, you have a chance of doing it sometimes. Another interesting thing is that different models will often misclassify the same adversarial examples. The dimensionality of the adversarial subspace relates to that transfer property. The larger the dimensionality of the subspace, the more likely it is that the subspaces for two models will intersect. So if you have two different models that have a very large adversarial subspace, you know that you can probably transfer adversarial examples from one to the other. But if the adversarial subspace is very small, then unless there's some kind of really systematic effect forcing them to share exactly the same subspace, it seems less likely that you'll be able to transfer examples just due to the subspaces randomly aligning. A lot of the time in the adversarial example research community, we refer back to the story of Clever Hans. This comes from an essay by Bob Sturm called Clever Hans, Clever Algorithms. Because Clever Hans is a pretty good metaphor for what's happening with machine learning algorithms. So Clever Hans was a horse that lived in the early 1900s. His owner trained him to do arithmetic problems. So you could ask him, "Clever Hans, what's two plus one?" And he would answer by tapping his hoof.
And after the third tap, everybody would start cheering and clapping and looking excited because he'd actually done an arithmetic problem. Well it turned out that he hadn't actually learned to do arithmetic. But it was actually pretty hard to figure out what was going on. His owner was not trying to defraud anybody, his owner actually believed he could do arithmetic. And presumably Clever Hans himself was not trying to trick anybody. But eventually a psychologist examined him and found that if he was put in a room alone without an audience, and the person asking the questions wore a mask, he couldn't figure out when to stop tapping. You'd ask him, "Clever Hans, "what's one plus one?" And he'd just [knocking] keep staring at your face, waiting for you to give him some sign that he was done tapping. So everybody in this situation was trying to do the right thing. Clever Hans was trying to do whatever it took to get the apple that his owner would give him when he answered an arithmetic problem. His owner did his best to train him correctly with real arithmetic questions and real rewards for correct answers. And what happened was that Clever Hans inadvertently focused on the wrong cue. He found this cue of people's social reactions that could reliably help him solve the problem, but then it didn't generalize to a test set where you intentionally took that cue away. It did generalize to a naturally occurring test set, where he had an audience. So that's more or less what's happening with machine learning algorithms. They've found these very linear patterns that can fit the training data, and these linear patterns even generalize to the test data. They've learned to handle any example that comes from the same distribution as their training data. But then if you shift the distribution that you test them on, if a malicious adversary actually creates examples that are intended to fool them, they're very easily fooled. In fact we find that modern machine learning algorithms are wrong almost everywhere. We tend to think of them as being correct most of the time, because when we run them on naturally occurring inputs they achieve very high accuracy percentages. But if we look instead of as the percentage of samples from an IID test set, if we look at the percentage of the space in RN that is correctly classified, we find that they misclassify almost everything and they behave reasonably only on a very thin manifold surrounding the data that we train them on. In this plot, I show you several different examples of Gaussian noise that I've run through a CIFAR-10 classifier. Everywhere that there is a pink box, the classifier thinks that there is something rather than nothing. I'll come back to what that means in a second. Everywhere that there is a yellow box, one step of the fast gradient sign method was able to persuade the model that it was looking specifically at an airplane. I chose the airplane class because it was the one with the lowest success rate. It had about a 25% success rate. That means an attacker would need four chances to get noise recognized as an airplane on this model. An interesting thing, and appropriate enough given the story of Clever Hans, is that this model found that about 70% of RN was classified as a horse. So I mentioned that this model will say that noise is something rather than nothing. And it's actually kind of important to think about how we evaluate that. 
If you have a softmax classifier, it has to give you a distribution over the n different classes that you train it on. So there's a few ways that you can argue that the model is telling you that there's something rather than nothing. One is you can say, if it assigns something like 90% to one particular class, that seems to be voting for that class being there. We'd much rather see it give us something like a uniform distribution saying this noise doesn't look like anything in the training set so it's equally likely to be a horse or a car. And that's not what the model does. It'll say, this is very definitely a horse. Another thing that you can do is you can replace the last layer of the model. For example, you can use a sigmoid output for each class. And then the model is actually capable of telling you that any subset of classes is present. It could actually tell you that an image is both a horse and a car. And what we would like it to do for the noise is tell us that none of the classes is present, that all of the sigmoids should have a value of less than 1/2. And 1/2 isn't even a particularly low threshold. We could reasonably expect that all of the sigmoids would be less than 0.01 for such a defective input as this. But what we find instead is that the sigmoids tend to report at least one class as present just when we run Gaussian noise of sufficient norm through the model. We've also found that we can do adversarial examples for reinforcement learning. And there's a video for this. I'll upload the slides after the talk and you can follow the link. Unfortunately I wasn't able to get the WiFi to work so I can't show you the video animated. But I can describe basically what's going on from this still here. There's a game Seaquest on Atari where you can train reinforcement learning agents to play that game. And you can take the raw input pixels and you can take the fast gradient sign method or other attacks that use other norms besides the max norm, and compute perturbations that are intended to change the action that the policy would select. So the reinforcement learning policy, you can think of it as just being like a classifier that looks at a frame. And instead of categorizing the input into a particular category, it gives you a softmax distribution over actions to take. So if we just take that and say that the most likely action should have its probability decreased by the adversary, you'll get these perturbations of input frames that you can then apply and cause the agent to play different actions than it would have otherwise. And using this you can make the agent play Seaquest very, very badly. It's maybe not the most interesting possible thing. What we'd really like is an environment where there are many different reward functions available for us to study. So for example, if you had a robot that was intended to cook scrambled eggs, and you had a reward function measuring how well it's cooking scrambled eggs, and you had another reward function measuring how well it's cooking chocolate cake, it would be really interesting if we could make adversarial examples that cause the robot to make a chocolate cake when the user intended for it to make scrambled eggs. That's because it's very difficult to succeed at something and it's relatively straightforward to make a system fail. So right now, adversarial examples for RL are very good at showing that we can make RL agents fail.
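As a rough illustration of that reinforcement learning attack, here is a minimal sketch that treats a policy network as a classifier over actions and perturbs a single observation so that the originally preferred action becomes less likely. The names policy and frame, the use of a sign perturbation, and the max norm budget eps are illustrative assumptions, not the exact setup used in the experiments described above.

import torch
import torch.nn.functional as F

def attack_policy_frame(policy, frame, eps):
    """Perturb one observation to push down the probability of the action the
    policy would otherwise select (a sketch; real attacks may use other norms)."""
    frame_adv = frame.clone().detach().requires_grad_(True)
    logits = policy(frame_adv.unsqueeze(0))        # treat the policy as a classifier over actions
    chosen_action = logits.argmax(dim=1)
    loss = F.cross_entropy(logits, chosen_action)  # raising this loss lowers that action's probability
    loss.backward()
    return (frame_adv + eps * frame_adv.grad.sign()).detach()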
But we haven't yet been able to hijack them and make them do a complicated task that's different from what their owner intended. Seems like it's one of the next steps in adversarial example research though. If we look at high-dimension linear models, we can actually see that a lot of this is very simple and straightforward. Here we have a logistic regression model that classifies sevens and threes. So the whole model can be described just by a weight vector and a single scalar bias term. We don't really need to see the bias term for this exercise. If you look on the left I've plotted the weights that we used to discriminate sevens and threes. The weights should look a little bit like the difference between the average seven and the average three. And then down at the bottom we've taken the sign of the weights. So the gradient for a logistic regression model is going to be proportional to the weights. And then the sign of the weights gives you essentially the sign of the gradient. So we can do the fast gradient sign method to attack this model just by looking at its weights. In the examples in the panel that's the second column from the left we can see clean examples. And then on the right we've just added or subtracted this image of the sign of the weights off of them. To you and me as human observers, the sign of the weights is just like garbage that's in the background, and we more or less filter it out. It doesn't look particularly interesting to us. It doesn't grab our attention. To the logistic regression model this image of the sign of the weights is the most salient thing that could ever appear in the image. When it's positive it looks like the world's most quintessential seven. When it's negative it looks like the world's most quintessential three. And so the model makes its decision almost entirely based on this perturbation we added to the image, rather than on the background. You could also take this same procedure, and my colleague Andrej at OpenAI showed how you can modify the image on ImageNet using this same approach, and turn this goldfish into a daisy. Because ImageNet is much higher dimensional, you don't need to use quite as large of a coefficient on the image of the weights. So we can make a more persuasive fooling attack. You can see that this same image of the weights, when applied to any different input image, will actually reliably cause a misclassification. What's going on is that there are many different classes, and it means that if you choose the weights for any particular class, it's very unlikely that a new test image will belong to that class. So on ImageNet, if we're using the weights for the daisy class, and there are 1,000 different classes, then we have about a 99.9% chance that a test image will not be a daisy. If we then go ahead and add the weights for the daisy class to that image, then we get a daisy, and because that's not the correct class, it's a misclassification. So there's a paper at CVPR this year called Universal Adversarial Perturbations that expands a lot more on this observation that we had going back in 2014. But basically these weight vectors, when applied to many different images, can cause misclassification in all of them. I've spent a lot of time telling you that these linear models are just terrible, and at some point you've probably been hoping I would give you some sort of a control experiment to convince you that there's another model that's not terrible. So it turns out that some quadratic models actually perform really well. 
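Before turning to those quadratic models, here is a minimal sketch of the sign-of-the-weights attack on logistic regression described above. The names are placeholders: w is the learned weight vector, X is a batch of flattened images, and eps is the per-pixel perturbation budget.

import numpy as np

def attack_logistic_regression(w, X, eps):
    """Fast gradient sign attack on a binary logistic regression classifier.

    For logistic regression the gradient of the loss with respect to the input
    is proportional to the weight vector, so the sign of the gradient is just
    the sign of the weights."""
    perturbation = eps * np.sign(w)
    X_toward_positive = X + perturbation  # pushes scores toward the positive class
    X_toward_negative = X - perturbation  # pushes scores toward the negative class
    return X_toward_positive, X_toward_negative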
In particular a shallow RBF network is able to resist adversarial perturbations very well. Earlier I showed you an animation where I took a nine and I turned it into a zero, one, two, and so on, without really changing its appearance at all. And I was able to fool a linear softmax regression classifier. Here I've got an RBF network where it outputs a separate probability of each class being absent or present, and that probability is given by e to the negative square of the difference between a template image and the input image. And if we actually follow the gradient of this classifier, it does actually turn the image into a zero, a one, a two, a three, and so on, and we can actually recognize those changes. The problem is, this classifier does not get very good accuracy on the training set. It's a shallow model. It's basically just a template matcher. It is literally a template matcher. And if you try to make it more sophisticated by making it deeper, it turns out that the gradient of these RBF units is zero, or very near zero, throughout most of RN. So they're extremely difficult to train, even with batch normalization and methods like that. I haven't managed to train a deep RBF network yet. But I think if somebody comes up with better hyperparameters or a new, more powerful optimization algorithm, it might be possible to solve the adversarial example problem by training a deep RBF network where the model is so nonlinear and has such wide flat areas that the adversary is not able to push the cost uphill just by making small changes to the model's input. One of the things that's the most alarming about adversarial examples is that they generalize from one dataset to another and one model to another. Here I've trained two different models on two different training sets. The training sets are tiny in both cases. It's just MNIST three versus seven classification, and this is really just for the purpose of making a slide. If you train a logistic regression model on the digits shown in the left panel, you get the weights shown on the left in the lower panel. If you train a logistic regression model on the digits shown in the upper right, you get the weights shown on the right in the lower panel. So you've got two different training sets and we learn weight vectors that look very similar to each other. That's just because machine learning algorithms generalize. You want them to learn a function that's somewhat independent of the data that you train them on. It shouldn't matter which particular training examples you choose. If you want to generalize from the training set to the test set, you've also got to expect that different training sets will give you more or less the same result. And that means that because they've learned more or less similar functions, they're vulnerable to similar adversarial examples. An adversary can compute an image that fools one and use it to fool the other. In fact we can actually go ahead and measure the transfer rate between several different machine learning techniques, not just different data sets. Nicolas Papernot and his collaborators have spent a lot of time exploring this transferability effect. And they found that for example, logistic regression makes adversarial examples that transfer to decision trees with 87.4% probability. Wherever you see dark squares in this matrix, that shows that there's a high amount of transfer. 
That means that it's very possible for an attacker using the model on the left to create adversarial examples for the model on the right. The procedure overall is that, suppose the attacker wants to fool a model that they don't actually have access to. They don't know the architecture that's used to train the model. They may not even know which algorithm is being used. They may not know whether they're attacking a decision tree or a deep neural net. And they also don't know the parameters of the model that they're going to attack. So what they can do is train their own model that they'll use to build the attack. There's two different ways you can train your own model. One is you can label your own training set for the same task that you want to attack. Say that somebody is using an ImageNet classifier, and for whatever reason you don't have access to ImageNet, you can take your own photos and label them, train your own object recognizer. It's going to share adversarial examples with an ImageNet model. The other thing you can do is, say that you can't afford to gather your own training set. What you can do instead is if you can get limited access to the model where you just have the ability to send inputs to the model and observe its outputs, then you can send those inputs, observe the outputs, and use those as your training set. This'll work even if the output that you get from the target model is only the class label that it chooses. A lot of people read this and assume that you need to have access to all the probability values it outputs. But even just the class labels are sufficient. So once you've used one of these two methods, either gather your own training set or observing the outputs of a target model, you can train your own model and then make adversarial examples for your model. Those adversarial examples are very likely to transfer and affect the target model. So you can then go and send those out and fool it, even if you didn't have access to it directly. We've also measured the transferability across different data sets, and for most models we find that they're kind of in an intermediate zone where different data sets will result in a transfer rate of, like, 60% to 80%. There's a few models like SVMs that are very data dependent because SVMs end up focusing on a very small subset of the training data to form their final decision boundary. But most models that we care about are somewhere in the intermediate zone. Now that's just assuming that you rely on the transfer happening naturally. You make an adversarial example and you hope that it will transfer to your target. What if you do something to stack the deck in your favor and improve the odds that you'll get your adversarial examples to transfer? Dawn Song's group at UC Berkeley studied this. They found that if they take an ensemble of different models and they use gradient descent to search for an adversarial example that will fool every member of their ensemble, then it's extremely likely that it will transfer and fool a new machine learning model. So if you have an ensemble of five models, you can get it to the point where there's essentially a 100% chance that you'll fool a sixth model out of the set of models that they compared. They looked at things like ResNets of different depths, VGG, and GoogLeNet. So in the labels for each of the different rows you can see that they made ensembles that lacked each of these different models, and then they would test it on the different target models. 
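As a rough sketch of that ensemble idea, you can average the losses of every model in the ensemble and take a single sign-of-gradient step against that average. The function below assumes PyTorch, a list models of differentiable classifiers, a batch x with labels y, and a budget eps; these are illustrative assumptions, and the actual work uses a more elaborate multi-step optimization.

import torch
import torch.nn.functional as F

def ensemble_fgsm(models, x, y, eps):
    """One max-norm step chosen to raise the average loss of every model in the
    ensemble, so the resulting example is more likely to transfer to new models."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = sum(F.cross_entropy(m(x_adv), y) for m in models) / len(models)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()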
So, for example, if you make an ensemble that omits GoogLeNet, you have only about a 5% chance of GoogLeNet correctly classifying the adversarial example you make for that ensemble. If you make an ensemble that omits ResNet-152, in their experiments they found that there was a 0% chance of ResNet-152 resisting that attack. That probably indicates they should have run some more adversarial examples until they found a non-zero success rate, but it does show that the attack is very powerful. And when you go and intentionally cause the transfer effect, you can really make it quite strong. A lot of people often ask me if the human brain is vulnerable to adversarial examples. And for this lecture I can't use copyrighted material, but there are some really hilarious things on the Internet: if you go looking for, like, the fake CAPTCHA with images of Mark Hamill, you'll find something that my perception system definitely can't handle. So here's another one that's actually published with a license where I was confident I'm allowed to use it. You can look at this image of different circles here, and they appear to be intertwined spirals. But in fact they are concentric circles. The orientation of the edges of the squares is interfering with the edge detectors in your brain, making it look like the circles are spiraling. So you can think of these optical illusions as being adversarial examples for the human brain. What's interesting is that we don't seem to share many adversarial examples in common with machine learning models. Adversarial examples transfer extremely reliably between different machine learning models, especially if you use that ensemble trick that was developed at UC Berkeley. But those adversarial examples don't fool us. It tells us that we must be using a very different algorithm or model family than current convolutional networks. We don't really know what the difference is yet, but it would be very interesting to figure that out. It seems to suggest that studying adversarial examples could tell us how to significantly improve our existing machine learning models. Even if you don't care about having an adversary, we might figure out something or other about how to make machine learning algorithms deal with ambiguity and unexpected inputs more like a human does. If we actually want to go out and do attacks in practice, there's starting to be a body of research on this subject. Nicolas Papernot showed that he could use the transfer effect to fool classifiers hosted by MetaMind, Amazon, and Google. So these are all just different machine learning APIs where you can upload a dataset and the API will train the model for you. And then you don't actually know, in most cases, which model is trained for you. You don't have access to its weights or anything like that. So Nicolas would use the API-hosted model's outputs to train his own copy of the model on his own personal desktop, and then build adversarial examples that could fool the API-hosted model. Later, Berkeley showed you could fool Clarifai in this way. Yeah? - [Man] What did you mean when you said machine-generated adversarial examples don't generally fool us? Because I thought that was part of the point, that we generally make machine-generated adversarial examples where just a few pixels change. - Oh, so if we look at, for example, this picture of the panda. To us it looks like a panda. To most machine learning models it looks like a gibbon. And so this change isn't interfering with our brains, but it reliably fools lots of different machine learning models.
I saw somebody actually took this image of the perturbation out of our paper, and they pasted it on their Facebook profile picture to see if it could interfere with Facebook recognizing them. And they said that it did. I don't think that Facebook has a gibbon tag though, so we don't know if they managed to make it think that they were a gibbon. And one of the other things that you can do that's of fairly high practical significance is you can actually fool malware detectors. Kathrin Grosse at Saarland University wrote a paper about this. And there's starting to be a few others. There's a model called MalGAN that actually uses a GAN to generate adversarial examples for malware detectors. Another thing that matters a lot if you are interested in using these attacks in the real world and defending against them in the real world is that a lot of the time you don't actually have access to the digital input to a model. If you're interested in the perception system for a self-driving car or a robot, you probably don't get to actually write to the buffer on the robot itself. You just get to show the robot objects that it can see through a camera lens. So my colleagues Alexey Kurakin and Samy Bengio and I wrote a paper where we studied whether we can actually fool an object recognition system running on a phone, where it perceives the world through a camera. Our methodology was really straightforward. We just printed out several pictures of adversarial examples. And we found that the object recognition system running behind the camera was fooled by them. The system on the phone is actually different from the model that we used to generate the adversarial examples. So we're showing not just transfer across the changes that happen when you use the camera, we're also showing that those examples transfer across the model that you use. So the attacker could conceivably fool a system that's deployed in a physical agent, even if they don't have access to the model on that agent and even if they can't interface directly with the agent but can just subtly modify objects that it can see in its environment. Yeah? - [Man] Why doesn't the image noise from the low-quality camera affect the adversarial example? Because that's what one would expect. - Yeah, so I think a lot of that comes back to the maps that I showed earlier. If you cross over the boundary into the realm of adversarial examples, they occupy a pretty wide space and they're very densely packed in there. So if you jostle around a little bit, you're not going to recover from the adversarial attack. If the camera noise, somehow or other, was aligned with the negative gradient of the cost, then the camera could take a gradient descent step downhill and rescue you from the uphill step that the adversary took. But probably the camera's taking more or less something that you could model as a random direction. Like clearly when you use the camera more than once it's going to do the same thing each time, but from the point of view of how that direction relates to the image classification problem, it's more or less a random variable that you sample once. And it seems unlikely to align exactly with the normal to this class boundary. There's a lot of different defenses that we'd like to build. And it's a little bit disappointing that I'm mostly here to tell you about attacks. I'd like to tell you how to make your systems more robust. But basically every defense we've tried has failed pretty badly. And in fact, the same is true even when people have published that they successfully defended.
Well, there have been several papers on arXiv over the last several months, and Nicholas Carlini at Berkeley just released a paper where he shows that 10 of those defenses are broken. So this is a really, really hard problem. You can't just make it go away by using traditional regularization techniques. In particular, generative models are not enough to solve the problem. A lot of people say, "Oh, the problem that's going on here is you don't know anything about the distribution over the input pixels. If you could just tell whether the input is realistic or not, then you'd be able to resist it." It turns out that what matters is not so much getting the right distribution over the inputs x, but getting the right posterior distribution over the class labels y given the inputs x. So just using a generative model is not enough to solve the problem. I think a very carefully designed generative model could possibly do it. Here I show two different modes of a bimodal distribution, and we have two different generative models that try to capture these modes. On the left we have a mixture of two Gaussians. On the right we have a mixture of two Laplacians. You can't really tell the difference visually between the distributions they impose over x, and the difference in the likelihood they assign to the training data is negligible. But the posterior distribution they assign over classes is extremely different. On the left we get a logistic regression classifier that has very high confidence out in the tails of the distribution where there is never any training data. On the right, with the Laplacian distribution, we level off to more or less 50-50. Yeah? [speaker drowned out] The issue is that it's a nonstationary distribution. So if you train it to recognize one kind of adversarial example, then it will become vulnerable to another kind that's designed to fool its detector. That's one of the categories of defenses that Nicholas broke in his latest paper that he put out. So here, basically, the choice of exactly which family of generative model you use has a big effect on whether the posterior becomes deterministic or uniform as the model extrapolates. And if we could design a really rich, deep generative model that can generate realistic ImageNet images and also correctly calculate its posterior distribution, then maybe something like this approach could work. But at the moment it's really difficult to get any of those probabilistic calculations correct. And what usually happens is, somewhere or other we make an approximation that causes the posterior distribution to extrapolate very linearly again. It's been a difficult engineering challenge to build generative models that actually capture these distributions accurately. The universal approximator theorem tells us that whatever shape we would like our classification function to have, a neural net that's big enough ought to be able to represent it. It's an open question whether we can train the neural net to have that function, but we know that it should at least be able to represent the right shape. So far we've been getting neural nets that give us these very linear decision functions, and we'd like to get something that looks a little bit more like a step function. So what if we actually just train on adversarial examples? For every input x in the training set, we also train the model to map x plus an adversarial perturbation to the same class label as the original. It turns out that this sort of works.
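Here is a minimal sketch of that adversarial training loop, reusing the fgsm helper sketched earlier. The optimizer, the epsilon, and the equal weighting of the clean and adversarial losses are illustrative assumptions rather than the exact recipe from the talk.

import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, eps=0.25):
    """One training step on clean examples plus freshly generated FGSM
    adversarial examples, both mapped to the original labels."""
    x_adv = fgsm(model, x, y, eps)   # new adversarial examples every step
    optimizer.zero_grad()
    loss = 0.5 * F.cross_entropy(model(x), y) + 0.5 * F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()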
You can generally resist the same kind of attack that you train on. And an important consideration is making sure that you can run your attack very quickly so that you can train on lots of examples. So here the green curve at the very top, the one that doesn't really descend much at all, that's the test set error on adversarial examples if you train on clean examples only. The cyan curve that descends more or less diagonally through the middle of the plot, that's the test set error on adversarial examples if you train on adversarial examples. You can see that it does actually reduce significantly. It gets down to a little bit less than 1% error. And the important thing to keep in mind here is that these are fast gradient sign method adversarial examples. It's much harder to resist iterative multi-step adversarial examples where you run an optimizer for a long time searching for a vulnerability. And another thing to keep in mind is that we're testing on the same kind of adversarial examples that we train on. It's harder to generalize from one optimization algorithm to another. By comparison, if you look at what happens on clean examples, the blue curve shows the clean test set error rate if you train only on clean examples. The red curve shows what happens if you train on both clean and adversarial examples. We see that the red curve actually drops lower than the blue curve. So on this task, training on adversarial examples actually helped us to do the original task better. This is because on the original task we were overfitting. Training on adversarial examples is a good regularizer. If you're overfitting it can make you overfit less. If you're underfitting it'll just make you underfit worse. Other kinds of models besides deep neural nets don't benefit as much from adversarial training. So when we started this whole topic of study we thought that deep neural nets might be uniquely vulnerable to adversarial examples. But it turns out that actually they're one of the few models that has a clear path to resisting them. Linear models are just always going to be linear. They don't have much hope of resisting adversarial examples. Deep neural nets can be trained to be nonlinear, and so it seems like there's a path to a solution for them. Even with adversarial training, we still find that we aren't able to make models where, if you optimize the input to belong to different classes, you get examples in those classes. Here I start with a CIFAR-10 truck and I turn it into each of the 10 different CIFAR-10 classes. Toward the middle of the plot you can see that the truck has started to look a little bit like a bird. But the bird class is the only one that we've come anywhere near hitting. So even with adversarial training, we're still very far from solving this problem. When we do adversarial training, we rely on having labels for all the examples. We have an image that's labeled as a bird. We make a perturbation that's designed to decrease the probability of the bird class, and we train the model to say that the image should still be a bird. But what if you don't have labels? It turns out that you can actually train without labels. You first ask the model to predict the label of the image. So if you've trained for a little while and your model isn't perfect yet, it might say, oh, maybe this is a bird, maybe it's a plane. There's some blue sky there, I'm not sure which of these two classes it is.
Then we make an adversarial perturbation that's intended to change the guess and we just try to make it say, oh this is a truck, or something like that. It's not whatever you believed it was before. You can then train it to say that the distribution of our classes should still be the same as it was before, but this should still be considered probably a bird or a plane. This technique is called virtual adversarial training, and it was invented by Takeru Miyato. He was my Intern at Google after he did this work. At Google we invited him to come and apply his invention to text classification, because this ability to learn from unlabeled examples makes it possible to do semi-supervised learning where you learn from both unlabeled and labeled examples. And there's quite a lot of unlabeled text in the world. So we were able to bring down the error rate on several different text classification tasks by using this virtual adversarial training. Finally, there's a lot of problems where we'd like to use neural nets to guide optimization procedures. If we want to make a very, very fast car, we could imagine a neural net that looks at the blueprints for a car and predicts how fast it will go. If we could then optimize with respect to the input of the neural net and find the blueprint that it predicts would go the fastest, we could build an incredibly fast car. Unfortunately, what we get right now is not a blueprint for a fast car. We get an adversarial example that the model thinks is going to be very fast. If we're able to solve the adversarial example problem, we'll be able to solve this model-based optimization problem. I like to call model-based optimization the universal engineering machine. If we're able to do model-based optimization, we'll be able to write down a function that describes a thing that doesn't exist yet but we wish that we had. And then gradient descent and neural nets will figure out how to build it for us. We can use that to design new genes and new molecules for medicinal drugs, and new circuits to make GPUs run faster and things like that. So I think overall, solving this problem could unlock a lot of potential technological advances. In conclusion, attacking machine learning models is extremely easy, and defending them is extremely difficult. If you use adversarial training you can get a little bit of a defense, but there's still many caveats associated with that defense. Adversarial training and virtual adversarial training also make it possible to regularize your model and even learn from unlabeled data so you can do better on regular test examples, even if you're not concerned about facing an adversary. And finally, if we're able to solve all of these problems, we'll be able to build a black box model-based optimization system that can solve all kinds of engineering problems that are holding us back in many different fields. I think I have a few minutes left for questions. [audience applauds] [speaker drowned out] Yeah. Oh, so, there's some determinism to the choice of those 50 directions. Oh right, yeah. So repeating the questions. I've said that the same perturbation can fool many different models or the same perturbation can be applied to many different clean examples. I've also said that the subspace of adversarial perturbations is only about 50 dimensional, even if the input dimension is 3,000 dimensional. So how is it that these subspaces intersect? The reason is that the choice of the subspace directions is not completely random. 
It's generally going to be something like pointing from one class centroid to another class centroid. And if you look at that vector and visualize it as an image, it might not be meaningful to a human just because humans aren't very good at imagining what class centroids look like. And we're really bad at imagining differences between centroids. But there is more or less this systematic effect that causes different models to learn similar linear functions, just because they're trying to solve the same task. [speaker drowned out] Yeah, so the question is, is it possible to identify which layer contributes the most to this issue? One thing is that if you, the last layer is somewhat important. Because, say that you made a feature extractor that's completely robust to adversarial perturbations and can shrink them to be very, very small, and then the last layer is still linear. Then it has all the problems that are typically associated with linear models. And generally you can do adversarial training where you perturb all the different layers, all the hidden layers as well as the input. In this lecture I only described perturbing the input because it seems like that's where most of the benefit comes from. The one thing that you can't do with adversarial training is perturb the very last layer before the softmax, because that linear layer at the end has no way of learning to resist the perturbations. Doing adversarial training at that layer usually just breaks the whole process. But other than that, it seems very problem dependent. There's a paper by Sara Sabour and her collaborators called Adversarial Manipulation of Deep Representations, where they design adversarial examples that are intended to fool different layers of the net. They report some things about, like, how large of a perturbation is needed at the input to get different sizes of perturbation at different hidden layers. I suspect that if you trained the model to resist perturbations at one layer, then another layer would become more vulnerable and it would be like a moving target. [speaker drowned out] Yes, so the question is, how many adversarial examples are needed to improve the misclassification rate? Some of our plots we include learning curves. Or some of our papers we include learning curves, so you can actually see, like in this one here. Every time we do an epoch we've generated the same number of adversarial examples as there are training examples. So every epoch here is 50,000 adversarial examples. You can see that adversarial training is a very data hungry process. You need to make new adversarial examples every time you update the weights. And they're constantly changing in reaction to whatever the model has learned most recently. [speaker drowned out] Oh, the model-based optimization, yeah. Yeah, so the question is just to elaborate further on this problem. So most of the time that we have a machine learning model, it's something like a classifier or a regression model where we give it an input from the test set and it gives us an output. And usually that input is randomly occurring and comes from the same distribution as the training set. We usually just run the model, get its prediction, and then we're done with it. Sometimes we have feedback loops, like for recommender systems. 
If you work at Netflix and you recommend a movie to a viewer, then they're more likely to watch that movie and then rate it, and then there's going to be more ratings of it in your training set so you'll recommend it to more people in the future. So there's this feedback loop from the output of your model to the input. Most of the time when we build machine vision systems, there's no feedback loop from their output to their input. If we imagine a setting where we start using an optimization algorithm to find inputs that maximize some property of the output, like if we have a model that looks at the blueprints of a car and outputs the expected speed of the car, then we could use gradient ascent to look for the blueprints that correspond to the fastest possible car. Or for example if we're designing a medicine, we could look for the molecular structure that we think is most likely to cure some form of cancer, or the least likely to cause some kind of liver toxicity effect. The problem is that once we start using optimization to look for these inputs that maximize the output of the model, the input is no longer an independent sample from the same distribution as we used at training time. The model is now guiding the process that generates the data. So we end up finding essentially adversarial examples. Instead of the model telling us how we can improve the input, what we usually find in practice is that we've got an input that fools the model into thinking that the input corresponds to something great. So we'd find molecules that are very toxic but that the model thinks are very non-toxic. Or we'd find cars that are very slow but that the model thinks are very fast. [speaker drowned out] Yeah, so the question is, here the frog class is boosted by going in either the positive or negative adversarial direction, and in some of the other slides, like these maps, you don't get that effect where subtracting epsilon off eventually boosts the adversarial class. Part of what's going on is I think I'm using a larger epsilon here. And so you might eventually see that effect if I'd made these maps wider. I made the maps narrower because it's like quadratic time to build a 2D map and it's linear time to build a 1D cross section. So I just couldn't afford the GPU time to make the maps quite as wide. I also think that this might just be a weird effect that happened randomly on this one example. It's not something that I remember seeing a lot of the time. Most things that I observe don't happen perfectly consistently. But if they happen, like, 80% of the time then I'll put them in my slide. A lot of what we're doing is trying to figure out more or less what's going on, and so if we find that something happens 80% of the time, then I consider it to be the dominant phenomenon that we're trying to explain. And after we've got a better explanation for that then I might start to try to explain some of the weirder things that happen, like the frog happening with negative epsilon. [speaker drowned out] I didn't fully understand the question. It's about the dimensionality of the adversarial subspace? Oh, okay. So the question is, how is the dimension of the adversarial subspace related to the dimension of the input? And my answer is somewhat embarrassing, which is that we've only run this method on two datasets, so we actually don't have a good idea yet. But I think it's something interesting to study. If I remember correctly, my coauthors open-sourced our code.
So you could probably run it on ImageNet without too much trouble. My contribution to that paper was in the week that I was unemployed between working at OpenAI and working at Google, so I had access to no GPUS and I ran that experiment on my laptop on CPU, so it's only really small datasets. [chuckles] [speaker drowned out] Oh, so the question is, do we end up perturbing clean examples to low confidence adversarial examples? Yeah, in practice we usually find that we can get very high confidence on the output examples. One thing in high dimensions that's a little bit unintuitive is that just getting the sign right on very many of the input pixels is enough to get a really strong response. So the angle between the weight vector matters a lot more than the exact coordinates in high dimensional systems. Does that make enough sense? Yeah, okay. - [Man] So we're actually going to [mumbles]. So if you guys need to leave, that's fine. But let's thank our speaker one more time for getting-- [audience applauds]
Stanford_Computer_Vision
Lecture_8_Deep_Learning_Software.txt
- Hello? Okay, it's after 12, so I want to get started. So today, lecture eight, we're going to talk about deep learning software. This is a super exciting topic because it changes a lot every year. But it also means it's a lot of work to give this lecture 'cause it changes a lot every year. But as usual, a couple administrative notes before we dive into the material. So as a reminder the project proposals for your course projects were due on Tuesday. So hopefully you all turned that in, and hopefully you all have a somewhat good idea of what kind of projects you want to work on for the class. So we're in the process of assigning TAs to projects based on what the project area is and the expertise of the TAs. So we'll have some more information about that in the next couple days I think. We're also in the process of grading assignment one, so stay tuned and we'll get those grades back to you as soon as we can. Another reminder is that assignment two has been out for a while. That's going to be due next week, a week from today, Thursday. And again, when working on assignment two, remember to stop your Google Cloud instances when you're not working to try to preserve your credits. And to clear up another bit of confusion, I just wanted to re-emphasize that for assignment two you really only need to use GPU instances for the last notebook. All of the other notebooks are just Python and NumPy, so you don't need any GPUs for those questions. So again, conserve your credits, only use GPUs when you need them. And the final reminder is that the midterm is coming up. It's kind of hard to believe we're there already, but the midterm will be in class on Tuesday, May 9. So the midterm will be more theoretical. It'll be sort of pen and paper, working through different kinds of slightly more theoretical questions to check your understanding of the material that we've covered so far. And I think we'll probably post at least a short sort of sample of the types of questions to expect. Question? [student's words obscured due to lack of microphone] Oh yeah, the question is whether it's open-book, so we're going to say closed note, closed book. Yeah, that's what we've done in the past, just closed note, closed book; we really just want to check that you understand the intuition behind most of the stuff we've presented. So, a quick recap as a reminder of what we were talking about last time. Last time we talked about fancier optimization algorithms for deep learning models including SGD with momentum, Nesterov momentum, RMSProp and Adam. And we saw that these relatively small tweaks on top of vanilla SGD are relatively easy to implement but can make your networks converge a bit faster. We also talked about regularization, especially dropout. So remember dropout, you're kind of randomly setting parts of the network to zero during the forward pass, and then you kind of marginalize out over that noise at test time. And we saw that this was kind of a general pattern across many different types of regularization in deep learning, where you might add some kind of noise during training, but then marginalize out that noise at test time so it's not stochastic at test time. We also talked about transfer learning, where you can maybe download big networks that were pre-trained on some dataset and then fine-tune them for your own problem. And this is one way that you can attack a lot of problems in deep learning, even if you don't have a huge dataset of your own.
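As a rough sketch of that transfer learning recipe, one common pattern in PyTorch is to download a pretrained network, freeze its features, and replace the final layer for the new task. The choice of ResNet-18, the ten output classes, and the hyperparameters below are illustrative assumptions, not anything specific to this course's assignments.

import torch
import torch.nn as nn
import torchvision

# Load a network pretrained on ImageNet (assumes torchvision is installed).
model = torchvision.models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor.
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer for a new task with, say, 10 classes.
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new layer is optimized here; with more data you might instead
# fine-tune the whole network with a smaller learning rate.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=1e-3, momentum=0.9)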
So today we're going to shift gears a little bit and talk about some of the nuts and bolts about writing software and how the hardware works. And a little bit, diving into a lot of details about what the software looks like that you actually use to train these things in practice. So we'll talk a little bit about CPUs and GPUs and then we'll talk about several of the major deep learning frameworks that are out there in use these days. So first, we've sort of mentioned this off hand a bunch of different times, that computers have CPUs, computers have GPUs. Deep learning uses GPUs, but we weren't really too explicit up to this point about what exactly these things are and why one might be better than another for different tasks. So, who's built a computer before? Just kind of show of hands. So, maybe about a third of you, half of you, somewhere around that ballpark. So this is a shot of my computer at home that I built. And you can see that there's a lot of stuff going on inside the computer, maybe, hopefully you know what most of these parts are. And the CPU is the Central Processing Unit. That's this little chip hidden under this cooling fan right here near the top of the case. And the CPU is actually relatively small piece. It's a relatively small thing inside the case. It's not taking up a lot of space. And the GPUs are these two big monster things that are taking up a gigantic amount of space in the case. They have their own cooling, they're taking a lot of power. They're quite large. So, just in terms of how much power they're using, in terms of how big they are, the GPUs are kind of physically imposing and taking up a lot of space in the case. So the question is what are these things and why are they so important for deep learning? Well, the GPU is called a graphics card, or Graphics Processing Unit. And these were really developed, originally for rendering computer graphics, and especially around games and that sort of thing. So another show of hands, who plays video games at home sometimes, from time to time on their computer? Yeah, so again, maybe about half, good fraction. So for those of you who've played video games before and who've built your own computers, you probably have your own opinions on this debate. [laughs] So this is one of those big debates in computer science. You know, there's like Intel versus AMD, NVIDIA versus AMD for graphics cards. It's up there with Vim versus Emacs for text editor. And pretty much any gamer has their own opinions on which of these two sides they prefer for their own cards. And in deep learning we kind of have mostly picked one side of this fight, and that's NVIDIA. So if you guys have AMD cards, you might be in a little bit more trouble if you want to use those for deep learning. And really, NVIDIA's been pushing a lot for deep learning in the last several years. It's been kind of a large focus of some of their strategy. And they put in a lot effort into engineering sort of good solutions to make their hardware better suited for deep learning. So most people in deep learning when we talk about GPUs, we're pretty much exclusively talking about NVIDIA GPUs. Maybe in the future this'll change a little bit, and there might be new players coming up, but at least for now NVIDIA is pretty dominant. So to give you an idea of like what is the difference between a CPU and a GPU, I've kind of made a little spread sheet here. 
On the top we have two of the kind of top end Intel consumer CPUs, and on the bottom we have two of NVIDIA's current top end consumer GPUs. And there's a couple of general trends to notice here. Both GPUs and CPUs are general purpose computing machines in the sense that they can execute programs and run arbitrary instructions, but they're qualitatively pretty different. So CPUs tend to have just a few cores; for consumer desktop CPUs these days, they might have something like four or six or maybe up to 10 cores. With hyperthreading technology that means the hardware can physically run maybe eight or up to 20 threads concurrently. So the CPU can maybe do 20 things in parallel at once. That's just not a gigantic number, but those CPU threads are pretty powerful. They can do a lot of things, they're very fast, every CPU instruction can actually do quite a lot of stuff, and they can all work pretty independently. For GPUs it's a little bit different. These common top end consumer GPUs have thousands of cores. The NVIDIA Titan Xp, which is the current top of the line consumer GPU, has 3840 cores. So that's a crazy number. That's way more than the 10 cores that you'll get for a similarly priced CPU. The downside of a GPU is that each of those cores, one, runs at a much slower clock speed, and two, really can't do quite as much. You can't really compare CPU cores and GPU cores apples to apples. The GPU cores can't operate very independently; they all need to work together and parallelize one task across many cores rather than each core totally doing its own thing. So you can't compare these numbers directly, but it should give you the sense that, due to the large number of cores, GPUs are really good for parallel workloads where you need to do a lot of things all at the same time, but those things are all pretty much the same flavor. Another thing to point out between CPUs and GPUs is this idea of memory. So CPUs have some cache on the chip, but that's relatively small, and the majority of the memory for your CPU is pulled from your system memory, the RAM, which will maybe be eight, 12, 16, 32 gigabytes on a typical consumer desktop these days. Whereas GPUs actually have their own RAM built into the card. There's a pretty large bottleneck communicating between the RAM in your system and the GPU, so GPUs typically have their own relatively large block of memory within the card itself. And the Titan Xp, which again is maybe the current top of the line consumer card, has 12 gigabytes of memory local to the GPU. GPUs also have their own caching system, with multiple hierarchies of caching between the 12 gigabytes of GPU memory and the actual GPU cores, and that's somewhat similar to the caching hierarchy that you might see in a CPU. So CPUs are good for general purpose processing; they can do a lot of different things. And GPUs are more specialized for these highly parallelizable algorithms. The prototypical algorithm that works really well and is perfectly suited to a GPU is matrix multiplication. So remember in matrix multiplication on the left we've got a matrix composed of a bunch of rows.
We multiply that on the right by another matrix composed of a bunch of columns, and this produces a final matrix where each element of the output is a dot product between one of the rows and one of the columns of the two input matrices. And these dot products are all independent. You could imagine splitting the output matrix up completely and having each of those different elements all being computed in parallel; they're all running the same computation, which is taking a dot product of two vectors, but exactly where they're reading that data from is different places in the two input matrices. So you could imagine that on a GPU you can just blast this out and have all of these elements of the output matrix computed in parallel, and that could make this thing compute super fast on a GPU. So that's the prototypical type of problem where a GPU is really well suited, where a CPU might have to step through sequentially and compute each of these elements one by one. That picture is a little bit of a caricature because CPUs these days have multiple cores and can do vectorized instructions as well, but still, for these massively parallel problems GPUs tend to have much better throughput, especially when these matrices get really big. And by the way, convolution is kind of the same story. In convolution we have this input tensor, we have this weight tensor, and every point in the output tensor after a convolution is again some inner product between some part of the weights and some part of the input. And you can imagine that a GPU could really parallelize this computation, split it all up across the many cores and compute it very quickly. So that's the general flavor of the types of problems where GPUs give you a huge speed advantage over CPUs. So you can actually write programs that run directly on GPUs. NVIDIA has this CUDA abstraction that lets you write code that kind of looks like C but executes directly on the GPU. But CUDA code is really tricky. It's actually really tough to write CUDA code that's performant and squeezes all the juice out of these GPUs. You have to be very careful managing the memory hierarchy and making sure you don't have cache misses and branch mispredictions and all that sort of stuff. So it's really hard to write performant CUDA code on your own. As a result, NVIDIA has released a lot of libraries that implement common computational primitives that are very highly optimized for GPUs. For example, NVIDIA has a cuBLAS library that implements different kinds of matrix multiplications and matrix operations that are super optimized, run really well on GPU, and get very close to theoretical peak hardware utilization. Similarly they have a cuDNN library which implements things like convolution forward and backward passes, batch normalization, recurrent networks, all these kinds of computational primitives that we need in deep learning. NVIDIA has gone in there and released their own binaries that compute these primitives very efficiently on NVIDIA hardware. So in practice, you tend not to end up writing your own CUDA code for deep learning. You're typically just calling into existing code that other people have written, much of which has been heavily optimized by NVIDIA already.
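To make that concrete, here's a tiny sketch, not taken from the lecture, of the same matrix multiply written once with NumPy on the CPU and once on the GPU. I'm using PyTorch purely as an illustrative assumption here; any framework that dispatches to cuBLAS under the hood would look similar.

```python
import numpy as np
import torch

N = 4096
a_cpu = np.random.randn(N, N).astype(np.float32)
b_cpu = np.random.randn(N, N).astype(np.float32)

# CPU: NumPy dispatches the multiply to a host BLAS library.
c_cpu = a_cpu @ b_cpu

# GPU: the same multiply, where each output element can in principle be
# computed in parallel across thousands of GPU cores via cuBLAS.
if torch.cuda.is_available():
    a_gpu = torch.from_numpy(a_cpu).cuda()
    b_gpu = torch.from_numpy(b_cpu).cuda()
    c_gpu = a_gpu @ b_gpu          # runs on the GPU
    torch.cuda.synchronize()       # wait for the kernel to finish (e.g. before timing)
```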
There's another sort of language called OpenCL which is a bit more general. It runs on more than just NVIDIA GPUs; it can run on AMD hardware, it can run on CPUs. But nobody's really spent a large amount of effort and energy trying to get optimized deep learning primitives for OpenCL, so it tends to be a lot less performant than the super optimized versions in CUDA. So maybe in the future we might see a more open standard and see this across many more types of platforms, but at least for now, NVIDIA's kind of the main game in town for deep learning. There are a lot of different resources for learning how to do GPU programming yourself. It's kind of fun. It's a different paradigm of writing code because it's this massively parallel architecture, but that's a bit beyond the scope of this course. And again, you don't really need to write your own CUDA code much in practice for deep learning. In fact, I've never written my own CUDA code for any research project, but it is kind of useful to know how it works and what the basic ideas are even if you're not writing it yourself. So if you want to look at CPU versus GPU performance in practice, I did some benchmarks last summer comparing a decent Intel CPU against a bunch of different GPUs that were near top of the line at that time. These were my own benchmarks; you can find more details on GitHub. My findings were that for things like VGG 16 and 19 and various ResNets, you typically see something like a 65 to 75 times speed up when running the exact same computation on a top of the line GPU, in this case a Pascal Titan X, versus a, well, not quite top of the line CPU, which in this case was an Intel E5 processor. Although I'd like to make one caveat here: you always need to be super careful whenever you're reading any kind of benchmarks about deep learning, because it's super easy to be unfair between different things, and you need to know a lot of the details about what exactly is being benchmarked in order to know whether or not the comparison is fair. So in this case I'll come right out and tell you that this comparison is probably a little bit unfair to the CPU because I didn't spend a lot of effort trying to squeeze the maximal performance out of it. I probably could have tuned the BLAS libraries better for CPU performance and gotten these numbers a bit better. This was out of the box performance between just installing Torch and running it on a CPU versus installing Torch and running it on a GPU. So this is out of the box performance, not peak possible theoretical throughput on the CPU. But that being said, I think there are still pretty substantial speed ups to be had here. Another interesting outcome from this benchmarking was comparing these optimized cuDNN libraries from NVIDIA for convolution and whatnot versus more naive CUDA that had been hand written out in the open source community. And you can see that if you compare the same networks on the same hardware with the same deep learning framework, and the only difference is swapping out cuDNN versus the hand written, less optimized CUDA, you see something like nearly a three X speed up across the board when you switch from the relatively simple CUDA to these super optimized cuDNN implementations.
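As a small aside, if you happen to be using PyTorch (again just an illustrative assumption, not something from the lecture), the switches that control whether the framework calls into cuDNN look roughly like this:

```python
import torch

print(torch.backends.cudnn.is_available())   # is a cuDNN build present on this machine?
torch.backends.cudnn.enabled = True           # use cuDNN kernels (this is the default)
torch.backends.cudnn.benchmark = True         # let cuDNN autotune the fastest algorithm
                                              # for your particular layer sizes
```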
So in general, whenever you're writing code on GPU, you should probably almost always like just make sure you're using cuDNN because you're leaving probably a three X performance boost on the table if you're not calling into cuDNN for your stuff. So another problem that comes up in practice, when you're training these things is that you know, your model is maybe sitting on the GPU, the weights of the model are in that 12 gigabytes of local storage on the GPU, but your big dataset is sitting over on the right on a hard drive or an SSD or something like that. So if you're not careful you can actually bottleneck your training by just trying to read the data off the disk. 'Cause the GPU is super fast, it can compute forward and backward quite fast, but if you're reading sequentially off a spinning disk, you can actually bottleneck your training quite, and that can be really bad and slow you down. So some solutions here are that like you know if your dataset's really small, sometimes you might just read the whole dataset into RAM. Or even if your dataset isn't so small, but you have a giant server with a ton of RAM, you might do that anyway. You can also make sure you're using an SSD instead of a hard drive, that can help a lot with read throughput. Another common strategy is to use multiple threads on the CPU that are pre-fetching data off RAM or off disk, buffering it in memory, in RAM so that then you can continue feeding that buffer data down to the GPU with good performance. This is a little bit painful to set up, but again like, these GPU's are so fast that if you're not really careful with trying to feed them data as quickly as possible, just reading the data can sometimes bottleneck the whole training process. So that's something to be aware of. So that's kind of the brief introduction to like sort of GPU CPU hardware in practice when it comes to deep learning. And then I wanted to switch gears a little bit and talk about the software side of things. The various deep learning frameworks that people are using in practice. But I guess before I move on, is there any sort of questions about CPU GPU? Yeah, question? [student's words obscured due to lack of microphone] Yeah, so the question is what can you sort of, what can you do mechanically when you're coding to avoid these problems? Probably the biggest thing you can do in software is set up sort of pre-fetching on the CPU. Like you couldn't like, sort of a naive thing would be you have this sequential process where you first read data off disk, wait for the data, wait for the minibatch to be read, then feed the minibatch to the GPU, then go forward and backward on the GPU, then read another minibatch and sort of do this all in sequence. And if you actually have multiple, like instead you might have CPU threads running in the background that are fetching data off the disk such that while the, you can sort of interleave all of these things. Like the GPU is computing, the CPU background threads are feeding data off disk and your main thread is kind of waiting for these things to, just doing a bit of synchronization between these things so they're all happening in parallel. And thankfully if you're using some of these deep learning frameworks that we're about to talk about, then some of this work has already been done for you 'cause it's a little bit painful. So the landscape of deep learning frameworks is super fast moving. So last year when I gave this lecture I talked mostly about Caffe, Torch, Theano and TensorFlow. 
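Before diving into the frameworks, here's a minimal sketch of the CPU-side prefetching idea described a moment ago, using a background thread and a bounded queue so that disk reads overlap with GPU compute. The functions `load_minibatch` and `train_step` are hypothetical stand-ins for your own I/O and training code, not part of any library.

```python
import queue
import threading
import numpy as np

def load_minibatch(i):
    # placeholder for the real, slow I/O: pretend to read a batch off disk
    return np.random.randn(64, 1000).astype(np.float32)

def train_step(batch):
    # placeholder for the real GPU forward / backward / update step
    pass

prefetch_queue = queue.Queue(maxsize=8)   # buffer a few minibatches in RAM

def prefetch_worker(num_batches):
    for i in range(num_batches):
        prefetch_queue.put(load_minibatch(i))   # blocks if the buffer is full

threading.Thread(target=prefetch_worker, args=(1000,), daemon=True).start()

for _ in range(1000):
    batch = prefetch_queue.get()   # usually ready immediately, since it was prefetched
    train_step(batch)
```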
And when I last gave this talk, again more than a year ago, TensorFlow was relatively new. It had not seen super widespread adoption yet at that time. But now I think in the last year TensorFlow has gotten much more popular. It's probably the main framework of choice for many people. So that's a big change. We've also seen a ton of new frameworks popping up like mushrooms in the last year. In particular Caffe2 and PyTorch are new frameworks from Facebook that I think are pretty interesting. There's also a ton of other frameworks. Baidu has Paddle, Microsoft has CNTK, Amazon is mostly using MXNet, and there's a ton of other frameworks as well that I'm less familiar with and really don't have time to get into. But one interesting thing to point out from this picture is that the first generation of deep learning frameworks that really saw wide adoption were built in academia. Caffe was from Berkeley, Torch was developed originally at NYU and also in collaboration with Facebook, and Theano was mostly built at the University of Montreal. But these next generation deep learning frameworks all originated in industry. Caffe2 is from Facebook, PyTorch is from Facebook, TensorFlow is from Google. So it's an interesting shift that we've seen in the landscape over the last couple of years: these ideas have really moved a lot from academia into industry, and now industry is giving us these big, powerful, nice frameworks to work with. So today I wanted to mostly talk about PyTorch and TensorFlow, because I personally think those are probably the ones you should be focusing on for a lot of research type problems these days. I'll also talk a bit about Caffe and Caffe2, but with a little bit less emphasis on those. And before we move any farther, I thought I should make my own biases a little bit more explicit. I've worked with Torch mostly for the last several years. I've used it quite a lot, I like it a lot. And then in the last year I've mostly switched to PyTorch as my main research framework. So I have a little bit less experience with some of these others, especially TensorFlow, but I'll still try to do my best to give you a fair picture and a decent overview of these things. So, remember that in the last several lectures we've hammered in this idea of computational graphs over and over: whenever you're doing deep learning, you want to think about building some computational graph that computes whatever function you want to compute. In the case of a linear classifier you'll combine your data X and your weights W with a matrix multiply, you'll use some kind of hinge loss to compute your loss, you'll have some regularization term, and you imagine stitching together all these different operations into some graph structure. Remember that these graph structures can get pretty complex; in the case of a big neural net there are many different layers, many different activations, many different weights spread all around in a pretty complex graph. And as you move to things like neural turing machines you can get these really crazy computational graphs that you can't even really draw because they're so big and messy. So the point of deep learning frameworks is really, there's really kind of three main reasons why you might want to use one of these deep learning frameworks rather than just writing your own code.
So the first would be that these frameworks enable you to easily build and work with these big hairy computational graphs without worrying about a lot of those bookkeeping details yourself. Another major idea is that whenever we're working in deep learning we always need to compute gradients. We're always computing some loss, and we're always computing the gradient of the loss with respect to our weights. And we'd like the framework to compute those gradients automatically; you don't want to have to write that code yourself. You want the framework to handle all the back propagation details for you so you can just think about writing down the forward pass of your network and have the backward pass come out for free without any additional work. And finally you want all this stuff to run efficiently on GPUs so you don't have to worry too much about the low level hardware details of cuBLAS and cuDNN and CUDA and moving data between CPU and GPU memory. You kind of want all those messy details to be taken care of for you. So those are some of the major reasons why you might choose to use frameworks rather than writing your own stuff from scratch. As a concrete example of a computational graph we can write down this super simple thing where we have three inputs, X, Y, and Z. We're going to combine X and Y to produce A, then we're going to combine A and Z to produce B, and then finally we're going to do some summing out operation on B to give some scalar final result C. So you've probably written enough Numpy code at this point to realize that it's super easy to implement this bit of computation in Numpy, right? You can just write down in Numpy that you want to generate some random data, you want to multiply two things, you want to add two things, you want to sum out a couple things. It's really easy to do this in Numpy. But then suppose that we want to compute the gradient of C with respect to X, Y, and Z. If you're working in Numpy, you need to write out this backward pass yourself. And you've gotten a lot of practice with this on the homeworks, but it can be kind of a pain and a little bit annoying and messy once you get to really big complicated things. The other problem with Numpy is that it doesn't run on the GPU. Numpy is definitely CPU only, and you're never going to be able to take advantage of these GPU accelerated speedups if you're stuck working in Numpy. And again, it's a pain to have to compute your own gradients in all these situations. So the goal of most deep learning frameworks these days is to let you write code in the forward pass that looks very similar to Numpy, but lets you run it on the GPU and lets you automatically compute gradients. That's kind of the big picture goal of most of these frameworks. So if we look at an example in TensorFlow of the exact same computational graph, we now see that in this forward pass you write code that ends up looking very similar to the Numpy forward pass, where you're doing these multiplication and addition operations. But now TensorFlow has this magic line that just computes all the gradients for you. So now you don't have to go in and write your own backward pass, and that's much more convenient.
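Just to make concrete what the frameworks save you from writing, here's a minimal NumPy sketch of that small graph (a = x * y, b = a + z, c = sum(b)) with the backward pass worked out by hand; the sizes are arbitrary placeholders.

```python
import numpy as np

np.random.seed(0)
N, D = 3, 4
x = np.random.randn(N, D)
y = np.random.randn(N, D)
z = np.random.randn(N, D)

# forward pass
a = x * y
b = a + z
c = np.sum(b)

# backward pass, written out by hand
grad_c = 1.0
grad_b = grad_c * np.ones((N, D))   # dc/db = 1 for every element of b
grad_a = grad_b.copy()              # b = a + z, so gradients pass straight through
grad_z = grad_b.copy()
grad_x = grad_a * y                 # a = x * y
grad_y = grad_a * x
```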
The other nice thing about TensorFlow is you can really just, like with one line you can switch all this computation between CPU and GPU. So here, if you just add this with statement before you're doing this forward pass, you just can explicitly tell the framework, hey I want to run this code on the CPU. But now if we just change that with statement a little bit with just with a one character change in this case, changing that C to a G, now the code runs on GPU. And now in this little code snippet, we've solved these two problems. We're running our code on the GPU and we're having the framework compute all the gradients for us, so that's really nice. And PyTorch kind looks almost exactly the same. So again, in PyTorch you kind of write down, you define some variables, you have some forward pass and the forward pass again looks very similar to like, in this case identical to the Numpy code. And then again, you can just use PyTorch to compute gradients, all your gradients with just one line. And now in PyTorch again, it's really easy to switch to GPU, you just need to cast all your stuff to the CUDA data type before you rung your computation and now everything runs transparently on the GPU for you. So if you kind of just look at these three examples, these three snippets of code side by side, the Numpy, the TensorFlow and the PyTorch you see that the TensorFlow and the PyTorch code in the forward pass looks almost exactly like Numpy which is great 'cause Numpy has a beautiful API, it's really easy to work with. But we can compute gradients automatically and we can run the GPU automatically. So after that kind of introduction, I wanted to dive in and talk in a little bit more detail about kind of what's going on inside this TensorFlow example. So as a running example throughout the rest of the lecture, I'm going to use the training a two-layer fully connected ReLU network on random data as kind of a running example throughout the rest of the examples here. And we're going to train this thing with an L2 Euclidean loss on random data. So this is kind of a silly network, it's not really doing anything useful, but it does give you the, it's relatively small, self contained, the code fits on the slide without being too small, and it lets you demonstrate kind of a lot of the useful ideas inside these frameworks. So here on the right, oh, and then another note, I'm kind of assuming that Numpy and TensorFlow have already been imported in all these code snippets. So in TensorFlow you would typically divide your computation into two major stages. First, we're going to write some code that defines our computational graph, and that's this red code up in the top half. And then after you define your graph, you're going to run the graph over and over again and actually feed data into the graph to perform whatever computation you want it to perform. So this is the really, this is kind of the big common pattern in TensorFlow. You'll first have a bunch of code that builds the graph and then you'll go and run the graph and reuse it many many times. So if you kind of dive into the code of building the graph in this case. Up at the top you see that we're defining this X, Y, w1 and w2, and we're creating these tf.placeholder objects. So these are going to be input nodes to the graph. These are going to be sort of entry points to the graph where when we run the graph, we're going to feed in data and put them in through these input slots in our computational graph. 
So this is not actually like allocating any memory right now. We're just sort of setting up these input slots to the graph. Then we're going to use those input slots which are now kind of like these symbolic variables and we're going to perform different TensorFlow operations on these symbolic variables in order to set up what computation we want to run on those variables. So in this case we're doing a matrix multiplication between X and w1, we're doing some tf.maximum to do a ReLU nonlinearity and then we're doing another matrix multiplication to compute our output predictions. And then we're again using a sort of basic Tensor operations to compute our Euclidean distance, our L2 loss between our prediction and the target Y. Another thing to point out here is that these lines of code are not actually computing anything. There's no data in the system right now. We're just building up this computational graph data structure telling TensorFlow which operations we want to eventually run once we put in real data. So this is just building the graph, this is not actually doing anything. Then we have this magical line where after we've computed our loss with these symbolic operations, then we can just ask TensorFlow to compute the gradient of the loss with respect to w1 and w2 in this one magical, beautiful line. And this avoids you writing all your own backprop code that you had to do in the assignments. But again there's no actual computation happening here. This is just sort of adding extra operations to the computational graph where now the computational graph has these additional operations which will end up computing these gradients for you. So now at this point we've computed our computational graph, we have this big graph in this graph data structure in memory that knows what operations we want to perform to compute the loss in gradients. And now we enter a TensorFlow session to actually run this graph and feed it with data. So then, once we've entered the session, then we actually need to construct some concrete values that will be fed to the graph. So TensorFlow just expects to receive data from Numpy arrays in most cases. So here we're just creating concrete actual values for X, Y, w1 and w2 using Numpy and then storing these in some dictionary. And now here is where we're actually running the graph. So you can see that we're calling a session.run to actually execute some part of the graph. The first argument loss, tells us which part of the graph do we actually want as output. And that, so we actually want the graph, in this case we need to tell it that we actually want to compute loss and grad1 and grad w2 and we need to pass in with this feed dict parameter the actual concrete values that will be fed to the graph. And then after, in this one line, it's going and running the graph and then computing those values for loss grad1 to grad w2 and then returning the actual concrete values for those in Numpy arrays again. So now after you unpack this output in the second line, you get Numpy arrays, or you get Numpy arrays with the loss and the gradients. So then you can go and do whatever you want with these values. So then, this has only run sort of one forward and backward pass through our graph, and it only takes a couple extra lines if we actually want to train the network. So here we're, now we're running the graph many times in a loop so we're doing a four loop and in each iteration of the loop, we're calling session.run asking it to compute the loss and the gradients. 
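Here's a condensed sketch of the pattern just described, assuming the TensorFlow 1.x style API used in the lecture; the sizes are just placeholders. The top half only builds the graph, and nothing is computed until session.run is called with concrete NumPy values.

```python
import numpy as np
import tensorflow as tf

N, D, H = 64, 1000, 100

# 1. build the graph (no computation happens here)
x  = tf.placeholder(tf.float32, shape=(N, D))
y  = tf.placeholder(tf.float32, shape=(N, D))
w1 = tf.placeholder(tf.float32, shape=(D, H))
w2 = tf.placeholder(tf.float32, shape=(H, D))

h      = tf.maximum(tf.matmul(x, w1), 0.0)     # ReLU
y_pred = tf.matmul(h, w2)
loss   = tf.reduce_sum((y_pred - y) ** 2.0)    # L2 loss

grad_w1, grad_w2 = tf.gradients(loss, [w1, w2])  # adds gradient ops to the graph

# 2. run the graph with concrete NumPy values fed through the placeholders
with tf.Session() as sess:
    values = {x:  np.random.randn(N, D),
              y:  np.random.randn(N, D),
              w1: np.random.randn(D, H),
              w2: np.random.randn(H, D)}
    loss_val, grad_w1_val, grad_w2_val = sess.run(
        [loss, grad_w1, grad_w2], feed_dict=values)
```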
And now we're doing a manual gradient discent step using those computed gradients to now update our current values of the weights. So if you actually run this code and plot the losses, then you'll see that the loss goes down and the network is training and this is working pretty well. So this is kind of like a super bare bones example of training a fully connected network in TensorFlow. But there's a problem here. So here, remember that on the forward pass, every time we execute this graph, we're actually feeding in the weights. We have the weights as Numpy arrays and we're explicitly feeding them into the graph. And now when the graph finishes executing it's going to give us these gradients. And remember the gradients are the same size as the weights. So this means that every time we're running the graph here, we're copying the weights from Numpy arrays into TensorFlow then getting the gradients and then copying the gradients from TensorFlow back out to Numpy arrays. So if you're just running on CPU, this is maybe not a huge deal, but remember we talked about CPU GPU bottleneck and how it's very expensive actually to copy data between CPU memory and GPU memory. So if your network is very large and your weights and gradients were very big, then doing something like this would be super expensive and super slow because we'd be copying all kinds of data back and forth between the CPU and the GPU at every time step. So that's bad, we don't want to do that. We need to fix that. So, obviously TensorFlow has some solution to this. And the idea is that now we want our weights, w1 and w2, rather than being placeholders where we're going to, where we expect to feed them in to the network on every forward pass, instead we define them as variables. So a variable is something is a value that lives inside the computational graph and it's going to persist inside the computational graph across different times when you run the same graph. So now instead of declaring these w1 and w2 as placeholders, instead we just construct them as variables. But now since they live inside the graph, we also need to tell TensorFlow how they should be initialized, right? Because in the previous case we were feeding in their values from outside the graph, so we initialized them in Numpy, but now because these things live inside the graph, TensorFlow is responsible for initializing them. So we need to pass in a tf.randomnormal operation, which again is not actually initializing them when we run this line, this is just telling TensorFlow how we want them to be initialized. So it's a little bit of confusing misdirection going on here. And now, remember in the previous example we were actually updating the weights outside of the computational graph. We, in the previous example, we were computing the gradients and then using them to update the weights as Numpy arrays and then feeding in the updated weights at the next time step. But now because we want these weights to live inside the graph, this operation of updating the weights needs to also be an operation inside the computational graph. So now we used this assign function which mutates these variables inside the computational graph and now the mutated value will persist across multiple runs of the same graph. So now when we run this graph and when we train the network, now we need to run the graph once with a little bit of special incantation to tell TensorFlow to set up these variables that are going to live inside the graph. 
And then once we've done that initialization, now we can run the graph over and over again. And here, we're now only feeding in the data and labels X and Y and the weights are living inside the graph. And here we've asked the network to, we've asked TensorFlow to compute the loss for us. And then you might think that this would train the network, but there's actually a bug here. So, if you actually run this code, and you plot the loss, it doesn't train. So that's bad, it's confusing, like what's going on? We wrote this assign code, we ran the thing, like we computed the loss and the gradients and our loss is flat, what's going on? Any ideas? [student's words obscured due to lack of microphone] Yeah so one hypothesis is that maybe we're accidentally re-initializing the w's every time we call the graph. That's a good hypothesis, that's actually not the problem in this case. [student's words obscured due to lack of microphone] Yeah, so the answer is that we actually need to explicitly tell TensorFlow that we want to run these new w1 and new w2 operations. So we've built up this big computational graph data structure in memory and now when we call run, we only told TensorFlow that we wanted to compute loss. And if you look at the dependencies among these different operations inside the graph, you see that in order to compute loss we don't actually need to perform this update operation. So TensorFlow is smart and it only computes the parts of the graph that are necessary for computing the output that you asked it to compute. So that's kind of a nice thing because it means it's only doing as much work as it needs to, but in situations like this it can be a little bit confusing and lead to behavior that you didn't expect. So the solution in this case is that we actually need to explicitly tell TensorFlow to perform those update operations. So one thing we could do, which is what was suggested is we could add new w1 and new w2 as outputs and just tell TensorFlow that we want to produce these values as outputs. But that's a problem too because the values, those new w1, new w2 values are again these big tensors. So now if we tell TensorFlow we want those as output, we're going to again get this copying behavior between CPU and GPU at ever iteration. So that's bad, we don't want that. So there's a little trick you can do instead. Which is that we add kind of a dummy node to the graph. With these fake data dependencies and we just say that this dummy node updates, has these data dependencies of new w1 and new w2. And now when we actually run the graph, we tell it to compute both the loss and this dummy node. And this dummy node doesn't actually return any value it just returns none, but because of this dependency that we've put into the node it ensures that when we run the updates value, we actually also run these update operations. So, question? [student's words obscured due to lack of microphone] Is there a reason why we didn't put X and Y into the graph? And that it stayed as Numpy. So in this example we're reusing X and Y on every, we're reusing the same X and Y on every iteration. So you're right, we could have just also stuck those in the graph, but in a more realistic scenario, X and Y will be minibatches of data so those will actually change at every iteration and we will want to feed different values for those at every iteration. So in this case, they could have stayed in the graph, but in most cases they will change, so we don't want them to live in the graph. Oh, another question? 
[student's words obscured due to lack of microphone] Yeah, so we've told it, we had put into TensorFlow that the outputs we want are loss and updates. Updates is not actually a real value. So when updates evaluates it just returns none. But because of this dependency we've told it that updates depends on these assign operations. But these assign operations live inside the computational graph and all live inside GPU memory. So then we're doing these update operations entirely on the GPU and we're no longer copying the updated values back out of the graph. [student's words obscured due to lack of microphone] So the question is does tf.group return none? So this gets into the trickiness of TensorFlow. So tf.group returns some crazy TensorFlow value. It sort of returns some like internal TensorFlow node operation that we need to continue building the graph. But when you execute the graph, and when you tell, inside the session.run, when we told it we want it to compute the concrete value from updates, then that returns none. So whenever you're working with TensorFlow you have this funny indirection between building the graph and the actual output values during building the graph is some funny weird object, and then you actually get a concrete value when you run the graph. So here after you run updates, then the output is none. Does that clear it up a little bit? [student's words obscured due to lack of microphone] So the question is why is loss a value and why is updates none? That's just the way that updates works. So loss is a value when we compute, when we tell TensorFlow we want to run a tensor, then we get the concrete value. Updates is this kind of special other data type that does not return a value, it instead returns none. So it's kind of some TensorFlow magic that's going on there. Maybe we can talk offline if you're still confused. [student's words obscured due to lack of microphone] Yeah, yeah, that behavior is coming from the group method. So now, we kind of have this weird pattern where we wanted to do these different assign operations, we have to use this funny tf.group thing. That's kind of a pain, so thankfully TensorFlow gives you some convenience operations that kind of do that kind of stuff for you. And that's called an optimizer. So here we're using a tf.train.GradientDescentOptimizer and we're telling it what learning rate we want to use. And you can imagine that there's, there's RMSprop, there's all kinds of different optimization algorithms here. And now we call optimizer.minimize of loss and now this is a pretty magical, this is a pretty magical thing, because now this call is aware that these variables w1 and w2 are marked as trainable by default, so then internally, inside this optimizer.minimize it's going in and adding nodes to the graph which will compute gradient of loss with respect to w1 and w2 and then it's also performing that update operation for you and it's doing the grouping operation for you and it's doing the assigns. It's like doing a lot of magical stuff inside there. But then it ends up giving you this magical updates value which, if you dig through the code they're actually using tf.group so it looks very similar internally to what we saw before. And now when we run the graph inside our loop we do the same pattern of telling it to compute loss and updates. And every time we tell the graph to compute updates, then it'll actually go and update the graph. Question? 
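Putting the variables and the optimizer together, a rough self-contained sketch (again assuming the TensorFlow 1.x style API, with placeholder sizes) might look like this. The optimizer adds the gradient ops and the weight-update ops to the graph, so the weights never leave GPU memory during training.

```python
import numpy as np
import tensorflow as tf

N, D, H = 64, 1000, 100
x = tf.placeholder(tf.float32, shape=(N, D))
y = tf.placeholder(tf.float32, shape=(N, D))

# weights live inside the graph as Variables, with an in-graph initializer
w1 = tf.Variable(tf.random_normal((D, H)))
w2 = tf.Variable(tf.random_normal((H, D)))

h      = tf.maximum(tf.matmul(x, w1), 0.0)
y_pred = tf.matmul(h, w2)
loss   = tf.losses.mean_squared_error(labels=y, predictions=y_pred)

optimizer = tf.train.GradientDescentOptimizer(1e-5)
updates = optimizer.minimize(loss)   # adds gradient + assign ops for the trainable variables

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())   # run the in-graph initializers once
    values = {x: np.random.randn(N, D), y: np.random.randn(N, D)}
    for t in range(50):
        loss_val, _ = sess.run([loss, updates], feed_dict=values)
```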
[student's words obscured due to lack of microphone] Yeah, so what is the tf.GlobalVariablesInitializer? So that's initializing w1 and w2 because these are variables which live inside the graph. So we need to, when we saw this, when we create the tf.variable we have this tf.randomnormal which is this initialization so the tf.GlobalVariablesInitializer is causing the tf.randomnormal to actually run and generate concrete values to initialize those variables. [student's words obscured due to lack of microphone] Sorry, what was the question? [student's words obscured due to lack of microphone] So it knows that a placeholder is going to be fed outside of the graph and a variable is something that lives inside the graph. So I don't know all the details about how it decides, what exactly it decides to run with that call. I think you'd need to dig through the code to figure that out, or maybe it's documented somewhere. So but now we've kind of got this, again we've got this full example of training a network in TensorFlow and we're kind of adding bells and whistles to make it a little bit more convenient. So we can also here, in the previous example we were computing the loss explicitly using our own tensor operations, TensorFlow you can always do that, you can use basic tensor operations to compute just about anything you want. But TensorFlow also gives you a bunch of convenience functions that compute these common neural network things for you. So in this case we can use tf.losses.mean_squared_error and it just does the L2 loss for us so we don't have to compute it ourself in terms of basic tensor operations. So another kind of weirdness here is that it was kind of annoying that we had to explicitly define our inputs and define our weights and then like chain them together in the forward pass using a matrix multiply. And in this example we've actually not put biases in the layer because that would be kind of an extra, then we'd have to initialize biases, we'd have to get them in the right shape, we'd have to broadcast the biases against the output of the matrix multiply and you can see that that would kind of be a lot of code. It would be kind of annoying write. And once you get to like convolutions and batch normalizations and other types of layers this kind of basic way of working, of having these variables, having these inputs and outputs and combining them all together with basic computational graph operations could be a little bit unwieldy and it could be really annoying to make sure you initialize the weights with the right shapes and all that sort of stuff. So as a result, there's a bunch of sort of higher level libraries that wrap around TensorFlow and handle some of these details for you. So one example that ships with TensorFlow, is this tf.layers inside. So now in this code example you can see that our code is only explicitly declaring the X and the Y which are the placeholders for the data and the labels. And now we say that H=tf.layers.dense, we give it the input X and we tell it units=H. This is again kind of a magical line because inside this line, it's kind of setting up w1 and b1, the bias, it's setting up variables for those with the right shapes that are kind of inside the graph but a little bit hidden from us. And it's using this xavier initializer object to set up an initialization strategy for those. 
So before we were doing that explicitly ourselves with the tf.randomnormal business, but now here it's handling some of those details for us and it's just spitting out an H, which is the same sort of H that we saw in the previous example; it's just taking care of some of those details for us. And you can see here we're also passing activation=tf.nn.relu, so it's even doing the relu activation function inside this layer for us. So it's taking care of a lot of these architectural details for us. Question? [student's words obscured due to lack of microphone] Question is does the xavier initializer default to a particular distribution? I'm sure it has some default, I'm not sure what it is. I think you'll have to look at the documentation. But it seems to be a reasonable strategy, I guess. And in fact if you run this code, it converges much faster than the previous one because the initialization is better. And you can see that we're using two calls to tf.layers and this lets us build our model without doing all these explicit bookkeeping details ourselves. So this is maybe a little bit more convenient. But tf.layers is really not the only game in town. There are a lot of different higher level libraries that people build on top of TensorFlow. And it's kind of due to this basic impedance mismatch, where the computational graph is a relatively low level thing, but when we're working with neural networks we have this concept of layers and weights, and some layers have weights associated with them, and we typically think at a slightly higher level of abstraction than this raw computational graph. So that's what these various packages are trying to help you with: letting you work at this higher level of abstraction. Another very popular package that you may have seen before is Keras. Keras is a very beautiful, nice API that sits on top of TensorFlow and handles building up these computational graphs for you in the back end. By the way, Keras also supports Theano as a back end, so that's also kind of nice. And in this example you can see we build the model as a sequence of layers, we build some optimizer object and call model.compile, and this does a lot of magic in the back end to build the graph. And then we can call model.fit and that does the whole training procedure for us magically; there's a small sketch of this below. So I don't know all the details of how this works, but I know Keras is very popular, so you might consider using it if you're working with TensorFlow. Question? [student's words obscured due to lack of microphone] Yeah, so the question is why there's no explicit CPU, GPU stuff going on here. I've kind of left that out to keep the code clean. But you saw in the earlier examples it was pretty easy to flip all these things between CPU and GPU, and there was either some global flag or some different data type or some with statement, and it's usually relatively simple, just about one line to swap in each case. But exactly what that line looks like differs a bit depending on the situation. So there's actually this whole large set of higher level TensorFlow wrappers that you might see out there in the wild. And it seems that even people within Google can't really agree on which one is the right one to use. So Keras and TFLearn are third party libraries that are out there on the internet by other people.
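Here's the promised Keras sketch, assuming the standalone Keras package with a TensorFlow backend; the layer sizes and hyperparameters are just placeholders.

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import SGD

N, D, H = 64, 1000, 100
x = np.random.randn(N, D)
y = np.random.randn(N, D)

# define the model as a stack of layers
model = Sequential()
model.add(Dense(H, input_dim=D, activation='relu'))
model.add(Dense(D))

# compile builds the underlying graph; fit runs the whole training loop
model.compile(optimizer=SGD(lr=1e-3), loss='mean_squared_error')
model.fit(x, y, epochs=10, batch_size=N)
```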
But there's these three different ones, tf.layers, TF-Slim and tf.contrib.learn, that all ship with TensorFlow and are all doing a slightly different version of this higher level wrapper thing. There's another framework also from Google, but not shipping with TensorFlow, called Pretty Tensor that does the same sort of thing. And I guess none of these were good enough for DeepMind, because they went ahead a couple weeks ago and wrote and released their very own high level TensorFlow wrapper called Sonnet. So I wouldn't blame you if you were kind of confused by all these things. There's a lot of different choices and they don't always play nicely with each other, but you have a lot of options, so that's good. TensorFlow has pretrained models; there are some examples in TF-Slim and in Keras. 'Cause remember pretrained models are super important when you're training your own things. There's also this idea of Tensorboard where, I don't want to get into details, but you can add instrumentation to your code and then plot losses and things as you go through the training process. TensorFlow also lets you run distributed, where you can break up a computational graph and run it on different machines. That's super cool, but I think probably not anyone outside of Google is really using that to great success these days; still, if you do want to run distributed stuff, TensorFlow is probably the main game in town for that. A side note is that a lot of the design of TensorFlow is kind of spiritually inspired by this earlier framework called Theano from Montreal. I don't want to go through the details here; if you go through these slides on your own, you can see that the code for Theano ends up looking very similar to TensorFlow, where we define some variables, we do some forward pass, we compute some gradients, we compile some function, and then we run the function over and over to train the network. So it kind of looks a lot like TensorFlow. We still have a lot to get through, so I'm going to move on to PyTorch and maybe take questions at the end. So, PyTorch from Facebook is kind of different from TensorFlow in that we have three explicit layers of abstraction inside PyTorch. PyTorch has this tensor object, which is just like a Numpy array. It's just an imperative array, it doesn't know anything about deep learning, but it can run on GPU. We have this variable object, which is a node in a computational graph; this builds up computational graphs, lets you compute gradients, that sort of thing. And we have a module object, which is a neural network layer that you can compose with other modules to build big networks. So if you want to think about rough equivalents between PyTorch and TensorFlow, you can think of the PyTorch tensor as fulfilling the same role as the Numpy array in TensorFlow. The PyTorch variable is similar to the TensorFlow tensor or variable or placeholder, which are all sort of nodes in a computational graph. And the PyTorch module is kind of equivalent to these higher level things from TF-Slim or tf.layers or Sonnet or these other higher level frameworks. So right away one thing to notice about PyTorch is that because it ships with one really nice higher level abstraction called modules on its own, there's less choice involved. Just stick with nn modules and you'll be good to go. You don't need to worry about which higher level wrapper to use.
So PyTorch tensors, as I said, are just like Numpy arrays, so here on the right we've done an entire two layer network using entirely PyTorch tensors. One thing to note is that we're not importing Numpy here at all anymore; we're doing all these operations using PyTorch tensors. And this code looks exactly like the two layer net code that you wrote in Numpy on the first homework. You set up some random data, you use some operations to compute the forward pass, and then we're explicitly writing the backward pass ourselves, just backpropping through the network, through the operations, just as you did on homework one. And now we're doing a manual update of the weights using a learning rate and our computed gradients. But the major difference between PyTorch tensors and Numpy arrays is that they run on GPU, so all you have to do to make this code run on GPU is use a different data type. Rather than using torch.FloatTensor, you use torch.cuda.FloatTensor, cast all of your tensors to this new data type, and everything runs magically on the GPU. You should think of PyTorch tensors as just Numpy plus GPU. That's exactly what it is, nothing specific to deep learning. So the next layer of abstraction in PyTorch is the variable. Once we've moved from tensors to variables, now we're building computational graphs and we're able to take gradients automatically and everything like that. So here, if x is a variable, then x.data is a tensor and x.grad is another variable containing the gradients of the loss with respect to that tensor. So x.grad.data is an actual tensor containing those gradients. And PyTorch tensors and variables have the exact same API, so any code that worked on PyTorch tensors, you can just make them variables instead and run the same code, except now you're building up a computational graph rather than just doing these imperative operations. So here when we create these variables, each call to the variable constructor wraps a PyTorch tensor and also gives a flag saying whether or not we want to compute gradients with respect to this variable. And the forward pass looks exactly like it did before in the case with tensors, because they have the same API. So we're computing our predictions and computing our loss in this imperative kind of way. And then we call loss.backward and now all these gradients come out for us. And then we can make a gradient update step on our weights using the gradients that are now present in w1.grad.data. So this ends up looking quite like the Numpy case, except all the gradients come for free. One thing to note that's kind of different between PyTorch and TensorFlow is that in the TensorFlow case we were building up this explicit graph, then running the graph many times. Here in PyTorch, instead we're building up a new graph every time we do a forward pass. This makes the code look a bit cleaner, and it has some other implications that we'll get to in a bit. So in PyTorch you can define your own new autograd functions by defining the forward and backward in terms of tensors. This ends up looking kind of like the modular layers code that you write for homework two, where you can implement forward and backward using tensor operations and then stick these things inside the computational graph. So here we're defining our own relu, and then we can actually go in and use our own relu operation, stick it inside our computational graph, and define our own operations this way.
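Here's a minimal sketch of what defining your own autograd operation looks like. Note this is written against the newer PyTorch spelling, with static methods and a ctx object, which differs slightly from the lecture-era API.

```python
import torch

class MyReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)      # stash the input for use in the backward pass
        return x.clamp(min=0)

    @staticmethod
    def backward(ctx, grad_output):
        x, = ctx.saved_tensors
        grad_x = grad_output.clone()
        grad_x[x < 0] = 0             # gradient is zero wherever the input was negative
        return grad_x

x = torch.randn(64, 100, requires_grad=True)
y = MyReLU.apply(x)                   # use it like any other op in the graph
y.sum().backward()                    # gradients land in x.grad
```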
But most of the time you will probably not need to define your own autograd operations. Most of the time the operations you need will already be implemented for you. So in TensorFlow we saw that we can move to something like Keras or TFLearn and this gives us a higher level API to work with, rather than these raw computational graphs. The equivalent in PyTorch is the nn package, which provides these high level wrappers for working with these things. But unlike TensorFlow, there's only one of them, and it works pretty well, so just use that if you're using PyTorch. So here, this ends up kind of looking like Keras, where we define our model as some sequence of layers, our linear and relu operations, and we use some loss function defined in the nn package, in this case our mean squared error loss. And now inside each iteration of our loop we run data forward through the model to get our predictions, we run the predictions through the loss function to get our scalar loss, then we call loss.backward, get all our gradients for free, and then loop over the parameters of the model and do our explicit gradient descent step to update the model. And again we see that we're building up this new computational graph every time we do a forward pass. And just like we saw in TensorFlow, PyTorch provides these optimizer operations that abstract away this updating logic and implement fancier update rules like Adam and whatnot. So here we're constructing an optimizer object, telling it that we want it to optimize over the parameters of the model, and giving it some learning rate and other hyperparameters. And now after we compute our gradients we can just call optimizer.step and it updates all the parameters of the model for us right there. So another common thing you'll do in PyTorch a lot is define your own nn modules. Typically you'll write your own class which defines your entire model as a single new nn module class. And a module is just kind of a neural network layer that can contain other modules, trainable weights, or other kinds of state. So in this case we can redo the two layer net example by defining our own nn module class. Here in the initializer of the class we're assigning this linear1 and linear2; we're constructing these new module objects and storing them inside our own class. And now in the forward pass we can use both our own internal modules as well as arbitrary autograd operations on variables to compute the output of our network. So here, inside this forward method, we receive the input x as a variable, then we pass the variable to our self.linear1 for the first layer, we use the autograd op clamp to compute the relu, we pass the output of that to the second linear, and then that gives us our output. And now the rest of the code for training this thing looks pretty much the same: we build an optimizer, loop over the data, and on every iteration feed data to the model, compute the gradients with loss.backward, and call optimizer.step. So this is relatively characteristic of what you might see in a lot of PyTorch type training scenarios, where you define your own class defining your own model that contains other modules and whatnot, and then you have some explicit training loop like this that runs it and updates it; there's a small sketch of this full pattern a little further below, after we talk about dataloaders. One kind of nice quality of life thing that you have in PyTorch is a dataloader. So a dataloader can handle building minibatches for you.
One kind of nice quality-of-life thing you have in PyTorch is the DataLoader. A DataLoader can handle building minibatches for you, and it can handle some of the multi-threading that we talked about, where it can actually use multiple threads in the background to build minibatches for you and stream them off disk. So a DataLoader wraps a dataset and provides some of these abstractions for you. In practice, when you want to train on your own data, you typically write your own dataset class which knows how to read your particular type of data off whatever source you want, and then wrap it in a DataLoader and train with that. So here we can see that we're now iterating over the DataLoader object, and at every iteration it yields minibatches of data, and it's internally handling the shuffling of the data and the multithreaded data loading and all this sort of stuff for you. So this is a kind of complete PyTorch example, and a lot of PyTorch training code ends up looking something like this. PyTorch also provides pretrained models, and this is probably the slickest pretrained model experience I've ever seen. You just say torchvision.models.alexnet(pretrained=True); that'll go off in the background, download the pretrained weights for you if you don't already have them, and then it's right there, you're good to go. So this is super easy to use. There's also a package called Visdom that lets you visualize some of these loss statistics, somewhat similar to Tensorboard. So that's kind of nice; I haven't actually gotten a chance to play around with it myself so I can't really speak to how useful it is, but one of the major differences between Tensorboard and Visdom is that Tensorboard actually lets you visualize the structure of the computational graph, which is really cool, a really useful debugging strategy, and Visdom does not have that functionality yet. But I've never really used it myself so I can't really speak to its utility. As a bit of an aside, PyTorch is kind of an evolution — a newer, updated version — of an older framework called Torch, which I worked with a lot over the last couple of years. I don't want to go through the details here, but PyTorch is pretty much better in a lot of ways than the old Lua Torch, though they actually share a lot of the same back end C code for computing with tensors and GPU operations on tensors and whatnot. So if you look through this Torch example, some of it ends up looking kind of similar to PyTorch, some of it's a bit different; maybe you can step through this offline. But the high level differences between Torch and PyTorch are that Torch is actually in Lua, not Python, unlike these other things, so learning Lua is a bit of a turn-off for some people. Torch doesn't have autograd. Torch is also older, so it's more stable, less susceptible to bugs, and there's maybe more example code for Torch. They're about the same speed, so that's not really a concern. But PyTorch is in Python, which is great, and you've got autograd, which makes it a lot simpler to write complex models — in Lua Torch you end up writing a lot of your own backprop code sometimes, which is a little bit annoying. But PyTorch is newer, there's less existing code, and it's still subject to change, so it's a little bit more of an adventure. At least for me, I don't really see much reason to use Torch over PyTorch anymore at this time, so I'm pretty much using PyTorch exclusively for all my work these days. We've talked a little bit about this idea of static versus dynamic graphs.
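Putting these pieces together, here is a rough sketch of a typical loop — a toy in-memory Dataset wrapped in a DataLoader, plus a pretrained torchvision model; pretrained=True is the older torchvision argument mentioned here, newer releases spell it with a weights argument — before we get into static versus dynamic graphs:

import torch
from torch.utils.data import Dataset, DataLoader
import torchvision

class RandomDataset(Dataset):              # hypothetical toy dataset; yours would read from disk
    def __init__(self, n=1024):
        self.x = torch.randn(n, 3, 224, 224)
        self.y = torch.randint(0, 1000, (n,))
    def __len__(self):
        return len(self.x)
    def __getitem__(self, i):
        return self.x[i], self.y[i]

loader = DataLoader(RandomDataset(), batch_size=32, shuffle=True, num_workers=2)
model = torchvision.models.alexnet(pretrained=True)   # downloads the weights on first use
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

for epoch in range(2):
    for xb, yb in loader:                  # each iteration yields one shuffled minibatch
        loss = loss_fn(model(xb), yb)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()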
And this is one of the main distinguishing features between PyTorch and TensorFlow. So we saw in TensorFlow you have these two stages of operation where first you build up this computational graph, then you run the computational graph over and over again many many times reusing that same graph. That's called a static computational graph 'cause there's only one of them. And we saw PyTorch is quite different where we're actually building up this new computational graph, this new fresh thing on every forward pass. That's called a dynamic computational graph. For kind of simple cases, with kind of feed forward neural networks, it doesn't really make a huge difference, the code ends up kind of similarly and they work kind of similarly, but I do want to talk a bit about some of the implications of static versus dynamic. And what are the tradeoffs of those two. So one kind of nice idea with static graphs is that because we're kind of building up one computational graph once, and then reusing it many times, the framework might have the opportunity to go in and do optimizations on that graph. And kind of fuse some operations, reorder some operations, figure out the most efficient way to operate that graph so it can be really efficient. And because we're going to reuse that graph many times, maybe that optimization process is expensive up front, but we can amortize that cost with the speedups that we've gotten when we run the graph many many times. So as kind of a concrete example, maybe if you write some graph which has convolution and relu operations kind of one after another, you might imagine that some fancy graph optimizer could go in and actually output, like emit custom code which has fused operations, fusing the convolution and the relu so now it's computing the same thing as the code you wrote, but now might be able to be executed more efficiently. So I'm not too sure on exactly what the state in practice of TensorFlow graph optimization is right now, but at least in principle, this is one place where static graph really, you can have the potential for doing this optimization in static graphs where maybe it would be not so tractable for dynamic graphs. Another kind of subtle point about static versus dynamic is this idea of serialization. So with a static graph you can imagine that you write this code that builds up the graph and then once you've built the graph, you have this data structure in memory that represents the entire structure of your network. And now you could take that data structure and just serialize it to disk. And now you've got the whole structure of your network saved in some file. And then you could later rear load that thing and then run that computational graph without access to the original code that built it. So this would be kind of nice in a deployment scenario. You might imagine that you might want to train your network in Python because it's maybe easier to work with, but then after you serialize that network and then you could deploy it now in maybe a C++ environment where you don't need to use the original code that built the graph. So that's kind of a nice advantage of static graphs. Whereas with a dynamic graph, because we're interleaving these processes of graph building and graph execution, you kind of need the original code at all times if you want to reuse that model in the future. On the other hand, some advantages for dynamic graphs are that it kind of makes, it just makes your code a lot cleaner and a lot easier in a lot of scenarios. 
So for example, suppose that we want to do some conditional operation where, depending on the value of some variable z, we want to do different operations to compute y: if z is positive we want to use one weight matrix, and if z is negative we want to use a different weight matrix, and we just want to switch between these two alternatives. In PyTorch, because we're using dynamic graphs, it's super simple. Your code looks exactly like what you would do in Numpy: you can just use normal Python control flow to handle this. And because we're building up the graph each time, each time we perform this operation we'll take one of the two paths and maybe build up a different graph on each forward pass, but for any graph that we do end up building, we can back-propagate through it just fine. And the code is very clean and easy to work with. Now in TensorFlow the situation is a little bit more complicated, because we build the graph once, so this control flow operator needs to be an explicit operator in the TensorFlow graph. So then you can see that we have this tf.cond call, which is kind of like a TensorFlow version of an if statement, but now it's baked into the computational graph rather than using Python control flow. And the problem is that because we only build the graph once, all the potential paths of control flow that our program might flow through need to be baked into the graph at the time we construct it, before we ever run it. So that means that any control flow operators you want cannot be Python control flow operators; you need to use these special TensorFlow operations to do control flow — in this case, this tf.cond. Another similar situation happens if you want to have loops. So suppose that we want to compute some kind of recurrence relationship, where maybe y_t is equal to y_(t-1) plus x_t times some weight matrix W, and every time we compute this we might have a different sized sequence of data, and no matter the length of the input sequence, we just want to compute this same recurrence relation. In PyTorch this is super easy: we can just use a normal for loop in Python to loop over the number of times that we want to unroll, and depending on the size of the input data, our computational graph will end up a different size, but that's fine, we can just back-propagate through each one, one at a time. Now in TensorFlow this becomes a little bit uglier, and again, because we need to construct the graph all at once up front, this looping construct needs to be an explicit node in the TensorFlow graph. So I hope you remember your functional programming, because you'll have to use those kinds of operators to implement looping constructs in TensorFlow. In this case, for this particular recurrence relationship, you can use a foldl operation and implement this particular loop in terms of a foldl. But what this basically means is that you get the sense that TensorFlow is almost building its own entire programming language using the language of computational graphs, and any control flow operator or data structure needs to be rolled into the computational graph, so you can't really utilize all your favorite paradigms for working imperatively in Python.
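For contrast, here's roughly what the PyTorch side of those two examples looks like — made-up shapes and weights, but the control flow is just ordinary Python:

import torch

x  = torch.randn(5, 10)
w1 = torch.randn(10, 10, requires_grad=True)
w2 = torch.randn(10, 10, requires_grad=True)

# 1) Data-dependent branching: pick a weight matrix based on a runtime value.
z = torch.randn(())
if z > 0:                            # plain Python if; the graph built this pass uses w1
    y = x.mm(w1)
else:
    y = x.mm(w2)

# 2) Data-dependent looping: unroll the recurrence y_t = y_(t-1) + x_t @ W
#    for however many time steps this particular sequence happens to have.
T = int(torch.randint(3, 8, (1,)))   # sequence length varies per example
seq = torch.randn(T, 5, 10)
y_t = torch.zeros(5, 10)
for t in range(T):                   # plain Python for loop
    y_t = y_t + seq[t].mm(w1)

loss = y.sum() + y_t.sum()
loss.backward()                      # backprop through whatever graph got built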
With TensorFlow, in other words, you kind of need to relearn a whole separate set of control flow operators if you want to do any kind of control flow inside your computational graph. At least for me, I find that kind of confusing and a little bit hard to wrap my head around sometimes, and I kind of like that with PyTorch dynamic graphs you can just use your favorite imperative programming constructs and it all works just fine. By the way, there actually is a very new library called TensorFlow Fold, which is another one of these layers on top of TensorFlow that lets you implement dynamic graphs: you write your own code using TensorFlow Fold that looks kind of like a dynamic graph operation, and then TensorFlow Fold does some magic for you and somehow implements that in terms of static TensorFlow graphs. This is a super new paper that's being presented at ICLR this week in France, so I haven't had the chance to dive in and play with it yet, but my initial impression was that it does add some amount of dynamic graphs to TensorFlow, although it is still a bit more awkward to work with than the native dynamic graphs you have in PyTorch. So then, I thought it might be nice to motivate why we would care about dynamic graphs in general. One option is recurrent networks. You can see that for something like image captioning we use a recurrent network which operates over sequences of different lengths; in this case, the sentence that we want to generate as a caption is a sequence, and that sequence can vary depending on our input data. So now you can see that we have this dynamism, where depending on the size of the sentence, our computational graph might need to have more or fewer elements. So that's one common application of dynamic graphs. For those of you who took CS224N last quarter, you saw this idea of recursive networks, where sometimes in natural language processing you might, for example, compute a parse tree of a sentence and then you want a neural network to operate recursively up this parse tree. So you have a neural network that is not just a sequential stack of layers, but instead works over some graph or tree structure, where each data point might have a different graph or tree structure, so the structure of the computational graph mirrors the structure of the input data, and it can vary from data point to data point. This type of thing seems kind of complicated and hairy to implement using TensorFlow, but in PyTorch you can just use normal Python control flow and it'll work out just fine. Another, more researchy application is this really cool idea that I like, called neural module networks for visual question answering. Here the idea is that we want to ask questions about images: we maybe input this image of cats and dogs, there's some question — what color is the cat — and then internally the system can read the question, and it has these different specialized neural network modules for performing operations like asking for colors and finding cats. And depending on the text of the question, it can compile a custom architecture for answering that question. And if we ask a different question, like are there more cats than dogs, now we have maybe the same basic set of modules for doing things like finding cats and dogs and counting, but they're arranged in a different order.
So we get this dynamism again, where different data points might give rise to different computational graphs. But this is a bit more of a researchy thing and maybe not so mainstream right now. As a bigger point, though, I think there are a lot of cool, creative applications that people could build with dynamic computational graphs, and maybe there aren't so many right now just because it's been so painful to work with them. So I think there's a lot of opportunity for doing cool, creative things with dynamic computational graphs, and maybe if you come up with cool ideas, we'll feature them in lecture next year. So I wanted to talk very briefly about Caffe, which is this framework from Berkeley. Caffe is somewhat different from the other deep learning frameworks in that in many cases you can actually train networks without writing any code yourself: you just call into these pre-existing binaries, set up some configuration files, and in many cases you can train on your data without writing any of your own code. So maybe first you convert your data into some format like HDF5 or LMDB, and there exist scripts inside Caffe that can convert, for example, folders of images and text files into these formats for you. Then, instead of writing code to define the structure of your computational graph, you edit a text file called a prototxt which sets up the structure of the computational graph. Here the structure is that we read from some input HDF5 file, we perform some inner product, we compute some loss, and the whole structure of the graph is set up in this text file. One downside here is that these files can get really ugly for very large networks. For something like the 152 layer ResNet model — which by the way was trained in Caffe originally — this prototxt file ends up almost 7,000 lines long. So people are not writing these by hand. People will sometimes write Python scripts to generate these prototxt files. [laughter] Then you're kind of in the realm of rolling your own computational graph abstraction; that's probably not a good idea, but I've seen it before. Then, rather than having some optimizer object, there's a solver: you define the solver in another prototxt, and this defines your learning rate, your optimization algorithm and whatnot. And once you do all these things, you can just run the Caffe binary with the train command and it all happens magically. Caffe has a model zoo with a bunch of pretrained models; that's pretty useful. Caffe has a Python interface, but it's not super well documented — you kind of need to read the source code of the Python interface to see what it can do, so that's kind of annoying, but it does work. So my general take on Caffe is that it's maybe good for feed-forward models and maybe good for production scenarios, because it doesn't depend on Python, but for research these days I've seen Caffe being used maybe a little bit less, although I think it is still pretty commonly used in industry, again for production. I promised one or two slides on Caffe 2. So Caffe 2 is the successor to Caffe, and it's from Facebook. It's super new — it was only released a week ago — [laughter] so I really haven't had time to form a super educated opinion about Caffe 2 yet, but it uses static graphs, kind of similar to TensorFlow. Like Caffe 1, the core is written in C++ and there's some Python interface.
The difference is that now you no longer need to write your own Python scripts to generate prototxt files. You can kind of define your computational graph structure all in Python, kind of looking with an API that looks kind of like TensorFlow. But then you can spit out, you can serialize this computational graph structure to a prototxt file. And then once your model is trained and whatnot, then we get this benefit that we talked about of static graphs where you can, you don't need the original training code now in order to deploy a trained model. So one interesting thing is that you've seen Google maybe has one major deep running framework, which is TensorFlow, where Facebook has these two, PyTorch and Caffe 2. So these are kind of different philosophies. Google's kind of trying to build one framework to rule them all that maybe works for every possible scenario for deep learning. This is kind of nice because it consolidates all efforts onto one framework. It means you only need to learn one thing and it'll work across many different scenarios including like distributed systems, production, deployment, mobile, research, everything. Only need to learn one framework to do all these things. Whereas Facebook is taking a bit of a different approach. Where PyTorch is really more specialized, more geared towards research so in terms of writing research code and quickly iterating on your ideas, that's super easy in PyTorch, but for things like running in production, running on mobile devices, PyTorch doesn't have a lot of great support. Instead, Caffe 2 is kind of geared toward those more production oriented use cases. So my kind of general study, my general, overall advice about like which framework to use for which problems is kind of that both, I think TensorFlow is a pretty safe bet for just about any project that you want to start new, right? Because it is sort of one framework to rule them all, it can be used for just about any circumstance. However, you probably need to pair it with a higher level wrapper and if you want dynamic graphs, you're maybe out of luck. Some of the code ends up looking a little bit uglier in my opinion, but maybe that's kind of a cosmetic detail and it doesn't really matter that much. I personally think PyTorch is really great for research. If you're focused on just writing research code, I think PyTorch is a great choice. But it's a bit newer, has less community support, less code out there, so it could be a bit of an adventure. If you want more of a well trodden path, TensorFlow might be a better choice. If you're interested in production deployment, you should probably look at Caffe, Caffe 2 or TensorFlow. And if you're really focused on mobile deployment, I think TensorFlow and Caffe 2 both have some built in support for that. So it's kind of unfortunately, there's not just like one global best framework, it kind of depends on what you're actually trying to do, what applications you anticipate but theses are kind of my general advice on those things. So next time we'll talk about some case studies about various CNN architectures.
- Hello everyone, welcome to CS231. I'm Song Han. Today I'm going to give a guest lecture on efficient methods and hardware for deep learning. I'm a fifth year PhD candidate here at Stanford, advised by Professor Bill Dally. So, in this course we have seen a lot of convolutional neural networks, recurrent neural networks, and even, since last time, reinforcement learning. They span a lot of applications — for example the self-driving car, machine translation, AlphaGo and smart robots — and they're changing our lives. But there is a recent trend that, in order to achieve such high accuracy, the models are getting larger and larger. For example, for ImageNet recognition, from the 2012 winner to the 2015 winner, the model size increased by 16x. And for Baidu's Deep Speech, in just one year, the number of training operations increased by 10x. Such large models create lots of problems. First, the model size becomes larger and larger, so it's difficult to deploy them, for example on mobile phones: if the app is larger than 100 megabytes, you cannot download it until you connect to Wi-Fi, so product managers at, for example, Baidu and Facebook are very sensitive to the binary size of their models. And also, for the self-driving car, you can only do over-the-air updates for the model, and if the model is too large that's also difficult. The second challenge for these large models is that the training speed is extremely slow. For example, ResNet-152, which is actually less than 1% more accurate than ResNet-101, takes 1.5 weeks to train on four Maxwell M40 GPUs, which greatly limits how quickly we can iterate, whether we are doing homework or a researcher is designing new models. And the third challenge for these bulky models is energy efficiency. For example, AlphaGo beating Lee Sedol last year took 2,000 CPUs and 300 GPUs, which cost $3,000 just to pay the electric bill, which is insane. So on embedded devices these models drain your battery, and in the data-center they increase the total cost of ownership of maintaining a large data-center. For example, Google mentioned in their blog that if all users used Google Voice Search for just three minutes, they would have to double their data-centers. That's a large cost, so reducing it is very important. And let's see where the energy is actually consumed. A large model means lots of memory access: you have to load the model from memory, and that means more energy. If you look at how much energy is consumed by loading from memory versus how much is consumed by multiplications and adds, those arithmetic operations, the memory access is more than two or three orders of magnitude more energy consuming than the arithmetic operations. So how do we make deep learning more efficient? We have to improve energy efficiency through algorithm and hardware co-design. This is the previous way: here is our hardware, we have some benchmarks, say SPEC 2006, and then we run those benchmarks and tune the CPU architecture for those benchmarks. Now what we should do is open up the box, see what we can do from the algorithm side first, and ask what the optimal question-mark processing unit would be — breaking the boundary between the algorithm and the hardware to improve the overall efficiency. So for today's talk, I'm going to have the following agenda.
We are going to cover four aspects: The algorithm hardware and inference and training. So they form a small two by two matrix, so includes the algorithm for efficient inference, hardware for efficient inference and the algorithm for efficient training, and lastly, the hardware for efficient training. For example, I'm going to cover the TPU, I'm going to cover the Volta. But before I cover those things, let's have three slides for Hardware 101. A brief introduction of the families of hardware in such a tree. So in general, we can have roughly two branches. One is general purpose hardware. It can do any applications versus the specialized hardware, which is tuned for a specific kind of applications, a domain of applications. So the general purpose hardware includes, the CPU or the GPU, and their difference is that CPU is latency oriented, single threaded. It's like a big elephant. While the GPU is throughput oriented. It has many small though weak threads, but there are thousands of such small weak cores. Like a group of small ants, where there are so many ants. And specialized hardware, roughly there are FPGAs and ASICs. So FPGA stand for Field Programmable Gate Array. So it is programmable, hardware programmable so its logic can be changed. So it's cheaper for you to try new ideas and do prototype, but it's less efficient. It's in the middle between the general purpose and pure ASIC. So ASIC stands for Application Specific Integrated Circuit. It has a fixed logic, just designed for a certain application. For example deep learning. And Google's TPU is a kind of ASIC and the neural networks we train on, the earlier GPUs is here. And another slide for Hardware 101 is the number representations. So in this slide, I'm going to convey you the idea that all the numbers in computer are not represented by a real number. It's not a real number, but they are actually discrete. Even for those floating point with your 32 Bit. Floating point numbers, their resolution is not perfect. It's not continuous, but it's discrete. So for example FP32, meaning using a 32 bit to represent a floating point number. So there are three components in the representation. The sign bit, the exponent bit, the mantissa, and the number it represents is shown by minus 1 to the S times 1.M times 2 to the exponent. So similar there is FP16, using a 16 bit to represent a floating point number. In particular, I'm going to introduce Int8, where the core TPU use, using an integer to represent a fixed point number. So we have a certain number of bits for the integer. Followed by a radix point, if we put different layers. And lastly, the fractional bits. So why do we prefer those eight bit, or 16 bit rather than those traditional like the 32 bit floating point. That's the cost. So, I generated the figure from 45 nanometer technology about the energy cost versus the area cost for different operations. In particular, let's see here, go you from 32 bit to 16 bit, we have about four times reduction in energy and also about four times reduction in the area. Area means money. Every millimeter square takes money to take out a chip So it's very beneficial for hardware design to go from 32 bit to 16 bit. That's why you hear NVIDIA from Pascal Architecture, they said they're starting to support FP16. That's the reason why it's so beneficial. For example, previous battery level could last four hours, now it becomes 16 hours. That's what it means to reduce the energy cost by four times. 
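To make these representations a bit more concrete, here is a small standalone illustration (not from the lecture) of pulling apart the FP32 fields and quantizing a value to a fixed-point INT8 with a chosen radix point:

import struct

def fp32_fields(x):
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign     = bits >> 31
    exponent = (bits >> 23) & 0xFF         # stored with a bias of 127
    mantissa = bits & 0x7FFFFF             # the leading "1." is implicit for normal numbers
    return sign, exponent, mantissa

def to_int8_fixed(x, frac_bits):
    q = int(round(x * (1 << frac_bits)))   # scale by 2^frac_bits and round
    return max(-128, min(127, q))          # saturate to the INT8 range

def from_int8_fixed(q, frac_bits):
    return q / (1 << frac_bits)

s, e, m = fp32_fields(-6.25)               # value = (-1)^s * 1.m * 2^(e - 127)
print(s, e, m)                             # 1, 129, 4718592
print(from_int8_fixed(to_int8_fixed(0.743, 6), 6))  # 0.75: nearest value with 6 fractional bits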
But here still, there's the problem of the large energy cost of reading memory, so let's see how we can deal with it — memory references are so expensive, how do we handle this problem better? So let's switch gears and come to our topic directly. Let's first introduce algorithms for efficient inference. I'm going to cover six topics — this is a really long stretch, so I'm going to go relatively fast. The first idea I'm going to talk about is pruning, pruning the neural networks. For example, this is the original neural network; what I'm trying to do is ask, can we remove some of the weights and still have the same accuracy? It's like pruning a tree: get rid of those redundant connections. This was first proposed by Professor Yann LeCun back in 1989, and I revisited this problem 26 years later on modern deep neural nets to see how it works. So not all parameters are actually useful. For example, in this case, if you want to fit a single line but you're using a quadratic term, apparently the 0.01 is a redundant parameter. So I'm going to train the connectivity first, then prune some of the connections, and then retrain the remaining weights, and iterate this process. And as a result, I can reduce the number of connections in AlexNet from 60 million parameters to only six million parameters, which is about 10 times less computation. So this is the accuracy plot: the x-axis is how many parameters we prune away and the y-axis is the accuracy we get. We want to have fewer parameters, but we also want to have the same accuracy as before — we don't want to sacrifice accuracy. For example, at 80%, if we just zero away 80% of the parameters, the accuracy drops by about 4%. That's intolerable. But the good thing is that if we retrain the remaining weights, the accuracy can fully recover here. And if we do this process iteratively, pruning and retraining, pruning and retraining, the accuracy doesn't start to drop until we have pruned away 90% of the parameters. So if you go home and try it in your IPython notebook — just zero away 50% of the parameters of, say, the network you trained for your homework — you will astonishingly find that the accuracy actually doesn't get hurt. We just mentioned convolutional neural nets; how about RNNs and LSTMs? So I tried it with NeuralTalk. Again, pruning away 90% of the weights doesn't hurt the BLEU score. And here are some visualizations. For example, for the original picture, NeuralTalk says "a basketball player in a white uniform is playing with a ball," versus, after pruning away 90%, it says "a basketball player in a white uniform is playing with a basketball," and so on. But if you're too aggressive — say you prune away 95% of the weights — the network is going to get drunk. It says "a man in a red shirt and white and black shirt is running through a field." So there really is a limit, a threshold, you have to take care of during the pruning. So interestingly, after I did this work, I did some research and found that the same pruning procedure actually happens in the human brain as well. When we are born, there are about 50 trillion synapses in the brain; at one year old, this number surges to 1,000 trillion; and as we become adolescents it actually becomes smaller, 500 trillion in the end, according to this study in Nature. So this is very interesting. And also, pruning changes the weight distribution, because we are removing those small connections around zero; after retraining, what remains are the larger positive and negative weights. Yeah, question.
- [Student] Do you mean that the pruned weights will just be set to zero during training, and you retrain starting from the remaining weights? - Yeah. So the question is, how do we deal with those zeroed connections? We force them to be zero in all the later iterations. Question? - [Student] How do you pick which weights to drop? - Yeah, so it's very simple: sort the weights, and if a weight is small, just-- - [Student] Any threshold that I decide? - Exactly, yeah. So the next idea: weight sharing. Remember, our end goal is to remove connections so that we have a smaller memory footprint and a more energy efficient deployment. Now we have fewer parameters thanks to pruning; we also want fewer bits per parameter, so that multiplied together they give a small model. The idea is like this: not every weight has to be an exact number. For example, 2.09, 2.12, and these other nearby weights — you can just use 2.0 to represent all of them. That's enough; an overly precise number just leads to overfitting. So the idea is, I can cluster the weights, and if they are similar, just use a centroid to represent them instead of the full precision weight, so that every time I do inference, I just do inference with this single number. For example, this is a four by four weight matrix in a certain layer, and what I'm going to do is k-means clustering, having similar weights share the same centroid. For example, for 2.09 and 2.12, I store an index of 3 pointing here. The good thing is we only need to store the two-bit index rather than the 32-bit floating point number; that's a 16 times saving. And how do we train such a neural network? The shared weights are tied together, so after we get the gradients, we color them in the same pattern as the weights and then do a group-by operation, having all the gradients with the same index grouped together. Then we do a reduction by summing them up, multiply by the learning rate, and subtract from the original centroid. That's one iteration of SGD for such a weight-shared neural network. So remember, after pruning this is what the weight distribution looked like, and after weight sharing the weights become discrete: there are only 16 different values here, meaning we can use four bits to represent each number. And by training such a weight-shared neural network — training with such extreme sharing — these weights can adjust, and it is these subtle changes that compensate for the loss of accuracy. So let's see: this is the number of bits we give it, and this is the accuracy. For the convolution layers, not until four bits does the accuracy begin to drop, and for the fully connected layers, very astonishingly, it's not until two bits — only four different values — that the accuracy begins to drop. And this result is per layer. So we have covered two methods, pruning and weight sharing. What if we combine these two methods — do they work well together? So this is the compression ratio, with smaller on the left, and this is the accuracy. We can combine them and make the model about 3% of its original size without hurting the accuracy at all, compared with each method working individually, where the accuracy begins to drop at around 10% of the original size. And compared with the cheap SVD method, this has a better compression ratio.
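Here's a toy sketch of those two steps — magnitude pruning with a mask, then k-means weight sharing — written with NumPy and scikit-learn; it is illustrative only, not the paper's code, and the layer size and keep ratio are made up:

import numpy as np
from sklearn.cluster import KMeans

w = np.random.randn(256, 256).astype(np.float32)      # a made-up weight matrix

# --- pruning: drop the smallest-magnitude weights and remember the mask ---
keep_ratio = 0.1                                       # keep only 10% of the weights
thresh = np.quantile(np.abs(w), 1.0 - keep_ratio)
mask = np.abs(w) >= thresh
w_pruned = w * mask        # during retraining you would re-apply this mask after
                           # every update so the pruned weights stay at zero

# --- weight sharing: cluster the surviving weights into 16 shared centroids ---
nonzero = w_pruned[mask].reshape(-1, 1)
km = KMeans(n_clusters=16, n_init=10).fit(nonzero)
codebook = km.cluster_centers_.ravel()                 # 16 shared float values
indices = km.predict(nonzero)                          # a 4-bit index per surviving weight

# reconstruct the quantized layer for inference
w_shared = np.zeros_like(w_pruned)
w_shared[mask] = codebook[indices]
print(mask.mean(), np.abs(w_shared - w_pruned).mean())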
And the final idea is that we can apply Huffman coding, using more bits for the infrequently appearing weights and fewer bits for the more frequently appearing weights. So by combining these three methods — pruning, weight sharing, and Huffman coding — we can compress state-of-the-art neural networks by anywhere from 10x to 49x without hurting the prediction accuracy; sometimes it's even a little bit better, but maybe that is noise. So the next question is: these models are just pre-trained models from, say, Google or Microsoft. Can we make a compact model to begin with, even before such compression? So, SqueezeNet — you may have already worked with this neural network model in a homework. The idea is that we have a squeeze layer here, so that the three by three convolutions see fewer input channels; that's where "squeeze" comes from. And here we have two branches, rather than four branches as in the Inception model. As a result, the model is extremely compact. It doesn't have any fully connected layers; everything is fully convolutional, and the last layer is a global pooling. So what if we apply the deep compression algorithm to such an already compact model — will it get even smaller? This is AlexNet after compression, and this is SqueezeNet: even before compression, it's 50x smaller than AlexNet but has the same accuracy. After compression it's 510x smaller, with the same accuracy, at less than half a megabyte. This means it's very easy to fit such a small model in the cache, which is literally tens of megabytes of SRAM. So what does that mean? It means it's possible to achieve a speedup. This is the speedup I measured — on the fully connected layers only, for now — on the CPU, GPU, and mobile GPU, before and after pruning the weights, and on average I observed about a 3x speedup on the CPU, about a 3x speedup on the GPU, and roughly a 5x speedup on the mobile GPU, which is a TK1. And so is the energy efficiency, with an average improvement of 3x to 6x on the CPU, GPU, and mobile GPU. And these ideas are used in these companies. Having talked about weight pruning and weight sharing, which is a non-linear quantization method, we're now going to talk about quantization, which is what is used in the TPU design: the TPU uses only eight bits for inference, and the way they can do that is because of quantization. Let's see how it works. So quantization has this complicated figure, but the intuition is very simple. You run the neural network and train it with normal floating point numbers, and then you quantize the weights and activations by gathering statistics for each layer — for example, what is the maximum number, what is the minimum number, and how many bits are enough to represent this dynamic range. Then you use that number of bits for the integer part, and whatever is left of the 8-bit representation for the fractional part. And we can also fine-tune in the floating point format, or we can do the feed-forward pass with fixed point and the back propagation and weight update with floating point — there are lots of different ideas for getting better accuracy. And this is the result: the number of bits versus the accuracy. For example, using fixed 8-bit, the accuracy for GoogleNet doesn't drop significantly, and for VGG-16 the accuracy also holds up pretty well, while going down to six bits, the accuracy begins to drop pretty dramatically. Next idea, low rank approximation.
It turned out that for a convolution layer, you can break it into two convolution layers. One convolution here, followed by a one by one convolution. So that it's like you break a complicated problem into two separate small problems. This is for convolution layer. As we can see, achieving about 2x speedup, there's almost no loss of accuracy. And achieving a speedup of 5x, roughly a 6% loss of accuracy. And this also works for fully connected layers. The simplest idea is using the SVD to break it into one matrix into two matrices. And follow this idea, this paper proposes to use the Tensor Tree to break down one fully connected layer into a tree, lots of fully connected layers. That's why it's called a tree. So going even more crazy, can we use only two weights or three weights to represent a neural network? A ternary weight or a binary weight. We already seen this distribution before, after pruning. There's some positive weights and negative weights. Can we just use three numbers, just use one, minus one, zero to represent the neural network. This is our recent paper clear that we maintain a full precision weight during training time, but at inference time, we only keep the scaling factor and the ternary weight. So during inference, we only need three weights. That's very efficient and making the model very small. This is the proportion of the positive zero and negative weights, they can change during the training. So is their absolute value. And this is the visualization of kernels by this trained ternary quantization. We can see some of them are a corner detector like here. And also here. Some of them are maybe edge detector. For example, this filter some of them are corner detector like here this filter. Actually we don't need such fine grain resolution. Just three weights are enough. So this is the validation accuracy on ImageNet with AlexNet. So the threshline is the baseline accuracy with floating point 32. And the red line is our result. Pretty much the same accuracy converged compared with the full precision weights. Last idea, Winograd Transformation. So this about how do we implement deep neural nets, how do we implement the convolutions. So this is the conventional direct convolution implementation method. The slide credited to Julien, a friend from Nvidia. So originally, we just do the element wise do a dot product for those nine elements in the filter and nine elements in the image and then sum it up. For example, for every output we need nine times C number of multiplication and adds. Winograd Convolution is another method, equivalent method. It's not lost, it's an equivalent method proposed at first through this paper, Fast Algorithms for Convolution Neural Networks. That instead of directly doing the convolution, move it one by one, at first it transforms the input feature map to another feature map. Which contains only the weight, contains only 1, 0.5, 2 that can efficiently implement it with shift. And also transform the filter into a four by four tensor. So what we are going to do here is sum over c and do an element-wise element-wise product. So there are only 16 multiplications happening here. And then we do a inverse transform to get four outputs. So the transform and the inverse transform can be amortized and the multiplications, whether it can ignored. So in order to get four output, we need nine times channel times four, which is 36 times channel. 
Multiplications, that is, for the original direct convolution — but now we need only 16 times C multiplications for those four outputs. So that is 2.25x fewer multiplications to compute the exact same convolution. And here is the speedup: 2.25x theoretically, and in practice, from cuDNN 5 on, they incorporated this Winograd convolution algorithm — this is on the VGG net, I believe — and the speedup is roughly 1.7x to 2x. Pretty significant. Okay, so far we have covered the algorithms for efficient inference: pruning, weight sharing, quantization, low rank approximation, binary and ternary weights, and also Winograd. So now let's see what the optimal hardware for efficient inference is, and what the Google TPU is. There is a wide range of domain specific architectures, or ASICs, for deep neural networks, and they have a common goal: minimize memory access to save power. For example, Eyeriss from MIT uses the RS (row-stationary) dataflow to minimize off-chip DRAM access, and DaDianNao from the Chinese Academy of Sciences buffers all the weights in on-chip memory instead of having to go to off-chip DRAM. The TPU from Google uses 8-bit integers to represent the numbers, and at Stanford I proposed the EIE architecture, which supports compressed, sparse deep neural network inference. So this is what the TPU looks like; it's actually designed so it can be put into a disk drive slot, up to four cards per server. And this is the high-level architecture of the Google TPU. Don't be overwhelmed — the kernel part here is this giant matrix multiplication unit. It's a 256 by 256 matrix multiplication unit, so in one single cycle it can perform 64K multiply-and-accumulate operations. Running at 700 megahertz, the throughput is 92 teraops per second, because these are integer operations — about 25x a GPU and more than 100x a CPU. And notice the TPU has a really large, software-managed on-chip buffer: 24 megabytes. The L3 cache of a CPU is already around 16 megabytes; this is 24 megabytes, which is pretty large. And it's fed by two DDR3 DRAM channels. That is a little weak, because the bandwidth is only 30 gigabytes per second, compared with the most recent GPUs with HBM at 900 gigabytes per second. DDR4 was only released in 2014, so that makes sense — the design dates from around that time and used DDR3 — but with DDR4, or even high-bandwidth memory, the performance could be boosted further. So this is a comparison of Google's TPU with the CPU and GPU — a K80 GPU, by the way. The area is much smaller, like half the size of the CPU and GPU, and the power consumption is roughly 75 watts. And see this number: the peak teraops per second is much higher than the CPU and GPU, about 90 teraops per second, which is pretty high. So here is the workload — thanks to David for sharing the slide. This is the workload at Google; they did a benchmark on these TPUs. It's a little interesting that convolutional neural nets only account for 5% of the data-center workload. Most of it is multilayer perceptrons, those fully connected layers — about 61%, maybe for ads, I'm not sure — and about 29% of the data-center workload is long short term memory, for example speech recognition or machine translation, I suspect. Remember, we just saw there are 90 teraops per second of peak. But what number of teraops per second can actually be achieved?
This is a basic tool — the roofline model — for measuring the bottleneck of a computer system: whether you are bottlenecked by the arithmetic or by the memory bandwidth. It's like a bucket: the lowest part of the bucket determines how much water it can hold. So in this region, you are bottlenecked by the memory bandwidth. The x-axis is the arithmetic intensity, which is the number of floating point operations per byte — the ratio between the computation and the memory traffic — and the y-axis is the actual attainable performance; here is the peak performance, for example. When you fetch a single piece of data and can do a lot of operations on top of it, then you are bottlenecked by the arithmetic. But when you fetch a lot of data from memory and only do a tiny bit of arithmetic on it, then you will be bottlenecked by the memory bandwidth, and how much you can fetch from memory determines how much real performance you get. And remember, in this sloped region the attainable performance is the memory bandwidth times the arithmetic intensity; where the intensity is one, it happens to equal the actual memory bandwidth of your system. So let's see what this looks like for the TPU. The TPU's peak performance is really high, about 90 Tops per second, and the convolution nets pretty much saturate that peak performance. But there are a lot of neural networks that have a utilization of less than 10%, meaning the 90 Tops per second peak actually comes out to about 3 to 12 Tops per second in real cases. Why is that? The reason is that, in order to have a real-time guarantee so the user doesn't wait too long, you cannot batch a lot of users' images or speech data at the same time. As a result, the fully connected layers have very little reuse, so they are bottlenecked by the memory bandwidth. For the convolutional neural nets — for example this blue one that achieves 86, which is CNN0 — the ratio between the ops and the bytes of memory is the highest, more than 2,000, while for the multilayer perceptrons and the long short term memory networks the ratio is pretty low. So this figure compares the TPU, the CPU and the GPU; here is the peak memory bandwidth, at a ratio of one, and the TPU's roofline is the highest. And here is where these neural networks lie on the curve. The asterisks are for the TPU; they're still higher than the other dots, but if you're not comfortable with this log-scale figure, this is what it looks like on a linear roofline: pretty much everything disappears except the TPU results. Still, all these points, although they are higher than the CPU and GPU, are way below the theoretical peak operations per second. As I mentioned before, it is really bottlenecked by the low latency requirement, so it cannot use a large batch size; that's why you have low operations per byte. And how do you solve this problem? You want a smaller memory footprint, so that you can reduce the memory bandwidth requirement. One solution is to compress the model, and the challenge is: how do we build hardware that can do inference directly on the compressed model? So I'm going to introduce my design, EIE, the Efficient Inference Engine, which deals with the sparse and compressed model to save memory bandwidth.
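Before getting into EIE's details, here's a back-of-the-envelope version of that roofline argument; the numbers are illustrative stand-ins, not the exact figures from the TPU paper:

def attainable_tops(peak_tops, bw_gbps, ops_per_byte):
    # performance is capped either by the compute roof or by memory traffic
    return min(peak_tops, bw_gbps * ops_per_byte / 1000.0)

peak = 92.0       # assumed peak INT8 Tops/s (TPU-like)
bw   = 30.0       # assumed memory bandwidth in GB/s (DDR3-like)

for name, intensity in [('MLP, small batch', 100), ('CNN, lots of reuse', 2000)]:
    print(name, attainable_tops(peak, bw, intensity), 'Tops/s')
# MLP, small batch     3.0 Tops/s   <- memory bound, far below the 92 Tops/s peak
# CNN, lots of reuse  60.0 Tops/s   <- much closer to the compute roof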
And the rule of thumb, like we mentioned before, is to exploit sparsity first: anything times zero is zero, so don't store it, don't compute on it. And the second idea is that you don't need that much precision — you can approximate. So by taking advantage of the sparse weights, we get about a 10x saving in computation and a 5x smaller memory footprint; the 2x difference is due to the index overhead. And by taking advantage of the sparse activations — meaning after the ReLU, if an activation is zero, then ignore it — you save another 3x of computation. And then, with the weight sharing mechanism, you can use four bits to represent each weight rather than 32 bits; that's another eight times saving in the memory footprint. So this is how the weights are stored logically — a four by eight matrix — and this is how they are physically stored: only the non-zero weights are stored, so you don't need to store those zeros and you save the bandwidth of fetching them. And I'm also using a relative index to further reduce the memory overhead. In the computation, as this figure shows, we run the multiplication only on the non-zeros: the activation is broadcast only to the non-zero weights, and if something is zero we skip it; if it's non-zero, we do the multiplication in that cycle. The idea is that anything multiplied by zero is zero. This part is a little complicated, so I'm going to go very quickly: there's a lookup table that decodes the four-bit weight into the 16-bit weight, and the four-bit relative index is passed through an address accumulator to get the 16-bit absolute index. And this is what the hardware architecture looks like at a high level — feel free to refer to my paper for the details. Okay, speedup. Using such an efficient hardware architecture together with model compression: this is the original result we saw for CPU, GPU, and mobile GPU, and now EIE is here — 189 times faster than the CPU and about 13 times faster than the GPU. And this is the energy efficiency, on a log scale: it's about 24,000x more energy efficient than a CPU and about 3,000x more energy efficient than a GPU. It means, for example, that if previously your battery could last for one hour, now it could last for 3,000 hours. Now, you might say an ASIC is always better than CPUs and GPUs because it's customized hardware, so this compares EIE with peer ASICs, for example DaDianNao and TrueNorth: it has better throughput and better energy efficiency by orders of magnitude compared with those other ASICs, not to mention CPUs, GPUs and FPGAs. So we have covered half of the journey — we pretty much covered everything for inference. Now we are going to switch gears and talk about training: how do we train neural networks efficiently, how do we train them faster? Again, we start with the efficient algorithms, followed by the hardware for efficient training. For the efficient training algorithms, I'm going to mention four topics. The first one is parallelization; then mixed precision training, which was just presented about one month ago at NVIDIA GTC, so it's fresh knowledge; then model distillation; and lastly my work on dense-sparse-dense training, a better regularization technique. So let's start with parallelization. This figure — anyone in the hardware community is very familiar with it. As time goes by, what is the trend? The number of transistors keeps increasing, but single threaded performance has plateaued in recent years.
And the frequency has also plateaued in recent years — because of the power constraint, it stopped scaling. And the interesting thing is that the number of cores is increasing. So what we really need to do is parallelize: how do we parallelize the problem to take advantage of parallel processing? Actually there are a lot of opportunities for parallelism in deep neural networks. For example, we can do data parallelism: feeding two images into the same model and running them at the same time. This doesn't affect the latency for a single input — it doesn't make it any shorter — but it makes the effective batch size larger; basically, if you have four machines, the effective batch size becomes four times what it was before. And it requires a coordinated weight update. For example, in this paper from Google, there is a parameter server as the master and a number of workers each running on their own piece of training data, sending their gradients up to the parameter server and getting the updated weights back individually — that's how data parallelism is handled. Another idea is model parallelism: you can split your model and hand the pieces to different processors or different threads. For example, you want to run a convolution on this image — that's a six-dimensional for loop — and what you can do is cut the input image into two by two blocks so that each thread or processor handles one fourth of the image, although there's a small halo in between that you have to take care of. You can also parallelize over the output or input feature maps. And for the fully connected layers, how do we parallelize the model? It's even simpler: you can cut the model in half and hand the halves to different threads. And the third idea is that you can even do hyper-parameter parallelism: for example, you can tune your learning rate and weight decay across different machines — very coarse-grained parallelism, since there are so many alternatives you have to tune. A small summary of parallelism: there is lots of parallelism in deep neural networks. With data parallelism you can run multiple training images at once, but you cannot use an unlimited number of processors because you are limited by batch size — if it's too large, stochastic gradient descent becomes gradient descent, and that's not good. You can also use model parallelism: split the model, either by cutting the image, or by cutting the convolution weights or the fully connected layers. So it's fairly easy to get 16 to 64 GPUs training one model in parallel with very good, almost linear speedup.
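As a toy, single-process sketch of the synchronous data-parallel idea — "workers" computing gradients on their own shard and a "server" averaging them — with a made-up least-squares problem standing in for the network:

import numpy as np

def grad(w, x, y):                    # gradient of 0.5*||x@w - y||^2 for one shard
    return x.T @ (x @ w - y) / len(x)

rng = np.random.default_rng(0)
w = rng.normal(size=(10, 1))          # parameters held by the "parameter server"
x, y = rng.normal(size=(64, 10)), rng.normal(size=(64, 1))
shards = np.array_split(np.arange(64), 4)        # 4 workers, 4 data shards
lr = 0.1

for step in range(100):
    grads = [grad(w, x[s], y[s]) for s in shards]   # each worker computes its own gradient
    w -= lr * np.mean(grads, axis=0)                # server applies the averaged update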
Okay, the next interesting thing: mixed precision training with FP16 and FP32. Remember, at the beginning of this lecture I had a chart showing the energy and area overhead of 16-bit versus 32-bit: going from 32 bits to 16 bits, you save about 4x the energy and 4x the area. So can we train a deep neural network with such low precision, with 16-bit floating point rather than 32-bit? It turns out we can do it partially; by partially, I mean we need FP32 in some places. And where are those places? We can do the multiplications with 16-bit inputs, but then we have to do the summation with 32-bit accumulation, and the result is converted to 32 bits to store the weights. That's where "mixed precision" comes from. So for example, we have a master copy of the weights stored in floating point 32; we down-convert it to floating point 16 and then do the feed-forward pass with 16-bit weights and 16-bit activations, getting 16-bit activations at the end, and when we do back propagation the computation is also done in 16-bit floating point. Very interestingly, we get a 16-bit floating point gradient for the weights, but when we do the update — the weight minus the learning rate times the gradient — that operation has to be done in 32 bits. And you can see there are two colors here: this is the 16-bit part and this is the 32-bit part. That's where the mixed precision comes from. So does such low precision sacrifice the prediction accuracy of your model? This is a figure from NVIDIA, just released a couple of weeks ago actually — thanks to Paulius for giving me the slide. The convergence of floating point 32 versus the mixed-precision training is actually pretty much the same; if you zoom in a little bit, they are pretty much the same, and for ResNet the mixed precision sometimes behaves a little better than full precision — maybe because of noise. But in the end, after you train the model, this is the result for AlexNet, Inception V3, and ResNet-50 with FP32 versus FP16 mixed precision training: the accuracy is pretty much the same for the two methods — a little bit worse, but not by too much. Having talked about mixed precision training, the next idea is to train with model distillation. For example, you can have multiple neural networks — GoogleNet, VGGNet, ResNet. The question is, can we take advantage of these different models? Of course we can do a model ensemble, but can we also use them as teachers, to teach a small junior neural network to perform as well as the senior neural networks? So this is the idea: you have multiple large, powerful senior neural networks teach this student model, and hopefully it gets better results. And the way to do that is, instead of using the hard label — for example, for car, dog, cat, the probability for dog is 100% — you use the output of the geometric ensemble of those large teacher neural networks, where maybe the dog has 90% and the cat about 10%. And the magic happens here: you want a softened label — for example, the dog is 30%, the cat is 20%. The dog is still higher than the cat, so the prediction is still correct, but you use this soft label to train the student neural network rather than the hard label. And mathematically, you control how soft you make it with a temperature in the softmax. And the result is that, starting with a trained model that classifies 58.9% of the test frames correctly, the new model converges to 57%, trained on only 3% of the data. That's the magic of model distillation using soft labels. And the last idea is my recent paper on using better regularization to train deep neural nets. We have seen these two figures before: we pruned the neural network, keeping fewer weights but the same accuracy. Now what I do is recover and retrain the weights shown in red, and train everything together, to increase the model capacity after the network has first been trained in a low-dimensional space. It's like you learn the trunk first and then gradually add the leaves and learn everything together.
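A rough PyTorch-style sketch of that dense-sparse-dense schedule is below; it is simplified, data() is a hypothetical stand-in for your batch sampler, and this is not the paper's actual code or hyperparameter schedule:

import torch

def train(model, loss_fn, data, optimizer, steps, mask=None):
    for _ in range(steps):
        x, y = data()                        # data() stands in for your batch sampler
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        if mask is not None:                 # sparse phase: keep pruned weights at exactly zero
            with torch.no_grad():
                for p, m in zip(model.parameters(), mask):
                    p.mul_(m)

def dsd(model, loss_fn, data, steps, keep_ratio=0.5):
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)
    train(model, loss_fn, data, opt, steps)                  # 1) dense: learn the "trunk"
    mask = []
    with torch.no_grad():                                    #    prune small-magnitude weights
        for p in model.parameters():
            k = int((1 - keep_ratio) * p.numel()) + 1
            thresh = p.abs().flatten().kthvalue(k).values
            mask.append((p.abs() >= thresh).float())
            p.mul_(mask[-1])
    train(model, loss_fn, data, opt, steps, mask)            # 2) sparse: retrain under the mask
    train(model, loss_fn, data, opt, steps)                  # 3) dense again: grow back the "leaves"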
It turns out that on ImageNet this dense-sparse-dense training gives roughly a 1% to 4% absolute improvement in accuracy. And it is also general purpose: it works on long short-term memory and also on recurrent neural nets, in work done in collaboration with Baidu. So I also open sourced these specially trained models in the DSD Model Zoo, where all these models are trained: GoogleNet, VGG, ResNet, and also SqueezeNet and AlexNet. So if you are interested, feel free to check out this Model Zoo and compare it with the Caffe Model Zoo. Here are some examples of how dense-sparse-dense training helps with image captioning. For example, this is a very challenging figure. The original NeuralTalk baseline says a boy in a red shirt is climbing a rock wall. The sparse model says a young girl is jumping off a tree, probably mistaking the hair for either the rock or the tree. But then dense-sparse-dense training, by using this kind of regularization in a low-dimensional space, says a young girl in a pink shirt is swinging on a swing. And there are a lot of examples; due to the limit of time, I will not go over them one by one. For example, a group of people are standing in front of a building, when there is no building; a group of people are walking in the park. Feel free to check out the paper and see more interesting results. Okay, finally we come to hardware for efficient training. How do we take advantage of the algorithms we just mentioned, for example parallelism and mixed precision? How is the hardware designed to actually take advantage of such features? First, GPUs. This is the Nvidia Pascal GPU, GP100, which was released last year. It supports up to 20 teraflops of FP16. It has 16 gigabytes of high bandwidth memory at 750 gigabytes per second. So remember, computation and memory bandwidth are the two factors that determine your overall performance; whichever is lower, you will suffer. So this is really high bandwidth, about 750 gigabytes per second, compared with DDR3 at just 10 or 30 gigabytes per second. It consumes 300 watts, is done in a 16 nanometer process, and has a 160 gigabytes per second NVLink. So remember, we have computation, we have memory, and the third thing is the communication. All three factors have to be balanced in order to achieve good performance. So this is very powerful, but even more exciting, just about a month ago Jensen released the newest architecture, called the Volta GPU. And let's see what is inside the Volta GPU. Just released less than a month ago, it has 15 teraflops of FP32, and what is new here, there are 120 tensor TFLOPS, specifically designed for deep learning. We'll cover later what the Tensor Core is and where this 120 comes from. And rather than 750 gigabytes per second, this year, with HBM2, they are using 900 gigabytes per second of memory bandwidth. Very exciting. And it is a 12 nanometer process with a die size of more than 800 square millimeters, a really large chip, supported by 300 gigabytes per second NVLink. So what's new in Volta? The most interesting thing for us, for deep learning, is this thing called the Tensor Core. So what is a Tensor Core? A Tensor Core is actually an instruction that can do a four-by-four matrix times a four-by-four matrix fused multiply-add, FMA, in this mixed precision operation, just in one single clock cycle. So let's dissect a little bit what this means. So mixed precision is exactly as we mentioned in the last chapter: we are using FP16 for the multiplication, but for the accumulation we are doing it with FP32.
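As a rough illustration of the arithmetic one Tensor Core operation performs, here is a small numpy emulation; this is just the math, not how the hardware is actually programmed, and the function name is made up. It computes D = A times B plus C on four-by-four tiles, with FP16 inputs and FP32 accumulation.

import numpy as np

def tensor_core_fma(a_fp16, b_fp16, c_fp32):
    # One 4x4x4 mixed-precision fused multiply-add:
    # FP16 inputs, products accumulated into an FP32 result (64 multiply-adds).
    assert a_fp16.shape == (4, 4) and b_fp16.shape == (4, 4) and c_fp32.shape == (4, 4)
    prod = a_fp16.astype(np.float32) @ b_fp16.astype(np.float32)
    return prod + c_fp32

a = np.random.randn(4, 4).astype(np.float16)
b = np.random.randn(4, 4).astype(np.float16)
c = np.zeros((4, 4), dtype=np.float32)
d = tensor_core_fma(a, b, c)   # a full GEMM would be tiled into many such 4x4 blocks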
That's where the mixed precision comes from. So how many operations is that? If it's four by four by four, that's 64 multiplications, all in just one single cycle. That's a 12x increase in the throughput of Volta compared with Pascal, which was released just last year. So this is the result for matrix multiplication at different sizes: the speedup of Volta over Pascal is roughly 3x for these matrix multiplications. What we care about more is not only matrix multiplication but actually running the deep neural nets, both for training and for inference. And for training on ResNet-50, by taking advantage of this Tensor Core, the V100 is 2.4x faster than the P100 using FP32. On the right hand side, it compares the inference speedup, given a 7 millisecond latency requirement: what is the number of images per second it can process, as a measure of throughput. Again, the V100, by taking advantage of the Tensor Core, is 3.7x faster than the P100. So this figure gives roughly an idea of what a Tensor Core is, what an integer unit is, what a floating point unit is. This whole figure is a single SM, a streaming multiprocessor. The SM is partitioned into four processing blocks: one, two, three, four, right? And in each block there are eight FP64 cores, and 16 FP32 and 16 INT32 units. And then there are two of the new mixed precision Tensor Cores, specifically designed for deep learning. And there are also the warp scheduler, dispatch unit, and register file, as before. So what is new here is the Tensor Core unit. So here is a figure comparing the recent generations of Nvidia GPUs, from Kepler to Maxwell to Pascal to Volta. We can see everything keeps improving. For example, the boost clock has been increased from about 800 MHz to 1.4 GHz. And starting from the Volta generation there are the Tensor Core units, which never existed before. And up through Maxwell the GPUs were using GDDR5, and starting with the Pascal GPUs, HBM, the high-bandwidth memory, came into place: 750 gigabytes per second here, 900 gigabytes per second there, compared with DDR3 at 30 gigabytes per second. And the memory size actually didn't increase by too much, and the power consumption also remains roughly the same. But given the increase in computation, the fact that you can fit it into a fixed power envelope is still an exciting thing. And the manufacturing process is actually improving from 28 nanometer to 16 nanometer, all the way to 12 nanometer. And the chip area is also increasing, to 800 square millimeters, which is really huge. So you may be interested in the comparison of the GPU with the TPU, right? How do they compare with each other? So in the original TPU paper, the TPU was actually designed roughly in the year 2015, and this is a comparison with the Pascal P40 GPU released in 2016. For the TPU, the power consumption is lower, and it has a larger on-chip memory of 24 megabytes, a really large on-chip SRAM managed by the software. And then both of them support INT8 operations, while for the inferences per second given a 10 millisecond latency bound, the comparison for the TPU is 1x, and for the P40 it's about 2x. So, just last week, at Google I/O, a new nuclear bomb landed on the Earth. That is the Google Cloud TPU. So now the TPU not only supports inference, but also supports training. There is very limited information we can get beyond this Google blog. Their Cloud TPU delivers up to 180 teraflops to train and run machine learning models.
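Before getting to the TPU pod, here is a quick back-of-the-envelope sketch, with made-up layer sizes and device numbers roughly in the range quoted above, of the point that compute and memory bandwidth both matter: comparing a layer's arithmetic intensity against the ratio of peak FLOPS to bandwidth tells you whether it is compute-bound or memory-bound.

# Hypothetical GPU figures, roughly in the range quoted above.
peak_flops = 15e12            # 15 TFLOPS of FP32
mem_bandwidth = 900e9         # 900 GB/s of HBM2

def bound(flops, bytes_moved):
    intensity = flops / bytes_moved          # FLOPs per byte of memory traffic
    ridge = peak_flops / mem_bandwidth       # machine balance point, ~17 FLOPs/byte
    return "compute-bound" if intensity > ridge else "memory-bound"

# Fully-connected layer, 4096x4096 FP16 weights, batch size 1: intensity ~1.
print(bound(2 * 4096 * 4096, 4096 * 4096 * 2))                     # memory-bound

# 3x3 conv, 256 -> 256 channels on a 56x56 map, batch 32: weights get reused a lot.
conv_flops = 2 * 32 * 56 * 56 * 256 * 256 * 3 * 3
conv_bytes = (256 * 256 * 3 * 3 + 2 * 32 * 56 * 56 * 256) * 2
print(bound(conv_flops, conv_bytes))                                # compute-bound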
And this is multiple Cloud TPUs making up a TPU pod, which is built with 64 second-generation TPUs and delivers up to 11.5 petaflops of machine learning acceleration. In the Google blog, they mentioned that one of their large scale translation models used to take a full day to train on 32 of the best commercially available GPUs, probably P40s or P100s, maybe. And now it trains to the same accuracy within just one afternoon, using just one eighth of a TPU pod, which is pretty exciting. Okay, so as a little wrap-up: we covered a lot of stuff. We mentioned the four-quadrant space of algorithm and hardware, inference and training. We covered the algorithms for inference, for example pruning and quantization, Winograd convolution, binary and ternary networks, and weight sharing. And then the hardware for efficient inference, for example the TPU, which takes advantage of INT8, 8-bit integers, and also my design of the EIE accelerator, which takes advantage of sparsity: anything multiplied by zero is zero, so don't store it and don't compute on it. And also the efficient algorithms for training, for example how we do parallelization, and the most recent research on mixed precision training, taking advantage of FP16 rather than FP32 to do training, which is roughly a four times saving in energy and a four times saving in area, and which doesn't really sacrifice the accuracy you get from the training. And also dense-sparse-dense training, using a better, sparse regularization, and also the teacher-student model: you have multiple teacher neural networks and a small student neural network, and you can distill the knowledge from the teacher networks into the student through a temperature. And finally we covered the hardware for efficient training and introduced two nuclear bombs: one is the Volta GPU, the other is the TPU version two, the Cloud TPU, and also the amazing Tensor Cores in the newest generation of Nvidia GPUs. And we also reviewed the progression of the recent Nvidia GPUs, from the Kepler K40, which is actually what we used in the beginning when I started my research, to the Maxwell M40, then Pascal, and finally the exciting Volta GPU. So every year there is a nuclear bomb in the spring. Okay, a little look ahead to the future. In the city of the future we can imagine there are a lot of AI applications: smart society, smart care, IoT devices, smart retail, for example Amazon Go, and also the smart home, a lot of scenarios. And this poses a lot of challenges on the hardware design, which requires low latency, privacy, mobility, and energy efficiency. You don't want your battery to drain very quickly. So it's both a challenging and a very exciting era for the co-design of both the machine learning, deep neural network model architectures and also the hardware architecture. So we have moved from the PC era to the mobile era, and now we are in the AI-first era, and I hope you are as excited as I am about this kind of brain-inspired cognitive computing research. Thank you for your attention, I'm glad to take questions. [applause] We have five minutes. Of course. - [Student] Can you commercialize the deep architecture? - The architecture, yeah, some of the ideas are pretty good. I think there's opportunity. Yeah. Yeah. The question is, what can we do to make the hardware better? Oh, right, the question is about the challenges and the opportunities for those small embedded devices running deep neural networks or AI algorithms in general.
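As an aside before the questions, to make the temperature in that teacher-student recap concrete, here is a minimal sketch of softening teacher outputs with a softmax temperature; the logits and the temperature value are made up purely for illustration.

import numpy as np

def softmax(logits, T=1.0):
    # Softmax with temperature T; a larger T gives softer probabilities.
    z = logits / T
    z = z - z.max()                 # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

teacher_logits = np.array([5.0, 2.0, 0.1])     # e.g. dog, cat, car (made-up values)
print(softmax(teacher_logits, T=1.0))          # hard-ish: dog ~0.95
print(softmax(teacher_logits, T=4.0))          # softened: dog ~0.57, cat ~0.27

# The student is then trained against these soft targets, e.g. with a cross-entropy
# between softmax(student_logits / T) and softmax(teacher_logits / T).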
Yeah, so to answer that: those are the algorithms I discussed in the beginning about inference. These are the techniques that can enable such inference, or AI, running on embedded devices, by having a smaller number of weights, fewer bits per weight, and also quantization and low rank approximation: a smaller matrix, same accuracy, even going to binary or ternary weights, having just two bits to do the computation rather than 16 or even 32 bits, and also the Winograd transformation. Those are the enabling algorithms for those low-power embedded devices. Okay, the question is, if it's a binary weight, the software developers may not be able to take advantage of it. There is a way to take advantage of binary weights. In one register there are 32 bits. Now you can think of it as 32-way parallelism: each bit is a single operation. So say previously we had 10 ops per second; now you get 320 ops per second. You can do these bitwise operations, for example XOR operations, so one register, one operation, becomes 32 operations. There is a paper called XNOR-Net; they, very impressively, implemented this on a Raspberry Pi using this feature to do real-time detection, very cool stuff. Yeah. Yeah, so the trade-off is always power, area, and performance. In general, all hardware designs have to take into account the performance, the power, and also the area. When machine learning comes in, there's a fourth figure of merit, which is the accuracy. What is the accuracy? And there is a fifth one, which is programmability: how general is your hardware? For example, if Google just wants to use it for AI and deep learning, it's totally fine to have a fully specialized architecture just for deep learning, supporting convolutions, multi-layer perceptrons, and long short-term memory; but for GPUs, you also want to have support for scientific computing or graphics, AR and VR. So that's a difference, first of all. And the TPU is basically an ASIC, right? It's a very fixed function, but you can still program it with coarse instructions, so the people from Google roughly designed those coarse-granularity instructions. For example, one instruction just loads a matrix, stores a matrix, does a convolution, or does a matrix multiplication. Those are coarse-grain instructions, and they have a software-managed memory, also called a scratchpad. It's different from a cache, where the hardware determines when to evict something from the cache; now, since you know the computation pattern, there's no need to do out-of-order execution or branch prediction, no such things. Everything is deterministic, so you can take advantage of that and maintain a fully software-managed scratchpad to reduce the data movement, and remember, data movement is the key for reducing the memory footprint and energy consumption. So, yeah. The Movidius and Nervana architectures I'm actually not quite familiar with, and I didn't prepare those slides, so I'll comment on them a little bit later. Oh, yeah, of course. Those can certainly be applied to low-power embedded devices. If you're interested, I can show you, whoops, some examples of my previous projects running deep neural nets. For example, on a drone, this is using an Nvidia TK1 mobile GPU to do real-time tracking and detection. This is me playing my nunchaku, filmed by a drone doing the detection and tracking. And also this FPGA running a deep neural network; it's pretty small, about this large, doing face alignment and detecting the eyes, the nose, and the mouth at a pretty high frame rate.
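Picking up the earlier point about binary weights and 32-way bit parallelism, here is a rough sketch of the bit-packing trick: with +1/-1 values packed 32 to a word, a dot product becomes an XNOR followed by a popcount, so one register-wide operation stands in for 32 multiply-adds. This is a generic illustration of the idea, not the implementation from that paper, and the helper names are made up.

import numpy as np

def pack_bits(signs):
    # Pack a length-32 vector of +1/-1 values into one 32-bit word (bit i set if +1).
    word = 0
    for i, s in enumerate(signs):
        if s > 0:
            word |= (1 << i)
    return word

def binary_dot(word_a, word_b, n=32):
    # Dot product of two packed +1/-1 vectors via XNOR plus popcount.
    xnor = ~(word_a ^ word_b) & 0xFFFFFFFF
    matches = bin(xnor).count("1")
    return 2 * matches - n           # matching bits contribute +1, the rest -1

a = np.sign(np.random.randn(32)).astype(int)
b = np.sign(np.random.randn(32)).astype(int)
assert binary_dot(pack_bits(a), pack_bits(b)) == int(a @ b)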
That FPGA consumes only three watts. And this is a project I did at Facebook, running deep neural nets on the mobile phone to do image classification; for example, it says this is a laptop, or you can feed it an image and it says it's a selfie, it has a person and a face, et cetera. So there's a lot of opportunity for embedded or mobile deployment of deep neural nets. No, there is a team doing that, but I probably cannot comment too much. There is a team at Google doing that sort of stuff, yeah. Okay, thanks, everyone. If you have any questions, feel free to drop me an e-mail.
Stanford_Computer_Vision
Lecture_4_Introduction_to_Neural_Networks.txt
[students murmuring] - Okay, so good afternoon everyone, let's get started. So hi, so for those of you who I haven't met yet, my name is Serena Yeung and I'm the third and final instructor for this class, and I'm also a PhD student in Fei-Fei's group. Okay, so today we're going to talk about backpropagation and neural networks, and so now we're really starting to get to some of the core material in this class. Before we begin, let's see, oh. So a few administrative details, so assignment one is due Thursday, April 20th, so a reminder, we shifted the date back by a little bit and it's going to be due 11:59 p.m. on Canvas. So you should start thinking about your projects, there are TA specialties listed on the Piazza website so if you have questions about a specific project topic you're thinking about, you can go and try and find the TAs that might be most relevant. And then also for Google Cloud, so all students are going to get $100 in credits to use for Google Cloud for their assignments and project, so you should be receiving an email for that this week, I think. A lot of you may have already, and then for those of you who haven't, they're going to come, should be by the end of this week. Okay so where we are, so far we've talked about how to define a classifier using a function f, parameterized by weights W, and this function f is going to take data x as input, and output a vector of scores for each of the classes that you want to classify. And so from here we can also define a loss function, so for example, the SVM loss function that we've talked about which basically quantifies how happy or unhappy we are with the scores that we've produced, right, and then we can use that to define a total loss term. So L here, which is a combination of this data term, combined with a regularization term that expresses how simple our model is, and we have a preference for simpler models, for better generalization. And so now we want to find the parameters W that correspond to our lowest loss, right? We want to minimize the loss function, and so to do that we want to find the gradient of L with respect to W. So last lecture we talked about how we can do this using optimization, and we're going to iteratively take steps in the direction of steepest descent, which is the negative of the gradient, in order to walk down this loss landscape and get to the point of lowest loss, right? And we saw how this gradient descent can basically take this trajectory, looking like this image on the right, getting to the bottom of your loss landscape. Oh! Okay, and so we also talked about different ways for computing a gradient, right? We can compute this numerically using finite difference approximation which is slow and approximate, but at the same time it's really easy to write out, you know you can always get the gradient this way. We also talked about how to use the analytic gradient and computing this is, it's fast and exact once you've gotten the expression for the analytic gradient, but at the same time you have to do all the math and the calculus to derive this, so it's also, you know, easy to make mistakes, right? So in practice what we want to do is we want to derive the analytic gradient and use this, but at the same time check our implementation using the numerical gradient to make sure that we've gotten all of our math right. So today we're going to talk about how to compute the analytic gradient for arbitrarily complex functions, using a framework that I'm going to call computational graphs. 
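Before moving on, since the advice a moment ago was to derive the analytic gradient but always check it numerically, here is a minimal sketch of such a gradient check on a made-up toy function, using the centered finite-difference formula; the function and names are just for illustration.

import numpy as np

def f(w):
    return np.sum(w ** 2)               # toy loss; the analytic gradient is 2w

def analytic_grad(w):
    return 2 * w

def numerical_grad(f, w, h=1e-5):
    grad = np.zeros_like(w)
    for i in range(w.size):
        old = w.flat[i]
        w.flat[i] = old + h
        fp = f(w)
        w.flat[i] = old - h
        fm = f(w)
        w.flat[i] = old                      # restore the entry
        grad.flat[i] = (fp - fm) / (2 * h)   # centered finite difference
    return grad

w = np.random.randn(3, 4)
max_err = np.abs(analytic_grad(w) - numerical_grad(f, w)).max()
print(max_err)                               # should be tiny, well below 1e-7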
And so basically what a computational graph is, is that we can use this kind of graph in order to represent any function, where the nodes of the graph are steps of computation that we go through. So for example, in this example, the linear classifier that we've talked about, the inputs here are x and W, right, and then this multiplication node represents the matrix multiplier, the multiplication of the parameters W with our data x that we have, outputting our vector of scores. And then we have another computational node which represents our hinge loss, right, computing our data loss term, Li. And we also have this regularization term at the bottom right, so this node which computes our regularization term, and then our total loss here at the end, L, is the sum of the regularization term and the data term. And the advantage is that once we can express a function using a computational graph, then we can use a technique that we call backpropagation which is going to recursively use the chain rule in order to compute the gradient with respect to every variable in the computational graph, and so we're going to see how this is done. And this becomes very useful when we start working with really complex functions, so for example, convolutional neural networks that we're going to talk about later in this class. We have here the input image at the top, we have our loss at the bottom, and the input has to go through many layers of transformations in order to get all the way down to the loss function. And this can get even crazier with things like, the, you know, like a neural turing machine, which is another kind of deep learning model, and in this case you can see that the computational graph for this is really insane, and especially, we end up, you know, unrolling this over time. It's basically completely impractical if you want to compute the gradients for any of these intermediate variables. Okay, so how does backpropagation work? So we're going to start off with a simple example, where again, our goal is that we have a function. So in this case, f of x, y, z equals x plus y times z, and we want to find the gradients of the output of the function with respect to any of the variables. So the first step, always, is we want to take our function f, and we want to represent it using a computational graph. Right, so here our computational graph is on the right, and you can see that we have our, first we have the plus node, so x plus y, and then we have this multiplication node, right, for the second computation that we're doing. And then, now we're going to do a forward pass of this network, so given the values of the variables that we have, so here, x equals negative two, y equals five and z equals negative four, I'm going to fill these all in in our computational graph, and then here we can compute an intermediate value, so x plus y gives three, and then finally we pass it through again, through the last node, the multiplication, to get our final node of f equals negative 12. So here we want to give every intermediate variable a name. So here I've called this intermediate variable after the plus node q, and we have q equals x plus y, and then f equals q times z, using this intermediate node. And I've also written out here, the gradients of q with respect to x and y, which are just one because of the addition, and then the gradients of f with respect to q and z, which is z and q respectively because of the multiplication rule. 
And so what we want to find, is we want to find the gradients of f with respect to x, y and z. So what backprop is, it's a recursive application of the chain rule, so we're going to start at the back, the very end of the computational graph, and then we're going to work our way backwards and compute all the gradients along the way. So here if we start at the very end, right, we want to compute the gradient of the output with respect to the last variable, which is just f. And so this gradient is just one, it's trivial. So now, moving backwards, we want the gradient with respect to z, right, and we know that df over dz is equal to q. So the value of q is just three, and so we have here, df over dz equals three. And so next if we want to do df over dq, what is the value of that? What is df over dq? So we have here, df over dq is equal to z, right, and the value of z is negative four. So here we have df over dq is equal to negative four. Okay, so now continuing to move backwards to the graph, we want to find df over dy, right, but here in this case, the gradient with respect to y, y is not connected directly to f, right? It's connected through an intermediate node of z, and so the way we're going to do this is we can leverage the chain rule which says that df over dy can be written as df over dq, times dq over dy, and so the intuition of this is that in order to get to find the effect of y on f, this is actually equivalent to if we take the effect of q times q on f, which we already know, right? df over dq is equal to negative four, and we compound it with the effect of y on q, dq over dy. So what's dq over dy equal to in this case? - [Student] One. - One, right. Exactly. So dq over dy is equal to one, which means, you know, if we change y by a little bit, q is going to change by approximately the same amount right, this is the effect, and so what this is doing is this is saying, well if I change y by a little bit, the effect of y on q is going to be one, and then the effect of q on f is going to be approximately a factor of negative four, right? So then we multiply these together and we get that the effect of y on f is going to be negative four. Okay, so now if we want to do the same thing for the gradient with respect to x, right, we can do the, we can follow the same procedure, and so what is this going to be? [students speaking away from microphone] - I heard the same. Yeah exactly, so in this case we want to, again, apply the chain rule, right? We know the effect of q on f is negative four, and here again, since we have also the same addition node, dq over dx is equal to one, again, we have negative four times one, right, and the gradient with respect to x is going to be negative four. Okay, so what we're doing is, in backprop, is we basically have all of these nodes in our computational graph, but each node is only aware of its immediate surroundings, right? So we have, at each node, we have the local inputs that are connected to this node, the values that are flowing into the node, and then we also have the output that is directly outputted from this node. So here our local inputs are x and y, and the output is z. And at this node we also know the local gradient, right, we can compute the gradient of z with respect to x, and the gradient of z with respect to y, and these are usually really simple operations, right? 
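Written out as a few lines of Python, the whole example f = (x + y) * z with x = -2, y = 5, z = -4 looks like this, forward pass first and then the backward chain-rule steps just described.

x, y, z = -2.0, 5.0, -4.0

# forward pass
q = x + y                      # q = 3
f = q * z                      # f = -12

# backward pass, from the end of the graph back to the inputs
df_df = 1.0
df_dz = q * df_df              # = 3
df_dq = z * df_df              # = -4
df_dx = 1.0 * df_dq            # dq/dx = 1, so -4
df_dy = 1.0 * df_dq            # dq/dy = 1, so -4
print(df_dx, df_dy, df_dz)     # -4.0 -4.0 3.0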
Each node is going to be something like the addition or the multiplication that we had in that earlier example, which is something where we can just write down the gradient, and we don't have to, you know, go through very complex calculus in order to find this. - [Student] Can you go back and explain why more in the last slide was different than planning the first part of it using just normal calculus? - Yeah, so basically if we go back, hold on, let me... So if we go back here, we could exactly write out, find all of these using just calculus, so we could say, you know, we want df over dx, right, and we can probably expand out this expression and see that it's just going to be z, but we can do this for, in this case, because it's simple, but we'll see examples later on where once this becomes a really complicated expression, you don't want to have to use calculus to derive, right, the gradient for something, for a super-complicated expression, and instead, if you use this formalism and you break it down into these computational nodes, then you can only ever work with gradients of very simple computations, right, at the level of, you know, additions, multiplications, exponentials, things as simple as you want them, and then you just use the chain rule to multiply all these together, and get your, the value of your gradient without having to ever derive the entire expression. Does that make sense? [student murmuring] Okay, so we'll see an example of this later. And so, was there another question, yeah? [student speaking away from microphone] - [Student] What's the negative four next to the z representing? - Negative, okay yeah, so the negative four, these were the, the green values on top were all the values of the function as we passed it forward through the computational graph, right? So we said up here that x is equal to negative two, y is equal to five, and z equals negative four, so we filled in all of these values, and then we just wanted to compute the value of this function. Right, so we said this value of q is going to be x plus y, it's going to be negative two plus five, it is going to be three, and we have z is equal to negative four so we fill that in here, and then we multiplied q and z together, negative four times three in order to get the final value of f, right? And then the red values underneath were as we were filling in the gradients as we were working backwards. Okay. Okay, so right, so we said that, you know, we have these local, these nodes, and each node basically gets its local inputs coming in and the output that it sees directly passing on to the next node, and we also have these local gradients that we computed, right, the gradient of the immediate output of the node with respect to the inputs coming in. And so what happens during backprop is we have these, we'll start from the back of the graph, right, and then we work our way from the end all the way back to the beginning, and when we reach each node, at each node we have the upstream gradients coming back, right, with respect to the immediate output of the node. So by the time we reach this node in backprop, we've already computed the gradient of our final loss l, with respect to z, right? And so now what we want to find next is we want to find the gradients with respect to just before the node, to the values of x and y. 
And so as we saw earlier, we do this using the chain rule, right, we have from the chain rule, that the gradient of this loss function with respect to x is going to be the gradient with respect to z times, compounded by this gradient, local gradient of z with respect to x. Right, so in the chain rule we always take this upstream gradient coming down, and we multiply it by the local gradient in order to get the gradient with respect to the input. - [Student] So, sorry, is it, it's different because this would never work to get a general formula into the, or general symbolic formula for the gradient. It only works with instantaneous values, where you like. [student coughing] Or passing a little constant value as a symbolic. - So the question is whether this only works because we're working with the current values of the function, and so it works, right, given the current values of the function that we plug in, but we can write an expression for this, still in terms of the variables, right? So we'll see that gradient of L with respect to z is going to be some expression, and gradient of z with respect to x is going to be another expression, right? But we plug in these, we plug in the values of these numbers at the time in order to get the value of the gradient with respect to x. So what you could do is you could recursively plug in all of these expressions, right? Gradient with respect, z with respect to x is going to be a simple, simple expression, right? So in this case, if we have a multiplication node, gradient of z with respect to x is just going to be y, right, we know that, but the gradient of L with respect to z, this is probably a complex part of the graph in itself, right, so here's where we want to just, in this case, have this numerical, right? So as you said, basically this is going to be just a number coming down, right, a value, and then we just multiply it with the expression that we have for the local gradient. And I think this will be more clear when we go through a more complicated example in a few slides. Okay, so now the gradient of L with respect to y, we have exactly the same idea, where again, we use the chain rule, we have gradient of L with respect to z, times the gradient of z with respect to y, right, we use the chain rule, multiply these together and get our gradient. And then once we have these, we'll pass these on to the node directly before, or connected to this node. And so the main thing to take away from this is that at each node we just want to have our local gradient that we compute, just keep track of this, and then during backprop as we're receiving, you know, numerical values of gradients coming from upstream, we just take what that is, multiply it by the local gradient, and then this is what we then send back to the connected nodes, the next nodes going backwards, without having to care about anything else besides these immediate surroundings. So now we're going to go through another example, this time a little bit more complex, so we can see more why backprop is so useful. So in this case, our function is f of w and x, which is equal to one over one plus e to the negative of w-zero times x-zero plus w-one x-one, plus w-two, right? So again, the first step always is we want to write this out as a computational graph. So in this case we can see that in this graph, right, first we multiply together the w and x terms that we have, w-zero with x-zero, w-one with x-one, and w-two, then we add all of these together, right? 
Then we do, scale it by negative one, we take the exponential, we add one, and then finally we do one over this whole term. And then here I've also filled in values of these, so let's say given values that we have for the ws and xs, right, we can make a forward pass and basically compute what the value is at every stage of the computation. And here I've also written down here at the bottom the values, the expressions for some derivatives that are going to be helpful later on, so same as we did before with the simple example. Okay, so now then we're going to do backprop through here, right, so again, we're going to start at the very end of the graph, and so here again the gradient of the output with respect to the last variable is just one, it's just trivial, and so now moving backwards one step, right? So what's the gradient with respect to the input just before one over x? Well, so in this case, we know that the upstream gradient that we have coming down, right, is this red one, right? This is the upstream gradient that we have flowing down, and then now we need to find the local gradient, right, and the local gradient of this node, this node is one over x, right, so we have f of x equals one over x here in red, and the local gradient of this df over dx is equal to negative one over x-squared, right? So here we're going to take negative one over x-squared, and plug in the value of x that we had during this forward pass, 1.37, and so our final gradient with respect to this variable is going to be negative one over 1.37 squared times one equals negative 0.53. So moving back to the next node, we're going to go through the exact same process, right? So here, the gradient flowing from upstream is going to be negative 0.53, right, and here the local gradient, the node here is a plus one, and so now looking at our reference of derivatives at the bottom, we have that for a constant plus x, the local gradient is just one, right? So what's the gradient with respect to this variable using the chain rule? So it's going to be the upstream gradient of negative 0.53 times our local gradient of one, which is equal to negative 0.53. So let's keep moving backwards one more step. So here we have the exponential, right? So what's the upstream gradient coming down? [student speaking away from microphone] Right, so the upstream gradient is negative 0.53, what's the local gradient here? It's going to be the local gradient of e to the x, right? This is an exponential node, and so our chain rule is going to tell us that our gradient is going to be negative 0.53 times e to the power of x, which in this case is negative one, from our forward pass, and this is going to give us our final gradient of negative 0.2. Okay, so now one more node here, the next node is, that we reach, is going to be a multiplication with negative one, right? So here, what's the upstream gradient coming down? - [Student] Negative 0.2? - [Serena] Negative 0.2, right, and what's going to be the local gradient, can look at the reference sheet. It's going to be, what was it? I think I heard it. - [Student] That's minus one? - It's going to be minus one, exactly, yeah, because our local gradient says it's going to be, df over dx is a, right, and the value of a that we scaled x by is negative one here. So we have here that the gradient is negative one times negative 0.2, and so our gradient is 0.2. Okay, so now we've reached an addition node, and so in this case we have these two branches both connected to it, right? 
So what's the upstream gradient here? It's going to be 0.2, right, just as everything else, and here now the gradient with respect to each of these branches, it's an addition, right, and we saw from before in our simple example that when we have an addition node, the gradient with respect to each of the inputs to the addition is just going to be one, right? So here, our local gradient for looking at our top stream is going to be one times the upstream gradient of 0.2, which is going to give a total gradient of 0.2, right? And then we, for our bottom branch we'd do the same thing, right, our upstream gradient is 0.2, our local gradient is one again, and the total gradient is 0.2. So is everything clear about this? Okay. So we have a few more gradients to fill out, so moving back now we've reached w-zero and x-zero, and so here we have a multiplication node, right, so we saw the multiplication node from before, it just, the gradient with respect to one of the inputs just is the value of the other input. And so in this case, what's the gradient with respect to w-zero? - [Student] Minus 0.2. - Minus, I'm hearing minus 0.2, exactly. Yeah, so with respect to w-zero, we have our upstream gradient, 0.2, right, times our, this is the bottom one, times our value of x, which is negative one, we get negative 0.2 and we can do the same thing for our gradient with respect to x-zero. It's going to be 0.2 times the value of w-zero which is two, and we get 0.4. Okay, so here we've filled out most of these gradients, and so there was the question earlier about why this is simpler than just computing, deriving the analytic gradient, the expression with respect to any of these variables, right? And so you can see here, all we ever dealt with was expressions for local gradients that we had to write out, so once we had these expressions for local gradients, all we did was plug in the values for each of these that we have, and use the chain rule to numerically multiply this all the way backwards and get the gradients with respect to all of the variables. And so, you know, we can also fill out the gradients with respect to w-one and x-one here in exactly the same way, and so one thing that I want to note is that right when we're creating these computational graphs, we can define the computational nodes at any granularity that we want to. So in this case, we broke it down into the absolute simplest that we could, right, we broke it down into additions and multiplications, you know, it basically can't get any simpler than that, but in practice, right, we can group some of these nodes together into more complex nodes if we want. As long as we're able to write down the local gradient for that node, right? And so as an example, if we look at a sigmoid function, so I've defined the sigmoid function in the upper-right here, of a sigmoid of x is equal to one over one plus e to the negative x, and this is something that's a really common function that you'll see a lot in the rest of this class, and we can compute the gradient for this, we can write it out, and if we do actually go through the math of doing this analytically, we can get a nice expression at the end. So in this case it's equal to one minus sigma of x, so the output of this function times sigma of x, right? 
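As a sanity check on this example, here is a short sketch that runs the forward pass of f(w, x) = 1 / (1 + e^-(w0 x0 + w1 x1 + w2)) with the same numbers, backpropagates through every small node, and also evaluates the grouped sigmoid gate shortcut (1 - sigma) * sigma to show the two agree.

import numpy as np

w0, x0, w1, x1, w2 = 2.0, -1.0, -3.0, -2.0, -3.0

# forward pass through the small nodes
dot  = w0 * x0 + w1 * x1 + w2        # = 1.0
neg  = -dot
expv = np.exp(neg)                   # ~ 0.37
den  = 1.0 + expv                    # ~ 1.37
f    = 1.0 / den                     # ~ 0.73

# backward pass: local gradient times the upstream gradient at every node
dden  = -1.0 / den ** 2              # d(1/x)/dx          -> ~ -0.53
dexpv = 1.0 * dden                   # d(x + 1)/dx = 1    -> ~ -0.53
dneg  = np.exp(neg) * dexpv          # d(e^x)/dx = e^x    -> ~ -0.20
ddot  = -1.0 * dneg                  # d(-x)/dx = -1      -> ~  0.20

dw0, dx0 = x0 * ddot, w0 * ddot      # multiply gate      -> ~ -0.2 and 0.4
dw1, dx1 = x1 * ddot, w1 * ddot      #                    -> ~ -0.4 and -0.6
dw2      = 1.0 * ddot                # add gate           -> ~  0.2

# the grouped sigmoid gate gives the same gradient on the dot product
sigma = 1.0 / (1.0 + np.exp(-dot))
print(ddot, (1 - sigma) * sigma)     # both ~ 0.2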
And so in cases where we have something like this, we could just take all the computations that we had in our graph that made up this sigmoid, and we could just replace it with one big node that's a sigmoid, right, because we do know the local gradient for this gate, it's this expression, d of the sigmoid of x over dx, right? So basically the important thing here is that you can, group any nodes that you want to make any sorts of a little bit more complex nodes, as long as you can write down the local gradient for this. And so all this is is basically a trade-off between, you know, how much math that you want to do in order to get a more, kind of concise and simpler graph, right, versus how simple you want each of your gradients to be, right? And then you can write out as complex of a computational graph that you want. Yeah, question? - [Student] This is a question on the graph itself, is there a reason that the first two multiplication nodes and the weights are not connected to a single addition node? - So they could also be connected into a single addition node, so the question was, is there a reason why w-zero and x-zero are not connected with w-two? All of these additions just connected together, and yeah, so the reason, the answer is that you can do that if you want, and in practice, maybe you would actually want to do that because this is still a very simple node, right? So in this case I just wrote this out into as simple as possible, where each node only had up to two inputs, but yeah, you could definitely do that. Any other questions about this? Okay, so the one thing that I really like about thinking about this like a computational graph is that I feel very comforted, right, like anytime I have to take a gradient, find gradients of something, even if the expression that I want to compute gradients of is really hairy, and really scary, you know, whether it's something like this sigmoid or something worse, I know that, you know, I could derive this if I want to, but really, if I just sit down and write it out in terms of a computational graph, I can go as simple as I need to to always be able to apply backprop and the chain rule, and be able to compute all the gradients that I need. And so this is something that you guys should think about when you're doing your homeworks, as basically, you know, anytime you're having trouble finding gradients of something just think about it as a computational graph, break it down into all of these parts, and then use the chain rule. Okay, and so, you know, so we talked about how we could group these set of nodes together into a sigmoid gate, and just to confirm, like, that this is actually exactly equivalent, we can plug this in, right? So we have that our input here to the sigmoid gate is going to be one, in green, and then we have that the output is going to be here, 0.73, right, and this'll work out if you plug it in to the sigmoid function. And so now if we want to do, if we want to take the gradient, and we want to treat this entire sigmoid as one node, now what we should do is we need to use this local gradient that we've derived up here, right? One minus sigmoid of x times the sigmoid of x. 
So if we plug this in, and here we know that the value of sigmoid of x was 0.73, so if we plug this value in we'll see that this, the value of this gradient is equal to 0.2, right, and so the value of this local gradient is 0.2, we multiply it by the x upstream gradient which is one, and we're going to get out exactly the same value of the gradient with respect to before the sigmoid gate, as if we broke it down into all of the smaller computations. Okay, and so as we're looking at what's happening, right, as we're taking these gradients going backwards through our computational graph, there's some patterns that you'll notice where there's some intuitive interpretation that we can give these, right? So we saw that the add gate is a gradient distributor right, when we passed through this addition gate here, which had two branches coming out of it, it took the gradient, the upstream gradient and it just distributed it, passed the exact same thing to both of the branches that were connected. So here's a couple more that we can think about. So what's a max gate look like? So we have a max gate here at the bottom, right, where the input's coming in are z and w, z has a value of two, w has a value of negative one, and then we took the max of this, which is two, right, and so we pass this down into the remainder of our computational graph. So now if we're taking the gradients with respect to this, the upstream gradient is, let's say two coming back, right, and what does this local gradient look like? So anyone, yes? - [Student] It'll be zero for one, and one for the other? - Right. [student speaking away from microphone] Exactly, so the answer that was given is that z will have a gradient of two, w will have a value, a gradient of zero, and so one of these is going to get the full value of the gradient just passed back, and routed to that variable, and then the other one will have a gradient of zero, and so, so we can think of this as kind of a gradient router, right, so, whereas the addition node passed back the same gradient to both branches coming in, the max gate will just take the gradient and route it to one of the branches, and this makes sense because if we look at our forward pass, what's happening is that only the value that was the maximum got passed down to the rest of the computational graph, right? So it's the only value that actually affected our function computation at the end, and so it makes sense that when we're passing our gradients back, we just want to adjust what, you know, flow it through that branch of the computation. Okay, and so another one, what's a multiplication gate, which we saw earlier, is there any interpretation of this? [student speaking away from microphone] Okay, so the answer that was given is that the local gradient is basically just the value of the other variable. Yeah, so that's exactly right. So we can think of this as a gradient switcher, right? A switcher, and I guess a scaler, where we take the upstream gradient and we scale it by the value of the other branch. Okay, and so one other thing to note is that when we have a place where one node is connected to multiple nodes, the gradients add up at this node, right? 
So at these branches, using the multivariate chain rule, we're just going to take the value of the upstream gradient coming back from each of these nodes, and we'll add these together to get the total upstream gradient that's flowing back into this node, and you can see this from the multivariate chain rule and also thinking about this, you can think about this that if you're going to change this node a little bit, it's going to affect both of these connected nodes in the forward pass, right, when you're making your forward pass through the graph. And so then when you're doing backprop, right, then now the, both of these gradients coming back are going to affect this node, right, and so that's how we're going to sum these up to be the total upstream gradient flowing back into this node. Okay, so any questions about backprop, going through these forward and backward passes? - [Student] So we haven't did anything to actually update the weights. [speaking away from microphone] - Right, so the question is, we haven't done anything yet to update the values of these weights, we've only found the gradients with respect to the variables, that's exactly right. So what we've talked about so far in this lecture is how to compute gradients with respect to any variables in our function, right, and then once we have these we can just apply everything we learned in the optimization lecture, last lecture, right? So given the gradient, we now take a step in the direction of the gradient in order to update our weight, our parameters, right? So you can just take this entire framework that we learned about last lecture for optimization, and what we've done here is just learn how to compute the gradients we need for arbitrarily complex functions, right, and so this is going to be useful when we talk about complex functions like neural networks later on. Yeah? - [Student] Do you mind writing out the, all the variate, so you could help explain this slide a little better? - Yeah, so I can write this maybe on the board. Right, so basically if we're going to have, let's see, if we're going to have the gradient of f with respect to some variable x, right, and let's say it's connected through variables, let's see, i, we can basically... Right, so this is basically saying that if x is connected to these multiple elements, right, which in this case, different q-is, then the chain rule is taking all, it's going to take the effect of each of these intermediate variables, right, on our final output f, and then compound each one with the local effect of our variable x on that intermediate value, right? So yeah, it's basically just summing all these up together. Okay, so now that we've, you know, done all these examples in the scalar case, we're going to look at what happens when we have vectors, right? So now if our variables x, y and z, instead of just being numbers, we have vectors for these. And so everything stays exactly the same, the entire flow, the only difference is that now our gradients are going to be Jacobian matrices, right, so these are now going to be matrices containing the derivative of each element of, for example z with respect to each element of x. 
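Looping back to the point that gradients add up where one variable feeds several nodes, here is a tiny made-up example of that multivariate chain rule.

x = 3.0

# forward pass: x fans out into two branches that are then added
a = 2.0 * x              # branch 1
b = x ** 2               # branch 2
f = a + b                # f = 2x + x^2

# backward pass: the add gate sends a gradient of 1 to both branches,
# and the two branch gradients on x sum up
df_da, df_db = 1.0, 1.0
dx_from_a = 2.0 * df_da              # = 2
dx_from_b = 2.0 * x * df_db          # = 6
df_dx = dx_from_a + dx_from_b        # = 8, matching d/dx (2x + x^2) = 2 + 2x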
Okay, and so to, you know, so give an example of something where this is happening, right, let's say that we have our input is going to now be a vector, so let's say we have a 4096-dimensional input vector, and this is kind of a common size that you might see in convolutional neural networks later on, and our node is going to be an element-wise maximum, right? So we have f of x is equal to the maximum of x compared with zero element-wise, and then our output is going to be also a 4096-dimensional vector. Okay, so in this case, what's the size of our Jacobian matrix? Remember I said earlier, the Jacobian matrix is going to be, like each row is, it's going to be partial derivatives, a matrix of partial derivatives of each dimension of the output with respect to each dimension of the input. Okay, so the answer I heard was 4,096 squared, and that's, yeah, that's correct. So this is pretty large, right, 4,096 by 4,096 and in practice this is going to be even larger because we're going to work with many batches of, you know, of, for example, 100 inputs at the same time, right, and we'll put all of these through our node at the same time to be more efficient, and so this is going to scale this by 100, and in practice our Jacobian's actually going to turn out to be something like 409,000 by 409,000 right, so this is really huge, and basically completely impractical to work with. So in practice though, we don't actually need to compute this huge Jacobian most of the time, and so why is that, like, what does this Jacobian matrix look like? If we think about what's happening here, where we're taking this element-wise maximum, and we think about what are each of the partial derivatives, right, which dimension of the inputs affect which dimensions of the output? What sort of structure can we see in our Jacobian matrix? [student speaking away from microphone] Okay, so I heard that it's diagonal, right, exactly. So because this is element-wise, right, each element of the input, say the first dimension, only affects that corresponding element in the output, right? And so because of that our Jacobian matrix, which is just going to be a diagonal matrix. And so in practice then, we don't actually have to write out and formulate this entire Jacobian, we can just know the effect of x on the output, right, and then we can just use these values, right, and fill it in as we're computing the gradient. Okay, so now we're going to go through a more concrete vectorized example of a computational graph. Right, so let's look at a case where we have the function f of x and W is equal to, basically the L-two of W multiplied by x, and so in this case we're going to say x is n-dimensional and W is n by n. Right, so again our first step, writing out the computational graph, right? We have W multiplied by x, and then followed by, I'm just going to call this L-two. And so now let's also fill out some values for this, so we can see that, you know, let's say have W be this two by two matrix, and x is going to be this two-dimensional vector, right? And so we can say, label again our intermediate nodes. So our intermediate node after the multiplication it's going to be q, we have q equals W times x, which we can write out element-wise this way, where the first element is just W-one-one times x-one plus W-one-two times x-two and so on, and then we can now express f in relation to q, right? So looking at the second node we have f of q is equal to the L-two norm of q, which is equal to q-one squared plus q-two squared. 
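For this element-wise maximum node, here is a sketch of what is done in practice: instead of forming the huge diagonal Jacobian, the backward pass just masks the upstream gradient wherever the input was not positive. The explicit Jacobian is built here only to show the two agree.

import numpy as np

x = np.random.randn(4096)
out = np.maximum(0, x)                   # forward: element-wise max with zero

upstream = np.random.randn(4096)         # dL/d(out) arriving from later nodes
dx = upstream * (x > 0)                  # backward: the diagonal Jacobian is just a mask

# equivalent but wasteful: the explicit 4096 x 4096 Jacobian, almost all zeros
J = np.diag((x > 0).astype(float))
assert np.allclose(dx, J @ upstream)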
Okay, so we filled this in, right, we get q and then we get our final output. Okay, so now let's do backprop through this, right? So again, this is always the first step, we have the gradient with respect to our output is just one. Okay, so now let's move back one node, so now we want to find the gradient with respect to q, right, our intermediate variable before the L-two. And so q is a two-dimensional vector, and what we want to do is we want to find how each element of q affects our final value of f, right, and so if we look at this expression that we've written out for f here at the bottom, we can see that the gradient of f with respect to a specific q-i, let's say q-one, is just going to be two times q-i, right? This is just taking this derivative here, and so we have this expression for, with respect to each element of q-i, we could also, you know, write this out in vector form if we want to, it's just going to be two times our vector of q, right, if we want to write this out in vector form, and so what we get is that our gradient is 0.44, and 0.52, this vector, right? And so you can see that it just took q and it scaled it by two, right? Each element is just multiplied by two. So the gradient of a vector is always going to be the same size as the original vector, and each element of this gradient is going to, it means how much of this particular element affects our final output of the function. Okay, so now let's move one step backwards, right, what's the gradient with respect to W? And so here again we want to use the same concept of trying to apply the chain rule, right, so we want to compute our local gradient of q with respect to W, and so let's look at this again element-wise, and if we do that, let's see what's the effect of each q, right, each element of q with respect to each element of W, and so this is going to be the Jacobian that we talked about earlier, and if we look at this in this multiplication, q is equal to W times x, right, what's the derivative, or the gradient of the first element of q, so our first element up top, with respect to W-one-one? So q-one with respect to W-one-one? What's that value? X-one, exactly. Yeah, so we know that this is x-one, and we can write this out more generally of the gradient of q-k with respect to W-i,j is equal to X-j. And then now if we want to find the gradient with respect to, of f, with respect to each W-i,j. So looking at these derivatives now, we can use this chain rule that we talked earlier where we basically compound df over dq-k for each element of q with dq-k over W-i,j for each element of W-i,j, right? So we find the effect of each element of W on each element of q, and sum this across all q. And so if you write this out, this is going to give this expression of two times q-i times x-j. Okay, and so filling this out then we get this gradient with respect to W, and so again we can compute this each element-wise, or we can also look at this expression that we've derived and write it out in vectorized form, right? So okay, and remember, the important thing is always to check the gradient with respect to a variable should have the same shape as the variable, and something, so this is something really useful in practice to sanity check, right, like once you've computed what your gradient should be, check that this is the same shape as your variable, because again, the element, each element of your gradient is quantifying how much that element is contributing to your, is affecting your final output. Yeah? 
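Putting this vectorized example into a short numpy sketch with the same numbers (it also includes the gradient with respect to x, which is derived just below): the forward pass for f equal to the squared L2 norm of W x, then dq = 2q, dW with entries 2 q_i x_j, and dx = 2 W^T q, with a check that each gradient has the same shape as its variable.

import numpy as np

W = np.array([[0.1, 0.5],
              [-0.3, 0.8]])
x = np.array([0.2, 0.4])

# forward pass
q = W.dot(x)                  # [0.22, 0.26]
f = np.sum(q ** 2)            # ~ 0.116

# backward pass
dq = 2.0 * q                  # [0.44, 0.52]
dW = np.outer(dq, x)          # dW[i, j] = 2 q_i x_j, same shape as W
dx = W.T.dot(dq)              # dx_i = sum_k 2 q_k W[k, i], same shape as x

assert dW.shape == W.shape and dx.shape == x.shape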
[student speaking away from microphone] The both sides, oh the both sides one is an indicator function, so this is saying that it's just one if k equals i. Okay, so let's see, so we've done that, and so now just see, one more example. Now our last thing we need to find is the gradient with respect to q-I. So here if we compute the partial derivatives we can see that dq-k over dx-i is equal to W-k,i, right, using the same way as we did it for W, and then again we can just use the chain rule and get the total expression for that, right? And so this is going to be the gradient with respect to x, again, of the same shape as x, and we can also write this out in vectorized form if we want. Okay, so any questions about this, yeah? [student speaking away from microphone] So we are computing the Jacobian, so let me go back here, right, so if we're doing, so right, so we have these partial derivatives of q-k with respect to x-i, right, and these are forming your, the entries of your Jacobian, right? And so in practice what we're going to do is we basically take that, and you're going to see it up there in the chain rule, so the vectorized expression of gradient with respect to x, right, this is going to have the Jacobian here which is this transposed value here, so you can write it out in vectorized form. [student speaking away from microphone] So well, so in this case the matrix is going to be the same size as W right, so it's not actually a large matrix in this case, right? Okay, so the way that we've been thinking about this is like a really modularized implementation, right, where in our computational graph, right, we look at each node locally and we compute the local gradients and chain them with upstream gradients coming down, and so you can think of this as basically a forward and a backwards API, right? In the forward pass we implement the, you know, a function computing the output of this node, and then in the backwards pass we compute the gradient. And so when we actually implement this in code, we're going to do this in exactly the same way. So we can basically think about, for each gate, right, if we implement a forward function and a backward function, where the backward function is computing the chain rule, then if we have our entire graph, we can just make a forward pass through the entire graph by iterating through all the nodes in the graph, all the gates. Here I'm going to use the word gate and node, kind of interchangeably, we can iterate through all of these gates and just call forward on each of the gates, right? And we just want to do this in topologically sorted order, so we process all of the inputs coming in to a node before we process that node. And then going backwards, we're just going to then go through all of the gates in this reverse sorted order, and then call backwards on each of these gates. Okay, and so if we look at then the implementation for our particular gates, so for example, this MultiplyGate here, we want to implement the forward pass, right, so it gets x and y as inputs, and returns the value of z, and then when we go backwards, right, we get as input dz, which is our upstream gradient, and we want to output the gradients on the input's x and y to pass down, right? 
So we're going to output dx and dy, and so in this case, in this example, everything is back to the scalar case here, and so if we look at this in the forward pass, one thing that's important is that we need to, we should cache the values of the forward pass, right, because we end up using this in the backward pass a lot of the time. So here in the forward pass, we want to cache the values of x and y, right, and in the backward pass, using the chain rule, we're going to, remember, take the value of the upstream gradient and scale it by the value of the other branch, right, and so we'll keep, for dx we'll take our value of self.y that we kept, and multiply it by dz coming down, and same for dy. Okay, so if you look at a lot of deep-learning frameworks and libraries you'll see that they exactly follow this kind of modularization, right? So for example, Caffe is a popular deep learning framework, and you'll see, if you go look through the Caffe source code you'll get to some directory that says layers, and in layers, which are basically computational nodes, usually layers might be slightly more, you know, some of these more complex computational nodes like the sigmoid that we talked about earlier, you'll see, basically just a whole list of all different kinds of computational nodes, right? So you might have the sigmoid, and I know there might be here, there's like a convolution is one, there's an Argmax is another layer, you'll have all of these layers and if you dig in to each of them, they're just exactly implementing a forward pass and a backward pass, and then all of these are called when we do forward and backward pass through the entire network that we formed, and so our network is just basically going to be stacking up all of these, the different layers that we choose to use in the network. So for example, if we look at a specific one, in this case a sigmoid layer, you'll see that in the sigmoid layer, right, we've talked about the sigmoid function, you'll see that there's a forward pass which basically computes exactly the sigmoid expression, and then a backward pass, right, where it is taking as input something, basically a top_diff, which is our upstream gradient in this case, and multiplying it by a local gradient that we compute. So in assignment one you'll get practice with this kind of, this computational graph way of thinking where, you know, you're going to be writing your SVM and Softmax classes, and taking the gradients of these. And so again, remember always you want to first step, represent it as a computational graph, right? Figure out what are all the computations that you did leading up to the output, and then when you, when it's time to do your backward pass, just take the gradient with respect to each of these intermediate variables that you've defined in your computational graph, and use the chain rule to link them all together. Okay, so summary of what we've talked about so far. When we get down to, you know, working with neural networks, these are going to be really large and complex, so it's going to be impractical to write down the gradient formula by hand for all your parameters. So in order to get these gradients, right, we talked about how, what we should use is backpropagation, right, and this is kind of one of the core techniques of, you know, neural networks, is basically using backpropagation to get your gradients, right? 
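A minimal sketch of that multiply gate, following the forward/backward API just described; this is only an illustration, not any particular framework's class.

```python
class MultiplyGate:
    def forward(self, x, y):
        z = x * y
        self.x, self.y = x, y      # cache the inputs; the backward pass needs them
        return z

    def backward(self, dz):
        # Chain rule: local gradient of z = x*y with respect to each input,
        # scaled by the upstream gradient dz.
        dx = self.y * dz
        dy = self.x * dz
        return dx, dy
```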
And so this is a recursive application of the chain rule where we have this computational graph, and we start at the back and we go backwards through it to compute the gradients with respect to all of the intermediate variables, which are your inputs, your parameters, and everything else in the middle. And we've also talked about how, in this implementation and this graph structure, each of these nodes can be seen as implementing a forward and backwards API. And so in the forward pass we want to compute the result of the operation, and we want to save any intermediate values that we might want to use later in our gradient computation, and then in the backwards pass we apply this chain rule and we take this upstream gradient, multiply it with our local gradient to compute the gradient with respect to the inputs of the node, and we pass this down to the nodes that are connected next. Okay, so now finally we're going to talk about neural networks. All right, so really, people draw a lot of analogies between neural networks and the brain, and different types of biological inspirations, and we'll get to that in a little bit, but first let's talk about it just looking at it as a function, as a class of functions, without all of the brain stuff. So, so far we've worked a lot with this linear score function, right? f equals W times x, and so we've been using this as a running example of a function that we want to optimize. So instead of using this single linear transformation, if we want a neural network we can, in the simplest form, just stack two of these together, right? Just a linear transformation on top of another one in order to get a two-layer neural network. And so what this looks like is first we have a matrix multiply of W-one with x, then we get this intermediate variable, and we apply a non-linear function: a max of zero with the output of this linear layer. And it's really important to have these non-linearities in place, which we'll talk about more later, because otherwise if you just stack linear layers on top of each other, they're just going to collapse to a single linear function. Okay, so we have our first linear layer and then we have this non-linearity, and then on top of this we'll add another linear layer. And then from here, finally, we can get our score function, our output vector of scores. So basically, more broadly speaking, neural networks are a class of functions where we have simpler functions that are stacked on top of each other, and we stack them in a hierarchical way in order to make up a more complex non-linear function, and so this is the idea of having multiple stages of hierarchical computation. And so the main way that we do this is by taking something like this matrix multiply, this linear layer, and we just stack multiple of these on top of each other with non-linear functions in between. And so one thing that this can help solve is, if we remember back to this linear score function that we were talking about, remember we discussed earlier how each row of our weight matrix W was something like a template.
It was a template that sort of expressed, you know, what we're looking for in the input for a specific class, right, so for example, you know, the car template looks something like this kind of fuzzy red car, and we were looking for this in the input to compute the score for the car class. And we talked about one of the problems with this is that there's only one template, right? There's this red car, whereas in practice, we actually have multiple modes, right? We might want, we're looking for, you know, a red car, there's also a yellow car, like all of these are different kinds of cars, and so what this kind of multiple layer network lets you do is now, you know, each of this intermediate variable h, right, W-one can still be these kinds of templates, but now you have all of these scores for these templates in h, and we can have another layer on top that's combining these together, right? So we can say that actually my car class should be, you know, connected to, we're looking for both red cars as well as yellow cars, right, because we have this matrix W-two which is now a weighting of all of our vector in h. Okay, any questions about this? Yeah? [student speaking away from microphone] Yeah, so there's a lot of ways, so there's a lot of different non-linear functions that you can choose from, and we'll talk later on in a later lecture about all the different kinds of non-linearities that you might want to use. - [Student] For the pictures in the slide, so, on the bottom row you have images of your vector W-one weight, and so maybe you would have images of another vector W-two? - So W-one, because it's directly connected to the input x, this is what's like, really interpretable, because you can formulate all of these templates. W-two, so h is going to be a score of how much of each template you solve, for example, all right, so it might be like you have a, you know, like a, I don't know, two for the red car, and like, one for the yellow car or something like that. - [Student] Oh, okay, so instead of W-one being just 10, like, you would have a left-facing horse and a right-facing horse, and they'd both be included-- - Exactly, so the question is basically whether in W-one you could have both left-facing horse and right-facing horse, right, and so yeah, exactly. So now W-one can be many different kinds of templates right? They're not, and then W-two, now we can, like basically it's a weighted sum of all of these templates. So now it allows you to weight together multiple templates in order to get the final score for a particular class. - [Student] So if you're processing an image then it's actually left-facing horse. It'll get a really high score with the left-facing horse template, and a lower score with the right-facing horse template, and then this will take the maximum of the two? - Right, so okay, so the question is, if our image x is like a left-facing horse and in W-one we have a template of a left-facing horse and a right-facing horse, then what's happening, right? So what happens is yeah, so in h you might have a really high score for your left-facing horse, kind of a lower score for your right-facing horse, and W-two is, it's a weighted sum, so it's not a maximum. It's a weighted sum of these templates, but if you have either a really high score for one of these templates, or let's say you have, kind of a lower and medium score for both of these templates, all of these kinds of combinations are going to give high scores, right? 
And so in the end what you're going to get is something that generally scores high when you have a horse of any kind. So let's say you had a front-facing horse, you might have medium values for both the left and the right templates. Yeah, question? - [Student] So is W-two doing the weighting, or is h doing the weighting? - W-two is doing the weighting, so the question is, "Is W-two doing the weighting or is h doing the weighting?" h is the value, like in this example, h is the value of scores for each of your templates that you have in W-one, right? So h is like the score function, right, it's how much of each template in W-one is present, and then W-two is going to weight all of these, weight all of these intermediate scores to get your final score for the class. - [Student] And which is the non-linear thing? - So the question is, "which is the non-linear thing?" So the non-linearity usually happens right before h, so h is the value right after the non-linearity. So we're talking about this, like, you know, intuitively as this example of like, W-one is looking for, you know, has these same templates as before, and W-two is a weighting for these. In practice it's not exactly like this, right, because as you said, there's all these non-linearities thrown in and so on, but it has this approximate type of interpretation to it. - [Student] So h is just W-one-x then? - Yeah, yeah, so the question is h just W-one-x? So h is just W-one times x, with the max function on top. Oh, let me just, okay so, so we've talked about this as an example of a two-layer neural network, and we can stack more layers of these to get deeper networks of arbitrary depth, right? So we can just do this one more time at another non-linearity and matrix multiply now by W-three, and now we have a three-layer neural network, right? And so this is where the term deep neural networks is basically coming from, right? This idea that you can stack multiple of these layers, you know, for very deep networks. And so in homework you'll get a practice of writing and you know, training one of these neural networks, I think in assignment two, but basically a full implementation of this using this idea of forward pass, right, and backward passes, and using chain rule to compute gradients that we've already seen. The entire implementation of a two-layer neural network is actually really simple, it can just be done in 20 lines, and so you'll get some practice with this in assignment two, writing out all of these parts. And okay, so now that we've sort of seen what neural networks are as a function, right, like, you know, we hear people talking a lot about how there's biological inspirations for neural networks, and so even though it's important that to emphasize that these analogies are really loose, it's really just very loose ties, but it's still interesting to understand where some of these connections and inspirations come from. And so now I'm going to talk briefly about that. So if we think about a neuron, in kind of a very simple way, this neuron is, here's a diagram of a neuron. We have the impulses that are carried towards each neuron, right, so we have a lot of neurons connected together and each neuron has dendrites, right, and these are sort of, these are what receives the impulses that come into the neuron. 
And then we have a cell body that basically integrates these signals coming in, and after integrating all these signals, it passes the impulse away from the cell body to downstream neurons that it's connected to, and it carries this away through axons. So now if we look at what we've been doing so far with each computational node, you can see it in kind of a similar way, right? Nodes are connected to each other in the computational graph, and we have inputs, or signals, x coming into a neuron, and then all of these x's — x-zero, x-one, x-two — are combined and integrated together using, for example, our weights W. So we do some sort of computation, and in some of the computations we've been doing so far it's something like W times x plus b, integrating all of these together, and then we have an activation function that we apply on top; we get this output value, and we pass it down to the connecting neurons. So if you look at this, you can think about it in a very similar way, right? The signals coming in are connected at synapses — the synapses connecting the multiple neurons — the dendrites integrate all of this information together in the cell body, and then we have the output carried away downstream. And so this is kind of the analogy that you can draw between them. And if you look at these activation functions, this is what basically takes all the inputs coming in and outputs one number that goes out later on, and we've talked about examples like the sigmoid activation function and different kinds of non-linearities, and so one kind of loose analogy that you can draw is that these non-linearities can represent something sort of like the firing, or spiking, rate of the neurons. Our neurons transmit signals to connecting neurons using these discrete spikes, and so we can think of it as: if they're spiking very fast then there's kind of a strong signal that's passed on, and so we can think of this value after our activation function as, in a sense, the firing rate that we're going to pass on. And in practice, I think neuroscientists who are actually studying this say that one of the non-linearities that is most similar to the way that neurons are actually behaving is the ReLU non-linearity, which is something that we're going to look at more later on, but it's a function that's zero for all negative values of input, and then it's a linear function for everything in the positive regime. And so we'll talk more about this activation function later on, but that's, in practice, maybe the one that's most similar to how neurons are actually behaving. But it's really important to be extremely careful with making any of these sorts of brain analogies, because in practice biological neurons are way more complex than this. There are many different kinds of biological neurons, the dendrites can perform really complex non-linear computations.
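For reference, the loose analogy above — weight the incoming signals, sum them in the "cell body," then apply an activation function — might be sketched like this. This is a cartoon of the analogy only, not a model of a real neuron.

```python
import numpy as np

class Neuron:
    def __init__(self, weights, bias):
        self.weights = weights     # one weight per incoming "synapse"
        self.bias = bias

    def forward(self, inputs):
        cell_body_sum = np.sum(inputs * self.weights) + self.bias
        firing_rate = 1.0 / (1.0 + np.exp(-cell_body_sum))   # sigmoid activation
        return firing_rate
```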
Our synapses — the w-zeros that we had earlier where we drew this analogy — are not single weights like we had; they're actually really complex non-linear dynamical systems in practice. And this idea of interpreting our activation function as a sort of rate code or firing rate is also insufficient in practice: this kind of firing rate is probably not a sufficient model of how neurons actually communicate to downstream neurons. Even as a very simple example, neurons fire at a variable rate, and this variability probably should be taken into account. And so it's a much more complex thing than what we're dealing with. There are references, for example on dendritic computation, that you can look at if you're interested in this topic, but the point is that in practice we can sort of see how it may resemble a neuron at this very high level, but neurons are, in practice, much more complicated than that. Okay, so we talked about how there are many different kinds of activation functions that could be used — there's the ReLU that I mentioned earlier — and we'll talk about all of these different kinds of activation functions, and the choices you might want to make among them, in much more detail later on. And so we'll also talk about different kinds of neural network architectures. So we gave the example of these fully connected neural networks, where each layer is this matrix multiply, and the way we want to name these is: we said two-layer neural network before, and that corresponded to the fact that we have two of these linear layers where we're doing a matrix multiply — two fully connected layers is what we call these. We could also call this a one-hidden-layer neural network, so instead of counting the number of matrix multiplies we're doing, counting the number of hidden layers that we have. You can use either; I think maybe two-layer neural network is a little more commonly used. And then also here, for the three-layer neural network that we have, this can also be called a two-hidden-layer neural network. And so we saw that when we're doing this type of feed-forward pass through a neural network, each of these nodes in the network is basically doing the kind of operation of the neuron that I showed earlier. And so what's actually happening is basically that each hidden layer you can think of as a whole vector, a set of these neurons, and so by writing it out this way with these matrix multiplies to compute our neuron values, we can efficiently evaluate this entire layer of neurons. So with one matrix multiply we get the output values of a layer of, let's say, 10 or 50 or 100 neurons. All right, and so looking at this again, writing this all out in matrix-vector form, we have our non-linearity f, in this case a sigmoid function, and we can take our input vector x and apply our first matrix multiply W-one on top of it, then our non-linearity, then a second matrix multiply to get a second hidden layer, h-two, and then we have our final output. And so this, together with the backward pass we saw earlier, is basically all you need to be able to write a neural network.
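Written out as code, the forward pass just described might look roughly like this, with made-up layer sizes and a sigmoid non-linearity f.

```python
import numpy as np

f = lambda z: 1.0 / (1.0 + np.exp(-z))        # non-linearity, applied element-wise

x = np.random.randn(3, 1)                      # input vector
W1, b1 = np.random.randn(4, 3), np.random.randn(4, 1)
W2, b2 = np.random.randn(4, 4), np.random.randn(4, 1)
W3, b3 = np.random.randn(1, 4), np.random.randn(1, 1)

h1  = f(np.dot(W1, x) + b1)    # first hidden layer: one matrix multiply evaluates all its neurons
h2  = f(np.dot(W2, h1) + b2)   # second hidden layer
out = np.dot(W3, h2) + b3      # output layer
```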
You then just use backprop to compute all of those, and so that's basically all there is to kind of the main idea of what's a neural network. Okay, so just to summarize, we talked about how we could arrange neurons into these computations, right, of fully-connected or linear layers. This abstraction of a layer has a nice property that we can use very efficient vectorized code to compute all of these. We also talked about how it's important to keep in mind that neural networks do have some, you know, analogy and loose inspiration from biology, but they're not really neural. I mean, this is a pretty loose analogy that we're making, and next time we'll talk about convolutional neural networks. Okay, thanks.
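As a concrete wrap-up, here is roughly what the "about 20 lines" two-layer-network implementation mentioned earlier could look like: a forward pass, a backward pass via the chain rule, and a gradient update. The sizes, random data, L2 loss, and sigmoid non-linearity are all assumptions made for the sketch, not the assignment's actual setup.

```python
import numpy as np
from numpy.random import randn

N, D_in, H, D_out = 64, 1000, 100, 10
x, y = randn(N, D_in), randn(N, D_out)         # made-up data and targets
w1, w2 = randn(D_in, H), randn(H, D_out)

for t in range(2000):
    h = 1.0 / (1.0 + np.exp(-x.dot(w1)))       # forward: hidden layer (sigmoid)
    y_pred = h.dot(w2)                          # forward: output scores
    loss = np.square(y_pred - y).sum()          # simple L2 loss

    grad_y_pred = 2.0 * (y_pred - y)            # backward: chain rule, stage by stage
    grad_w2 = h.T.dot(grad_y_pred)
    grad_h = grad_y_pred.dot(w2.T)
    grad_w1 = x.T.dot(grad_h * h * (1 - h))     # sigmoid's local gradient is h * (1 - h)

    w1 -= 1e-4 * grad_w1                        # gradient descent update
    w2 -= 1e-4 * grad_w2
```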
- Okay, let's get started. Alright, so welcome to lecture five. Today we're going to be getting to the title of the class, Convolutional Neural Networks. Okay, so a couple of administrative details before we get started. Assignment one is due Thursday, April 20, 11:59 p.m. on Canvas. We're also going to be releasing assignment two on Thursday. Okay, so a quick review of last time. We talked about neural networks, and how we had the running example of the linear score function that we talked about through the first few lectures. And then we turned this into a neural network by stacking these linear layers on top of each other with non-linearities in between. And we also saw that this could help address the mode problem where we are able to learn intermediate templates that are looking for, for example, different types of cars, right. A red car versus a yellow car and so on. And to combine these together to come up with the final score function for a class. Okay, so today we're going to talk about convolutional neural networks, which is basically the same sort of idea, but now we're going to learn convolutional layers that reason on top of basically explicitly trying to maintain spatial structure. So, let's first talk a little bit about the history of neural networks, and then also how convolutional neural networks were developed. So we can go all the way back to 1957 with Frank Rosenblatt, who developed the Mark I Perceptron machine, which was the first implementation of an algorithm called the perceptron, which had sort of the similar idea of getting score functions, right, using some, you know, W times X plus a bias. But here the outputs are going to be either one or a zero. And then in this case we have an update rule, so an update rule for our weights, W, which also look kind of similar to the type of update rule that we're also seeing in backprop, but in this case there was no principled backpropagation technique yet, we just sort of took the weights and adjusted them in the direction towards the target that we wanted. So in 1960, we had Widrow and Hoff, who developed Adaline and Madaline, which was the first time that we were able to get, to start to stack these linear layers into multilayer perceptron networks. And so this is starting to now look kind of like this idea of neural network layers, but we still didn't have backprop or any sort of principled way to train this. And so the first time backprop was really introduced was in 1986 with Rumelhart. And so here we can start seeing, you know, these kinds of equations with the chain rule and the update rules that we're starting to get familiar with, right, and so this is the first time we started to have a principled way to train these kinds of network architectures. And so after that, you know, it still wasn't able to scale to very large neural networks, and so there was sort of a period in which there wasn't a whole lot of new things happening here, or a lot of popular use of these kinds of networks. And so this really started being reinvigorated around the 2000s, so in 2006, there was this paper by Geoff Hinton and Ruslan Salakhutdinov, which basically showed that we could train a deep neural network, and show that we could do this effectively. But it was still not quite the sort of modern iteration of neural networks. 
It required really careful initialization in order to be able to do backprop, and so what they had here was a first pre-training stage, where you model each hidden layer through a restricted Boltzmann machine, and so you get some initialized weights by training each of these layers iteratively. And once you have all of these hidden layers, you use that to initialize your full neural network, and then from there you do backprop and fine tuning of that. And so when we really started to get the first really strong results using neural networks, and what sort of really sparked the whole craze of starting to use these kinds of networks widely, was around 2012, when we first had the strongest results using neural networks for speech recognition, and so this is work out of Geoff Hinton's lab for acoustic modeling and speech recognition. And then for image recognition, 2012 was the landmark paper from Alex Krizhevsky in Geoff Hinton's lab, which introduced the first convolutional neural network architecture that was able to get really strong results on ImageNet classification. So it took the ImageNet image classification benchmark and was able to dramatically reduce the error on that benchmark. And since then, ConvNets have gotten really widely used in all kinds of applications. So now let's step back and take a look at what gave rise to convolutional neural networks specifically. And so we can go back to the 1950s, where Hubel and Wiesel did a series of experiments trying to understand how neurons in the visual cortex worked, and they studied this specifically for cats. We talked a little bit about this in lecture one, but basically in these experiments they put electrodes into the cat brain, and they gave the cat different visual stimuli. Right, and so, things like different kinds of edges, oriented edges, different sorts of shapes, and they measured the response of the neurons to these stimuli. And so there were a couple of important conclusions and observations that they were able to make. The first thing they found is that there's sort of this topographical mapping in the cortex. So nearby cells in the cortex also represent nearby regions in the visual field. And you can see, for example, on the right here, where if you take the spatial mapping and map this onto the visual cortex, the more peripheral regions are these blue areas farther away from the center. And they also discovered that these neurons had a hierarchical organization. So if you look at different types of visual stimuli, they were able to find that at the earliest layers retinal ganglion cells were responsive to things that looked kind of like circular regions of spots. And then on top of that there are simple cells, and these simple cells are responsive to oriented edges, so different orientations of the light stimulus. And then going further, they discovered that these were connected to more complex cells, which were responsive to both light orientation as well as movement, and so on. And you get increasing complexity, for example, hypercomplex cells are now responsive to movement with kind of an endpoint, and so now you're starting to get the idea of corners and then blobs and so on.
And so then in 1980, the neocognitron was the first example of a network architecture, a model, that had this idea of simple and complex cells that Hubel and Wiesel had discovered. And in this case Fukushima put these into these alternating layers of simple and complex cells, where you had these simple cells that had modifiable parameters, and then complex cells on top of these that performed a sort of pooling so that it was invariant to, you know, different minor modifications from the simple cells. And so this is work that was in the 1980s, right, and so by 1998 Yann LeCun basically showed the first example of applying backpropagation and gradient-based learning to train convolutional neural networks that did really well on document recognition. And specifically they were able to do a good job of recognizing digits of zip codes. And so these were then used pretty widely for zip code recognition in the postal service. But beyond that it wasn't able to scale yet to more challenging and complex data, right, digits are still fairly simple and a limited set to recognize. And so this is where Alex Krizhevsky, in 2012, gave the modern incarnation of convolutional neural networks and his network we sort of colloquially call AlexNet. But this network really didn't look so much different than the convolutional neural networks that Yann LeCun was dealing with. They're now, you know, they were scaled now to be larger and deeper and able to, the most important parts were that they were now able to take advantage of the large amount of data that's now available, in web images, in ImageNet data set. As well as take advantage of the parallel computing power in GPUs. And so we'll talk more about that later. But fast forwarding today, so now, you know, ConvNets are used everywhere. And so we have the initial classification results on ImageNet from Alex Krizhevsky. This is able to do a really good job of image retrieval. You can see that when we're trying to retrieve a flower for example, the features that are learned are really powerful for doing similarity matching. We also have ConvNets that are used for detection. So we're able to do a really good job of localizing where in an image is, for example, a bus, or a boat, and so on, and draw precise bounding boxes around that. We're able to go even deeper beyond that to do segmentation, right, and so these are now richer tasks where we're not looking for just the bounding box but we're actually going to label every pixel in the outline of, you know, trees, and people, and so on. And these kind of algorithms are used in, for example, self-driving cars, and a lot of this is powered by GPUs as I mentioned earlier, that's able to do parallel processing and able to efficiently train and run these ConvNets. And so we have modern powerful GPUs as well as ones that work in embedded systems, for example, that you would use in a self-driving car. So we can also look at some of the other applications that ConvNets are used for. So, face-recognition, right, we can put an input image of a face and get out a likelihood of who this person is. ConvNets are applied to video, and so this is an example of a video network that looks at both images as well as temporal information, and from there is able to classify videos. We're also able to do pose recognition. Being able to recognize, you know, shoulders, elbows, and different joints. And so here are some images of our fabulous TA, Lane, in various kinds of pretty non-standard human poses. 
But ConvNets are able to do a pretty good job of pose recognition these days. They're also used in game playing. So some of the work in reinforcement learning, the deep reinforcement learning that you may have seen playing Atari games, and Go, and so on — ConvNets are an important part of all of these. Some other applications: they're being used for interpretation and diagnosis of medical images, for classification of galaxies, for street sign recognition. There's also whale recognition; this is from a recent Kaggle Challenge. We also have examples of looking at aerial maps and being able to draw out where the streets are on these maps, where the buildings are, and being able to segment all of these. And then beyond recognition tasks like classification and detection, we also have tasks like image captioning, where given an image, we want to write a sentence description about what's in the image. And so this is something that we'll go into a little bit later in the class. And we also have really fancy and cool kinds of artwork that we can do using neural networks. So on the left is an example of DeepDream, where we're able to take images and kind of hallucinate different kinds of objects and concepts in the image. There's also neural style type work, where we take an image and we're able to re-render this image using the style of a particular artist and artwork. And so here we can take, for example, Van Gogh on the right, Starry Night, and use that to redraw our original image using that style. And Justin has done a lot of work on this, and so if you guys are interested, these are images produced by some of his code and you should talk to him more about it. Okay, so basically this is just a small sample of where ConvNets are being used today, but there's really a huge amount that can be done with this, and so for your projects, let your imagination go wild and we're excited to see what sorts of applications you can come up with. So today we're going to talk about how convolutional neural networks work. And again, same as with neural networks, we're going to first talk about how they work from a functional perspective without any of the brain analogies, and then we'll talk briefly about some of these connections. Okay, so last lecture we talked about this idea of a fully connected layer, and how for a fully connected layer what we're doing is we operate on top of these vectors. So let's say we have an image, a 3D image, 32 by 32 by three, like some of the images that we were looking at previously. We'll take that, we'll stretch all of the pixels out, and then we have this 3072-dimensional vector, for example in this case. And then we have these weights, so we're going to multiply this by a weight matrix, and here for example our W we're going to say is 10 by 3072. And then we're going to get the activations, the output of this layer, and so in this case we take each of our 10 rows and we do a dot product with the 3072-dimensional input, and from there we get one number that's kind of the value of that neuron. So in this case we're going to have 10 of these neuron outputs. And so for a convolutional layer, the main difference between this and the fully connected layer that we've been talking about is that here we want to preserve spatial structure.
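In code, that fully connected layer on a 32x32x3 image is just a reshape followed by a matrix multiply; the random values here are placeholders.

```python
import numpy as np

image = np.random.randn(32, 32, 3)     # a hypothetical 32x32x3 input image
x = image.reshape(-1)                   # stretch it into a 3072-dimensional vector
W = np.random.randn(10, 3072)           # one row per output neuron
b = np.random.randn(10)

activations = W.dot(x) + b              # 10 numbers, one per row of W
```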
And so taking this 32 by 32 by three image that we had earlier, instead of stretching this all out into one long vector, we're now going to keep the structure of this image, right, this three dimensional input. And then what we're going to do is our weights are going to be these small filters, so in this case for example, a five by five by three filter, and we're going to take this filter and we're going to slide it over the image spatially and compute dot products at every spatial location. And so we're going to go into detail of exactly how this works. So, our filters, first of all, always extend the full depth of the input volume. And so they're going to be just a smaller spatial area, so in this case five by five, right, instead of our full 32 by 32 spatial input, but they're always going to go through the full depth, right, so here we're going to take five by five by three. And then we're going to take this filter and at a given spatial location we're going to do a dot product between this filter and then a chunk of a image. So we're just going to overlay this filter on top of a spatial location in the image, right, and then do the dot product, the multiplication of each element of that filter with each corresponding element in that spatial location that we've just plopped it on top of. And then this is going to give us a dot product. So in this case, we have five times five times three, this is the number of multiplications that we're going to do, right, plus the bias term. And so this is basically taking our filter W and basically doing W transpose times X and plus bias. So is that clear how this works? Yeah, question. [faint speaking] Yeah, so the question is, when we do the dot product do we turn the five by five by three into one vector? Yeah, in essence that's what you're doing. You can, I mean, you can think of it as just plopping it on and doing the element-wise multiplication at each location, but this is going to give you the same result as if you stretched out the filter at that point, stretched out the input volume that it's laid over, and then took the dot product, and that's what's written here, yeah, question. [faint speaking] Oh, this is, so the question is, any intuition for why this is a W transpose? And this was just, not really, this is just the notation that we have here to make the math work out as a dot product. So it just depends on whether, how you're representing W and whether in this case if we look at the W matrix this happens to be each column and so we're just taking the transpose to get a row out of it. But there's no intuition here, we're just taking the filters of W and we're stretching it out into a one D vector, and in order for it to be a dot product it has to be like a one by, one by N vector. [faint speaking] Okay, so the question is, is W here not five by five by three, it's one by 75. So that's the case, right, if we're going to do this dot product of W transpose times X, we have to stretch it out first before we do the dot product. So we take the five by five by three, and we just take all these values and stretch it out into a long vector. 
And so again, similar to the other question, the actual operation that we're doing here is plopping our filter on top of a spatial location in the image and multiplying all of the corresponding values together, but in order just to make it kind of an easy expression similar to what we've seen before we can also just stretch each of these out, make sure that dimensions are transposed correctly so that it works out as a dot product. Yeah, question. [faint speaking] Okay, the question is, how do we slide the filter over the image. We'll go into that next, yes. [faint speaking] Okay, so the question is, should we rotate the kernel by 180 degrees to better match the convolution, the definition of a convolution. And so the answer is that we'll also show the equation for this later, but we're using convolution as kind of a looser definition of what's happening. So for people from signal processing, what we are actually technically doing, if you want to call this a convolution, is we're convolving with the flipped version of the filter. But for the most part, we just don't worry about this and we just, yeah, do this operation and it's like a convolution in spirit. Okay, so... Okay, so we had a question earlier, how do we, you know, slide this over all the spatial locations. Right, so what we're going to do is we're going to take this filter, we're going to start at the upper left-hand corner and basically center our filter on top of every pixel in this input volume. And at every position, we're going to do this dot product and this will produce one value in our output activation map. And so then we're going to just slide this around. The simplest version is just at every pixel we're going to do this operation and fill in the corresponding point in our output activation. You can see here that the dimensions are not exactly what would happen, right, if you're going to do this. I had 32 by 32 in the input and I'm having 28 by 28 in the output, and so we'll go into examples later of the math of exactly how this is going to work out dimension-wise, but basically you have a choice of how you're going to slide this, whether you go at every pixel or whether you slide, let's say, you know, two input values over at a time, two pixels over at a time, and so you can get different size outputs depending on how you choose to slide. But you're basically doing this operation in a grid fashion. Okay, so what we just saw earlier, this is taking one filter, sliding it over all of the spatial locations in the image and then we're going to get this activation map out, right, which is the value of that filter at every spatial location. And so when we're dealing with a convolutional layer, we want to work with multiple filters, right, because each filter is kind of looking for a specific type of template or concept in the input volume. And so we're going to have a set of multiple filters, and so here I'm going to take a second filter, this green filter, which is again five by five by three, I'm going to slide this over all of the spatial locations in my input volume, and then I'm going to get out this second green activation map also of the same size. And we can do this for as many filters as we want to have in this layer. So for example, if we have six filters, six of these five by five filters, then we're going to get in total six activation maps out. All of, so we're going to get this output volume that's going to be basically six by 28 by 28. 
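A deliberately naive sketch of that whole operation — six 5x5x3 filters slid over a 32x32x3 input with stride one — might look like this. Real implementations are vectorized, but the loops show exactly what is being computed.

```python
import numpy as np

x = np.random.randn(32, 32, 3)            # input volume (height, width, depth)
filters = np.random.randn(6, 5, 5, 3)     # six filters, each 5x5x3 (full input depth)
biases = np.zeros(6)

out_size = 32 - 5 + 1                      # 28
out = np.zeros((out_size, out_size, 6))    # one 28x28 activation map per filter

for f in range(6):
    for i in range(out_size):
        for j in range(out_size):
            patch = x[i:i+5, j:j+5, :]                        # 5x5x3 chunk of the input
            out[i, j, f] = np.sum(patch * filters[f]) + biases[f]
```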
Right, and so a preview of how we're going to use these convolutional layers in our convolutional network: our ConvNet is basically going to be a sequence of these convolutional layers stacked on top of each other, the same way as what we had with the simple linear layers in the neural network. And then we're going to intersperse these with activation functions, so for example a ReLU activation function. And so you're going to get something like Conv, ReLU, and usually also some pooling layers, and then you're just going to get a sequence of these, each creating an output that's now going to be the input to the next convolutional layer. Okay, and so each of these layers, as I said earlier, has multiple filters, many filters, and each of these filters produces an activation map. And so when you look at multiple of these layers stacked together in a ConvNet, what ends up happening is you end up learning this hierarchy of filters, where the filters at the earlier layers usually represent low-level features that you're looking for, so things kind of like edges. And then at the mid-level, you're going to get more complex kinds of features, so maybe it's looking more for things like corners and blobs and so on. And then at the higher-level features, you're going to get things that are starting to resemble concepts more than blobs. And we'll go into more detail later in the class on how you can actually visualize all these features and try to interpret what kinds of features your network is learning. But the important thing for now is just to understand that what these features end up being, when you have a whole stack of these, is these types of simple to more complex features. [faint speaking] Yeah. Oh, okay, so the question is, what's the intuition for increasing the depth each time. So here I had three filters in the original layer and then six filters in the next layer. And so this is mostly a design choice. People in practice have found certain types of these configurations to work better. And so later on we'll go into case studies of different kinds of convolutional neural network architectures and design choices for these, and why certain ones work better than others. But basically you're going to have many design choices in a convolutional neural network — the size of your filter, the stride, how many filters you have — and so we'll talk about all of this more later. Question. [faint speaking] Yeah, so the question is, as we're sliding this filter over the image spatially, it looks like we're sampling the edges and corners less than the other locations. Yeah, that's a really good point, and we'll talk, I think in a few slides, about how we try to compensate for that. Okay, so with each of these convolutional layers that we have stacked together, we saw how we're starting with simpler features and then aggregating these into more complex features later on. And so in practice this is compatible with what Hubel and Wiesel noticed in their experiments, right, that we had these simple cells at the earlier stages of processing, followed by more complex cells later on. And so even though we didn't explicitly force our ConvNet to learn these kinds of features, in practice when you give it this type of hierarchical structure and train it using backpropagation, these are the kinds of filters that end up being learned. [faint speaking] Okay, so the question is, what are we seeing in these visualizations.
And so, alright so, in these visualizations, like, if we look at this Conv1, the first convolutional layer, each of these grid, each part of this grid is a one neuron. And so what we've visualized here is what the input looks like that maximizes the activation of that particular neuron. So what sort of image you would get that would give you the largest value, make that neuron fire and have the largest value. And so the way we do this is basically by doing backpropagation from a particular neuron activation and seeing what in the input will trigger, will give you the highest values of this neuron. And this is something that we'll talk about in much more depth in a later lecture about how we create all of these visualizations. But basically each element of these grids is showing what in the input would look like that basically maximizes the activation of the neuron. So in a sense, what is the neuron looking for? Okay, so here is an example of some of the activation maps produced by each filter, right. So we can visualize up here on the top we have this whole row of example five by five filters, and so this is basically a real case from a trained ConvNet where each of these is what a five by five filter looks like, and then as we convolve this over an image, so in this case this I think it's like a corner of a car, the car light, what the activation looks like. Right, and so here for example, if we look at this first one, this red filter, filter like with a red box around it, we'll see that it's looking for, the template looks like an edge, right, an oriented edge. And so if you slide it over the image, it'll have a high value, a more white value where there are edges in this type of orientation. And so each of these activation maps is kind of the output of sliding one of these filters over and where these filters are causing, you know, where this sort of template is more present in the image. And so the reason we call these convolutional is because this is related to the convolution of two signals, and so someone pointed out earlier that this is basically this convolution equation over here, for people who have seen convolutions before in signal processing, and in practice it's actually more like a correlation where we're convolving with the flipped version of the filter, but this is kind of a subtlety, it's not really important for the purposes of this class. But basically if you're writing out what you're doing, it has an expression that looks something like this, which is the standard definition of a convolution. But this is basically just taking a filter, sliding it spatially over the image and computing the dot product at every location. Okay, so you know, as I had mentioned earlier, like what our total convolutional neural network is going to look like is we're going to have an input image, and then we're going to pass it through this sequence of layers, right, where we're going to have a convolutional layer first. We usually have our non-linear layer after that. So ReLU is something that's very commonly used that we're going to talk about more later. And then we have these Conv, ReLU, Conv, ReLU layers, and then once in a while we'll use a pooling layer that we'll talk about later as well that basically downsamples the size of our activation maps. 
And then finally at the end of this we'll take our last convolutional layer output and then we're going to use a fully connected layer that we've seen before, connected to all of these convolutional outputs, and use that to get a final score function basically like what we've already been working with. Okay, so now let's work out some examples of how the spatial dimensions work out. So let's take our 32 by 32 by three image as before, right, and we have our five by five by three filter that we're going to slide over this image. And we're going to see how we're going to use that to produce exactly this 28 by 28 activation map. So let's assume that we actually have a seven by seven input just to be simpler, and let's assume we have a three by three filter. So what we're going to do is we're going to take this filter, plop it down in our upper left-hand corner, right, and we're going to multiply, do the dot product, multiply all these values together to get our first value, and this is going to go into the upper left-hand value of our activation map. Right, and then what we're going to do next is we're just going to take this filter, slide it one position to the right, and then we're going to get another value out from here. And so we can continue with this to have another value, another, and in the end what we're going to get is a five by five output, right, because what fit was basically sliding this filter a total of five spatial locations horizontally and five spatial locations vertically. Okay, so as I said before there's different kinds of design choices that we can make. Right, so previously I slid it at every single spatial location and the interval at which I slide I'm going to call the stride. And so previously we used the stride of one. And so now let's see what happens if we have a stride of two. Right, so now we're going to take our first location the same as before, and then we're going to skip this time two pixels over and we're going to get our next value centered at this location. Right, and so now if we use a stride of two, we have in total three of these that can fit, and so we're going to get a three by three output. Okay, and so what happens when we have a stride of three, what's the output size of this? And so in this case, right, we have three, we slide it over by three again, and the problem is that here it actually doesn't fit. Right, so we slide it over by three and now it doesn't fit nicely within the image. And so what we in practice we just, it just doesn't work. We don't do convolutions like this because it's going to lead to asymmetric outputs happening. Right, and so just kind of looking at the way that we computed how many, what the output size is going to be, this actually can work into a nice formula where we take our dimension of our input N, we have our filter size F, we have our stride at which we're sliding along, and our final output size, the spatial dimension of each output size is going to be N minus F divided by the stride plus one, right, and you can kind of see this as a, you know, if I'm going to take my filter, let's say I fill it in at the very last possible position that it can be in and then take all the pixels before that, how many instances of moving by this stride can I fit in. Right, and so that's how this equation kind of works out. And so as we saw before, right, if we have N equal seven and F equals three, if we want a stride of one we plug it into this formula, we get five by five as we had before, and the same thing we had for two. 
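The formula is easy to sanity check with a couple of lines; the helper name here is just for illustration.

```python
def conv_output_size(N, F, stride):
    # Output spatial size for an N x N input and an F x F filter, with no padding.
    return (N - F) / stride + 1

print(conv_output_size(7, 3, 1))   # 5.0
print(conv_output_size(7, 3, 2))   # 3.0
print(conv_output_size(7, 3, 3))   # 2.33... -- not an integer, so stride 3 doesn't fit
```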
And with a stride of three, this doesn't really work out. And so in practice it's actually common to zero pad the borders in order to make the size work out to what we want it to. And so this is kind of related to a question earlier, which is what do we do, right, at the corners. And so what in practice happens is we're going to actually pad our input image with zeros and so now you're going to be able to place a filter centered at the upper right-hand pixel location of your actual input image. Okay, so here's a question, so who can tell me if I have my same input, seven by seven, three by three filter, stride one, but now I pad with a one pixel border, what's the size of my output going to be? [faint speaking] So, I heard some sixes, heard some sev, so remember we have this formula that we had before. So if we plug in N is equal to seven, F is equal to three, right, and then our stride is equal to one. So what we actually get, so actually this is giving us seven, four, so seven minus three is four, divided by one plus one is five. And so this is what we had before. So we actually need to adjust this formula a little bit, right, so this was actually, this formula is the case where we don't have zero padded pixels. But if we do pad it, then if you now take your new output and you slide it along, you'll see that actually seven of the filters fit, so you get a seven by seven output. And plugging in our original formula, right, so our N now is not seven, it's nine, so if we go back here we have N equals nine minus a filter size of three, which gives six. Right, divided by our stride, which is one, and so still six, and then plus one we get seven. Right, and so once you've padded it you want to incorporate this padding into your formula. Yes, question. [faint speaking] Seven, okay, so the question is, what's the actual output of the size, is it seven by seven or seven by seven by three? The output is going to be seven by seven by the number of filters that you have. So remember each filter is going to do a dot product through the entire depth of your input volume. But then that's going to produce one number, right, so each filter is, let's see if we can go back here. Each filter is producing a one by seven by seven in this case activation map output, and so the depth is going to be the number of filters that we have. [faint speaking] Sorry, let me just, one second go back. Okay, can you repeat your question again? [muffled speaking] Okay, so the question is, how does this connect to before when we had a 32 by 32 by three input, right. So our input had depth and here in this example I'm showing a 2D example with no depth. And so yeah, I'm showing this for simplicity but in practice you're going to take your, you're going to multiply throughout the entire depth as we had before, so you're going to, your filter is going to be in this case a three be three spatial filter by whatever input depth that you had. So three by three by three in this case. Yeah, everything else stays the same. Yes, question. [muffled speaking] Yeah, so the question is, does the zero padding add some sort of extraneous features at the corners? And yeah, so I mean, we're doing our best to still, get some value and do, like, process that region of the image, and so zero padding is kind of one way to do this, where I guess we can, we are detecting part of this template in this region. 
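Coming back to the size formula for a second: once padding is included it becomes (N + 2P - F) / stride + 1, and the seven-by-seven case above checks out.

```python
def conv_output_size(N, F, stride, pad=0):
    # Output spatial size once zero padding is folded into the input size.
    return (N + 2 * pad - F) // stride + 1

print(conv_output_size(7, 3, 1, pad=0))   # 5: the unpadded case from before
print(conv_output_size(7, 3, 1, pad=1))   # 7: padding by one pixel preserves the 7x7 size
```

This bookkeeping is what lets a stack of these layers keep its spatial size instead of shrinking at every layer.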
There's also other ways to do this that, you know, you can try and like, mirror the values here or extend them, and so it doesn't have to be zero padding, but in practice this is one thing that works reasonably. And so, yeah, so there is a little bit of kind of artifacts at the edge and we sort of just, you do your best to deal with it. And in practice this works reasonably. I think there was another question. Yeah, question. [faint speaking] So if we have non-square images, do we ever use a stride that's different horizontally and vertically? So, I mean, there's nothing stopping you from doing that, you could, but in practice we just usually take the same stride, we usually operate square regions and we just, yeah we usually just take the same stride everywhere and it's sort of like, in a sense it's a little bit like, it's a little bit like the resolution at which you're, you know, looking at this image, and so usually there's kind of, you might want to match sort of your horizontal and vertical resolutions. But, yeah, so in practice you could but really people don't do that. Okay, another question. [faint speaking] So the question is, why do we do zero padding? So the way we do zero padding is to maintain the same input size as we had before. Right, so we started with seven by seven, and if we looked at just starting your filter from the upper left-hand corner, filling everything in, right, then we get a smaller size output, but we would like to maintain our full size output. Okay, so, yeah, so we saw how padding can basically help you maintain the size of the output that you want, as well as apply your filter at these, like, corner regions and edge regions. And so in general in terms of choosing, you know, your stride, your filter, your filter size, your stride size, zero padding, what's common to see is filters of size three by three, five by five, seven by seven, these are pretty common filter sizes. And so each of these, for three by three you will want to zero pad with one in order to maintain the same spatial size. If you're going to do five by five, you can work out the math, but it's going to come out to you want to zero pad by two. And then for seven you want to zero pad by three. Okay, and so again you know, the motivation for doing this type of zero padding and trying to maintain the input size, right, so we kind of alluded to this before, but if you have multiple of these layers stacked together... So if you have multiple of these layers stacked together you'll see that, you know, if we don't do this kind of zero padding, or any kind of padding, we're going to really quickly shrink the size of the outputs that we have. Right, and so this is not something that we want. Like, you can imagine if you have a pretty deep network then very quickly your, the size of your activation maps is going to shrink to something very small. And this is bad both because we're kind of losing out on some of this information, right, now you're using a much smaller number of values in order to represent your original image, so you don't want that. And then at the same time also as we talked about this earlier, your also kind of losing sort of some of this edge information, corner information that each time we're losing out and shrinking that further. Okay, so let's go through a couple more examples of computing some of these sizes. So let's say that we have an input volume which is 32 by 32 by three. And here we have 10 five by five filters. Let's use stride one and pad two. 
And so who can tell me what's the output volume size of this? So you can think about the formula earlier. Sorry, what was it? [faint speaking] 32 by 32 by 10, yes that's correct. And so the way we can see this, right, is that we have our input size, N is 32. Then in this case we want to augment it by the padding that we added onto this. So we padded it two in each dimension, right, so our total width and total height are going to be 32 plus four, two on each side. Then minus our filter size five, divided by our stride of one, plus one, and we get 32. So our output is going to be 32 by 32 for each filter. And then we have 10 filters total, so we have 10 of these activation maps, and our total output volume is going to be 32 by 32 by 10. Okay, next question, so what's the number of parameters in this layer? So remember we have 10 five by five filters. [faint speaking] I kind of heard something, but it was quiet. Can you guys speak up? 250, okay so I heard 250, which is close, but remember that each of these filters also goes through the depth of our input volume. So maybe this wasn't clearly written here, because each of the filters is five by five spatially, but implicitly we also have the depth in here, right. It's going to go through the whole volume. So I heard, yeah, 750 I heard. Almost there, this is kind of a trick question 'cause also remember we usually always have a bias term, right, so in practice each filter has five by five by three weights, plus our one bias term, we have 76 parameters per filter, and then we have 10 of these total, and so there's 760 total parameters. Okay, and so here's just a summary of the convolutional layer that you guys can read a little bit more carefully later on. We have our input volume of a certain dimension, and we have all of these choices for our filters, right, the number of filters, the filter size, the stride size, the amount of zero padding, and you can basically use all of these and go through the computations that we talked about earlier in order to find out what your output volume is actually going to be and how many total parameters you have. And so some common settings of this: we talked earlier about common filter sizes of three by three and five by five. Strides of one and two are pretty common. And then your padding P is going to be whatever fits, like, whatever will preserve your spatial extent is what's common. And then for the total number of filters K, usually we use powers of two just to be nice, so, you know, 32, 64, 128, 512, these are pretty common numbers that you'll see. And just as an aside, we can also do a one by one convolution. This still makes perfect sense: with a one by one convolution we still slide it over each spatial location, but now the spatial region is not five by five, it's just the trivial case of one by one, but we still have this filter go through the entire depth. Right, so this is going to be a dot product through the entire depth of your input volume. And so the output here, right, if we have an input volume of 56 by 56 by 64 depth and we're going to do one by one convolution with 32 filters, then our output is going to be 56 by 56 by our number of filters, 32. Okay, and so here's an example of a convolutional layer in Torch, a deep learning framework.
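Here is a short sketch that checks the two worked examples above, the 32 by 32 by 10 output with 760 parameters and the one by one convolution case, using the same formula.

def conv_output_size(N, F, S, P):
    return (N + 2 * P - F) // S + 1

# 32x32x3 input, 10 filters of 5x5 (each implicitly 5x5x3), stride 1, pad 2
print(conv_output_size(32, 5, 1, 2))   # 32, so the output volume is 32x32x10
print(10 * (5 * 5 * 3 + 1))            # 760 parameters (one bias per filter)

# 56x56x64 input, 32 filters of 1x1 (each implicitly 1x1x64)
print(conv_output_size(56, 1, 1, 0))   # 56, so the output volume is 56x56x32
print(32 * (1 * 1 * 64 + 1))           # 2080 parameters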
And so you'll see that, you know, last lecture we talked about how you can go into these deep learning frameworks, you can see these definitions of each layer, right, where they have kind of the forward pass and the backward pass implemented for each layer. And so you'll see convolutions, spatial convolution is going to be just one of these, and then the arguments that it's going to take are going to be all of these design choices of, you know, I mean, I guess your input and output sizes, but also your choices of like your kernel width, your kernel size, padding, and these kinds of things. Right, and so if we look at another framework, Caffe, you'll see something very similar, where again now when you're defining your network you define networks in Caffe using this kind of, you know, proto text file where you're specifying each of your design choices for your layer and you can see for a convolutional layer will say things like, you know, the number of outputs that we have, this is going to be the number of filters for Caffe, as well as the kernel size and stride and so on. Okay, and so I guess before I go on, any questions about convolution, how the convolution operation works? Yes, question. [faint speaking] Yeah, so the question is, what's the intuition behind how you choose your stride. And so at one sense it's kind of the resolution at which you slide it on, and usually the reason behind this is because when we have a larger stride what we end up getting as the output is a down sampled image, right, and so what this downsampled image lets us have is both, it's a way, it's kind of like pooling in a sense but it's just a different and sometimes works better way of doing pooling is one of the intuitions behind this, 'cause you get the same effect of downsampling your image, and then also as you're doing this you're reducing the size of the activation maps that you're dealing with at each layer, right, and so this also affects later on the total number of parameters that you have because for example at the end of all your Conv layers, now you might put on fully connected layers on top, for example, and now the fully connected layer's going to be connected to every value of your convolutional output, right, and so a smaller one will give you smaller number of parameters, and so now you can get into, like, basically thinking about trade offs of, you know, number of parameters you have, the size of your model, overfitting, things like that, and so yeah, these are kind of some of the things that you want to think about with choosing your stride. Okay, so now if we look a little bit at kind of the, you know, brain neuron view of a convolutional layer, similar to what we looked at for the neurons in the last lecture. So what we have is that at every spatial location, we take a dot product between a filter and a specific part of the image, right, and we get one number out from here. And so this is the same idea of doing these types of dot products, right, taking your input, weighting it by these Ws, right, values of your filter, these weights that are the synapses, and getting a value out. But the main difference here is just that now your neuron has local connectivity. So instead of being connected to the entire input, it's just looking at a local region spatially of your image. And so this looks at a local region and then now you're going to get kind of, you know, this, how much this neuron is being triggered at every spatial location in your image. 
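To tie the framework examples from a moment ago back to these design choices, here is how the same kind of layer could be declared in PyTorch; PyTorch is an assumption here, the lecture itself shows Torch and Caffe, but the arguments map over directly.

import torch
import torch.nn as nn

# 10 filters of size 5x5 over a 3-channel input, stride 1, zero padding 2
conv = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=5, stride=1, padding=2)

x = torch.randn(1, 3, 32, 32)                     # a batch with one 32x32x3 image
print(conv(x).shape)                              # torch.Size([1, 10, 32, 32])
print(sum(p.numel() for p in conv.parameters()))  # 760 weights + biases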
Right, so now you preserve the spatial structure and you can say, you know, be able to reason on top of these kinds of activation maps in later layers. And just a little bit of terminology, again for, you know, we have this five by five filter, we can also call this a five by five receptive field for the neuron, because this is, the receptive field is basically the, you know, input field that this field of vision that this neuron is receiving, right, and so that's just another common term that you'll hear for this. And then again remember each of these five by five filters we're sliding them over the spatial locations but they're the same set of weights, they share the same parameters. Okay, and so, you know, as we talked about what we're going to get at this output is going to be this volume, right, where spatially we have, you know, let's say 28 by 28 and then our number of filters is the depth. And so for example with five filters, what we're going to get out is this 3D grid that's 28 by 28 by five. And so if you look at the filters across in one spatial location of the activation volume and going through depth these five neurons, all of these neurons, basically the way you can interpret this is they're all looking at the same region in the input volume, but they're just looking for different things, right. So they're different filters applied to the same spatial location in the image. And so just a reminder again kind of comparing with the fully connected layer that we talked about earlier. In that case, right, if we look at each of the neurons in our activation or output, each of the neurons was connected to the entire stretched out input, so it looked at the entire full input volume, compared to now where each one just looks at this local spatial region. Question. [muffled talking] Okay, so the question is, within a given layer, are the filters completely symmetric? So what do you mean by symmetric exactly, I guess? Right, so okay, so the filters, are the filters doing, they're doing the same dimension, the same calculation, yes. Okay, so is there anything different other than they have the same parameter values? No, so you're exactly right, we're just taking a filter with a given set of, you know, five by five by three parameter values, and we just slide this in exactly the same way over the entire input volume to get an activation map. Okay, so you know, we've gone into a lot of detail in what these convolutional layers look like, and so now I'm just going to go briefly through the other layers that we have that form this entire convolutional network. Right, so remember again, we have convolutional layers interspersed with pooling layers once in a while as well as these non-linearities. Okay, so what the pooling layers do is that they make the representations smaller and more manageable, right, so we talked about this earlier with someone asked a question of why we would want to make the representation smaller. And so this is again for it to have fewer, it effects the number of parameters that you have at the end as well as basically does some, you know, invariance over a given region. And so what the pooling layer does is it does exactly just downsamples, and it takes your input volume, so for example, 224 by 224 by 64, and spatially downsamples this. So in the end you'll get out 112 by 112. And it's important to note this doesn't do anything in the depth, right, we're only pooling spatially. So the number of, your input depth is going to be the same as your output depth. 
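As a concrete sketch of this neuron view, here is a naive NumPy loop that slides five 5 by 5 by 3 filters over a 32 by 32 by 3 input; the five numbers at any one spatial location of the output all come from the same local receptive field, just with different filter weights.

import numpy as np

img = np.random.randn(32, 32, 3)       # input volume
w = np.random.randn(5, 5, 5, 3)        # 5 filters, each 5x5x3 (shared across locations)
b = np.zeros(5)

out = np.zeros((28, 28, 5))            # activation volume, stride 1, no padding
for f in range(5):
    for y in range(28):
        for x in range(28):
            patch = img[y:y + 5, x:x + 5, :]           # local receptive field
            out[y, x, f] = np.sum(patch * w[f]) + b[f] # one number per location

print(out.shape)   # (28, 28, 5)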
And so, for example, a common way to do this is max pooling. So in this case our pooling layer also has a filter size and this filter size is going to be the region at which we pool over, right, so in this case if we have two by two filters, we're going to slide this, and so, here, we also have stride two in this case, so we're going to take this filter and we're going to slide it along our input volume in exactly the same way as we did for convolution. But here instead of doing these dot products, we just take the maximum value of the input volume in that region. Right, so here if we look at the red values, the value of that will be six is the largest. If we look at the greens it's going to give an eight, and then we have a three and a four. Yes, question. [muffled speaking] Yeah, so the question is, is it typical to set up the stride so that there isn't an overlap? And yeah, so for the pooling layers it is, I think the more common thing to do is to have them not have any overlap, and I guess the way you can think about this is basically we just want to downsample and so it makes sense to kind of look at this region and just get one value to represent this region and then just look at the next region and so on. Yeah, question. [faint speaking] Okay, so the question is, why is max pooling better than just taking the, doing something like average pooling? Yes, that's a good point, like, average pooling is also something that you can do, and intuition behind why max pooling is commonly used is that it can have this interpretation of, you know, if this is, these are activations of my neurons, right, and so each value is kind of how much this neuron fired in this location, how much this filter fired in this location. And so you can think of max pooling as saying, you know, giving a signal of how much did this filter fire at any location in this image. Right, and if we're thinking about detecting, you know, doing recognition, this might make some intuitive sense where you're saying, well, you know, whether a light or whether some aspect of your image that you're looking for, whether it happens anywhere in this region we want to fire at with a high value. Question. [muffled speaking] Yeah, so the question is, since pooling and stride both have the same effect of downsampling, can you just use stride instead of pooling and so on? Yeah, and so in practice I think looking at more recent neural network architectures people have begun to use stride more in order to do the downsampling instead of just pooling. And I think this gets into things like, you know, also like fractional strides and things that you can do. But in practice this in a sense maybe has a little bit better way to get better results using that, so. Yeah, so I think using stride is definitely, you can do it and people are doing it. Okay, so let's see, where were we. Okay, so yeah, so with these pooling layers, so again, there's right, some design choices that you make, you take this input volume of W by H by D, and then you're going to set your hyperparameters for design choices of your filter size or the spatial extent over which you are pooling, as well as your stride, and then you can again compute your output volume using the same equation that you used earlier for convolution, it still applies here, right, so we still have our W total extent minus filter size divided by stride plus one. 
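Here is a minimal NumPy sketch of that 2 by 2 max pooling with stride 2; the input values are hypothetical, chosen so the outputs match the 6, 8, 3 and 4 mentioned above.

import numpy as np

a = np.array([[1., 1., 2., 4.],
              [5., 6., 7., 8.],
              [3., 2., 1., 0.],
              [1., 2., 3., 4.]])      # hypothetical 4x4 activation map

pooled = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        # non-overlapping 2x2 windows, keep the largest activation in each
        pooled[i, j] = a[2*i:2*i+2, 2*j:2*j+2].max()

print(pooled)   # [[6. 8.]
                #  [3. 4.]]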
Okay, and so just one other thing to note, it's also, typically people don't really use zero padding for the pooling layers because you're just trying to do a direct downsampling, right, so there isn't this problem of like, applying a filter at the corner and having some part of the filter go off your input volume. And so for pooling we don't usually have to worry about this and we just directly downsample. And so some common settings for the pooling layer is a filter size of two by two or three by three strides. Two by two, you know, you can have, also you can still have pooling of two by two even with a filter size of three by three, I think someone asked that earlier, but in practice it's pretty common just to have two by two. Okay, so now we've talked about these convolutional layers, the ReLU layers were the same as what we had before with the, you know, just the base neural network that we talked about last lecture. So we intersperse these and then we have a pooling layer every once in a while when we feel like downsampling, right. And then the last thing is that at the end we want to have a fully connected layer. And so this will be just exactly the same as the fully connected layers that you've seen before. So in this case now what we do is we take the convolutional network output, at the last layer we have some volume, so we're going to have width by height by some depth, and we just take all of these and we essentially just stretch these out, right. And so now we're going to get the same kind of, you know, basically 1D input that we're used to for a vanilla neural network, and then we're going to apply this fully connected layer on top, so now we're going to have connections to every one of these convolutional map outputs. And so what you can think of this is basically, now instead of preserving, you know, before we were preserving spatial structure, right, and so but at the last layer at the end, we want to aggregate all of this together and we want to reason basically on top of all of this as we had before. And so what you get from that is just our score outputs as we had earlier. Okay, so-- - [Student] This is sort of a silly question about this visual. Like what are the 16 pixels that are on the far right, like what should be interpreting those as? - Okay, so the question is, what are the 16 pixels that are on the far right, do you mean the-- - [Student] Like that column of-- - [Instructor] Oh, each column. - [Student] The column on the far right, yeah. - [Instructor] The green ones or the black ones? - [Student] The ones labeled pool. - The one with hold on, pool. Oh, okay, yeah, so the question is how do we interpret this column, right, for example at pool. And so what we're showing here is each of these columns is the output activation maps, right, the output from one of these layers. And so starting from the beginning, we have our car, after the convolutional layer we now have these activation maps of each of the filters slid spatially over the input image. Then we pass that through a ReLU, so you can see the values coming out from there. And then going all the way over, and so what you get for the pooling layer is that it's really just taking the output of the ReLU layer that came just before it and then it's pooling it. So it's going to downsample it, right, and then it's going to take the max value in each filter location. 
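Going back to the fully connected step described above, here is a small sketch of stretching out the final volume and computing class scores; the sizes are assumptions for illustration.

import numpy as np

volume = np.random.randn(7, 7, 512)          # last conv/pool output, say 7x7x512
x = volume.reshape(-1)                       # stretch out into a 1-D vector (25088 values)
W = np.random.randn(10, x.size) * 0.01       # fully connected weights, 10 classes
b = np.zeros(10)
scores = W.dot(x) + b                        # one score per class
print(scores.shape)                          # (10,)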
And so now if you look at this pool layer output, like, for example, the last one that you were mentioning, it looks the same as this ReLU output except that it's downsampled and that it has this kind of max value at every spatial location and so that's the minor difference that you'll see between those two. [distant speaking] So the question is, now this looks like just a very small amount of information, right, so how can it know to classify it from here? And so the way that you should think about this is that each of these values inside one of these pool outputs is actually, it's the accumulation of all the processing that you've done throughout this entire network, right. So it's at the very top of your hierarchy, and so each actually represents kind of a higher level concept. So we saw before, you know, for example, Hubel and Wiesel and building up these hierarchical filters, where at the bottom level we're looking for edges, right, or things like very simple structures, like edges. And so after your convolutional layer the outputs that you see here in this first column is basically how much do specific, for example, edges, fire at different locations in the image. But then as you go through you're going to get more complex, it's looking for more complex things, right, and so the next convolutional layer is going to fire at how much, you know, let's say certain kinds of corners show up in the image, right, because it's reasoning. Its input is not the original image, its input is the output, it's already the edge maps, right, so it's reasoning on top of edge maps, and so that allows it to get more complex, detect more complex things. And so by the time you get all the way up to this last pooling layer, each value is representing how much a relatively complex sort of template is firing. Right, and so because of that now you can just have a fully connected layer, you're just aggregating all of this information together to get, you know, a score for your class. So each of these values is how much a pretty complicated complex concept is firing. Question. [faint speaking] So the question is, when do you know you've done enough pooling to do the classification? And the answer is you just try and see. So in practice, you know, these are all design choices and you can think about this a little bit intuitively, right, like you want to pool but if you pool too much you're going to have very few values representing your entire image and so on, so it's just kind of a trade off. Something reasonable versus people have tried a lot of different configurations so you'll probably cross validate, right, and try over different pooling sizes, different filter sizes, different number of layers, and see what works best for your problem because yeah, like every problem with different data is going to, you know, different set of these sorts of hyperparameters might work best. Okay, so last thing, just wanted to point you guys to this demo of training a ConvNet, which was created by Andre Karpathy, the originator of this class. And so he wrote up this demo where you can basically train a ConvNet on CIFAR-10, the dataset that we've seen before, right, with 10 classes. And what's nice about this demo is you can, it basically plots for you what each of these filters look like, what the activation maps look like. So some of the images I showed earlier were taken from this demo. 
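For reference, a small CIFAR-10 ConvNet of the kind that demo trains might look roughly like the following in PyTorch; this is a sketch under assumptions, not the exact demo architecture.

import torch.nn as nn

model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(2),                       # 32x32 -> 16x16
    nn.Conv2d(16, 20, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(2),                       # 16x16 -> 8x8
    nn.Conv2d(20, 20, kernel_size=5, padding=2), nn.ReLU(),
    nn.MaxPool2d(2),                       # 8x8 -> 4x4
    nn.Flatten(),
    nn.Linear(20 * 4 * 4, 10),             # 10 CIFAR-10 class scores
)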
And so you can go try it out, play around with it, and you know, just go through and try and get a sense for what these activation maps look like. And just one thing to note, usually the first layer activation maps are, you can interpret them, right, because they're operating directly on the input image so you can see what these templates mean. As you get to higher level layers it starts getting really hard, like how do you actually interpret what do these mean. So for the most part it's just hard to interpret so you shouldn't, you know, don't worry if you can't really make sense of what's going on. But it's still nice just to see the entire flow and what outputs are coming out. Okay, so in summary, so today we talked about how convolutional neural networks work, how they're basically stacks of these convolutional and pooling layers followed by fully connected layers at the end. There's been a trend towards having smaller filters and deeper architectures, so we'll talk more about case studies for some of these later on. There's also been a trend towards getting rid of these pooling and fully connected layers entirely. So just keeping these, just having, you know, Conv layers, very deep networks of Conv layers, so again we'll discuss all of this later on. And then typical architectures again look like this, you know, as we had earlier. Conv, ReLU for some N number of steps followed by a pool every once in a while, this whole thing repeated some number of times, and then followed by fully connected ReLU layers that we saw earlier, you know, one or two or just a few of these, and then a softmax at the end for your class scores. And so, you know, some typical values you might have N up to five of these. You're going to have pretty deep layers of Conv, ReLU, pool sequences, and then usually just a couple of these fully connected layers at the end. But we'll also go into some newer architectures like ResNet and GoogLeNet, which challenge this and will give pretty different types of architectures. Okay, thank you and see you guys next time.
- Okay we have a lot to cover today so let's get started. Today we'll be talking about Generative Models. And before we start, a few administrative details. So midterm grades will be released on Gradescope this week A reminder that A3 is due next Friday May 26th. The HyperQuest deadline for extra credit you can do this still until Sunday May 21st. And our poster session is June 6th from 12 to 3 P.M.. Okay so an overview of what we're going to talk about today we're going to switch gears a little bit and take a look at unsupervised learning today. And in particular we're going to talk about generative models which is a type of unsupervised learning. And we'll look at three types of generative models. So pixelRNNs and pixelCNNs variational autoencoders and Generative Adversarial networks. So so far in this class we've talked a lot about supervised learning and different kinds of supervised learning problems. So in the supervised learning set up we have our data X and then we have some labels Y. And our goal is to learn a function that's mapping from our data X to our labels Y. And these labels can take many different types of forms. So for example, we've looked at classification where our input is an image and we want to output Y, a class label for the category. We've talked about object detection where now our input is still an image but here we want to output the bounding boxes of instances of up to multiple dogs or cats. We've talked about semantic segmentation where here we have a label for every pixel the category that every pixel belongs to. And we've also talked about image captioning where here our label is now a sentence and so it's now in the form of natural language. So unsupervised learning in this set up, it's a type of learning where here we have unlabeled training data and our goal now is to learn some underlying hidden structure of the data. Right, so an example of this can be something like clustering which you guys might have seen before where here the goal is to find groups within the data that are similar through some type of metric. For example, K means clustering. Another example of an unsupervised learning task is a dimensionality reduction. So in this problem want to find axes along which our training data has the most variation, and so these axes are part of the underlying structure of the data. And then we can use this to reduce of dimensionality of the data such that the data has significant variation among each of the remaining dimensions. Right, so this example here we start off with data in three dimensions and we're going to find two axes of variation in this case and reduce our data projected down to 2D. Another example of unsupervised learning is learning feature representations for data. We've seen how to do this in supervised ways before where we used the supervised loss, for example classification. Where we have the classification label. We have something like a Softmax loss And we can train a neural network where we can interpret activations for example our FC7 layers as some kind of future representation for the data. And in an unsupervised setting, for example here autoencoders which we'll talk more about later In this case our loss is now trying to reconstruct the input data to basically, you have a good reconstruction of our input data and use this to learn features. So we're learning a feature representation without using any additional external labels. 
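As a brief aside, the two examples mentioned, clustering and dimensionality reduction, can be sketched in a few lines; scikit-learn is an assumption here, the lecture doesn't name a library.

import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

X = np.random.randn(500, 3)                  # unlabeled data, no Y anywhere

clusters = KMeans(n_clusters=4, n_init=10).fit_predict(X)   # group similar points
X_2d = PCA(n_components=2).fit_transform(X)  # project onto 2 axes of most variation
print(clusters.shape, X_2d.shape)            # (500,) (500, 2)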
And finally another example of unsupervised learning is density estimation where in this case we want to estimate the underlying distribution of our data. So for example in this top case over here, we have points in 1-d and we can try and fit a Gaussian into this density and in this bottom example over here it's 2D data and here again we're trying to estimate the density and we can model this density. We want to fit a model such that the density is higher where there's more points concentrated. And so to summarize the differences in unsupervised learning which we've looked a lot so far, we want to use label data to learn a function mapping from X to Y and an unsupervised learning we use no labels and instead we try to learn some underlying hidden structure of the data, whether this is grouping, acts as a variation or underlying density estimation. And unsupervised learning is a huge and really exciting area of research and and some of the reasons are that training data is really cheap, it doesn't use labels so we're able to learn from a lot of data at one time and basically utilize a lot more data than if we required annotating or finding labels for data. And unsupervised learning is still relatively unsolved research area by comparison. There's a lot of open problems in this, but it also, it holds the potential of if you're able to successfully learn and represent a lot of the underlying structure in the data then this also takes you a long way towards the Holy Grail of trying to understand the structure of the visual world. So that's a little bit of kind of a high-level big picture view of unsupervised learning. And today will focus more specifically on generative models which is a class of models for unsupervised learning where given training data our goal is to try and generate new samples from the same distribution. Right, so we have training data over here generated from some distribution P data and we want to learn a model, P model to generate samples from the same distribution and so we want to learn P model to be similar to P data. And generative models address density estimations. So this problem that we saw earlier of trying to estimate the underlying distribution of your training data which is a core problem in unsupervised learning. And we'll see that there's several flavors of this. We can use generative models to do explicit density estimation where we're going to explicitly define and solve for our P model or we can also do implicit density estimation where in this case we'll learn a model that can produce samples from P model without explicitly defining it. So, why do we care about generative models? Why is this a really interesting core problem in unsupervised learning? Well there's a lot of things that we can do with generative models. If we're able to create realistic samples from the data distributions that we want we can do really cool things with this, right? We can generate just beautiful samples to start with so on the left you can see a completely new samples of just generated by these generative models. Also in the center here generated samples of images we can also do tasks like super resolution, colorization so hallucinating or filling in these edges with generated ideas of colors and what the purse should look like. We can also use generative models of time series data for simulation and planning and so this will be useful in for reinforcement learning applications which we'll talk a bit more about reinforcement learning in a later lecture. 
And training generative models can also enable inference of latent representations. Learning latent features that can be useful as general features for downstream tasks. So if we look at types of generative models these can be organized into the taxonomy here where we have these two major branches that we talked about, explicit density models and implicit density models. And then we can also get down into many of these other sub categories. And well we can refer to this figure is adapted from a tutorial on GANs from Ian Goodfellow and so if you're interested in some of these different taxonomy and categorizations of generative models this is a good resource that you can take a look at. But today we're going to discuss three of the most popular types of generative models that are in use and in research today. And so we'll talk first briefly about pixelRNNs and CNNs And then we'll talk about variational autoencoders. These are both types of explicit density models. One that's using a tractable density and another that's using an approximate density And then we'll talk about generative adversarial networks, GANs which are a type of implicit density estimation. So let's first talk about pixelRNNs and CNNs. So these are a type of fully visible belief networks which are modeling a density explicitly so in this case what they do is we have this image data X that we have and we want to model the probability or likelihood of this image P of X. Right and so in this case, for these kinds of models, we use the chain rule to decompose this likelihood into a product of one dimensional distribution. So we have here the probability of each pixel X I conditioned on all previous pixels X1 through XI - 1. and your likelihood all right, your joint likelihood of all the pixels in your image is going to be the product of all of these pixels together, all of these likelihoods together. And then once we define this likelihood, in order to train this model we can just maximize the likelihood of our training data under this defined density. So if we look at this this distribution over pixel values right, we have this P of XI given all the previous pixel values, well this is a really complex distribution. So how can we model this? Well we've seen before that if we want to have complex transformations we can do these using neural networks. Neural networks are a good way to express complex transformations. And so what we'll do is we'll use a neural network to express this complex function that we have of the distribution. And one thing you'll see here is that, okay even if we're going to use a neural network for this another thing we have to take care of is how do we order the pixels. Right, I said here that we have a distribution for P of XI given all previous pixels but what does all previous the pixels mean? So we'll take a look at that. So PixelRNN was a model proposed in 2016 that basically defines a way for setting up and optimizing this problem and so how this model works is that we're going to generate pixels starting in a corner of the image. So we can look at this grid as basically the pixels of your image and so what we're going to do is start from the pixel in the upper left-hand corner and then we're going to sequentially generate pixels based on these connections from the arrows that you can see here. And each of the dependencies on the previous pixels in this ordering is going to be modeled using an RNN or more specifically an LSTM which we've seen before in lecture. 
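Written out, the chain-rule decomposition being described is, in LaTeX notation:

p_\theta(x) = \prod_{i=1}^{n} p_\theta(x_i \mid x_1, \ldots, x_{i-1})

and training maximizes \sum_i \log p_\theta(x_i \mid x_1, \ldots, x_{i-1}) over the training images, with the product ordered by the pixel ordering just described.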
Right, so using this we can basically continue to move forward, just moving down along this diagonal and generating all of these pixel values dependent on the pixels that they're connected to. And so this works really well, but the drawback here is that this is sequential generation, right, so it's actually quite slow to do this. You can imagine, you know, if you're going to generate a new image, instead of the feed-forward networks that we've seen with CNNs, here we're going to have to iteratively go through and generate all of these pixels. So a little bit later, after pixelRNN, another model called pixelCNN was introduced. And this has a very similar setup as pixelRNN, and we're still going to do this image generation starting from the corner of the image and expanding outwards, but the difference now is that instead of using an RNN to model all these dependencies we're going to use a CNN instead. And we're now going to use a CNN over a context region that you can see here around the particular pixel that we're going to generate. Right, so we take the pixels around it, this gray area within the region that's already been generated, and then we can pass this through a CNN and use that to generate our next pixel value. And so what this is going to give us is a CNN, a neural network, at each pixel location, right, and the output of this is going to be a softmax over the pixel values here, in this case 0 to 255, and then we can train this by maximizing the likelihood of the training images. Right, so we say that basically we want to take a training image, we're going to do this generation process, and at each pixel location we have the ground truth training data image value that we have here, and this is basically the label, the classification label that we want our pixel to be, which of these 256 values, and we can train this using a Softmax loss. Right, and so basically the effect of doing this is that we're going to maximize the likelihood of our training data pixels being generated. Okay any questions about this? Yes. [student's words obscured due to lack of microphone] Yeah, so the question is, I thought we were talking about unsupervised learning, why do we have basically a classification label here? The reason is that this label, this output that we have, is the value of the input training data. So we have no external labels, right? We didn't go and have to manually collect any labels for this, we're just taking our input data and saying that this is what we use for the loss function. [student's words obscured due to lack of microphone] The question is, is this like bag of words? I would say it's not really bag of words, it's more saying that we're outputting a distribution over pixel values at each location of our image, right, and what we want to do is maximize the likelihood of our input, our training data, being produced, being generated. Right so, in that sense, this is why it's using our input data to create our loss.
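As a sketch of that per-pixel objective: the loss at one pixel is the negative log-likelihood of the true training intensity under a 256-way softmax, and the total loss sums this over all pixels and all training images. The network that produces the logits is omitted here.

import numpy as np

def pixel_nll(logits, target):
    # logits: (256,) scores for one pixel; target: true intensity in 0..255
    log_probs = logits - logits.max() - np.log(np.sum(np.exp(logits - logits.max())))
    return -log_probs[target]

logits = np.random.randn(256)
print(pixel_nll(logits, target=173))   # summed over all pixels during training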
So using pixelCNN training is faster than pixelRNN because here now right at every pixel location we want to maximize the value of our, we want to maximize the likelihood of our training data showing up and so we have all of these values already right, just from our training data and so we can do this much faster but a generation time for a test time we want to generate a completely new image right, just starting from the corner and we're not, we're not trying to do any type of learning so in that generation time we still have to generate each of these pixel locations before we can generate the next location. And so generation time here it still slow even though training time is faster. Question. [student's words obscured due to lack of microphone] So the question is, is this training a sensitive distribution to what you pick for the first pixel? Yeah, so it is dependent on what you have as the initial pixel distribution and then everything is conditioned based on that. So again, how do you pick this distribution? So at training time you have these distributions from your training data and then at generation time you can just initialize this with either uniform or from your training data, however you want. And then once you have that everything else is conditioned based on that. Question. [student's words obscured due to lack of microphone] Yeah so the question is is there a way that we define this in this chain rule fashion instead of predicting all the pixels at one time? And so we'll see, we'll see models later that do do this, but what the chain rule allows us to do is it allows us to find this very tractable density that we can then basically optimize and do, directly optimizes likelihood Okay so these are some examples of generations from this model and so here on the left you can see generations where the training data is CIFAR-10, CIFAR-10 dataset. And so you can see that in general they are starting to capture statistics of natural images. You can see general types of blobs and kind of things that look like parts of natural images coming out. On the right here it's ImageNet, we can again see samples from here and these are starting to look like natural images but they're still not, there's still room for improvement. You can still see that there are differences obviously with regional training images and some of the semantics are not clear in here. So, to summarize this, pixelRNNs and CNNs allow you to explicitly compute likelihood P of X. It's an explicit density that we can optimize. And being able to do this also has another benefit of giving a good evaluation metric. You know you can kind of measure how good your samples are by this likelihood of the data that you can compute. And it's able to produce pretty good samples but it's still an active area of research and the main disadvantage of these methods is that the generation is sequential and so it can be pretty slow. And these kinds of methods have also been used for generating audio for example. And you can look online for some pretty interesting examples of this, but again the drawback is that it takes a long time to generate these samples. 
And so there's a lot of work, has been work since then on still on improving pixelCNN performance And so all kinds of different you know architecture changes add the loss function formulating this differently on different types of training tricks And so if you're interested in learning more about this you can look at some of these papers on PixelCNN and then other pixelCNN plus plus better improved version that came out this year. Okay so now we're going to talk about another type of generative models call variational autoencoders. And so far we saw that pixelCNNs defined a tractable density function, right, using this this definition and based on that we can optimize directly optimize the likelihood of the training data. So with variational autoencoders now we're going to define an intractable density function. We're now going to model this with an additional latent variable Z and we'll talk in more detail about how this looks. And so our data likelihood P of X is now basically has to be this integral right, taking the expectation over all possible values of Z. And so this now is going to be a problem. We'll see that we cannot optimize this directly. And so instead what we have to do is we have to derive and optimize a lower bound on the likelihood instead. Yeah, question. So the question is is what is Z? Z is a latent variable and I'll go through this in much more detail. So let's talk about some background first. Variational autoencoders are related to a type of unsupervised learning model called autoencoders. And so we'll talk little bit more first about autoencoders and what they are and then I'll explain how variational autoencoders are related and build off of this and allow you to generate data. So with autoencoders we don't use this to generate data, but it's an unsupervised approach for learning a lower dimensional feature representation from unlabeled training data. All right so in this case we have our input data X and then we're going to want to learn some features that we call Z. And then we'll have an encoder that's going to be a mapping, a function mapping from this input data to our feature Z. And this encoder can take many different forms right, they would generally use neural networks so originally these models have been around, autoencoders have been around for a long time. So in the 2000s we used linear layers of non-linearities, then later on we had fully connected deeper networks and then after that we moved on to using CNNs for these encoders. So we take our input data X and then we map this to some feature Z. And Z we usually have as, we usually specify this to be smaller than X and we perform basically dimensionality reduction because of that. So the question who has an idea of why do we want to do dimensionality reduction here? Why do we want Z to be smaller than X? Yeah. [student's words obscured due to lack of microphone] So the answer I heard is Z should represent the most important features in X and that's correct. So we want Z to be able to learn features that can capture meaningful factors of variation in the data. Right this makes them good features. So how can we learn this feature representation? Well the way autoencoders do this is that we train the model such that the features can be used to reconstruct our original data. So what we want is we want to have input data that we use an encoder to map it to some lower dimensional features Z. 
This is the output of the encoder network, and we want to be able to take these features that were produced based on this input data and then use a decoder a second network and be able to output now something of the same size dimensionality as X and have it be similar to X right so we want to be able to reconstruct the original data. And again for the decoder we are basically using same types of networks as encoders so it's usually a little bit symmetric and now we can use CNN networks for most of these. Okay so the process is going to be we're going to take our input data right we pass it through our encoder first which is going to be something for example like a four layer convolutional network and then we're going to pass it, get these features and then we're going to pass it through a decoder which is a four layer for example upconvolutional network and then get a reconstructed data out at the end of this. Right in the reason why we have a convolutional network for the encoder and an upconvolutional network for the decoder is because at the encoder we're basically taking it from this high dimensional input to these lower dimensional features and now we want to go the other way go from our low dimensional features back out to our high dimensional reconstructed input. And so in order to get this effect that we said we wanted before of being able to reconstruct our input data we'll use something like an L2 loss function. Right that basically just says let me make my pixels of my input data to be the same as my, my pixels in my reconstructed data to be the same as the pixels of my input data. An important thing to notice here, this relates back to a question that we had earlier, is that even though we have this loss function here, there's no, there's no external labels that are being used in training this. All we have is our training data that we're going to use both to pass through the network as well as to compute our loss function. So once we have this after training this model what we can do is we can throw away this decoder. All this was used was too to be able to produce our reconstruction input and be able to compute our loss function. And we can use the encoder that we have which produces our feature mapping and we can use this to initialize a supervised model. Right and so for example we can now go from this input to our features and then have an additional classifier network on top of this that now we can use to output a class label for example for classification problem we can have external labels from here and use our standard loss functions like Softmax. And so the value of this is that we basically were able to use a lot of unlabeled training data to try and learn good general feature representations. Right, and now we can use this to initialize a supervised learning problem where sometimes we don't have so much data we only have small data. And we've seen in previous homeworks and classes that with small data it's hard to learn a model, right? You can have over fitting and all kinds of problems and so this allows you to initialize your model first with better features. Okay so we saw that autoencoders are able to reconstruct data and are able to, as a result, learn features to initialize, that we can use to initialize a supervised model. And we saw that these features that we learned have this intuition of being able to capture factors of variation in the training data. 
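Here is a minimal PyTorch sketch of this autoencoder setup; the layer sizes are assumptions, and note that the loss only ever touches the input data itself, no labels.

import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784))

x = torch.randn(64, 784)            # a mini-batch of flattened input images
z = encoder(x)                      # lower-dimensional features
x_hat = decoder(z)                  # reconstruction of the input
loss = ((x_hat - x) ** 2).mean()    # L2 reconstruction loss against the input itself
loss.backward()                     # after training, keep the encoder, discard the decoder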
All right so based on this intuition of okay these, we can have this latent this vector Z which has factors of variation in our training data. Now a natural question is well can we use a similar type of setup to generate new images? And so now we will talk about variational autoencoders which is a probabillstic spin on autoencoders that will let us sample from the model in order to generate new data. Okay any questions on autoencoders first? Okay, so variational autoencoders. All right so here we assume that our training data that we have X I from one to N is generated from some underlying, unobserved latent representation Z. Right, so it's this intuition that Z is some vector right which element of Z is capturing how little or how much of some factor of variation that we have in our training data. Right so the intuition is, you know, maybe these could be something like different kinds of attributes. Let's say we're trying to generate faces, it could be how much of a smile is on the face, it could be position of the eyebrows hair orientation of the head. These are all possible types of latent factors that could be learned. Right, and so our generation process is that we're going to sample from a prior over Z. Right so for each of these attributes for example, you know, how much smile that there is, we can have a prior over what sort of distribution we think that there should be for this so, a gaussian is something that's a natural prior that we can use for each of these factors of Z and then we're going to generate our data X by sampling from a conditional, conditional distribution P of X given Z. So we sample Z first, we sample a value for each of these latent factors and then we'll use that and sample our image X from here. And so the true parameters of this generation process are theta, theta star right? So we have the parameters of our prior and our conditional distributions and what we want to do is in order to have a generative model be able to generate new data we want to estimate these parameters of our true parameters Okay so let's first talk about how should we represent this model. All right, so if we're going to have a model for this generator process, well we've already said before that we can choose our prior P of Z to be something simple. Something like a Gaussian, right? And this is the reasonable thing to choose for for latent attributes. Now for our conditional distribution P of X given Z this is much more complex right, because we need to use this to generate an image and so for P of X given Z, well as we saw before, when we have some type of complex function that we want to represent we can represent this with a neural network. And so that's a natural choice for let's try and model P of X given Z with a neural network. And we're going to call this the decoder network. Right, so we're going to think about taking some latent representation and trying to decode this into the image that it's specifying. So now how can we train this model? Right, we want to be able to train this model so that we can learn an estimate of these parameters. So if we remember our strategy from training generative models, back from are fully visible belief networks, our pixelRNNs and CNNs, a straightforward natural strategy is to try and learn these model parameters in order to maximize the likelihood of the training data. 
Right, so we saw earlier that in this case, with our latent variable Z, we're going to have to write out P of X taking expectation over all possible values of Z which is continuous and so we get this expression here. Right so now we have it with this latent Z and now if we're going to, if you want to try and maximize its likelihood, well what's the problem? Can we just take this take gradients and maximize this likelihood? [student's words obscured due to lack of microphone] Right, so this integral is not going to be tractable, that's correct. So let's take a look at this in a little bit more detail. Right, so we have our data likelihood term here. And the first time is P of Z. And here we already said earlier, we can just choose this to be a simple Gaussian prior, so this is fine. P of X given Z, well we said we were going to specify a decoder neural network. So given any Z, we can get P of X given Z from here. It's the output of our neural network. But then what's the problem here? Okay this was supposed to be a different unhappy face but somehow I don't know what happened, in the process of translation, it turned into a crying black ghost but what this is symbolizing is that basically if we want to compute P of X given Z for every Z this is now intractable right, we cannot compute this integral. So data likelihood is intractable and it turns out that if we look at other terms in this model if we look at our posterior density, So P of our posterior of Z given X, then this is going to be P of X given Z times P of Z over P of X by Bayes' rule and this is also going to be intractable, right. We have P of X given Z is okay, P of Z is okay, but we have this P of X our likelihood which has the integral and it's intractable. So we can't directly optimizes this. but we'll see that a solution, a solution that will enable us to learn this model is if in addition to using a decoder network defining this neural network to model P of X given Z. If we now define an additional encoder network Q of Z given X we're going to call this an encoder because we want to turn our input X into, get the likelihood of Z given X, we're going to encode this into Z. And defined this network to approximate the P of Z given X. Right this was posterior density term now is also intractable. If we use this additional network to approximate this then we'll see that this will actually allow us to derive a lower bound on the data likelihood that is tractable and which we can optimize. Okay so first just to be a little bit more concrete about these encoder and decoder networks that I specified, in variational autoencoders we want the model probabilistic generation of data. So in autoencoders we already talked about this concept of having an encoder going from input X to some feature Z and a decoder network going from Z back out to some image X. And so here we go to again have an encoder network and a decoder network but we're going to make these probabilistic. So now our encoder network Q of Z given X with parameters phi are going to output a mean and a diagonal covariance and from here, this will be the direct outputs of our encoder network and the same thing for our decoder network which is going to start from Z and now it's going to output the mean and the diagonal covariance of some X, same dimension as the input given Z And then this decoder network has different parameters theta. And now in order to actually get our Z and our, This should be Z given X and X given Z. We'll sample from these distributions. 
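Collecting the two intractable quantities just discussed, in LaTeX notation:

p_\theta(x) = \int p_\theta(z)\, p_\theta(x \mid z)\, dz, \qquad p_\theta(z \mid x) = \frac{p_\theta(x \mid z)\, p_\theta(z)}{p_\theta(x)}

The integral over all z cannot be computed, and the posterior inherits the same problem through p_\theta(x), which is why the encoder q_\phi(z \mid x) is introduced to approximate p_\theta(z \mid x).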
So now our encoder and our decoder network are producing distributions over Z and X respectively and will sample from this distribution in order to get a value from here. So you can see how this is taking us on the direction towards being able to sample and generate new data. And just one thing to note is that these encoder and decoder networks, you'll also hear different terms for them. The encoder network can also be kind of recognition or inference network because we're trying to form inference of this latent representation of Z given X and then for the decoder network, this is what we'll use to perform generation. Right so you also hear generation network being used. Okay so now equipped with our encoder and decoder networks, let's try and work out the data likelihood again. and we'll use the log of the data likelihood here. So we'll see that if we want the log of P of X right we can write this out as like a P of X but take the expectation with respect to Z. So Z samples from our distribution of Q of Z given X that we've now defined using the encoder network. And we can do this because P of X doesn't depend on Z. Right 'cause Z is not part of that. And so we'll see that taking the expectation with respect to Z is going to come in handy later on. Okay so now from this original expression we can now expand it out to be log of P of X given Z, P of Z over P of Z given X using Bayes' rule. And so this is just directly writing this out. And then taking this we can also now multiply it by a constant. Right, so Q of Z given X over Q of Z given X. This is one we can do this. It doesn't change it but it's going to be helpful later on. So given that what we'll do is we'll write it out into these three separate terms. And you can work out this math later on by yourself but it's essentially just using logarithm rules taking all of these terms that we had in the line above and just separating it out into these three different terms that will have nice meanings. Right so if we look at this, the first term that we get separated out is log of P given X and then expectation of log of P given X and then we're going to have two KL terms, right. This is basically KL divergence term to say how close these two distributions are. So how close is a distribution Q of Z given X to P of Z. So it's just the, it's exactly this expectation term above. And it's just a distance metric for distributions. And so we'll see that, right, we saw that these are nice KL terms that we can write out. And now if we look at these three terms that we have here, the first term is P of X given Z, which is provided by our decoder network. And we're able to compute an estimate of these term through sampling and we'll see that we can do a sampling that's differentiable through something called the re-parametrization trick which is a detail that you can look at this paper if you're interested. But basically we can now compute this term. And then these KL terms, the second KL term is a KL between two Gaussians, so our Q of Z given X, remember our encoder produced this distribution which had a mean and a covariance, it was a nice Gaussian. And then also our prior P of Z which is also a Gaussian. And so this has a nice, when you have a KL of two Gaussians you have a nice closed form solution that you can have. And then this third KL term now, this is a KL of Q given X with a P of Z given X. But we know that P of Z given X was this intractable posterior that we saw earlier, right? 
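The differentiable sampling mentioned a moment ago, the reparameterization trick, can be sketched in a couple of lines; PyTorch is assumed and the dimensions are illustrative.

import torch

# Instead of sampling z directly from N(mu, sigma^2), sample eps ~ N(0, I)
# and compute z = mu + sigma * eps, so gradients flow back into mu and sigma.
mu, log_var = torch.randn(64, 32), torch.randn(64, 32)   # encoder outputs for a batch
eps = torch.randn_like(mu)
z = mu + torch.exp(0.5 * log_var) * eps                   # differentiable sample of z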
Now, one thing we do know about that third term is that a KL divergence, a distance between two distributions, is always greater than or equal to zero by definition. And so what we can do with this is the following: the two terms that we can work with nicely form a tractable lower bound, which we can actually take gradients of and optimize. P of X given Z is differentiable, and the KL term's closed-form solution is also differentiable. And this is a lower bound because we know that the KL term on the right, the ugly one, is greater than or equal to zero. So we have a lower bound. And so what we'll do to train a variational autoencoder is take this lower bound and optimize and maximize that instead. We're optimizing a lower bound on the likelihood of our data, which means our data will always have a likelihood at least as high as the lower bound that we're maximizing. And so we want to find the parameters theta and phi that maximize this bound. Then one last bit of intuition about this lower bound: the first term is an expectation, over samples of Z obtained by passing X through the encoder network and sampling, of the likelihood of X given Z, and so this is reconstruction. It's basically saying: if I want this to be big, I want the likelihood P of X given Z to be high, so it's like trying to do a good job of reconstructing the data, similar to what we had with our autoencoder before. The second term says make this KL small: make our approximate posterior distribution close to our prior distribution. This is basically saying that we want our latent variables Z to follow the distribution shape that we would like them to have. Okay, so any questions about this? I think this is a lot of math that, if you're interested, you should go back and work through the derivations yourself. Yeah. [student's words obscured due to lack of microphone] So the question is, why do we specify the prior and the latent variables as Gaussian? And the reason is that we're defining some sort of generative process, of sampling Z first and then sampling X given Z, and defining it as a Gaussian is a reasonable type of prior: it makes sense for these types of latent attributes to be distributed according to something like a Gaussian, and it then lets us optimize our model. Okay, so we talked about how we can derive this lower bound, and now let's put this all together and walk through the training process of the VAE. So here's the bound that we want to optimize, to maximize. Now, for a forward pass, we proceed in the following manner. We have our input data X, so we'll take a mini-batch of input data. We pass it through our encoder network, so we get Q of Z given X, and from this Q of Z given X we get the terms that we use to compute the KL term. Then from here we sample Z from this distribution of Z given X, so we have a sample of the latent factors inferred from X. And then from here we pass Z through our second network, the decoder network.
And from the decoder network we get the output for the mean and variance of our distribution over X given Z, and then finally we can sample X given Z from this distribution, which produces a sample output. When we're training, we take this distribution and say that our loss term is the log likelihood of our training image's pixel values given Z. So our loss function says: let's maximize the likelihood of the original input being reconstructed. And so for every mini-batch of inputs we compute this forward pass, get all the terms that we need, and then, since this is all differentiable, we just backprop through all of it, get our gradients, and use them to keep updating our encoder and decoder network parameters, phi and theta, in order to maximize the likelihood of the training data. Okay, so once we've trained our VAE, to generate data we can use just the decoder network. So from here we sample Z, but now, instead of sampling Z from the posterior that we used during training, at generation time we sample from our true generative process: we sample from the prior that we specified. And then we sample our data X from there. And we'll see that this can produce, in this case trained on MNIST, these samples of digits generated from a VAE. And you can see that, you know, we talked about this idea of Z representing latent factors, and we can vary Z by sampling from different parts of our prior and then get different, kind of interpretable, meanings from it. So here you can see the data manifold for a two-dimensional Z: if we have a two-dimensional Z and we take each coordinate over some range, say different percentiles of the distribution, and we vary Z1 and we vary Z2, then you can see that the image generated from every combination of Z1 and Z2 transitions smoothly across all of these variations. And our prior on Z was diagonal, so we chose this in order to encourage independent latent variables that can then encode interpretable factors of variation. Because of this, different dimensions of Z end up encoding different interpretable factors of variation. So in this example, trained now on faces, we'll see that as we vary Z1, going up and down, the amount of smile changes, from a frown at the top to a big smile at the bottom, and as we vary Z2 from left to right, the head pose changes, from one direction all the way to the other. And one additional thing I want to point out is that, as a result of this, these Z variables are also good feature representations, because they encode how much of each of these interpretable semantics is present. So we can use our Q of Z given X, the encoder that we've learned, give it input images X, map them to Z, and use the Z values as features for downstream tasks like classification or other supervised tasks. Okay, so here are just another couple of examples of data generated from VAEs. On the left we have data trained and generated on CIFAR-10, and on the right we have data trained and generated on faces. And we can see that in general VAEs are able to generate recognizable data.
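Pulling the training and generation procedures together, here is a rough sketch of what a single training step and post-training sampling could look like. It reuses the `reparameterize` and `kl_to_standard_normal` helpers sketched earlier, and it makes one simplifying assumption of my own: the decoder here is taken to output Bernoulli pixel probabilities (a common choice for binarized MNIST), so the reconstruction term becomes a binary cross-entropy rather than a Gaussian log-likelihood.

```python
import torch
import torch.nn.functional as F

def vae_loss(x, encoder, decoder):
    """Negative variational lower bound for a mini-batch of flattened images x."""
    mu, logvar = encoder(x)              # parameters of q(z|x)
    z = reparameterize(mu, logvar)       # differentiable sample of z
    x_prob = decoder(z)                  # assumed: per-pixel Bernoulli probabilities in [0, 1]

    # Reconstruction term E_q[log p(x|z)], estimated with the single sample z.
    recon = -F.binary_cross_entropy(x_prob, x, reduction='none').sum(dim=1)

    # KL between q(z|x) and the N(0, I) prior (closed form from before).
    kl = kl_to_standard_normal(mu, logvar)

    elbo = recon - kl                    # the tractable lower bound
    return -elbo.mean()                  # minimizing -ELBO maximizes the bound

def generate(decoder, num_samples=16, z_dim=20):
    """After training: sample z from the prior and decode, using only the decoder."""
    with torch.no_grad():
        z = torch.randn(num_samples, z_dim)   # z ~ N(0, I), the prior
        return decoder(z)                     # per-pixel probabilities for new images
```

The Z1/Z2 sweeps in the manifold figure are the same idea, except that instead of sampling Z you walk each coordinate over a grid of values and decode every point.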
One of the main drawbacks of VAEs is that the samples tend to still have a bit of a blurry aspect to them. You can see this in the faces, and so this is still an active area of research. Okay, so to summarize, VAEs are a probabilistic spin on traditional autoencoders. Instead of deterministically taking your input X, going to a feature Z, and then reconstructing X, we now have this idea of distributions and sampling involved, which allows us to generate data. And in order to train this, since VAEs define an intractable density, we derive and optimize a lower bound, a variational lower bound; "variational" basically means using approximations to handle these types of intractable expressions, and that's why this is called a variational autoencoder. Some of the advantages of this approach are that VAEs are a principled approach to generative models, and they also allow inference queries, being able to infer things like Q of Z given X, which, as we said, can be useful feature representations for other tasks. The disadvantages of VAEs are that we're maximizing a lower bound on the likelihood, which is okay; in general this is still pushing us in the right direction, and there is further theoretical analysis of this, but it's maybe not as direct an optimization and evaluation as the pixelRNNs and pixelCNNs that we saw earlier. And also, VAE samples tend to be a bit blurrier and of lower quality compared to state-of-the-art samples from other generative models such as GANs, which we'll talk about next. VAEs are still an active area of research. People are working on more flexible approximations, richer approximate posteriors, so instead of just a diagonal Gaussian, richer functions for this; and another area people have been working on is incorporating more structure in the latent variables. So where we had all of these independent latent variables, people are working on modeling structure in there: groupings and other types of structure. Okay, so yeah, question. [student's words obscured due to lack of microphone] So the question is how we decide the dimensionality of the latent variable; yeah, that's something that you specify. Okay, so we've talked so far about pixelCNNs and VAEs, and now we'll take a look at a third and very popular type of generative model called GANs. The models that we've seen so far, pixelCNNs and pixelRNNs, define a tractable density function, and they optimize the likelihood of the training data. VAEs, in contrast, have this additional latent variable Z that they define in the generative process, and having the Z gives a lot of nice properties that we talked about, but it also means we have an intractable density function that we can't optimize directly, so we derive and optimize a lower bound on the likelihood instead. So now, what if we just give up on explicitly modeling this density at all, and say that what we want is just the ability to sample, to get nice samples from our distribution? That is the approach that GANs take. In GANs we don't work with an explicit density function; instead we take a game-theoretic approach, and we learn to generate from the training distribution through the setup of a two-player game, which we'll talk about in more detail.
So, in the GAN setup we're saying, okay, what we care about is being able to sample from a complex, high-dimensional training distribution. If we think about producing samples from this distribution, there's no direct way that we can do it: it's a very complex distribution, and we can't just take samples from it. The solution we're going to take is that we can, however, sample from simpler distributions, for example random noise; Gaussians we can sample from. So what we're going to do is learn a transformation from these simple distributions directly to the training distribution that we want. And the question is, what can we use to represent this complex transformation? A neural network, I heard the answer. When we want to model some kind of complex function or transformation, we use a neural network. Okay, so in the GAN setup we're going to take as input a vector of random noise, of some dimension that we specify, pass it through a generator network, and get as output, directly, a sample from the training distribution. So we want every input of random noise to correspond to a sample from the training distribution. And the way we're going to train and learn this network is to look at it as a two-player game. We have two players: a generator network, and an additional discriminator network that I'll show next. Our generator network, as player one, is going to try to fool the discriminator by generating real-looking images. And our second player, the discriminator network, is going to try to distinguish between real and fake images. It wants to do as good a job as possible of determining which of these images are counterfeit, fake images generated by the generator. Okay, so what this looks like is: we have our random noise going into our generator network, the generator network generates images that we call fake, and we also have real images taken from our training set, and we want the discriminator to distinguish between real and fake images, outputting real or fake for each image. The idea is that if we're able to train a very good discriminator, one that does a good job of discriminating real versus fake, and our generator network is still able to generate fake images that successfully fool this discriminator, then we have a good generative model: we're generating images that look like images from the training set. Okay, so we have these two players, and we're going to train them jointly in a minimax game formulation. This minimax objective function is what we have here: it's a minimum over theta G, the parameters of our generator network G, and a maximum over theta D, the parameters of our discriminator network D, of this objective, these terms. And if we look at these terms, the first one is an expectation over the data of log of D of X. This D of X is the discriminator output for real data X, so this term is the likelihood of real data being classified as real, for data drawn from the data distribution P data.
And then in the second term, the expectation is over Z drawn from P of Z, our noise prior, and this D of G of Z that we have here is the output of our discriminator on the generated fake data G of Z: what the discriminator says about our fake data. So if we think about what this is doing: our discriminator wants to maximize this objective, a max over theta D, such that D of X is close to one, so it's high for the real data, and D of G of Z, what it thinks of the fake data, is small, close to zero. If we're able to maximize this, it means the discriminator is doing a good job of distinguishing, basically classifying, between real and fake data. And then our generator: we want the generator to minimize this objective such that D of G of Z is close to one. If D of G of Z is close to one, then one minus D of G of Z is small, and by minimizing that term we're making the discriminator think that our fake data is actually real, which means the generator is producing realistic samples. Okay, so this is the important objective of GANs to try and understand, so are there any questions about this? [student's words obscured due to lack of microphone] I'm not sure I understand your question, can you, [student's words obscured due to lack of microphone] Yeah, so the question is whether this is basically trying to have the first network produce real-looking images that our second network, the discriminator, cannot distinguish from real ones. Okay, so the question is how we actually label the data or do the training for these networks. We'll see how to train the networks next, but in terms of what the data labels are: this is unsupervised, so there's no data labeling. Data generated from the generator network, the fake images, have a label of basically zero, or fake, and the training images, which are real images, have a label of one, or real. So the loss function for our discriminator uses this: it's trying to output a zero for the generator images and a one for the real images. There are no external labels. [student's words obscured due to lack of microphone] So the question is whether the label for the generator network is the output of the discriminator network. The generator is not really doing classification. Its objective is this term here, D of G of Z: it wants this to be high. So given a fixed discriminator, it wants to learn the generator parameters such that this is high, and we take the fixed discriminator's output and use that to do the backprop. Okay, so in order to train this, we alternate between gradient ascent on our discriminator, learning theta D to maximize this objective, and gradient descent on the generator, taking gradient steps on the parameters theta G so that we're minimizing this objective. And here we only take the right part of the objective, because that's the only part that depends on the theta G parameters. Okay, so this is how we can train this GAN: we alternate between training our discriminator and our generator in this game, the generator trying to fool the discriminator.
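Written out, the objective is: min over theta G of max over theta D of E over x from p_data of log D(x), plus E over z from p(z) of log(1 - D(G(z))). As a rough code sketch of the two per-mini-batch loss terms (my own illustration, assuming D outputs a probability in [0, 1]; the generator loss shown here is the original minimax form, which we'll revisit in a moment):

```python
import torch

def discriminator_loss(D, G, x_real, z):
    # D wants D(x_real) -> 1 and D(G(z)) -> 0, i.e. it maximizes
    # log D(x) + log(1 - D(G(z))); we minimize the negative of that.
    eps = 1e-8  # numerical safety inside the logs
    d_real = D(x_real)
    d_fake = D(G(z).detach())   # don't backprop into G on the discriminator step
    return -(torch.log(d_real + eps) + torch.log(1 - d_fake + eps)).mean()

def generator_loss_minimax(D, G, z):
    # The original minimax generator objective: minimize log(1 - D(G(z))).
    eps = 1e-8
    return torch.log(1 - D(G(z)) + eps).mean()
```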
But one thing that's important to note is that, in practice, this generator objective as we've just defined it actually doesn't work that well, and to see why we have to look at the loss landscape. If we look at the loss landscape of log of one minus D of G of Z, which is what we want to minimize for the generator, it has this shape here. We want to minimize this, and it turns out the slope of this loss is actually higher towards the right, high when D of G of Z is closer to one. That means that when our generator is already doing a good job of fooling the discriminator, we get large gradients. On the other hand, when we have bad samples, when our generator hasn't learned to generate well yet and the discriminator can easily tell, so we're closer to the zero region on the X axis, the gradient is relatively flat. What this means is that our gradient signal is dominated by the region where the samples are already pretty good, whereas we actually want to learn a lot when the samples are bad; those are the cases we want to learn from. So this makes it hard to learn, and in order to improve learning, what we do is define a slightly different objective function for the generator, where we now do gradient ascent instead. So instead of minimizing the likelihood of our discriminator being correct, which is what we had earlier, we flip it and maximize the likelihood of our discriminator being wrong. This produces the objective here of maximizing log of D of G of Z (there should be a negative sign here on the slide), but basically we now want to maximize this flipped objective instead. And what this does is, if we plot this function on the right, we now have a high gradient signal in the region on the left where we have bad samples, and the flatter region is on the right where we have good samples. So now we learn more from regions of bad samples. This has the same goal of fooling the discriminator, but it works much better in practice, and a lot of work on GANs that uses this kind of vanilla GAN formulation actually uses this objective. Okay, so just an aside on that: jointly training these two networks is challenging and can be unstable. As we saw here, we're alternating between training a discriminator and training a generator; it's hard to learn two networks at once, and there's also the issue that what our loss landscape looks like can affect our training dynamics. So an active area of research is still how to choose objectives with better loss landscapes that help training and make it more stable. Okay, so now let's put this all together and look at the full GAN training algorithm. What we're going to do is, for each iteration of training, first train the discriminator network a bit and then train the generator network. So for k steps of training the discriminator network, we sample a mini-batch of noise samples from our noise prior and also sample a mini-batch of real samples from our training data X. We pass the noise through our generator and get our fake images out, so we have a mini-batch of fake images and a mini-batch of real images. Then we take a gradient step on the discriminator using this mini-batch, our fake and our real images, and update our discriminator parameters, and we do this for a certain number of iterations to train the discriminator for a bit. After that we go to the second step, which is training the generator: here we sample just a mini-batch of noise samples, pass it through our generator, and then do backprop to optimize the generator objective we saw earlier, so that our generator fools our discriminator as much as possible. And so we alternate between these two steps, taking gradient steps for the discriminator and for the generator.
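Here is a rough sketch of that procedure in code, pulling together the flipped, non-saturating generator objective and the alternating loop. It is a sketch under assumptions of my own: it reuses the `discriminator_loss` from the earlier example, assumes the data loader yields mini-batches of images without labels, and assumes `opt_D` and `opt_G` are two separate optimizers; `k` is the hyperparameter discussed next.

```python
import torch

def generator_loss_nonsaturating(D, G, z):
    # Flipped objective: maximize log D(G(z)), i.e. minimize -log D(G(z)).
    # Gradients are now large when D(G(z)) is near 0 (bad samples), which is
    # where we want the generator to learn the most.
    eps = 1e-8
    return -torch.log(D(G(z)) + eps).mean()

def train_gan(D, G, data_loader, opt_D, opt_G, z_dim=100, k=1, num_epochs=10):
    # Alternate k discriminator steps with one generator step per mini-batch.
    for epoch in range(num_epochs):
        for x_real in data_loader:                            # real images, no labels needed
            for _ in range(k):
                z = torch.randn(x_real.size(0), z_dim)        # mini-batch of noise
                loss_D = discriminator_loss(D, G, x_real, z)  # sketched earlier
                opt_D.zero_grad()
                loss_D.backward()
                opt_D.step()

            z = torch.randn(x_real.size(0), z_dim)
            loss_G = generator_loss_nonsaturating(D, G, z)
            opt_G.zero_grad()
            loss_G.backward()
            opt_G.step()
```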
I said k steps of training the discriminator up here, and this is kind of a topic of debate: some people think just having one step of the discriminator and one step of the generator is best; some people think it's better to train the discriminator a little bit longer before switching to the generator. There's no real clear rule, and people have found different things to work better depending on the problem. One thing I want to point out is that there's been a lot of recent work that alleviates this problem, so that you don't have to spend so much effort trying to balance the training of these two networks; it gives more stable training and better results, and Wasserstein GAN is an example of a paper that was an important step in this direction. Okay, so looking at the whole picture: we have our network setup, we've trained both our generator network and our discriminator network, and now, after training, for generation we can just take the generator network and use it to generate new images. We just take noise Z, pass it through, and generate fake images from there. Okay, so now let's look at some generated samples from these GANs. Here's an example trained on MNIST, and then on the right, on faces. For each of these, in the rightmost column you can also see, just for visualization, the nearest neighbor from the training set to the generated column right next to it. So you can see that we're able to generate very realistic samples, and the model isn't just directly memorizing the training set. And here are some examples from the original GAN paper on CIFAR images. These are not such good quality yet; the original work is from 2014, so these are older, simpler models, using simple fully connected networks. Since that time there's been a lot of work on improving GANs. One example of work that took a big step towards improving the quality of samples is this work from Alec Radford in ICLR 2016 on adding convolutional architectures to GANs. In this paper there was a whole set of guidelines on architectures for helping GANs produce better samples, so you can look at it for more details. This is an example of the convolutional architecture they used, going from the input noise vector Z and transforming it all the way up to the output sample.
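As a rough idea of what a convolutional generator of that flavor looks like, here is a sketch close in spirit to that family of architectures (my own simplification; the paper's exact guidelines on layer counts, channel widths, and normalization are what you'd actually follow). The noise vector is reshaped into a 1x1 spatial map and repeatedly upsampled with transposed convolutions until it becomes a 64x64 RGB image.

```python
import torch
import torch.nn as nn

class ConvGenerator(nn.Module):
    """Maps a noise vector z to a 64x64 RGB image via transposed convolutions."""
    def __init__(self, z_dim=100, ngf=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ngf * 8, 4, 1, 0), nn.BatchNorm2d(ngf * 8), nn.ReLU(True),   # 1x1 -> 4x4
            nn.ConvTranspose2d(ngf * 8, ngf * 4, 4, 2, 1), nn.BatchNorm2d(ngf * 4), nn.ReLU(True),  # 4x4 -> 8x8
            nn.ConvTranspose2d(ngf * 4, ngf * 2, 4, 2, 1), nn.BatchNorm2d(ngf * 2), nn.ReLU(True),  # 8x8 -> 16x16
            nn.ConvTranspose2d(ngf * 2, ngf, 4, 2, 1), nn.BatchNorm2d(ngf), nn.ReLU(True),          # 16x16 -> 32x32
            nn.ConvTranspose2d(ngf, 3, 4, 2, 1), nn.Tanh(),                                         # 32x32 -> 64x64, pixels in [-1, 1]
        )

    def forward(self, z):
        return self.net(z.view(z.size(0), -1, 1, 1))  # treat z as a 1x1 spatial map
```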
We'll see that the samples from this model really start to look very good. This one is trained on a dataset of bedrooms, and we can see all kinds of very realistic, fancy-looking bedrooms with windows and nightstands and other furniture around; these are some really pretty samples. We can also try to interpret a little bit of what these GANs are doing. In this example, what we can do is take two points Z, two different random noise vectors, and just interpolate between them. Each row across here is an interpolation from one random noise vector Z to another, and you can see that as Z changes, the image smoothly interpolates as well, all the way across. Something else we can do is analyze further what these Z vectors mean, and so we can try doing vector math on them. What this experiment does is say, okay, let's take some samples of smiling women, then some samples of neutral women, and also some samples of neutral men, and let's take the average of the Z vectors that produced each of these samples. If we take the mean vector for the smiling women, subtract the mean vector for the neutral women, and add the mean vector for the neutral men, what do we get? We get samples of smiling men: we can take the Z vector produced this way, generate samples, and get smiling men. And we can do another example of this: man with glasses, minus man without glasses, plus woman without glasses, and we get women with glasses. So you can see that Z has this type of interpretability that you can use to generate some pretty cool examples. Okay, so this year, 2017, has really been the year of the GAN. There's been tons and tons of work on GANs; it's really exploded and gotten some really cool results. On the left here you can see people working on better training and generation; we talked about improving the loss functions and more stable training, and this was able to produce really nice generations of different types of architectures, and on the bottom here, really crisp, high-resolution faces. With GANs you can also do source-to-target domain transfer and conditional GANs. Here is an example of source-to-target domain transfer where, for example, in the upper part we're trying to go from a source domain of horses to an output domain of zebras. So we can take an image of horses and train a GAN such that the output is the same scene but now with zebras in place of the horses, and we can go the other way around too. We can transform apples into oranges, and also the other way around. We can also use this to do photo enhancement: taking a standard photo and making it look as if you had a really nice, expensive camera, so that you get those nice blur effects. On the bottom here we have scene changing, transforming an image of Yosemite from winter to summer. And there are really tons of applications. On the right here there are more: there's also going from a text description, having a GAN that's conditioned on this text description, and producing an image. So there's something here about a small bird with a pink breast and crown, and we generate images of this. And there are also examples down here of filling in edges.
So conditioned on some sketch that we have, can we fill in a color version of what it would look like? Can we take a Google Maps grid and turn it into something that looks like Google Earth, going in and hallucinating all of these buildings and trees and so on? There are lots of really cool examples of this. And there's also the website for pix2pix, which did a lot of these conditional-GAN-type examples; I encourage you to go look at it for more interesting applications that people have done with GANs. In terms of research papers, there's also a huge number of papers about GANs now. There's a website called the GAN Zoo that's trying to compile a whole list of these, and this slide has only taken me from A through C on the left and through about L on the right, so it won't even fit on the slide. There are tons of papers that you can look at if you're interested. And then one last pointer: for tips and tricks on training GANs, here's a nice little website that has pointers if you're trying to train these GANs in practice. Okay, so, summary of GANs. GANs don't work with an explicit density function; instead we represent the distribution implicitly through samples, and they take a game-theoretic approach to training, so we learn to generate from the training distribution through a two-player game setup. The pros of GANs are that they produce gorgeous, state-of-the-art samples, and you can do a lot with them. The cons are that they're trickier and more unstable to train: we're not just directly optimizing a single objective function that we can backprop through and train easily; instead we have these two networks whose training we're trying to balance, so it can be more unstable. We also lose out on being able to do some of the inference queries, P of X or P of Z given X, that we had, for example, with our VAE. GANs are still an active area of research; this is a relatively new type of model that we're starting to see a lot of, and you'll be seeing a lot more of. People are still working on better loss functions and more stable training, so Wasserstein GAN, for those of you who are interested, is basically an improvement in this direction that a lot of people are now using and basing models off of. There are also other works like LSGAN, Least Squares GAN, and others, so you can look into those. And a lot of the time, in terms of actually implementing these newer models, they're not necessarily big changes; they're different loss functions that you can change a little bit and get a big improvement in training. So some of these are worth looking into, and you'll also get some practice on your homework assignment. There's also a lot of work on different types of conditional GANs, and GANs for all kinds of different problem setups and applications. Okay, so, a recap of today. We talked about generative models, three of the most common kinds of generative models that people are using and doing research on today. We talked first about pixelRNN and pixelCNN, which are explicit density models: they optimize the exact likelihood and produce good samples, but they're pretty inefficient because of the sequential generation. We looked at VAEs, which optimize a variational lower bound on the likelihood, and this also produces a useful latent representation.
You can do inference queries with it, but the sample quality is still not the best; so even though it has a lot of promise, it's still a very active area of research with a lot of open problems. And then GANs, we talked about, take a game-theoretic approach to training, and they currently achieve the best state-of-the-art samples, but they can be tricky and unstable to train, and they lose out a bit on the inference queries. And what you'll also see is a lot of recent work on combinations of these kinds of models. For example, adversarial autoencoders: something like a VAE trained with an additional adversarial loss on top, which improves the sample quality. There are also things like pixelVAE, which is a combination of pixelCNN and VAE. So there are a lot of combinations, basically trying to take the best of all these worlds and put them together. Okay, so today we talked about generative models, and next time we'll talk about reinforcement learning. Thanks.
[Stanford Computer Vision, Lecture 1: Introduction to Convolutional Neural Networks for Visual Recognition]
- So welcome everyone to CS231n. I'm super excited to offer this class again for the third time. It seems that every time we offer this class it's growing exponentially unlike most things in the world. This is the third time we're teaching this class. The first time we had 150 students. Last year, we had 350 students, so it doubled. This year we've doubled again to about 730 students when I checked this morning. So anyone who was not able to fit into the lecture hall I apologize. But, the videos will be up on the SCPD website within about two hours. So if you weren't able to come today, then you can still check it out within a couple hours. So this class CS231n is really about computer vision. And, what is computer vision? Computer vision is really the study of visual data. Since there's so many people enrolled in this class, I think I probably don't need to convince you that this is an important problem, but I'm still going to try to do that anyway. The amount of visual data in our world has really exploded to a ridiculous degree in the last couple of years. And, this is largely a result of the large number of sensors in the world. Probably most of us in this room are carrying around smartphones, and each smartphone has one, two, or maybe even three cameras on it. So I think on average there's even more cameras in the world than there are people. And, as a result of all of these sensors, there's just a crazy large, massive amount of visual data being produced out there in the world each day. So one statistic that I really like to kind of put this in perspective is a 2015 study from CISCO that estimated that by 2017 which is where we are now that roughly 80% of all traffic on the internet would be video. This is not even counting all the images and other types of visual data on the web. But, just from a pure number of bits perspective, the majority of bits flying around the internet are actually visual data. So it's really critical that we develop algorithms that can utilize and understand this data. However, there's a problem with visual data, and that's that it's really hard to understand. Sometimes we call visual data the dark matter of the internet in analogy with dark matter in physics. So for those of you who have heard of this in physics before, dark matter accounts for some astonishingly large fraction of the mass in the universe, and we know about it due to the existence of gravitational pulls on various celestial bodies and what not, but we can't directly observe it. And, visual data on the internet is much the same where it comprises the majority of bits flying around the internet, but it's very difficult for algorithms to actually go in and understand and see what exactly is comprising all the visual data on the web. Another statistic that I like is that of Youtube. So roughly every second of clock time that happens in the world, there's something like five hours of video being uploaded to Youtube. So if we just sit here and count, one, two, three, now there's 15 more hours of video on Youtube. Google has a lot of employees, but there's no way that they could ever have an employee sit down and watch and understand and annotate every video. So if they want to catalog and serve you relevant videos and maybe monetize by putting ads on those videos, it's really crucial that we develop technologies that can dive in and automatically understand the content of visual data. 
So this field of computer vision is truly an interdisciplinary field, and it touches on many different areas of science and engineering and technology. So obviously, computer vision's the center of the universe, but sort of as a constellation of fields around computer vision, we touch on areas like physics, because we need to understand optics and image formation and how images are actually physically formed. We need to understand biology and psychology to understand how animal brains physically see and process visual information. We of course draw a lot on computer science, mathematics, and engineering as we actually strive to build computer systems that implement our computer vision algorithms. So a little bit more about where I'm coming from and about where the teaching staff of this course is coming from. My co-instructor Serena and I are both PhD students in the Stanford Vision Lab, which is headed by Professor Fei-Fei Li, and our lab really focuses on machine learning and the computer science side of things. I work a little bit more on language and vision; I've done some projects in that. And other folks in our group have worked a little bit on the neuroscience and cognitive science side of things. So as a bit of introduction, you might be curious about how this course relates to other courses at Stanford. We kind of assume a basic introductory understanding of computer vision, so if you're an undergrad and you've never seen computer vision before, maybe you should've taken CS131, which was offered earlier this year by Fei-Fei and Juan Carlos Niebles. There was a course taught last quarter by Professor Chris Manning and Richard Socher about the intersection of deep learning and natural language processing, and I imagine a number of you may have taken that course last quarter. There'll be some overlap between this course and that, but we're really focusing on the computer vision side of things, and really focusing all of our motivation in computer vision. Also taught concurrently this quarter is CS231a, taught by Professor Silvio Savarese. CS231a is a more all-encompassing computer vision course: it focuses on things like 3D reconstruction, on matching and robotic vision, and it's a bit broader with regard to vision than our course. And this course, CS231n, really focuses on a particular class of algorithms revolving around neural networks, and especially convolutional neural networks and their applications to various visual recognition tasks. Of course, there are also a number of seminar courses that are taught, and you'll have to check the syllabus and course schedule for more details on those, because they vary a bit each year. So this lecture is normally given by Professor Fei-Fei Li. Unfortunately, she wasn't able to be here today, so instead, for the majority of the lecture, we're going to tag team a little bit. She actually recorded some audio describing to you the history of computer vision, because this class is a computer vision course, and it's very critical and important that you understand the history and the context of all the existing work that led us to these developments of convolutional neural networks as we know them today. I'll let virtual Fei-Fei take over [laughing] and give you a brief introduction to the history of computer vision. Okay, let's start with today's agenda.
So we have two topics to cover: one is a brief history of computer vision, and the other one is the overview of our course, CS231n. We'll start with a very brief history of where vision comes from, when computer vision started, and where we are today. The history of vision can go back many, many years ago, in fact about 543 million years ago. What was life like during that time? Well, the earth was mostly water, there were a few species of animals floating around in the ocean, and life was very chill. Animals didn't move around much; they didn't have eyes or anything. When food swam by, they grabbed it; if the food didn't swim by, they just floated around. But something really remarkable happened around 540 million years ago. From fossil studies, zoologists found that within a very short period of time, ten million years, the number of animal species just exploded. It went from a few of them to hundreds of thousands, and that was strange: what caused this? There were many theories, but for many years it was a mystery; evolutionary biologists call this evolution's Big Bang. A few years ago an Australian zoologist called Andrew Parker proposed one of the most convincing theories: from the study of fossils he discovered that around 540 million years ago the first animals developed eyes, and the onset of vision started this explosive speciation phase. Animals could suddenly see, and once you can see, life becomes much more proactive. Some predators went after prey, and prey had to escape from predators, so the onset of vision started an evolutionary arms race, and animals had to evolve quickly in order to survive as a species. So that was the beginning of vision in animals. After 540 million years, vision has developed into the biggest sensory system of almost all animals, especially intelligent animals. In humans, almost 50% of the neurons in our cortex are involved in visual processing; it is the biggest sensory system, and it enables us to survive, work, move around, manipulate things, communicate, entertain, and many other things. So vision is really important for animals, and especially intelligent animals. That was a quick story of biological vision. What about humans, and the history of humans making mechanical vision, or cameras? Well, one of the early cameras that we know of today is from the 1600s, the Renaissance period: the camera obscura, and this is a camera based on pinhole camera theories. It's very similar to the early eyes that animals developed, with a hole that collects light and then a plane in the back of the camera that collects the information and projects the imagery. So cameras have evolved, and today we have cameras everywhere; they're one of the most popular sensors people use, from smartphones to other devices. In the meantime, biologists started studying the mechanism of vision. One of the most influential works in both human vision and animal vision, and one that inspired computer vision as well, is the work done by Hubel and Wiesel in the 50s and 60s using electrophysiology. The question they were asking is: what is the visual processing mechanism like in primates and mammals? So they chose to study the cat brain, which is more or less similar to the human brain from a visual processing point of view.
What they did was stick some electrodes in the back of the cat brain, which is where the primary visual cortex area is, and then look at what stimuli made the neurons in the primary visual cortex of the cat brain respond excitedly. What they learned is that there are many types of cells in the primary visual cortex of the cat brain, but one of the most important is the simple cells: they respond to oriented edges when they move in certain directions. Of course there are also more complex cells, but by and large what they discovered is that visual processing starts with simple structures of the visual world, oriented edges, and as information moves along the visual processing pathway, the brain builds up the complexity of the visual information until it can recognize the complex visual world. So the history of computer vision also starts around the early 60s. Block World is a set of work published by Larry Roberts, widely known as one of the first, probably the first, PhD theses of computer vision, where the visual world was simplified into simple geometric shapes and the goal is to be able to recognize them and reconstruct what these shapes are. In 1966 there was a now-famous MIT summer project called "The Summer Vision Project." The goal of this Summer Vision Project, and I quote, "is an attempt to use our summer workers effectively in a construction of a significant part of a visual system." So the goal was that in one summer we're going to work out the bulk of the visual system. That was an ambitious goal. Fifty years have passed; the field of computer vision has blossomed from one summer project into a field of thousands of researchers worldwide still working on some of the most fundamental problems of vision. We still have not solved vision, but it has grown into one of the most important and fastest growing areas of artificial intelligence. Another person that we should pay tribute to is David Marr. David Marr was an MIT vision scientist, and he wrote an influential book in the late 70s about what he thinks vision is and how we should go about computer vision and developing algorithms that can enable computers to recognize the visual world. The thought process in David Marr's book is that in order to take an image and arrive at a final holistic, full 3D representation of the visual world, we have to go through several processes. The first process is what he calls "primal sketch"; this is where mostly the edges, the bars, the ends, the virtual lines, the curves, and the boundaries are represented, and this is very much inspired by what neuroscientists had seen: Hubel and Wiesel told us the early stages of visual processing have a lot to do with simple structures like edges. Then the next step after the edges and the curves is what David Marr calls the "two-and-a-half-D sketch"; this is where we start to piece together the surfaces, the depth information, the layers, and the discontinuities of the visual scene. And then eventually we put everything together and have a 3D model, hierarchically organized in terms of surface and volumetric primitives and so on. So that was a very idealized thought process of what vision is, and this way of thinking actually dominated computer vision for several decades; it's also a very intuitive way for students to enter the field of vision and think about how we can deconstruct visual information.
Another very important, seminal group of work happened in the 70s, where people began to ask the question: how can we move beyond the simple block world and start recognizing or representing real-world objects? Think about the 70s: there was very little data available, computers were extremely slow, PCs weren't even around, but computer scientists were starting to think about how we can recognize and represent objects. So in Palo Alto, both at Stanford and at SRI, two groups of scientists proposed similar ideas: one is called "generalized cylinder," the other "pictorial structure." The basic idea is that every object is composed of simple geometric primitives; for example, a person can be pieced together from generalized cylindrical shapes, or a person can be pieced together from critical parts and the elastic distances between these parts. Either representation is a way to reduce the complex structure of the object into a collection of simpler shapes and their geometric configuration. These works were influential for quite a few years, and then in the 80s David Lowe gave another example of thinking about how to reconstruct or recognize the visual world from simple structures: in this work he tries to recognize razors by constructing lines and edges, mostly straight lines, and their combinations. So there was a lot of effort in the 60s, 70s, and 80s in trying to figure out what the task of computer vision is, and frankly it was very hard to solve the problem of object recognition. Everything I've shown you so far were very audacious, ambitious attempts, but they remained at the level of toy examples or just a few examples; not a lot of progress was made in terms of delivering something that could work in the real world. So as people thought about which problems to solve for vision, one important question came around: if object recognition is too hard, maybe we should first do object segmentation, that is, the task of taking an image and grouping the pixels into meaningful areas. We might not know that the pixels grouped together are called a person, but we can extract all the pixels that belong to the person from the background; that is called image segmentation. Here's one very early, seminal work by Jitendra Malik and his student Jianbo Shi from Berkeley, using a graph theory algorithm for the problem of image segmentation. Here's another problem that made headway ahead of many other problems in computer vision, which is face detection. Faces are one of the most important objects to humans, probably the most important. Around the time of 1999 to 2000, machine learning techniques, especially statistical machine learning techniques, started to gain momentum: techniques such as support vector machines, boosting, and graphical models, including the first wave of neural networks. One particular work that made a lot of contributions used the AdaBoost algorithm to do real-time face detection, by Paul Viola and Michael Jones, and there's a lot to admire in this work. It was done in 2001, when computer chips were still very, very slow, but they were able to do face detection in images in near real time; and after the publication of this paper, within five years, in 2006, Fujifilm rolled out the first digital camera with a real-time face detector built in. So it was a very rapid transfer from basic science research to real-world application.
So as a field we continued to explore how we could do object recognition better, and one of the very influential ways of thinking from the late 90s through the first ten years of the 2000s was feature-based object recognition. Here is a seminal work by David Lowe called SIFT features. The idea is that to match an entire object, for example this stop sign, to another stop sign is very difficult, because there might be all kinds of changes due to camera angle, occlusion, viewpoint, lighting, and just the intrinsic variation of the object itself. But it was an inspired observation that there are some parts of the object, some features, that tend to remain diagnostic and invariant to changes. So the task of object recognition began with identifying these critical features on the object and then matching the features to a similar object; that's an easier task than pattern-matching the entire object. Here is a figure from his paper showing that a handful, several dozen, SIFT features from one stop sign are identified and matched to the SIFT features of another stop sign. Using the same building block, diagnostic features in images, the field made another step forward and started recognizing holistic scenes. Here is an example algorithm called Spatial Pyramid Matching. The idea is that there are features in the images that can give us clues about which type of scene it is, whether it's a landscape or a kitchen or a highway and so on, and this particular work takes these features from different parts of the image and at different resolutions, puts them together in a feature descriptor, and then runs a support vector machine on top of that. Similar work gained momentum in human recognition: putting together these kinds of features, we have a number of works that look at how we can compose human bodies in more realistic images and recognize them. One work is called "histogram of oriented gradients," another is called "deformable part models." So as you can see, as we move from the 60s, 70s, and 80s toward the first decade of the 21st century, one thing is changing, and that's the quality of the pictures: with the growth of the Internet and of digital cameras, we had better and better data to study computer vision. One of the outcomes in the early 2000s is that the field of computer vision had defined a very important building-block problem to solve. It's not the only problem to solve, but in terms of recognition it is a very important one: object recognition. I've talked about object recognition all along, but in the early 2000s we began to have benchmark datasets that let us measure the progress of object recognition. One of the most influential benchmark datasets is the PASCAL Visual Object Challenge. It's a dataset composed of 20 object classes, three of which are shown here: train, airplane, person; I think it also has cows, bottles, cats, and so on. The dataset is composed of several thousand to ten thousand images per category, and then different groups in the field developed algorithms to test against the test set and see how much progress had been made. So here is a figure showing, from 2007 to 2012, that the performance of detecting the 20 objects in this benchmark dataset steadily increased. So a lot of progress was made.
Around that time, a group of us from Princeton and Stanford also began to ask a harder question of ourselves and of our field: are we ready to recognize every object, or most of the objects, in the world? This was also motivated by an observation rooted in machine learning, which is that most machine learning algorithms, whether it's a graphical model, a support vector machine, or AdaBoost, are very likely to overfit in the training process. Part of the problem is that visual data is very complex; because it's complex, our models tend to have a high-dimensional input and have to have a lot of parameters to fit, and when we don't have enough training data, overfitting happens very fast and then we cannot generalize very well. So motivated by this dual reason, one being simply to recognize the world of all the objects, the other to overcome the machine learning bottleneck of overfitting, we began this project called ImageNet. We wanted to put together the largest possible dataset of all the pictures we could find, the world of objects, and use that for training as well as for benchmarking. It was a project that took us about three years and lots of hard work. It basically began with downloading billions of images from the internet, organized by the dictionary called WordNet, which has tens of thousands of object classes, and then we had to use a clever crowd-engineering trick, a method using the Amazon Mechanical Turk platform, to sort, clean, and label each of the images. The end result is ImageNet: almost 15 million images organized into twenty-two thousand categories of objects and scenes. This is gigantic, probably the biggest dataset produced in the field of AI at that time, and it began to push the algorithm development of object recognition into another phase. Especially important is how to benchmark the progress, so starting in 2009 the ImageNet team rolled out an international challenge called the ImageNet Large-Scale Visual Recognition Challenge. For this challenge we put together a more stringent test set of 1.4 million images across 1,000 object classes, and this is used to test the image classification results of computer vision algorithms. Here's an example picture: if an algorithm can output five labels, and its top five labels include the correct object in the picture, then we call it a success. So here is a summary of the ImageNet Challenge image classification results from 2010 to 2015. On the x axis you see the years, and on the y axis you see the error rate. The good news is that the error rate has steadily decreased, to the point that by 2015 the error rate is so low that it's on par with what humans can do; and here, by a human, I mean a single Stanford PhD student who spent weeks doing this task as if he were a computer participating in the ImageNet Challenge. So that's a lot of progress, even though we have not solved all the problems of object recognition, which you'll learn about in this class. But to go from an error rate that's unacceptable for real-world applications all the way to being on par with humans on the ImageNet Challenge, the field took only a few years. And one particular moment you should notice on this graph is the year 2012.
In the first two years the error rate hovered around 25 percent, but in 2012 the error rate dropped by almost 10 percent, to 16 percent. Even though results are better now, that drop was very significant, and the winning algorithm of that year was a convolutional neural network model that beat all other algorithms at that time to win the ImageNet challenge. And this is the focus of our whole course this quarter: to take a deep dive into what convolutional neural network models are, another, more popular, name for which is deep learning, and to look at what these models are, what the principles are, what the good practices are, and what the recent progress of these models is. But here is where history was made: around 2012, convolutional neural network models, or deep learning models, showed tremendous capacity and ability to make good progress in the field of computer vision, along with several sister fields like natural language processing and speech recognition. So without further ado, I'm going to hand the rest of the lecture to Justin to talk about the overview of CS231n. Alright, thanks so much Fei-Fei. I'll take it over from here. So now I want to shift gears a little bit and talk a little bit more about this class, CS231n. The primary focus of this class is the image classification problem, which we previewed a little bit in the context of the ImageNet Challenge. So in image classification, again, the setup is that your algorithm looks at an image and then picks from among some fixed set of categories to classify that image. And this might seem like somewhat of a restrictive or artificial setup, but it's actually quite general, and this problem can be applied in many different settings, in industry, academia, and many different places. So for example, you could apply this to recognizing food, or recognizing calories in food, or recognizing different artworks or different products out in the world. So this relatively basic tool of image classification is super useful on its own and can be applied all over the place for many different applications. But in this course, we're also going to talk about several other visual recognition problems that build upon many of the tools that we develop for the purpose of image classification. We'll talk about other problems such as object detection or image captioning. The setup in object detection is a little bit different: rather than classifying an entire image as a cat or a dog or a horse or whatnot, instead we want to go in and draw bounding boxes and say that there is a dog here, and a cat here, and a car over in the background, drawing these boxes that describe where objects are in the image. We'll also talk about image captioning, where, given an image, the system now needs to produce a natural language sentence describing the image. It sounds like a really hard, complicated, and different problem, but we'll see that many of the tools that we develop in service of image classification will be reused in these other problems as well. So we mentioned this before in the context of the ImageNet Challenge, but one of the things that's really driven the progress of the field in recent years has been this adoption of convolutional neural networks, or CNNs, sometimes called convnets.
So if we look at the algorithms that have won the ImageNet Challenge over the last several years, in 2011 we see this method from Lin et al, which is still hierarchical. It consists of multiple layers: first we compute some features, next we compute some local invariances, some pooling, and go through several layers of processing, and then finally feed the resulting descriptor to a linear SVM. What you'll notice here is that this is still hierarchical, we're still detecting edges, we still have notions of invariance, and many of these intuitions will carry over into convnets. But the breakthrough moment was really in 2012, when Geoff Hinton's group in Toronto, together with Alex Krizhevsky and Ilya Sutskever, who were his PhD students at that time, created this seven-layer convolutional neural network, now known as AlexNet, then called SuperVision, which just did very, very well in the ImageNet competition in 2012. And since then, every year the winner of ImageNet has been a neural network. The trend has been that these networks are getting deeper and deeper each year. So AlexNet was a seven- or eight-layer neural network, depending on how exactly you count things. In 2014 we had these much deeper networks: GoogLeNet from Google, and VGG, the VGG network from Oxford, which was about 19 layers at that time. And then in 2015 it got really crazy, and this paper came out from Microsoft Research Asia called Residual Networks, which were 152 layers at that time. And since then it turns out you can get a little bit better if you go up to 200, but you run out of memory on your GPUs. We'll get into all of that later, but the main takeaway here is that convolutional neural networks really had this breakthrough moment in 2012, and since then there's been a lot of effort focused on tuning and tweaking these algorithms to make them perform better and better on this problem of image classification. And throughout the rest of the quarter, we're going to really dive in deep, and you'll understand exactly how these different models work. But one point that's really important: it's true that the breakthrough moment for convolutional neural networks was in 2012, when these networks performed very well on the ImageNet Challenge, but they certainly weren't invented in 2012. These algorithms had actually been around for quite a long time before that. One of the foundational works in this area of convolutional neural networks was actually from the '90s, from Yann LeCun and collaborators, who at that time were at Bell Labs. In 1998 they built a convolutional neural network for recognizing digits. They wanted to deploy this to automatically recognize handwritten checks or addresses for the post office, and they built this convolutional neural network which could take in the pixels of an image and then classify what digit it was or what letter it was or whatnot. And the structure of this network actually looks pretty similar to the AlexNet architecture that was used in 2012. Here we see that, you know, we're taking in these raw pixels, we have many layers of convolution and sub-sampling, together with so-called fully connected layers, all of which will be explained in much more detail later in the course. But if you just kind of look at these two pictures, they look pretty similar, and the 2012 architecture shares a lot of these architectural similarities with this network going back to the '90s.
So then the question you might ask is if these algorithms were around since the '90s, why have they only suddenly become popular in the last couple of years? And, there's a couple really key innovations that happened that have changed since the '90s. One is computation. Thanks to Moore's law, we've gotten faster and faster computers every year. And, this is kind of a coarse measure, but if you just look at the number of transistors that are on chips, then that has grown by several orders of magnitude between the '90s and today. We've also had this advent of graphics processing units or GPUs which are super parallelizable and ended up being a perfect tool for really crunching these computationally intensive convolutional neural network models. So just by having more compute available, it allowed researchers to explore with larger architectures and larger models, and in some cases, just increasing the model size, but still using these kind of classical approaches and classical algorithms tends to work quite well. So this idea of increasing computation is super important in the history of deep learning. I think the second key innovation that changed between now and the '90s was data. So these algorithms are very hungry for data. You need to feed them a lot of labeled images and labeled pixels for them to eventually work quite well. And, in the '90s there just wasn't that much labeled data available. This was, again, before tools like Mechanical Turk, before the internet was super, super widely used. And, it was very difficult to collect large, varied datasets. But, now in the 2010s with datasets like PASCAL and ImageNet, there existed these relatively large, high quality labeled datasets that were, again, orders and orders magnitude bigger than the dataset available in the '90s. And, these much large datasets, again, allowed us to work with higher capacity models and train these models to actually work quite well on real world problems. But, the critical takeaway here is that convolutional neural networks although they seem like this sort of fancy, new thing that's only popped up in the last couple of years, that's really not the case. And, these class of algorithms have existed for quite a long time in their own right as well. Another thing I'd like to point out in computer vision we're in the business of trying to build machines that can see like people. And, people can actually do a lot of amazing things with their visual systems. When you go around the world, you do a lot more than just drawing boxes around the objects and classifying things as cats or dogs. Your visual system is much more powerful than that. And, as we move forward in the field, I think there's still a ton of open challenges and open problems that we need to address. And, we need to continue to develop our algorithms to do even better and tackle even more ambitious problems. Some examples of this are going back to these older ideas in fact. Things like semantic segmentation or perceptual grouping where rather than labeling the entire image, we want to understand for every pixel in the image what is it doing, what does it mean. And, we'll revisit that idea a little bit later in the course. There's definitely work going back to this idea of 3D understanding, of reconstructing the entire world, and that's still an unsolved problem I think. There're just tons and tons of other tasks that you can imagine. 
For example activity recognition, if I'm given a video of some person doing some activity, what's the best way to recognize that activity? That's quite a challenging problem as well. And, then as we move forward with things like augmented reality and virtual reality, and as new technologies and new types of sensors become available, I think we'll come up with a lot of new, interesting hard and challenging problems to tackle as a field. So this is an example from some of my own work in the vision lab on this dataset called Visual Genome. So here the idea is that we're trying to capture some of these intricacies in the real world. Rather than maybe describing just boxes, maybe we should be describing images as these whole large graphs of semantically related concepts that encompass not just object identities but also object relationships, object attributes, actions that are occurring in the scene, and this type of representation might allow us to capture some of this richness of the visual world that's left on the table when we're using simple classification. This is by no means a standard approach at this point, but just kind of giving you this sense that there's so much more that your visual system can do that is maybe not captured in this vanilla image classification setup. I think another really interesting work that kind of points in this direction actually comes from Fei-Fei's grad school days when she was doing her PHD at Cal Tech with her advisors there. In this setup, they had people, they stuck people, and they showed people this image for just half a second. So they flashed this image in front of them for just a very short period of time, and even in this very, very rapid exposure to an image, people were able to write these long descriptive paragraphs giving a whole story of the image. And, this is quite remarkable if you think about it that after just half a second of looking at this image, a person was able to say that this is some kind of a game or fight, two groups of men. The man on the left is throwing something. Outdoors because it seem like I have an impression of grass, and so on and so on. And, you can imagine that if a person were to look even longer at this image, they could write probably a whole novel about who these people are, and why are they in this field playing this game. They could go on and on and on roping in things from their external knowledge and their prior experience. This is in some sense the holy grail of computer vision. To sort of understand the story of an image in a very rich and deep way. And, I think that despite the massive progress in the field that we've had over the past several years, we're still quite a long way from achieving this holy grail. Another image that I think really exemplifies this idea actually comes, again, from Andrej Karpathy's blog is this amazing image. Many of you smiled, many of you laughed. I think this is a pretty funny image. But, why is it a funny image? Well we've got a man standing on a scale, and we know that people are kind of self conscious about their weight sometimes, and scales measure weight. Then we've got this other guy behind him pushing his foot down on the scale, and we know that because of the way scales work that will cause him to have an inflated reading on the scale. But, there's more. We know that this person is not just any person. 
This is actually Barack Obama who was at the time President of the United States, and we know that Presidents of the United States are supposed to be respectable politicians that are [laughing] probably not supposed to be playing jokes on their compatriots in this way. We know that there's these people in the background that are laughing and smiling, and we know that that means that they're understanding something about the scene. We have some understanding that they know that President Obama is this respectable guy who's looking at this other guy. Like, this is crazy. There's so much going on in this image. And, our computer vision algorithms today are actually a long way I think from this true, deep understanding of images. So I think that sort of despite the massive progress in the field, we really have a long way to go. To me, that's really exciting as a researcher 'cause I think that we'll have just a lot of really exciting, cool problems to tackle moving forward. So I hope at this point I've done a relatively good job to convince you that computer vision is really interesting. It's really exciting. It can be very useful. It can go out and make the world a better place in various ways. Computer vision could be applied in places like medical diagnosis and self-driving cars and robotics and all these different places. In addition to sort of tying back to sort of this core idea of understanding human intelligence. So to me, I think that computer vision is this fantastically amazing, interesting field, and I'm really glad that over the course of the quarter, we'll get to really dive in and dig into all these different details about how these algorithms are working these days. That's sort of my pitch about computer vision and about the history of computer vision. I don't know if there's any questions about this at this time. Okay. So then I want to talk a little bit more about the logistics of this class for the rest of the quarter. So you might ask who are we? So this class is taught by Fei-Fei Li who is a professor of computer science here at Standford who's my advisor and director of the Stanford Vision Lab and also the Stanford AI Lab. The other two instructors are me, Justin Johnson, and Serena Yeung who is up here in the front. We're both PHD students working under Fei-Fei on various computer vision problems. We have an amazing teaching staff this year of 18 TAs so far. Many of whom are sitting over here in the front. These guys are really the unsung heroes behind the scenes making the course run smoothly, making sure everything happens well. So be nice to them. [laughing] I think I also should mention this is the third time we've taught this course, and it's the first time that Andrej Karpathy has not been an instructor in this course. He was a very close friend of mine. He's still alive. He's okay, don't worry. [laughing] But, he graduated, so he's actually here I think hanging around in the lecture hall. A lot of the development and the history of this course is really due to him working on it with me over the last couple of years. So I think you should be aware of that. Also about logistics, probably the best way for keeping in touch with the course staff is through Piazza. You should all go and signup right now. Piazza is really our preferred method of communication with the class with the teaching staff. 
If you have questions that you're afraid or embarrassed about asking in front of your classmates, go ahead and ask anonymously, or even post private questions directly to the teaching staff. So basically anything that you need should ideally go through Piazza. We also have a staff mailing list, but we ask that this be mostly for personal, confidential things that you don't want going on Piazza; or if you have something that's super confidential, super personal, then feel free to directly email me or Fei-Fei or Serena about that. But for the most part, most of your communication with the staff should be through Piazza. We also have an optional textbook this year. This is by no means required; you can go through the course totally fine without it, and everything will be self-contained. This is sort of exciting because it's maybe the first textbook about deep learning, published earlier this year by Ian Goodfellow, Yoshua Bengio, and Aaron Courville. I put the Amazon link here in the slides. You can get it if you want to, but the whole content of the book is also free online, so you don't even have to buy it if you don't want to. So again, this is totally optional, but we'll probably be posting some readings throughout the quarter that give you an additional perspective on some of the material. So our philosophy about this class is that you should really understand the deep mechanics of all of these algorithms. You should understand at a very deep level exactly how these algorithms are working: what exactly is going on when you're stitching together these neural networks, how these architectural decisions influence how the network is trained and tested, and all that. And throughout the course, through the assignments, you'll be implementing your own convolutional neural networks from scratch in Python. You'll be implementing the full forward and backward passes through these things, and by the end, you'll have implemented a whole convolutional neural network totally on your own. I think that's really cool. But we're also kind of practical, and we know that in most cases people are not writing these things from scratch, so we also want to give you a good introduction to some of the state-of-the-art software tools that are used in practice for these things. So we're going to talk about some of the state-of-the-art software packages like TensorFlow, Torch, PyTorch, and all these other things, and I think you'll get some exposure to those on the homeworks and definitely through the course project as well. Another note about this course is that it's very state of the art. I think it's super exciting; this is a very fast-moving field. As you saw, even in those plots from the ImageNet challenge, there's been a ton of progress since 2012, and while I've been in grad school the whole field has been transforming every year. That's super exciting and super encouraging. But what that means is that there's probably content that we'll cover this year that did not exist the last time this course was taught, last year. I think that's super exciting, and that's one of my favorite parts about teaching this course: roping in all this new, hot-off-the-presses scientific stuff and being able to present it to you guys. We're also sort of about fun. So we're going to talk about some interesting, maybe not so serious topics as well this quarter, including image captioning, which is pretty fun, where we can write descriptions about images.
But, we'll also cover some of these more artistic things like DeepDream here on the left where we can use neural networks to hallucinate these crazy, psychedelic images. And, by the end of the course, you'll know how that works. Or on the right, this idea of style transfer where we can take an image and render it in the style of famous artists like Picasso or Van Gogh or what not. And again, by the end of the quarter, you'll see how this stuff works. So the way the course works is we're going to have three problem sets. The first problem set will hopefully be out by the end of the week. We'll have an in class, written midterm exam. And, a large portion of your grade will be the final course project where you'll work in teams of one to three and produce some amazing project that will blow everyone's minds. We have a late policy, so you have seven late days that you're free to allocate among your different homeworks. These are meant to cover things like minor illnesses or traveling or conferences or anything like that. If you come to us at the end of the quarter and say that, "I suddenly have to give a presentation "at this conference." That's not going to be okay. That's what your late days are for. That being said, if you have some very extenuating circumstances, then do feel free to email the course staff if you have some extreme circumstances about that. Finally, I want to make a note about the collaboration policy. As Stanford students, you should all be aware of the honor code that governs the way that you should be collaborating and working together, and we take this very seriously. We encourage you to think very carefully about how you're collaborating and making sure it's within the bounds of the honor code. So in terms of prerequisites, I think the most important is probably a deep familiarity with Python because all of the programming assignments will be in Python. Some familiarity with C or C++ would be useful. You will probably not be writing any C or C++ in this course, but as you're browsing through the source code of these various software packages, being able to read C++ code at least is very useful for understanding how these packages work. We also assume that you know what calculus is, you know how to take derivatives all that sort of stuff. We assume some linear algebra. That you know what matrices are and how to multiply them and stuff like that. We can't be teaching you how to take like derivatives and stuff. We also assume a little bit of knowledge coming in of computer vision maybe at the level of CS131 or 231a. If you have taken those courses before, you'll be fine. If you haven't, I think you'll be okay in this class, but you might have a tiny bit of catching up to do. But, I think you'll probably be okay. Those are not super strict prerequisites. We also assume a little bit of background knowledge about machine learning maybe at the level of CS229. But again, I think really important, key fundamental machine learning concepts we'll reintroduce as they come up and become important. But, that being said, a familiarity with these things will be helpful going forward. So we have a course website. Go check it out. There's a lot of information and links and syllabus and all that. I think that's all that I really want to cover today. And, then later this week on Thursday, we'll really dive into our first learning algorithm and start diving into the details of these things.
- Hello, hi. So I want to get started. Welcome to CS 231N Lecture 11. We're going to talk about today detection segmentation and a whole bunch of other really exciting topics around core computer vision tasks. But as usual, a couple administrative notes. So last time you obviously took the midterm, we didn't have lecture, hopefully that went okay for all of you but so we're going to work on grading the midterm this week, but as a reminder please don't make any public discussions about the midterm questions or answers or whatever until at least tomorrow because there are still some people taking makeup midterms today and throughout the rest of the week so we just ask you that you refrain from talking publicly about midterm questions. Why don't you wait until Monday? [laughing] Okay, great. So we're also starting to work on midterm grading. We'll get those back to you as soon as you can, as soon as we can. We're also starting to work on grading assignment two so there's a lot of grading being done this week. The TA's are pretty busy. Also a reminder for you guys, hopefully you've been working hard on your projects now that most of you are done with the midterm so your project milestones will be due on Tuesday so any sort of last minute changes that you had in your projects, I know some people decided to switch projects after the proposal, some teams reshuffled a little bit, that's fine but your milestone should reflect the project that you're actually doing for the rest of the quarter. So hopefully that's going out well. I know there's been a lot of worry and stress on Piazza, wondering about assignment three. So we're working on that as hard as we can but that's actually a bit of a new assignment, it's changing a bit from last year so it will be out as soon as possible, hopefully today or tomorrow. Although we promise that whenever it comes out you'll have two weeks to finish it so try not to stress out about that too much. But I'm pretty excited, I think assignment three will be really cool, has a lot of cool, it'll cover a lot of really cool material. So another thing, last time in lecture we mentioned this thing called the Train Game which is this really cool thing we've been working on sort of as a side project a little bit. So this is an interactive tool that you guys can go on and use to explore a little bit the process of tuning hyperparameters in practice so we hope that, so this is again totally not required for the course. Totally optional, but if you do we will offer a small amount of extra credit for those of you who want to do well and participate on this. And we'll send out exactly some more details later this afternoon on Piazza. But just a bit of a demo for what exactly is this thing. So you'll get to go in and we've changed the name from Train Game to HyperQuest because you're questing to solve, to find the best hyperparameters for your model so this is really cool, it'll be an interactive tool that you can use to explore the training of hyperparameters interactively in your browser. So you'll login with your student ID and name. You'll fill out a little survey with some of your experience on deep learning then you'll read some instructions. So in this game you'll be shown some random data set on every trial. This data set might be images or it might be vectors and your goal is to train a model by picking the right hyperparameters interactively to perform as well as you can on the validation set of this random data set. 
And it'll sort of keep track of your performance over time and there'll be a leaderboard, it'll be really cool. So every time you play the game, you'll get some statistics about your data set. In this case we're doing a classification problem with 10 classes. You can see down at the bottom you have these statistics about random data set, we have 10 classes. The input data size is three by 32 by 32 so this is some image data set and we can see that in this case we have 8500 examples in the training set and 1500 examples in the validation set. These are all random, they'll change a little bit every time. Based on these data set statistics you'll make some choices on your initial learning rate, your initial network size, and your initial dropout rate. Then you'll see a screen like this where it'll run one epoch with those chosen hyperparameters, show you on the right here you'll see two plots. One is your training and validation loss for that first epoch. Then you'll see your training and validation accuracy for that first epoch and based on the gaps that you see in these two graphs you can make choices interactively to change the learning rates and hyperparameters for the next epoch. So then you can either choose to continue training with the current or changed hyperparameters, you can also stop training, or you can revert to go back to the previous checkpoint in case things got really messed up. So then you'll get to make some choice, so here we'll decide to continue training and in this case you could go and set new learning rates and new hyperparameters for the next epoch of training. You can also, kind of interesting here, you can actually grow the network interactively during training in this demo. There's this cool trick from a couple recent papers where you can either take existing layers and make them wider or add new layers to the network in the middle of training while still maintaining the same function in the network so you can do that to increase the size of your network in the middle of training here which is kind of cool. So then you'll make choices over several epochs and eventually your final validation accuracy will be recorded and we'll have some leaderboard that compares your score on that data set to some simple baseline models. And depending on how well you do on this leaderboard we'll again offer some small amounts of extra credit for those of you who choose to participate. So this is again, totally optional, but I think it can be a really cool learning experience for you guys to play around with and explore how hyperparameters affect the learning process. Also, it's really useful for us. You'll help science out by participating in this experiment. We're pretty interested in seeing how people behave when they train neural networks so you'll be helping us out as well if you decide to play this. But again, totally optional, up to you. Any questions on that? Hopefully at some point but it's. So the question was will this be a paper or whatever eventually? Hopefully but it's really early stages of this project so I can't make any promises but I hope so. But I think it'll be really cool. [laughing] Yeah, so the question is how can you add layers during training? I don't really want to get into that right now but the paper to read is Net2Net by Ian Goodfellow's one of the authors and there's another paper from Microsoft called Network Morphism. So if you read those two papers you can see how this works. 
Okay, so last time, a bit of a reminder before we had the midterm last time we talked about recurrent neural networks. We saw that recurrent neural networks can be used for different types of problems. In addition to one to one we can do one to many, many to one, many to many. We saw how this can apply to language modeling and we saw some cool examples of applying neural networks to model different sorts of languages at the character level and we sampled these artificial math and Shakespeare and C source code. We also saw how similar things could be applied to image captioning by connecting a CNN feature extractor together with an RNN language model. And we saw some really cool examples of that. We also talked about the different types of RNN's. We talked about this Vanilla RNN. I also want to mention that this is sometimes called a Simple RNN or an Elman RNN so you'll see all of these different terms in literature. We also talked about the Long Short Term Memory or LSTM. And we talked about how the gradient, the LSTM has this crazy set of equations but it makes sense because it helps improve gradient flow during back propagation and helps this thing model more longer term dependencies in our sequences. So today we're going to switch gears and talk about a whole bunch of different exciting tasks. We're going to talk about, so so far we've been talking about mostly the image classification problem. Today we're going to talk about various types of other computer vision tasks where you actually want to go in and say things about the spatial pixels inside your images so we'll see segmentation, localization, detection, a couple other different computer vision tasks and how you can approach these with convolutional neural networks. So as a bit of refresher, so far the main thing we've been talking about in this class is image classification so here we're going to have some input image come in. That input image will go through some deep convolutional network, that network will give us some feature vector of maybe 4096 dimensions in the case of AlexNet RGB and then from that final feature vector we'll have some fully-connected, some final fully-connected layer that gives us 1000 numbers for the different class scores that we care about where 1000 is maybe the number of classes in ImageNet in this example. And then at the end of the day what the network does is we input an image and then we output a single category label saying what is the content of this entire image as a whole. But this is maybe the most basic possible task in computer vision and there's a whole bunch of other interesting types of tasks that we might want to solve using deep learning. So today we're going to talk about several of these different tasks and step through each of these and see how they all work with deep learning. So we'll talk about these more in detail about what each problem is as we get to it but this is kind of a summary slide that we'll talk first about semantic segmentation. We'll talk about classification and localization, then we'll talk about object detection, and finally a couple brief words about instance segmentation. So first is the problem of semantic segmentation. In the problem of semantic segmentation, we want to input an image and then output a decision of a category for every pixel in that image so for every pixel in this, so this input image for example is this cat walking through the field, he's very cute. 
And in the output we want to say for every pixel is that pixel a cat or grass or sky or trees or background or some other set of categories. So we're going to have some set of categories just like we did in the image classification case but now rather than assigning a single category labeled to the entire image, we want to produce a category label for each pixel of the input image. And this is called semantic segmentation. So one interesting thing about semantic segmentation is that it does not differentiate instances so in this example on the right we have this image with two cows where they're standing right next to each other and when we're talking about semantic segmentation we're just labeling all the pixels independently for what is the category of that pixel. So in the case like this where we have two cows right next to each other the output does not make any distinguishing, does not distinguish between these two cows. Instead we just get a whole mass of pixels that are all labeled as cow. So this is a bit of a shortcoming of semantic segmentation and we'll see how we can fix this later when we move to instance segmentation. But at least for now we'll just talk about semantic segmentation first. So you can imagine maybe using a class, so one potential approach for attacking semantic segmentation might be through classification. So there's this, you could use this idea of a sliding window approach to semantic segmentation. So you might imagine that we take our input image and we break it up into many many small, tiny local crops of the image so in this example we've taken maybe three crops from around the head of this cow and then you could imagine taking each of those crops and now treating this as a classification problem. Saying for this crop, what is the category of the central pixel of the crop? And then we could use all the same machinery that we've developed for classifying entire images but now just apply it on crops rather than on the entire image. And this would probably work to some extent but it's probably not a very good idea. So this would end up being super super computationally expensive because we want to label every pixel in the image, we would need a separate crop for every pixel in that image and this would be super super expensive to run forward and backward passes through. And moreover, we're actually, if you think about this we can actually share computation between different patches so if you're trying to classify two patches that are right next to each other and actually overlap then the convolutional features of those patches will end up going through the same convolutional layers and we can actually share a lot of the computation when applying this to separate passes or when applying this type of approach to separate patches in the image. So this is actually a terrible idea and nobody does this and you should probably not do this but it's at least the first thing you might think of if you were trying to think about semantic segmentation. Then the next idea that works a bit better is this idea of a fully convolutional network right. 
So rather than extracting individual patches from the image and classifying these patches independently, we can imagine just having our network be a whole giant stack of convolutional layers with no fully connected layers or anything. So in this case we just have a bunch of convolutional layers that are all maybe three by three with zero padding or something like that, so that each convolutional layer preserves the spatial size of the input, and now if we pass our image through a whole stack of these convolutional layers, then the final convolutional layer can just output a tensor of scores that is C by H by W, where C is the number of categories that we care about, and you can see this tensor as just giving our classification scores for every pixel, at every location in the input image. And we can compute this all at once with just some giant stack of convolutional layers. Then you could imagine training this thing by putting a classification loss at every pixel of this output, taking an average over those pixels in space, and just training this kind of network through normal, regular back propagation. Question? Oh, the question is how do you develop training data for this? It's very expensive, right. The training data for this requires labeling every pixel in those input images, so there are tools online where you can go in and draw contours around the objects and then fill in regions, but in general getting this kind of training data is very expensive. Yeah, the question is what is the loss function? So here, since we're making a classification decision per pixel, we put a cross-entropy loss on every pixel of the output. We have the ground truth category label for every pixel in the output, then we compute a cross-entropy loss between every pixel in the output and the ground truth pixels, and then take either a sum or an average over space, and then a sum or an average over the mini-batch. Question? Yeah, yeah. Yeah, the question is do we assume that we know the categories? So yes, we do assume that we know the categories up front, so this is just like the image classification case. In image classification we know at the start of training, based on our data set, that maybe there are 10 or 20 or 100 or 1000 classes that we care about, and here we are fixed to the set of classes for the data set. So this model is relatively simple, and you can imagine it working reasonably well assuming that you tuned all the hyperparameters right, but it's kind of a problem, right. In this setup, since we're applying a bunch of convolutions that all keep the same spatial size as the input image, this would be super super expensive. If you wanted to do convolutions that maybe have 64 or 128 or 256 channels for those convolutional filters, which is pretty common in a lot of these networks, then running those convolutions on a high-resolution input image over a sequence of layers would be extremely computationally expensive and would take a ton of memory. So in practice, you don't usually see networks with this architecture. Instead you tend to see networks that look something like this, where we have some downsampling and then some upsampling of the feature map inside the network.
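As a minimal sketch of this idea in PyTorch (the channel widths, class count, and image size here are made up for illustration, not a recommended architecture), a same-resolution stack of convolutions plus a per-pixel cross-entropy loss can look like this:

import torch
import torch.nn as nn

num_classes = 5
# 3x3, padding-1 convolutions preserve H x W; the last layer has C output
# channels, giving one class score per category at every pixel.
model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, num_classes, kernel_size=3, padding=1),
)

x = torch.randn(2, 3, 64, 64)                         # a small batch of images
target = torch.randint(0, num_classes, (2, 64, 64))   # per-pixel ground-truth labels
scores = model(x)                                     # shape (2, C, 64, 64)

# cross-entropy applied at every pixel, averaged over pixels and over the batch
loss = nn.functional.cross_entropy(scores, target)
loss.backward()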
So rather than doing all the convolutions of the full spatial resolution of the image, we'll maybe go through a small number of convolutional layers at the original resolution then downsample that feature map using something like max pooling or strided convolutions and sort of downsample, downsample, so we have convolutions in downsampling and convolutions in downsampling that look much like a lot of the classification networks that you see but now the difference is that rather than transitioning to a fully connected layer like you might do in an image classification setup, instead we want to increase the spatial resolution of our predictions in the second half of the network so that our output image can now be the same size as our input image and this ends up being much more computationally efficient because you can make the network very deep and work at a lower spatial resolution for many of the layers at the inside of the network. So we've already seen examples of downsampling when it comes to convolutional networks. We've seen that you can do strided convolutions or various types of pooling to reduce the spatial size of the image inside a network but we haven't really talked about upsampling and the question you might be wondering is what are these upsampling layers actually look like inside the network? And what are our strategies for increasing the size of a feature map inside the network? Sorry, was there a question in the back? Yeah, so the question is how do we upsample? And the answer is that's the topic of the next couple slides. [laughing] So one strategy for upsampling is something like unpooling so we have this notion of pooling to downsample so we talked about average pooling or max pooling so when we talked about average pooling we're kind of taking a spatial average within a receptive field of each pooling region. One kind of analog for upsampling is this idea of nearest neighbor unpooling. So here on the left we see this example of nearest neighbor unpooling where our input is maybe some two by two grid and our output is a four by four grid and now in our output we've done a two by two stride two nearest neighbor unpooling or upsampling where we've just duplicated that element for every point in our two by two receptive field of the unpooling region. Another thing you might see is this bed of nails unpooling or bed of nails upsampling where you'll just take, again we have a two by two receptive field for our unpooling regions and then you'll take the, in this case you make it all zeros except for one element of the unpooling region so in this case we've taken all of our inputs and always put them in the upper left hand corner of this unpooling region and everything else is zeros. And this is kind of like a bed of nails because the zeros are very flat, then you've got these things poking up for the values at these various non-zero regions. Another thing that you see sometimes which was alluded to by the question a minute ago is this idea of max unpooling so in a lot of these networks they tend to be symmetrical where we have a downsampling portion of the network and then an upsampling portion of the network with a symmetry between those two portions of the network. 
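Here is a small NumPy sketch of the two fixed, non-learned 2x upsampling schemes just described, for a single-channel feature map (the input values are just an example):

import numpy as np

def nearest_neighbor_unpool(x):
    # duplicate each value into a 2x2 block
    return np.repeat(np.repeat(x, 2, axis=0), 2, axis=1)

def bed_of_nails_unpool(x):
    # put each value in the top-left corner of its 2x2 block, zeros elsewhere
    out = np.zeros((2 * x.shape[0], 2 * x.shape[1]), dtype=x.dtype)
    out[::2, ::2] = x
    return out

x = np.array([[1., 2.],
              [3., 4.]])
print(nearest_neighbor_unpool(x))
print(bed_of_nails_unpool(x))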
So sometimes what you'll see is this idea of max unpooling where for each unpooling, for each upsampling layer, it is associated with one of the pooling layers in the first half of the network and now in the first half, in the downsampling when we do max pooling we'll actually remember which element of the receptive field during max pooling was used to do the max pooling and now when we go through the rest of the network then we'll do something that looks like this bed of nails upsampling except rather than always putting the elements in the same position, instead we'll stick it into the position that was used in the corresponding max pooling step earlier in the network. I'm not sure if that explanation was clear but hopefully the picture makes sense. Yeah, so then you just end up filling the rest with zeros. So then you fill the rest with zeros and then you stick the elements from the low resolution patch up into the high resolution patch at the points where the max pooling took place at the corresponding max pooling there. Okay, so that's kind of an interesting idea. Sorry, question? Oh yeah, so the question is why is this a good idea? Why might this matter? So the idea is that when we're doing semantic segmentation we want our predictions to be pixel perfect right. We kind of want to get those sharp boundaries and those tiny details in our predictive segmentation so now if you're doing this max pooling, there's this sort of heterogeneity that's happening inside the feature map due to the max pooling where from the low resolution image you don't know, you're sort of losing spatial information in some sense by you don't know where that feature vector came from in the local receptive field after max pooling. So if you actually unpool by putting the vector in the same slot you might think that that might help us handle these fine details a little bit better and help us preserve some of that spatial information that was lost during max pooling. Question? The question is does this make things easier for back prop? Yeah, I guess, I don't think it changes the back prop dynamics too much because storing these indices is not a huge computational overhead. They're pretty small in comparison to everything else. So another thing that you'll see sometimes is this idea of transpose convolution. So transpose convolution, so for these various types of unpooling that we just talked about, these bed of nails, this nearest neighbor, this max unpooling, all of these are kind of a fixed function, they're not really learning exactly how to do the upsampling so if you think about something like strided convolution, strided convolution is kind of like a learnable layer that learns the way that the network wants to perform downsampling at that layer. And by analogy with that there's this type of layer called a transpose convolution that lets us do kind of learnable upsampling. So it will both upsample the feature map and learn some weights about how it wants to do that upsampling. And this is really just another type of convolution so to see how this works remember how a normal three by three stride one pad one convolution would work. 
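To make the picture concrete, PyTorch's built-in layers implement exactly this pairing: the pooling layer can record which position won in each 2x2 region, and the matching unpooling layer puts values back into those positions, with zeros everywhere else. A minimal sketch:

import torch
import torch.nn as nn

pool = nn.MaxPool2d(kernel_size=2, stride=2, return_indices=True)
unpool = nn.MaxUnpool2d(kernel_size=2, stride=2)

x = torch.randn(1, 1, 4, 4)
pooled, indices = pool(x)           # (1, 1, 2, 2) values plus the winning positions
restored = unpool(pooled, indices)  # (1, 1, 4, 4), nonzero only at the winners
print(restored)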
That for this kind of normal convolution that we've seen many times now in this class, our input might by four by four, our output might be four by four, and now we'll have this three by three kernel and we'll take an inner product between, we'll plop down that kernel at the corner of the image, take an inner product, and that inner product will give us the value and the activation in the upper left hand corner of our output. And we'll repeat this for every receptive field in the image. Now if we talk about strided convolution then strided convolution ends up looking pretty similar. However, our input is maybe a four by four region and our output is a two by two region. But we still have this idea of taking, of there being some three by three filter or kernel that we plop down in the corner of the image, take an inner product and use that to compute a value of the activation and the output. But now with strided convolution the idea is that we're moving that, rather than popping down that filter at every possible point in the input, instead we're going to move the filter by two pixels in the input every time we move the filter by one pixel, every time we move by one pixel in the output. Right so this stride of two gives us a ratio between how much do we move in the input versus how much do we move in the output. So when you do a strided convolution with stride two this ends up downsampling the image or the feature map by a factor of two in kind of a learnable way. And now a transpose convolution is sort of the opposite in a way so here our input will be a two by two region and our output will be a four by four region. But now the operation that we perform with transpose convolution is a little bit different. Now so rather than taking an inner product instead what we're going to do is we're going to take the value of our input feature map at that upper left hand corner and that'll be some scalar value in the upper left hand corner. We're going to multiply the filter by that scalar value and then copy those values over to this three by three region in the output so rather than taking an inner product with our filter and the input, instead our input gives weights that we will use to weight the filter and then our output will be weighted copies of the filter that are weighted by the values in the input. And now we can do this sort of same ratio trick in order to upsample so now when we move one pixel in the input now we can plop our filter down two pixels away in the output and it's the same trick that now the blue pixel in the input is some scalar value and we'll take that scalar value, multiply it by the values in the filter, and copy those weighted filter values into this new region in the output. The tricky part is that sometimes these receptive fields in the output can overlap now and now when these receptive fields in the output overlap we just sum the results in the output. So then you can imagine repeating this everywhere and repeating this process everywhere and this ends up doing sort of a learnable upsampling where we use these learned convolutional filter weights to upsample the image and increase the spatial size. By the way, you'll see this operation go by a lot of different names in literature. 
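Here is a direct NumPy sketch of that "weighted copies of the filter" picture in one dimension with stride two: each input value scales the filter, the scaled copies are placed two output positions apart, and overlapping contributions are summed. This is just an illustrative toy, not how frameworks actually implement the layer:

import numpy as np

def transpose_conv1d(x, w, stride=2):
    out = np.zeros(stride * (len(x) - 1) + len(w))
    for i, xi in enumerate(x):
        # copy the filter, weighted by the input value, into the output
        out[i * stride : i * stride + len(w)] += xi * w
    return out

x = np.array([1., 2.])       # input: two values
w = np.array([1., 2., 3.])   # three-tap filter
print(transpose_conv1d(x, w))  # [1. 2. 5. 4. 6.]; the overlapping taps sum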
Sometimes this gets called things like deconvolution, which I think is kind of a bad name, but you'll see it out there in papers. From a signal processing perspective, deconvolution means the inverse operation to convolution, which this is not; however, you'll frequently see this type of layer called a deconvolution layer in some deep learning papers, so be aware of that, watch out for that terminology. You'll also sometimes see this called upconvolution, which is kind of a cute name. Sometimes it gets called fractionally strided convolution, because if we think of the stride as the ratio in step between the input and the output, then this is something like a stride one half convolution because of this ratio of one to two between steps in the input and steps in the output. This also sometimes gets called a backwards strided convolution, because if you work through the math, the forward pass of a transpose convolution ends up being the same mathematical operation as the backwards pass of a normal convolution. You might have to take my word for it, and it might not be super obvious when you first look at this, but that's kind of a neat fact, so you'll sometimes see that name as well. And as a bit of a more concrete example of what this looks like, I think it's a little easier to see in one dimension. So here we're doing a three by three transpose convolution in one dimension. Sorry, not three by three, a three by one transpose convolution in one dimension. So our filter here is just three numbers, and our input is two numbers, and now you can see that in our output we've taken the values in the input, used them to weight the values of the filter, and plopped down those weighted filters in the output with a stride of two, and where these receptive fields overlap in the output, we sum. So you might be wondering: this is kind of a funny name, so where does the name transpose convolution come from, and why is that actually my preferred name for this operation? That comes from this kind of neat interpretation of convolution. It turns out that any time you do convolution, you can always write convolution as a matrix multiplication. Again, this is kind of easier to see with a one-dimensional example, but here we're doing a one-dimensional convolution of a weight vector x, which has three elements, with an input vector a, which has four elements, A, B, C, D. So here we're doing a three by one convolution with stride one, and you can see that we can frame this whole operation as a matrix multiplication, where we take our convolutional kernel x and turn it into some matrix capital X which contains copies of that convolutional kernel offset by different amounts. And now we can take this giant weight matrix X and do a matrix-vector multiplication between X and our input a, and this just produces the same result as convolution. And now transpose convolution means that we're going to take this same weight matrix, but multiply by the transpose of that weight matrix. So here you can see the same example for this stride one convolution on the left, and the corresponding stride one transpose convolution on the right.
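A tiny NumPy sketch of that matrix view (this simplifies away the padding details shown on the slide; it uses a no-padding, stride-one convolution with a made-up kernel): build a matrix X whose rows are shifted copies of the kernel, so that convolution is X @ a, and the corresponding transpose convolution is multiplication by X.T:

import numpy as np

k = np.array([1., 2., 3.])       # the kernel x, three elements
a = np.array([1., 2., 3., 4.])   # the input a = (A, B, C, D)

# stride-1, no-padding convolution written as a matrix: each row is a shifted kernel
X = np.array([[k[0], k[1], k[2], 0.],
              [0.,   k[0], k[1], k[2]]])
conv_out = X @ a        # 2 outputs, same result as sliding the kernel over a

# transpose convolution: multiply by X.T, mapping 2 values back up to 4
up = X.T @ conv_out
print(conv_out, up)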
And if you work through the details, you'll see that when it comes to stride one, a stride one transpose convolution also ends up being a stride one normal convolution; there are a few details in the way that the border and the padding are handled, but it's fundamentally the same operation. But now things look different when you talk about a stride of two. So again, here on the left we can take a stride two convolution and write out this stride two convolution as a matrix multiplication. And now the corresponding transpose convolution is no longer a convolution, so if you look through this weight matrix and think about how convolutions end up getting represented in this way, then this transposed matrix for the stride two convolution is something fundamentally different from the original normal convolution operation. So that's kind of the reasoning behind the name, and that's why I think that's the nicest name to call this operation by. Sorry, was there a question? Sorry? It's very possible there's a typo in the slide, so please point it out on Piazza and I'll fix it, but I hope the idea was clear. Is there another question? Okay, thank you [laughing]. Yeah, so, oh no, lots of questions. Yeah, so the issue is why do we sum and not average? The reason we sum is that it falls out of this transpose convolution formulation, so that's the reason why we sum, but you're right that this is actually kind of a problem: the magnitudes will vary in the output depending on how many receptive fields overlapped at each output position. So actually in practice, this is something that people started to point out fairly recently, and they've somewhat switched away from this: using three by three stride two transpose convolutions for upsampling can sometimes produce checkerboard artifacts in the output, exactly due to that problem. So what I've seen in a couple of more recent papers is to use four by four stride two or two by two stride two transpose convolutions for upsampling, and that helps alleviate that problem a little bit. Yeah, so the question is what is a stride half convolution and where does that terminology come from? I think that was from my paper; yes, that was definitely it. So at the time I was writing that paper I was kind of into the name fractionally strided convolution, but after thinking about it a bit more I think transpose convolution is probably the right name. So then this idea of semantic segmentation actually ends up being pretty natural. You just have this giant convolutional network with downsampling and upsampling inside the network, and now our downsampling will be by strided convolution or pooling, our upsampling will be by transpose convolution or various types of unpooling or upsampling, and we can train this whole thing end to end with back propagation using this cross-entropy loss over every pixel. So this is actually pretty cool: we can take a lot of the same machinery that we already learned for image classification and apply it very easily to new types of problems. So the next task that I want to talk about is this idea of classification plus localization. We've talked about image classification a lot, where we want to just assign a category label to the input image, but sometimes you might want to know a little bit more about the image. In addition to predicting what the category is, in this case the cat, you might also want to know where that object is in the image.
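Before moving on to localization, here is a toy sketch of the downsample-then-upsample segmentation architecture just summarized, using stride-2 convolutions to shrink the feature map and 4x4 stride-2 transpose convolutions to grow it back, with the per-pixel cross-entropy loss at the end. The layer widths and image size are made up for illustration:

import torch
import torch.nn as nn

num_classes = 5
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),            # H   -> H/2
    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),           # H/2 -> H/4
    nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),  # H/4 -> H/2
    nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),    # H/2 -> H
)

x = torch.randn(1, 3, 64, 64)
target = torch.randint(0, num_classes, (1, 64, 64))
loss = nn.functional.cross_entropy(net(x), target)  # per-pixel loss, averaged
loss.backward()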
So in addition to predicting the category label cat, you might also want to draw a bounding box around the region of the cat in that image. And classification plus localization, the distinction here between this and object detection is that in the localization scenario you assume ahead of time that you know there's exactly one object in the image that you're looking for or maybe more than one but you know ahead of time that we're going to make some classification decision about this image and we're going to produce exactly one bounding box that's going to tell us where that object is located in the image so we sometimes call that task classification plus localization. And again, we can reuse a lot of the same machinery that we've already learned from image classification in order to tackle this problem. So kind of a basic architecture for this problem looks something like this. So again, we have our input image, we feed our input image through some giant convolutional network, this is Alex, this is AlexNet for example, which will give us some final vector summarizing the content of the image. Then just like before we'll have some fully connected layer that goes from that final vector to our class scores. But now we'll also have another fully connected layer that goes from that vector to four numbers. Where the four numbers are something like the height, the width, and the x and y positions of that bounding box. And now our network will produce these two different outputs, one is this set of class scores, and the other are these four numbers giving the coordinates of the bounding box in the input image. And now during training time, when we train this network we'll actually have two losses so in this scenario we're sort of assuming a fully supervised setting so we assume that each of our training images is annotated with both a category label and also a ground truth bounding box for that category in the image. So now we have two loss functions. We have our favorite softmax loss that we compute using the ground truth category label and the predicted class scores, and we also have some kind of loss that gives us some measure of dissimilarity between our predicted coordinates for the bounding box and our actual coordinates for the bounding box. So one very simple thing is to just take an L2 loss between those two and that's kind of the simplest thing that you'll see in practice although sometimes people play around with this and maybe use L1 or smooth L1 or they parametrize the bounding box a little bit differently but the idea is always the same, that you have some regression loss between your predicted bounding box coordinates and the ground truth bounding box coordinates. Question? Sorry, go ahead. So the question is, is this a good idea to do all at the same time? Like what happens if you misclassify, should you even look at the box coordinates? So sometimes people get fancy with it, so in general it works okay. It's not a big problem, you can actually train a network to do both of these things at the same time and it'll figure it out but sometimes things can get tricky in terms of misclassification so sometimes what you'll see for example is that rather than predicting a single box you might make predictions like a separate prediction of the box for each category and then only apply loss to the predicted box corresponding to the ground truth category. So people do get a little bit fancy with these things that sometimes helps a bit in practice. 
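Here is a minimal sketch of that two-headed setup in PyTorch. The tiny backbone and the loss-weighting factor lam are placeholders (this is not the real AlexNet), but it shows the structure: one shared feature vector, a linear head for class scores, a linear head for four box numbers, and a weighted sum of a softmax loss and an L2 regression loss:

import torch
import torch.nn as nn

class ClassifyAndLocalize(nn.Module):
    def __init__(self, feat_dim=128, num_classes=20):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim), nn.ReLU(),
        )
        self.cls_head = nn.Linear(feat_dim, num_classes)  # class scores
        self.box_head = nn.Linear(feat_dim, 4)            # box (x, y, w, h)

    def forward(self, x):
        f = self.backbone(x)
        return self.cls_head(f), self.box_head(f)

model = ClassifyAndLocalize()
images = torch.randn(2, 3, 64, 64)
labels = torch.randint(0, 20, (2,))   # ground-truth categories
boxes = torch.rand(2, 4)              # ground-truth boxes

scores, pred_boxes = model(images)
lam = 0.1  # hyperparameter trading off the two losses
loss = nn.functional.cross_entropy(scores, labels) \
       + lam * nn.functional.mse_loss(pred_boxes, boxes)
loss.backward()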
But at least this basic setup, it might not be perfect or it might not be optimal but it will work and it will do something. Was there a question in the back? Yeah, so that's the question is do these losses have different units, do they dominate the gradient? So this is what we call a multi-task loss so whenever we're taking derivatives we always want to take derivative of a scalar with respect to our network parameters and use that derivative to take gradient steps. But now we've got two scalars that we want to both minimize so what you tend to do in practice is have some additional hyperparameter that gives you some weighting between these two losses so you'll take a weighted sum of these two different loss functions to give our final scalar loss. And then you'll take your gradients with respect to this weighted sum of the two losses. And this ends up being really really tricky because this weighting parameter is a hyperparameter that you need to set but it's kind of different from some of the other hyperparameters that we've seen so far in the past right because this weighting hyperparameter actually changes the value of the loss function so one thing that you might often look at when you're trying to set hyperparameters is you might make different hyperparameter choices and see what happens to the loss under different choices of hyperparameters. But in this case because the loss actually, because the hyperparameter affects the absolute value of the loss making those comparisons becomes kind of tricky. So setting that hyperparameter is somewhat difficult. And in practice, you kind of need to take it on a case by case basis for exactly the problem you're solving but my general strategy for this is to have some other metric of performance that you care about other than the actual loss value which then you actually use that final performance metric to make your cross validation choices rather than looking at the value of the loss to make those choices. Question? So the question is why do we do this all at once? Why not do this separately? Yeah, so the question is why don't we fix the big network and then just only learn separate fully connected layers for these two tasks? People do do that sometimes and in fact that's probably the first thing you should try if you're faced with a situation like this but in general whenever you're doing transfer learning you always get better performance if you fine tune the whole system jointly because there's probably some mismatch between the features, if you train on ImageNet and then you use that network for your data set you're going to get better performance on your data set if you can also change the network. But one trick you might see in practice sometimes is that you might freeze that network then train those two things separately until convergence and then after they converge then you go back and jointly fine tune the whole system. So that's a trick that sometimes people do in practice in that situation. And as I've kind of alluded to this big network is often a pre-trained network that is taken from ImageNet for example. So a bit of an aside, this idea of predicting some fixed number of positions in the image can be applied to a lot of different problems beyond just classification plus localization. One kind of cool example is human pose estimation. So here we want to take an input image is a picture of a person. 
We want to output the positions of the joints for that person and this actually allows the network to predict what is the pose of the human. Where are his arms, where are his legs, stuff like that, and generally most people have the same number of joints. That's a bit of a simplifying assumption, it might not always be true but it works for the network. So for example one parameterization that you might see in some data sets is define a person's pose by 14 joint positions. Their feet and their knees and their hips and something like that and now when we train the network then we're going to input this image of a person and now we're going to output 14 numbers in this case giving the x and y coordinates for each of those 14 joints. And then you apply some kind of regression loss on each of those 14 different predicted points and just train this network with back propagation again. Yeah, so you might see an L2 loss but people play around with other regression losses here as well. Question? So the question is what do I mean when I say regression loss? So I mean something other than cross entropy or softmax right. When I say regression loss I usually mean like an L2 Euclidean loss or an L1 loss or sometimes a smooth L1 loss. But in general classification versus regression is whether your output is categorical or continuous so if you're expecting a categorical output like you ultimately want to make a classification decision over some fixed number of categories then you'll think about a cross entropy loss, softmax loss or these SVM margin type losses that we talked about already in the class. But if your expected output is to be some continuous value, in this case the position of these points, then your output is continuous so you tend to use different types of losses in those situations. Typically an L2, L1, different kinds of things there. So sorry for not clarifying that earlier. But the bigger point here is that for any time you know that you want to make some fixed number of outputs from your network, if you know for example. Maybe you knew that you wanted to, you knew that you always are going to have pictures of a cat and a dog and you want to predict both the bounding box of the cat and the bounding box of the dog in that case you'd know that you have a fixed number of outputs for each input so you might imagine hooking up this type of regression classification plus localization framework for that problem as well. So this idea of some fixed number of regression outputs can be applied to a lot of different problems including pose estimation. So the next task that I want to talk about is object detection and this is a really meaty topic. This is kind of a core problem in computer vision and you could probably teach a whole seminar class on just the history of object detection and various techniques applied there. So I'll be relatively brief and try to go over the main big ideas of object detection plus deep learning that have been used in the last couple of years. But the idea in object detection is that we again start with some fixed set of categories that we care about, maybe cats and dogs and fish or whatever but some fixed set of categories that we're interested in. And now our task is that given our input image, every time one of those categories appears in the image, we want to draw a box around it and we want to predict the category of that box so this is different from classification plus localization because there might be a varying number of outputs for every input image. 
You don't know ahead of time how many objects you expect to find in each image so that's, this ends up being a pretty challenging problem. So we've seen graphs, so this is kind of interesting. We've seen this graph many times of the ImageNet classification performance as a function of years and we saw that it just got better and better every year and there's been a similar trend with object detection because object detection has again been one of these core problems in computer vision that people have cared about for a very long time. So this slide is due to Ross Girshick who's worked on this problem a lot and it shows the progression of object detection performance on this one particular data set called PASCAL VOC which has been relatively used for a long time in the object detection community. And you can see that up until about 2012 performance on object detection started to stagnate and slow down a little bit and then in 2013 was when some of the first deep learning approaches to object detection came around and you could see that performance just shot up very quickly getting better and better year over year. One thing you might notice is that this plot ends in 2015 and it's actually continued to go up since then so the current state of the art in this data set is well over 80 and in fact a lot of recent papers don't even report results on this data set anymore because it's considered too easy. So it's a little bit hard to know, I'm not actually sure what is the state of the art number on this data set but it's off the top of this plot. Sorry, did you have a question? Nevermind. Okay, so as I already said this is different from localization because there might be differing numbers of objects for each image. So for example in this cat on the upper left there's only one object so we only need to predict four numbers but now for this image in the middle there's three animals there so we need our network to predict 12 numbers, four coordinates for each bounding box. Or in this example of many many ducks then you want your network to predict a whole bunch of numbers. Again, four numbers for each duck. So this is quite different from object detection. Sorry object detection is quite different from localization because in object detection you might have varying numbers of objects in the image and you don't know ahead of time how many you expect to find. So as a result, it's kind of tricky if you want to think of object detection as a regression problem. So instead, people tend to work, use kind of a different paradigm when thinking about object detection. So one approach that's very common and has been used for a long time in computer vision is this idea of sliding window approaches to object detection. So this is kind of similar to this idea of taking small patches and applying that for semantic segmentation and we can apply a similar idea for object detection. So the ideas is that we'll take different crops from the input image, in this case we've got this crop in the lower left hand corner of our image and now we take that crop, feed it through our convolutional network and our convolutional network does a classification decision on that input crop. 
It'll say that there's no dog here, there's no cat here, and then in addition to the categories that we care about we'll add an additional category called background and now our network can predict background in case it doesn't see any of the categories that we care about, so then when we take this crop from the lower left hand corner here then our network would hopefully predict background and say that no, there's no object here. Now if we take a different crop then our network would predict dog yes, cat no, background no. We take a different crop we get dog yes, cat no, background no. Or a different crop, dog no, cat yes, background no. Does anyone see a problem here? Yeah, the question is how do you choose the crops? So this is a huge problem right. Because there could be any number of objects in this image, these objects could appear at any location in the image, these objects could appear at any size in the image, these objects could also appear at any aspect ratio in the image, so if you want to do kind of a brute force sliding window approach you'd end up having to test thousands, tens of thousands, many many many many different crops in order to tackle this problem with a brute force sliding window approach. And in the case where every one of those crops is going to be fed through a giant convolutional network, this would be completely computationally intractable. So in practice people don't ever do this sort of brute force sliding window approach for object detection using convolutional networks. Instead there's this cool line of work called region proposals that comes from, this is not using deep learning typically. These are slightly more traditional computer vision techniques but the idea is that a region proposal network kind of uses more traditional signal processing, image processing type things to make some list of proposals for where, so given an input image, a region proposal network will then give you something like a thousand boxes where an object might be present. So you can imagine that maybe we do some local, we look for edges in the image and try to draw boxes that contain closed edges or something like that. These various types of image processing approaches, but these region proposal networks will basically look for blobby regions in our input image and then give us some set of candidate proposal regions where objects might be potentially found. And these are relatively fast-ish to run so one common example of a region proposal method that you might see is something called Selective Search which I think actually gives you 2000 region proposals, not the 1000 that it says on the slide. So you kind of run this thing and then after about two seconds of turning on your CPU it'll spit out 2000 region proposals in the input image where objects are likely to be found so there'll be a lot of noise in those. Most of them will not be true objects but there's a pretty high recall. If there is an object in the image then it does tend to get covered by these region proposals from Selective Search. So now rather than applying our classification network to every possible location and scale in the image instead what we can do is first apply one of these region proposal networks to get some set of proposal regions where objects are likely located and now apply a convolutional network for classification to each of these proposal regions and this will end up being much more computationally tractable than trying to do all possible locations and scales. 
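As a concrete aside, Selective Search — one classical, non-learned region proposal method — is available in OpenCV's contrib package. A minimal sketch of calling it might look like the following; this assumes `opencv-contrib-python` is installed, and the number of proposals you get back varies by image.

```python
import cv2

img = cv2.imread("input.jpg")  # any BGR image

# Selective Search: a classical, non-learned region proposal method.
ss = cv2.ximgproc.segmentation.createSelectiveSearchSegmentation()
ss.setBaseImage(img)
ss.switchToSelectiveSearchFast()   # "fast" mode; switchToSelectiveSearchQuality() is slower but more thorough
rects = ss.process()               # array of (x, y, w, h) candidate boxes, typically ~2000 of them

print("number of region proposals:", len(rects))
for (x, y, w, h) in rects[:100]:   # draw the first 100 proposals
    cv2.rectangle(img, (int(x), int(y)), (int(x + w), int(y + h)), (0, 255, 0), 1)
cv2.imwrite("proposals.jpg", img)
```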
And this idea all came together in a paper called R-CNN from a few years ago that does exactly that. Given our input image, we run some region proposal method to get our proposals — these are also sometimes called regions of interest, or ROIs, and again Selective Search gives you something like 2,000 of them. One problem is that these regions in the input image can have different sizes, but if we're going to run them all through a convolutional network for classification, the network typically wants inputs of the same fixed size, because of the fully connected layers and so on. So we need to take each region proposal and warp it to the fixed square size expected by the downstream network. We crop out the regions corresponding to the proposals, warp them to that fixed size, and run each of them through a convolutional network, which in this case uses an SVM to make a classification decision for each of those crops. (I seem to have lost a slide here.) It's not shown on the slide right now, but R-CNN also predicts a regression — a correction to the bounding box — for each of these input region proposals, because the proposals are generally in roughly the right position for an object but they might not be perfect. So in addition to a category label for each proposal, R-CNN predicts four numbers that are an offset or correction to the box that came out of the region proposal stage. So again this is a multi-task loss, and you train the whole thing. Sorry, was there a question? The question is: how much does the change in aspect ratio from warping impact accuracy? It's a little hard to say — I think there are some controlled experiments in some of these papers, but I'm not sure I can give a generic answer to that. Question? The question is: is it necessary for regions of interest to be rectangles? They typically are, because it's tough to warp non-rectangular regions, but once you move to something like instance segmentation you sometimes do get proposals that are not rectangles, if you actually care about predicting things that are not rectangles. Is there another question? Yeah, so the question is: are the region proposals learned? In R-CNN it's a traditional method — these are not learned, it's a fixed algorithm that someone wrote down — but we'll see in a couple of minutes that this has changed in the last couple of years. Is there another question? The question is: is the offset always inside the region of interest? The answer is no, it doesn't have to be. You might imagine that the region of interest put a box around a person but missed the head; the network could infer that this is a person, and people usually have heads, so the box should be a little higher. So sometimes the final predicted boxes will be outside the region of interest. Question? Yeah — the question is that you have a lot of ROIs that don't correspond to true objects? And like we said, in addition to the classes you actually care about you add an additional background class, so the class scores can also predict background to say that there was no object here. Question?
Yeah, so the question is what kind of data we need — and yes, this is fully supervised in the sense that our training data consists of images where each image has all the object categories marked with bounding boxes for each instance of that category. There are definitely papers that try to relax this — what if you don't have that data, what if you only have it for some images, or what if the annotations are noisy — but in the generic case you assume full supervision of all objects in the images at training time. Okay, so I think we've already alluded to this, but there are a lot of problems with this R-CNN framework. And actually, if you look at the figure on the right, you can see that additional bounding box head — so I'll put it back. This is still computationally pretty expensive: if we've got 2,000 region proposals and we're running each of them through the network independently, that gets expensive. There's also the issue of relying on these fixed region proposals — we're not learning them, so that's a problem. And in practice it ends up being pretty slow. In the original implementation, R-CNN would actually dump all the features to disk, so it took hundreds of gigabytes of disk space to store them, and training was super slow since you have to make all these separate forward and backward passes — something like 84 hours is one recorded training time. At test time it's also slow, roughly 30 seconds per image, because you need to run thousands of forward passes through the convolutional network, one for each region proposal. Thankfully, fast R-CNN fixed a lot of these problems. Fast R-CNN looks kind of the same: we start with our input image, but now, rather than processing each region of interest separately, we run the entire image through some convolutional layers all at once to get a high-resolution convolutional feature map corresponding to the whole image. We still use region proposals from some fixed method like Selective Search, but rather than cropping out the pixels of the image corresponding to each proposal, we project the proposals onto the convolutional feature map and take crops from the feature map instead of from the image. This lets us reuse a lot of the expensive convolutional computation across the entire image when we have many crops per image. But again, if we have fully connected layers downstream, they expect a fixed-size input, so we need to reshape those crops from the convolutional feature map, and fast R-CNN does that in a differentiable way using something called an ROI pooling layer. Once you have these warped crops from the convolutional feature map, you run them through some fully connected layers and predict your classification scores and your regression offsets to the bounding boxes. And when we train this thing we again have a multi-task loss that trades off between these two terms, and during backpropagation we can backprop through the entire thing and learn it all jointly.
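The key trick here — cropping a fixed-size chunk of features for each proposal out of one shared feature map — is available directly in torchvision as `roi_pool` and `roi_align`. Here is a minimal sketch; the feature map, the boxes, and the `spatial_scale` value (how much the backbone downsampled the image) are made-up numbers for illustration.

```python
import torch
from torchvision.ops import roi_align

# Shared convolutional feature map for one image: (N, C, H', W') = (1, 256, 50, 50),
# pretending it came from a backbone that downsampled a 400x400 image by a factor of 8.
features = torch.randn(1, 256, 50, 50)

# Region proposals in *image* coordinates, (x1, y1, x2, y2), e.g. from Selective Search or an RPN.
proposals = [torch.tensor([[  0.,   0., 100., 100.],
                           [150.,  80., 360., 300.]])]

# Crop and resample each proposal to a fixed 7x7 grid of features.
crops = roi_align(features, proposals, output_size=(7, 7), spatial_scale=1.0 / 8)
print(crops.shape)  # (2, 256, 7, 7) -> ready for the downstream fully connected heads
```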
This ROI pooling, it looks kind of like max pooling. I don't really want to get into the details of that right now. And in terms of speed if we look at R-CNN versus fast R-CNN versus this other model called SPP net which is kind of in between the two, then you can see that at training time fast R-CNN is something like 10 times faster to train because we're sharing all this computation between different feature maps. And now at test time fast R-CNN is super fast and in fact fast R-CNN is so fast at test time that its computation time is actually dominated by computing region proposals. So we said that computing these 2000 region proposals using Selective Search takes something like two seconds and now once we've got all these region proposals then because we're processing them all sort of in a shared way by sharing these expensive convolutions across the entire image that we can process all of these region proposals in less than a second altogether. So fast R-CNN ends up being bottlenecked by just the computing of these region proposals. Thankfully we've solved this problem with faster R-CNN. So the idea in faster R-CNN is to just make, so the problem was the computing the region proposals using this fixed function was a bottleneck. So instead we'll just make the network itself predict its own region proposals. And so the way that this sort of works is that again, we take our input image, run the entire input image altogether through some convolutional layers to get some convolutional feature map representing the entire high resolution image and now there's a separate region proposal network which works on top of those convolutional features and predicts its own region proposals inside the network. Now once we have those predicted region proposals then it looks just like fast R-CNN where now we take crops from those region proposals from the convolutional features, pass them up to the rest of the network. And now we talked about multi-task losses and multi-task training networks to do multiple things at once. Well now we're telling the network to do four things all at once so balancing out this four-way multi-task loss is kind of tricky. But because the region proposal network needs to do two things: it needs to say for each potential proposal is it an object or not an object, it needs to actually regress the bounding box coordinates for each of those proposals, and now the final network at the end needs to do these two things again. Make final classification decisions for what are the class scores for each of these proposals, and also have a second round of bounding box regression to again correct any errors that may have come from the region proposal stage. Question? So the question is that sometimes multi-task learning might be seen as regularization and are we getting that affect here? I'm not sure if there's been super controlled studies on that but actually in the original version of the faster R-CNN paper they did a little bit of experimentation like what if we share the region proposal network, what if we don't share? What if we learn separate convolutional networks for the region proposal network versus the classification network? And I think there were minor differences but it wasn't a dramatic difference either way. So in practice it's kind of nicer to only learn one because it's computationally cheaper. Sorry, question? 
Yeah the question is how do you train this region proposal network because you don't know, you don't have ground truth region proposals for the region proposal network. So that's a little bit hairy. I don't want to get too much into those details but the idea is that at any time you have a region proposal which has more than some threshold of overlap with any of the ground truth objects then you say that that is the positive region proposal and you should predict that as the region proposal and any potential proposal which has very low overlap with any ground truth objects should be predicted as a negative. But there's a lot of dark magic hyperparameters in that process and that's a little bit hairy. Question? Yeah, so the question is what is the classification loss on the region proposal network and the answer is that it's making a binary, so I didn't want to get into too much of the details of that architecture 'cause it's a little bit hairy but it's making binary decisions. So it has some set of potential regions that it's considering and it's making a binary decision for each one. Is this an object or not an object? So it's like a binary classification loss. So once you train this thing then faster R-CNN ends up being pretty darn fast. So now because we've eliminated this overhead from computing region proposals outside the network, now faster R-CNN ends up being very very fast compared to these other alternatives. Also, one interesting thing is that because we're learning the region proposals here you might imagine maybe what if there was some mismatch between this fixed region proposal algorithm and my data? So in this case once you're learning your own region proposals then you can overcome that mismatch if your region proposals are somewhat weird or different than other data sets. So this whole family of R-CNN methods, R stands for region, so these are all region-based methods because there's some kind of region proposal and then we're doing some processing, some independent processing for each of those potential regions. So this whole family of methods are called these region-based methods for object detection. But there's another family of methods that you sometimes see for object detection which is sort of all feed forward in a single pass. So one of these is YOLO for You Only Look Once. And another is SSD for Single Shot Detection and these two came out somewhat around the same time. But the idea is that rather than doing independent processing for each of these potential regions instead we want to try to treat this like a regression problem and just make all these predictions all at once with some big convolutional network. So now given our input image you imagine dividing that input image into some coarse grid, in this case it's a seven by seven grid and now within each of those grid cells you imagine some set of base bounding boxes. Here I've drawn three base bounding boxes like a tall one, a wide one, and a square one but in practice you would use more than three. So now for each of these grid cells and for each of these base bounding boxes you want to predict several things. One, you want to predict an offset off the base bounding box to predict what is the true location of the object off this base bounding box. And you also want to predict classification scores so maybe a classification score for each of these base bounding boxes. How likely is it that an object of this category appears in this bounding box. 
So then at the end we end up predicting from our input image, we end up predicting this giant tensor of seven by seven grid by 5B + C. So that's just where we have B base bounding boxes, we have five numbers for each giving our offset and our confidence for that base bounding box and C classification scores for our C categories. So then we kind of see object detection as this input of an image, output of this three dimensional tensor and you can imagine just training this whole thing with a giant convolutional network. And that's kind of what these single shot methods do where they just, and again matching the ground truth objects into these potential base boxes becomes a little bit hairy but that's what these methods do. And by the way, the region proposal network that gets used in faster R-CNN ends up looking quite similar to these where they have some set of base bounding boxes over some gridded image, another region proposal network does some regression plus some classification. So there's kind of some overlapping ideas here. So in faster R-CNN we're kind of treating the object, the region proposal step as kind of this fixed end-to-end regression problem and then we do the separate per region processing but now with these single shot methods we only do that first step and just do all of our object detection with a single forward pass. So object detection has a ton of different variables. There could be different base networks like VGG, ResNet, we've seen different metastrategies for object detection including this faster R-CNN type region based family of methods, this single shot detection family of methods. There's kind of a hybrid that I didn't talk about called R-FCN which is somewhat in between. There's a lot of different hyperparameters like what is the image size, how many region proposals do you use. And there's actually this really cool paper that will appear at CVPR this summer that does a really controlled experimentation around a lot of these different variables and tries to tell you how do these methods all perform under these different variables. So if you're interested I'd encourage you to check it out but kind of one of the key takeaways is that the faster R-CNN style of region based methods tends to give higher accuracies but ends up being much slower than the single shot methods because the single shot methods don't require this per region processing. But I encourage you to check out this paper if you want more details. Also as a bit of aside, I had this fun paper with Andre a couple years ago that kind of combined object detection with image captioning and did this problem called dense captioning so now the idea is that rather than predicting a fixed category label for each region, instead we want to write a caption for each region. And again, we had some data set that had this sort of data where we had a data set of regions together with captions and then we sort of trained this giant end-to-end model that just predicted these captions all jointly. And this ends up looking somewhat like faster R-CNN where you have some region proposal stage then a bounding box, then some per region processing. But rather than a SVM or a softmax loss instead those per region processing has a whole RNN language model that predicts a caption for each region. So that ends up looking quite a bit like faster R-CNN. There's a video here but I think we're running out of time so I'll skip it. 
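As an aside, going back to the single-shot detectors for a second: the output described above is literally just a convolutional feature map with (5B + C) channels over a coarse grid, which is easy to write down. The grid size, number of base boxes, and class count here are the illustrative numbers from the slide, and this one-layer head is a simplified stand-in, not the actual YOLO or SSD architecture.

```python
import torch
import torch.nn as nn

B, C = 3, 20          # base bounding boxes per grid cell, number of categories
S = 7                 # grid size

# Pretend a backbone has already reduced the image to a (N, 512, 7, 7) feature map.
backbone_features = torch.randn(1, 512, S, S)

# A 1x1 conv head mapping features to the detection tensor: 5 numbers per base box
# (4 box offsets + 1 confidence) plus C class scores, at every grid cell.
det_head = nn.Conv2d(512, 5 * B + C, kernel_size=1)
out = det_head(backbone_features)          # (1, 5B + C, 7, 7)
out = out.permute(0, 2, 3, 1)              # (1, 7, 7, 5B + C): one prediction vector per grid cell
print(out.shape)                           # torch.Size([1, 7, 7, 35])
```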
But the idea here is that once you have this, you can tie together a lot of these ideas, and if you have some new problem you're interested in tackling, like dense captioning, you can recycle a lot of the components you've learned from other problems like object detection and image captioning and stitch together one end-to-end network that produces the outputs you care about. So the last task I want to talk about is instance segmentation. Instance segmentation is in some ways the full problem: we're given an input image and we want to predict the locations and identities of the objects in that image, similar to object detection, but rather than just predicting a bounding box for each object, we want to predict a whole segmentation mask for each one — which pixels in the input image correspond to each object instance. So this is a hybrid between semantic segmentation and object detection. Like object detection, we can handle multiple objects and we differentiate the identities of different instances — in this example, since there are two dogs in the image, an instance segmentation method actually distinguishes between the two dog instances in the output — and like semantic segmentation, we have pixel-wise accuracy, where for each object we want to say which pixels belong to it. There have been a lot of different methods people have used to tackle instance segmentation as well, but the current state of the art is a new paper called Mask R-CNN that just came out on arXiv about a month ago, so it's not yet published — this is super fresh stuff. And it ends up looking a lot like faster R-CNN. It has the same multi-stage processing approach: we take our whole input image, that image goes into a convolutional network and a learned region proposal network exactly as in faster R-CNN, and once we have our learned region proposals we project them onto the convolutional feature map, just like we did in fast and faster R-CNN. But now, rather than just making a classification and bounding box regression decision for each of those region proposals, we additionally want to predict a segmentation mask for each one. So it looks like a mini semantic segmentation problem inside each of the region proposals coming from the region proposal network. After we do this ROI align to warp the features corresponding to the region proposal into the right shape, we have two different branches. The first branch, at the top, looks just like faster R-CNN: it predicts classification scores telling us the category corresponding to that region proposal (or whether it's background), and it predicts bounding box coordinates regressed off the region proposal coordinates. And in addition we have this branch at the bottom, which looks basically like a mini semantic segmentation network, which classifies, for each pixel in that region proposal, whether or not it belongs to the object. So this Mask R-CNN architecture just unifies all of these different problems we've been talking about today into one nice, jointly end-to-end trainable model.
And it's really cool and it actually works really really well so when you look at the examples in the paper they're kind of amazing. They look kind of indistinguishable from ground truth. So in this example on the left you can see that there are these two people standing in front of motorcycles, it's drawn the boxes around these people, it's also gone in and labeled all the pixels of those people and it's really small but actually in the background on that image on the left there's also a whole crowd of people standing very small in the background. It's also drawn boxes around each of those and grabbed the pixels of each of those images. And you can see that this is just, it ends up working really really well and it's a relatively simple addition on top of the existing faster R-CNN framework. So I told you that mask R-CNN unifies everything we talked about today and it also does pose estimation by the way. So we talked about, you can do pose estimation by predicting these joint coordinates for each of the joints of the person so you can do mask R-CNN to do joint object detection, pose estimation, and instance segmentation. And the only addition we need to make is that for each of these region proposals we add an additional little branch that predicts these coordinates of the joints for the instance of the current region proposal. So now this is just another loss, like another layer that we add, another head coming out of the network and an additional term in our multi-task loss. But once we add this one little branch then you can do all of these different problems jointly and you get results looking something like this. Where now this network, like a single feed forward network is deciding how many people are in the image, detecting where those people are, figuring out the pixels corresponding to each of those people and also drawing a skeleton estimating the pose of those people and this works really well even in crowded scenes like this classroom where there's a ton of people sitting and they all overlap each other and it just seems to work incredibly well. And because it's built on the faster R-CNN framework it also runs relatively close to real time so this is running something like five frames per second on a GPU because this is all sort of done in the single forward pass of the network. So this is again, a super new paper but I think that this will probably get a lot of attention in the coming months. So just to recap, we've talked. Sorry question? The question is how much training data do you need? So all of these instant segmentation results were trained on the Microsoft Coco data set so Microsoft Coco is roughly 200,000 training images. It has 80 categories that it cares about so in each of those 200,000 training images it has all the instances of those 80 categories labeled. So there's something like 200,000 images for training and there's something like I think an average of fivee or six instances per image. So it actually is quite a lot of data. And for Microsoft Coco for all the people in Microsoft Coco they also have all the joints annotated as well so this actually does have quite a lot of supervision at training time you're right, and actually is trained with quite a lot of data. 
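If you just want to play with these detection and instance segmentation models rather than train them from scratch, torchvision ships pretrained Faster R-CNN, Mask R-CNN, and Keypoint R-CNN models trained on COCO. A minimal inference sketch is below; the exact `weights` argument depends on your torchvision version (older versions use `pretrained=True`), and the score threshold is an arbitrary choice.

```python
import torch
import torchvision

# Pretrained Mask R-CNN (COCO).
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 480, 640)          # stand-in for a real image tensor scaled to [0, 1]
with torch.no_grad():
    pred = model([image])[0]             # one dict per input image

keep = pred["scores"] > 0.5              # arbitrary confidence threshold
print(pred["boxes"][keep].shape)         # (K, 4) detected boxes
print(pred["labels"][keep])              # COCO category ids
print(pred["masks"][keep].shape)         # (K, 1, 480, 640) per-instance soft masks

# keypointrcnn_resnet50_fpn works the same way and additionally returns "keypoints"
# with 17 (x, y, visibility) triples per detected person.
```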
So I think one really interesting topic to study moving forward is that we kind of know that if you have a lot of data to solve some problem, at this point we're relatively confident that you can stitch up some convolutional network that can probably do a reasonable job at that problem but figuring out ways to get performance like this with less training data is a super interesting and active area of research and I think that's something people will be spending a lot of their efforts working on in the next few years. So just to recap, today we had kind of a whirlwind tour of a whole bunch of different computer vision topics and we saw how a lot of the machinery that we built up from image classification can be applied relatively easily to tackle these different computer vision topics. And next time we'll talk about, we'll have a really fun lecture on visualizing CNN features. Well also talk about DeepDream and neural style transfer.
- Good morning. So, it's 12:03, so I want to get started. Welcome to Lecture 12 of CS231n. Today we're going to talk about visualizing and understanding convolutional networks. This is always a super fun lecture to give because we get to look at a lot of pretty pictures, so it's one of my favorites. As usual, a couple of administrative things. Hopefully your projects are all going well, because as a reminder your milestones are due on Canvas tonight. It is Canvas, right? Okay — wanted to double check, yeah. Due on Canvas tonight. We are working furiously on grading your midterms, so we hope to have those midterm grades back to you on Gradescope this week. I know there was a little confusion — you all got registration emails for Gradescope in the last week or so, and we saw a couple of questions on Piazza about it. We've decided to use Gradescope to grade the midterms, so don't be confused if you get some emails about that. Another reminder is that assignment three was released on Friday of last week. It will be due a week from this Friday, on the 26th. Assignment three is almost entirely brand new this year, so we apologize for taking a little longer than expected to get it out, but I think it's super cool — a lot of the stuff we'll talk about in today's lecture you'll actually be implementing on your assignment. And for the assignment you'll get the choice of either PyTorch or TensorFlow to work through these different examples, so we hope that's a really useful experience for you. We also saw a lot of activity on HyperQuest over the weekend, so that's really awesome. The leaderboard went up yesterday, and it seems like you're really trying to battle it out to show off your deep learning network-training skills, which is super cool. Due to the high interest in HyperQuest, and due to the conflict with the milestone submission time, we decided to extend the deadline for extra credit through Sunday. So anyone who does at least 12 runs on HyperQuest by Sunday will get a little bit of extra credit in the class, and those of you at the top of the leaderboard doing really well may get a little bit extra on top of that. So thanks for participating — we got a lot of interest and that was really cool. A final reminder is about the poster session: the poster session will be on June 6th. That date is finalized — I don't remember the exact time, but it is June 6th. We got some questions about exactly when the poster session is from those of you who are traveling at the end of the quarter or starting internships, so: June 6th. Any questions on the admin notes? No? Totally clear. So, last time we had a pretty jam-packed lecture where we talked about a lot of different computer vision tasks. As a reminder: we talked about semantic segmentation, which is the problem where you want to assign labels to every pixel in the input image, but which does not differentiate the object instances in those images. We talked about classification plus localization, where in addition to a class label you also want to draw a box, or perhaps several boxes, in the image — the distinction being that in the classification plus localization setup you have some fixed number of objects you're looking for. We also saw that this paradigm can be applied to things like pose recognition.
There, you want to regress the positions of the different joints in the human body. We also talked about object detection, where you start with some fixed set of category labels you're interested in, like dogs and cats, and the task is to draw a box around every instance of those objects that appears in the input image. Object detection is really distinct from classification plus localization because with object detection we don't know ahead of time how many object instances we're looking for in the image. And we saw that there's this whole family of methods based on R-CNN, fast R-CNN and faster R-CNN, as well as the single-shot detection methods, for addressing the problem of object detection. Then finally we talked pretty briefly about instance segmentation, which combines aspects of semantic segmentation and object detection: the goal is to detect all the instances of the categories we care about, as well as label the pixels belonging to each instance. So in that example we detected two dogs and one cat, and for each of those instances we wanted to label all of its pixels. We covered a lot last lecture, but those are really interesting and exciting problems that you might consider using in parts of your projects. Today, though, we're going to shift gears a little bit and ask another question: what's really going on inside convolutional networks? We've seen by this point in the class how to train convolutional networks and how to stitch up different types of architectures to attack different problems, but one question you might have had in your mind is: what exactly is going on inside these networks? How do they do the things that they do? What kinds of features are they looking for? So far we've treated ConvNets a bit like a black box: an input image of raw pixels comes in on one side, it goes through many layers of convolution and pooling and different sorts of transformations, and on the other side we end up with some understandable, interpretable output — class scores, bounding box positions, labeled pixels, or something like that. But the question is: what are all those layers in the middle doing? What kinds of things in the input image are they looking for? Can we build some intuition for how ConvNets are working, and what kinds of techniques do we have for analyzing the internals of the network? So one relatively simple thing to look at is the first layer. We've talked about this before, but recall that the first convolutional layer consists of filters — in AlexNet, for example, each filter in the first convolutional layer has shape 3 by 11 by 11, and these filters get slid over the input image: we take inner products between a chunk of the image and the weights of the filter, and that gives us the output after the first convolutional layer. In AlexNet we have 64 of these filters. And because in the first layer we're taking a direct inner product between the weights of the convolutional filter and the pixels of the image, we can get some sense of what these filters are looking for by simply visualizing the learned weights of these filters as images themselves. (There's a short code sketch of this below.)
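Here is a small sketch of how you might reproduce that first-layer filter visualization yourself with a pretrained model from the torchvision model zoo. The normalization and grid layout are just convenience choices, and on older torchvision versions the `weights` argument is `pretrained=True` instead.

```python
import torchvision
from torchvision.utils import make_grid, save_image

# Pretrained AlexNet; its first layer is a 64 x 3 x 11 x 11 convolution.
model = torchvision.models.alexnet(weights="DEFAULT")
w = model.features[0].weight.data.clone()   # (64, 3, 11, 11) learned filter weights
print(w.shape)

# Scale each filter independently into [0, 1] so it can be viewed as a tiny RGB image,
# then tile all 64 filters into one grid image.
grid = make_grid(w, nrow=8, normalize=True, scale_each=True, padding=1)
save_image(grid, "alexnet_conv1_filters.png")
```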
So for each of those 11 by 11 by 3 filters in AlexNet we can just visualize the filter as a little 11 by 11 image with three channels giving the red, green and blue values, and because there are 64 of these filters we visualize 64 little 11 by 11 images. What's shown here are filters taken from pretrained models in the PyTorch model zoo: the weights of the first convolutional layer of AlexNet, ResNet-18, ResNet-101 and DenseNet-121. And you can see what these filters are looking for. A lot of them are looking for oriented edges — bars of light and dark at various angles and various positions in the input — and you can see opposing colors, like this green-and-pink pair or this orange-and-blue pair. This connects back to what we talked about with Hubel and Wiesel all the way back in the first lecture: the human visual system is known to detect things like oriented edges at its very early stages, and it turns out that convolutional networks tend to do something similar at their first convolutional layers as well. What's kind of interesting is that pretty much no matter what type of architecture you hook up or what training data you train on, the first-layer convolutional weights of almost any network trained on images end up looking something like this, with oriented edges and opposing colors. Sorry, what was that question? Yes, these are showing the learned weights of the first convolutional layer. Oh, so the question is: why does visualizing the weights of the filters tell you what the filter is looking for? The intuition comes from template matching and inner products. If you have some template vector, and you compute a scalar output by taking an inner product between that template and some arbitrary piece of data, then the input that maximizes that activation — under a norm constraint on the input — is exactly a copy of the template. Whenever you're taking inner products, the thing that causes the inner product to be maximal is a copy of the thing you're taking the inner product with. That's why visualizing these weights shows us what the first layer is looking for. For these networks the first layer was always a convolutional layer — generally, whenever you're working with image data and training convolutional networks, you put a convolutional layer first. Yeah, so the question is: can we do this same type of procedure in the middle of the network? That's actually the next slide — good anticipation. If we draw this exact same visualization for the intermediate convolutional layers, it's actually a lot less interpretable. This is performing the exact same visualization, but for the tiny ConvNet demo network that's running on the course website; for that network, the first layer is a 7 by 7 convolution with 16 filters.
At the top we're visualizing the first-layer weights for this network, just like on the previous slide, and below that, the second-layer weights. After the first convolution there's a ReLU or some other nonlinearity, and then the second convolutional layer receives a 16-channel input and does 7 by 7 convolution with 20 filters. The problem is that you can't really visualize these directly as images. Each second-layer filter is 7 by 7 and extends over the full depth of 16 channels, and we have 20 such filters producing the output planes of the next layer, but looking directly at the weights of these filters doesn't tell us much. So what's been done here is that, for a single 16 by 7 by 7 convolutional filter, we spread out its 16 seven-by-seven planes into 16 little 7 by 7 grayscale images. Those tiny grayscale images at the top show the weights of one convolutional filter in the second layer, and because this layer has 20 such 16 by 7 by 7 filters, we visualize the weights of all 20 of them as grids of images. You can see there's some kind of spatial structure here, but it doesn't really give you good intuition for what they're looking at, because these filters aren't connected directly to the input image — the second-layer convolutional filters are connected to the output of the first layer. So this visualization tells you what activation pattern after the first convolution would cause a second-layer filter to maximally activate, but that's not very interpretable, because we don't have a good sense for what those first-layer activations look like in terms of image pixels. So we'll need to develop some slightly fancier techniques to get a sense of what's going on in the intermediate layers. Question in the back? Yeah — so the question is about scaling: for all the visualizations on the previous slide we had to scale the weights into the 0-to-255 range. In practice those weights could be unbounded, they could have any range, but to get nice visualizations we need to rescale them. These visualizations also don't take into account the biases in these layers, so keep that in mind and don't take these visualizations too literally. Now, at the last layer: remember, at the end of a convolutional network we have these maybe 1,000 class scores telling us the predicted score for each class in our training dataset, and immediately before that last layer there's often a fully connected layer — in the case of AlexNet, a 4096-dimensional feature representation of the image that then gets fed into the final layer to predict our final class scores. Another route for tackling the problem of visualizing and understanding ConvNets is to try to understand what's happening at that last layer of the network.
So what we can do is take some dataset of images, run a bunch of them through our trained convolutional network, and record that 4096-dimensional vector for each image, and then try to figure out and visualize what that last hidden layer is doing, rather than the first convolutional layer. One thing you might try is a nearest-neighbor approach. Remember, way back in the second lecture we saw this graphic on the left, where we had a nearest-neighbor classifier and we were looking at nearest neighbors in pixel space between CIFAR-10 images. When you look at nearest neighbors in pixel space you pull up images that look quite similar to the query image: the left column here is a CIFAR-10 image from the test set, and the next five columns are its nearest neighbors in pixel space. For example, this white dog's nearest neighbors in pixel space are these white blobby things that may or may not be dogs, but at least the raw pixels of the image are quite similar. Now we can do the same type of visualization — computing and showing nearest-neighbor images — but computing the nearest neighbors in that 4096-dimensional feature space computed by the convolutional network rather than in pixel space. On the right we see some examples: the first column shows images from the test set of the ImageNet classification dataset, and the subsequent columns show nearest neighbors to those images in the 4096-dimensional feature space computed by AlexNet. You can see that this is quite different from the pixel-space nearest neighbors, because the pixels are often quite different between an image and its feature-space neighbors, yet the semantic content of those images tends to be similar. For example, in the second row the query image is an elephant standing on the left side of the image with green grass behind it, and one of its nearest neighbors in the test set is an elephant standing on the right side of the image. That's really interesting, because the pixels of those two images are almost entirely different, yet in the feature space learned by the network they end up very close to each other — which means that, somehow, this last-layer feature is capturing some of the semantic content of these images. That's really cool and really exciting, and in general looking at these kinds of nearest-neighbor visualizations is a quick and easy way to get a sense of what's going on. Yes? So the question is: the standard supervised training procedure for a classification network has nothing in the loss encouraging these features to be close together. That's true — it's kind of a happy accident that they end up close to each other, because we didn't tell the network during training that these features should be close.
However, people do sometimes train networks using things called contrastive losses or triplet losses, which explicitly put constraints on the network such that those last-layer features end up with a metric-space interpretation — but AlexNet at least was not trained specifically for that. The question is: what does this nearest-neighbor thing have to do with the last layer? So we take an image and run it through the network, and the last hidden layer of the network — because of these fully connected layers at the end — is a 4096-dimensional vector. What we're doing is writing down that 4096-dimensional vector for each of the images and then computing nearest neighbors according to that vector, which is computed by the network. Maybe we can chat offline. So another angle for visualizing what's going on in this last layer is dimensionality reduction. Those of you who have taken CS229, for example, have seen something like PCA, which lets you take a high-dimensional representation, like these 4096-dimensional features, and compress it down to two dimensions so you can visualize that feature space more directly. Principal Component Analysis is one way to do that, but there's another, more powerful algorithm called t-SNE — t-distributed stochastic neighbor embedding — which is a non-linear dimensionality reduction method that people in deep learning often use for visualizing features. As an example of what t-SNE can do, this visualization is showing a t-SNE dimensionality reduction of the MNIST dataset. MNIST, remember, is a dataset of handwritten digits between zero and nine, where each image is a 28 by 28 grayscale image. We've used t-SNE to take that 28-times-28-dimensional feature space of raw MNIST pixels, compress it down to two dimensions, and then visualize each MNIST digit at its location in that compressed two-dimensional representation. When you run t-SNE on the raw pixels of MNIST, you can see these natural clusters appearing, which correspond to the digits of the dataset. So now we can do a similar type of visualization where we apply t-SNE to the features from the last layer of our trained ImageNet classifier. To be a little more concrete: we take a large set of images, run them through the convolutional network, and record the final 4096-dimensional feature vector from the last layer for each, which gives us a large collection of 4096-dimensional vectors. We apply t-SNE to compress that 4096-dimensional feature space down to a two-dimensional one, then lay out a grid in that compressed two-dimensional space and visualize which images land at each location in the grid. By doing this you get a rough sense of what the geometry of this learned feature space looks like. These slide images are a little hard to see, so I'd encourage you to check out the high-resolution versions online — and a rough code sketch of this feature-extraction-plus-t-SNE pipeline is below.
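A minimal sketch of that pipeline: chop the final scoring layer off a pretrained AlexNet so it outputs the 4096-dimensional last-hidden-layer features, stack those features for a set of images, and then either look up nearest neighbors in that space or compress it to 2-D with scikit-learn's t-SNE. The random data stands in for a real, preprocessed image batch, and the t-SNE settings are defaults rather than tuned values; older torchvision uses `pretrained=True` instead of the `weights` argument.

```python
import torch
import torch.nn as nn
import torchvision
from sklearn.manifold import TSNE

model = torchvision.models.alexnet(weights="DEFAULT")
# Drop the final 4096 -> 1000 scoring layer so the network outputs the 4096-d features.
model.classifier = nn.Sequential(*list(model.classifier.children())[:-1])
model.eval()

images = torch.randn(200, 3, 224, 224)        # stand-in for real, normalized images
with torch.no_grad():
    feats = model(images)                     # (200, 4096) feature vectors

# Nearest neighbors in feature space: for image 0, find the closest other images.
dists = torch.cdist(feats[:1], feats)[0]      # Euclidean distance to every image
print(dists.argsort()[1:6])                   # indices of the 5 nearest neighbors

# t-SNE: compress the 4096-d features to 2-D coordinates for plotting.
coords = TSNE(n_components=2, init="pca").fit_transform(feats.numpy())
print(coords.shape)                           # (200, 2)
```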
So I'd encourage you to check out the high-resolution versions online. But at least on the left, maybe you can see that there's one cluster at the bottom of green things, a cluster of different kinds of flowers, and there are other clusters for different dog breeds, other types of animals, and locations. So these clusters give you some sense of the semantic layout of this feature space, which we can explore by looking at this t-SNE-reduced version of the features. Is there a question? Yeah. So the basic idea is that we end up with three different pieces of information about each image: we have the pixels of the image, we have the 4096-dimensional vector, then we use t-SNE to convert the 4096-dimensional vector into a two-dimensional coordinate, and then we take the original pixels of the image and place them at the two-dimensional coordinate corresponding to the dimensionality-reduced version of the 4096-dimensional feature. Yeah, a little bit involved. Question in the front. The question is, roughly how much variance do these two dimensions explain? Well, I'm not sure of the exact number, and it gets a little bit muddy when you're talking about t-SNE, because it's a non-linear dimensionality reduction technique, so I'd have to look offline; I'm not sure exactly how much it explains. Question? The question is, can you do the same analysis on upper layers of the network? And yes, you can, but no, I don't have those visualizations here. Sorry. Question? The question is, shouldn't we have overlaps of images once we do this dimensionality reduction? And yes, of course you would. This is just taking a regular grid and then, for each grid point, picking an image close to that grid point. So this is not showing you the density in different parts of the feature space. That's another thing to look at, and again, at the link there are a couple more visualizations of this nature that address that a little bit. Okay. So another thing you can do for some of these intermediate features: we talked a couple of slides ago about how visualizing the weights of these intermediate layers is not so interpretable, but actually visualizing the activation maps of those intermediate layers is kind of interpretable in some cases. So again, taking the example of AlexNet: remember the conv5 layer of AlexNet. The conv5 features for any image are a 128 by 13 by 13 dimensional tensor, but we can think of that as 128 different 13 by 13 two-dimensional grids. So now we can go and visualize each of those 13 by 13 slices of the feature map as a grayscale image, and this gives us some sense of what types of things in the input each of those features in that convolutional layer is looking for. There's a really cool interactive tool by Jason Yosinski that you can just download. I don't have the video here, but there's a video on his website. It runs a convolutional network on the input stream of a webcam and then visualizes, in real time, each of those slices of that intermediate feature map to give you a sense of what it's looking for. You can see that here the input image is this picture of a person in front of the camera, and most of these intermediate features are kind of noisy, not much going on.
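A minimal sketch of pulling out one of those activation-map slices, assuming torchvision's single-tower AlexNet (where this layer actually has 256 channels; the 128 quoted above matches one half of the original two-GPU AlexNet) and an already preprocessed `image` tensor of shape 3 x 224 x 224; the channel index is arbitrary:

import torch
import torchvision.models as models
import matplotlib.pyplot as plt

model = models.alexnet(pretrained=True).eval()
with torch.no_grad():
    # features[:12] stops right after the last conv + ReLU, before the final
    # max pool, giving a (1, 256, 13, 13) activation volume for a 224x224 input.
    acts = model.features[:12](image[None])

plt.imshow(acts[0, 17].numpy(), cmap="gray")   # one 13x13 slice, shown as grayscale
plt.show()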
But there's this one highlighted intermediate feature, also shown larger here, that seems to be activating on the portions of the feature map corresponding to the person's face. Which is really interesting, and it suggests that maybe this particular slice of the feature map, at this layer of this particular network, is looking for human faces or something like that, which is a nice and cool finding. Question? The question is, are the black activations dead ReLUs? You've got to be a little careful with terminology: we usually say "dead ReLU" to mean something that's dead over the entire training data set. Here I would say it's a ReLU that's not active for this particular input. Question? The question is, if there are no humans in ImageNet, how can it recognize a human face? There definitely are humans in ImageNet. I don't think "person" is one of the thousand categories for the classification challenge, but people definitely appear in a lot of these images, and that can be a useful signal for detecting other types of things. So that's actually a nice result, because it shows that the network can learn features that are useful for the classification task at hand, even ones that are a little bit different from the explicit classification task we told it to perform. So it's actually a really cool result. Okay, question? So at each layer in the convolutional network: our input image is 3 by 224 by 224, and then it goes through many stages of convolution, and after each convolutional layer there is some three-dimensional chunk of numbers, which are the outputs from that layer of the convolutional network. The entire three-dimensional chunk of numbers that comes out of a convolutional layer we call an activation volume, and one of those slices is an activation map. So the question is, if the image is K by K, will the activation map be K by K? Not always, because there can be subsampling due to strided convolution and pooling, but in general the size of each activation map will be linear in the size of the input image. So another useful thing we can do for visualizing intermediate features is visualizing what types of patches from input images cause maximal activation in different features, different neurons. What we've done here is pick, again, maybe the conv5 layer from AlexNet, and remember each of these activation volumes at conv5 in AlexNet gives us a 128 by 13 by 13 chunk of numbers. Then we pick one of those 128 channels, maybe channel 17, and now what we'll do is run many images through this convolutional network, and for each of those images record the conv5 features, and then look at the parts of that 17th feature map that are maximally activated over our data set of images. And because this is a convolutional layer, each of those neurons has some small receptive field in the input; each of those neurons is not looking at the whole image, they're only looking at a subset of the image.
Then what we'll do is visualize the patches from this large data set of images corresponding to the maximal activations of that particular feature in that particular layer, and we can sort those patches by their activation at that layer. So here are some examples of these kinds of maximally activating patches; the particular network doesn't matter. Each row corresponds to one neuron from one layer of a network, and the entries in the row are the patches from some large data set of images that maximally activated that neuron. These can give you a sense of what type of features these neurons might be looking for. For example, in this top row we see a lot of circly kinds of things in the patches: some eyes, mostly eyes, but also this kind of blue circly region. So maybe this particular neuron in this particular layer is looking for blue circly things in the input. Or in the middle here we have neurons that are looking for text in different colors, or maybe curving edges of different colors and orientations. Yeah, so I've been a little bit loose with terminology here. I'm saying that a neuron is one scalar value in that conv5 activation map, but because it's convolutional, all the neurons in one channel are using the same weights. So we've chosen one channel, and you get a lot of neurons for each convolutional filter at any one layer; these patches could have been drawn from anywhere in the image due to the convolutional nature of the thing. At the bottom we also see some maximally activating patches for neurons from a higher-up layer in the same network. Because they come from higher in the network, they have a larger receptive field, so they're looking at larger patches of the input image, and we can see that they're looking for larger structures in the input image. This second row seems to be looking for humans, or maybe human faces; we have maybe something looking for parts of cameras, or different types of larger, object-like things. Another cool experiment we can do, which comes from Zeiler and Fergus's ECCV 2014 paper, is this idea of an occlusion experiment. What we want to do is figure out which parts of the input image cause the network to make its classification decision. So we take our input image, in this case an elephant, block out some region of that input image, and replace it with the mean pixel value from the data set. Now we run that occluded image through the network and record the predicted probability for the occluded image. Then we slide this occluding patch over every position in the input image, repeat the same process, and draw a heat map showing what the predicted probability output from the network was as a function of which part of the input image we occluded. The idea is that if blocking out some part of the image causes the network score to change drastically, then probably that part of the input image was really important for the classification decision.
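Here is a minimal sketch of that occlusion loop, assuming a trained classifier `model`, a preprocessed `image` of shape (1, 3, H, W), and the index `true_class`; the patch size, stride, and fill value are illustrative choices rather than the ones from the paper:

import torch
import torch.nn.functional as F

def occlusion_map(model, image, true_class, patch=32, stride=16, fill=0.0):
    _, _, H, W = image.shape
    rows = (H - patch) // stride + 1
    cols = (W - patch) // stride + 1
    heat = torch.zeros(rows, cols)
    for i, y in enumerate(range(0, H - patch + 1, stride)):
        for j, x in enumerate(range(0, W - patch + 1, stride)):
            occluded = image.clone()
            # Gray out one square; after mean-subtraction preprocessing,
            # zero is roughly the data set mean pixel.
            occluded[:, :, y:y + patch, x:x + patch] = fill
            with torch.no_grad():
                prob = F.softmax(model(occluded), dim=1)[0, true_class]
            heat[i, j] = prob
    return heat   # low values mark regions the classifier was relying on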
So here I've shown three different examples of this occlusion experiment. Take this example of a go-kart at the bottom: here red corresponds to a low probability and white and yellow correspond to a high probability. So when we block out the region of the image corresponding to this go-kart in front, the predicted probability for the go-kart class drops a lot. That gives us some sense that the network really is relying on those pixels in the input image in order to make its classification decision. Question? Yes, the question is, what's going on in the background? The image is maybe a little bit too small to tell, but this is actually a go-kart track and there are a couple of other go-karts in the background. So I think that when you block out those other go-karts in the background, that also influences the score; or maybe the horizon is there and the horizon is a useful feature for detecting go-karts. It's a little bit hard to tell sometimes, but this is a pretty cool visualization. Yeah, was there another question? Sorry, what was the first question? So for this example we're taking one image and then masking all parts of that one image. The second question was, how is this useful? You don't really take this information and loop it directly into the training process. Instead, this is a tool for humans to understand what types of computations these trained networks are doing. So it's more for your understanding than for improving performance per se. So another related idea is this concept of a saliency map, which is something you will see in your homework. Again, we have the same question: given an input image, a dog in this case, and the predicted class label, dog, we want to know which pixels in the input image are important for classification. We saw that masking is one way to get at this question, but saliency maps are another angle for attacking this problem. One relatively simple idea, from Karen Simonyan's paper a couple of years ago, is just to compute the gradient of the predicted class score with respect to the pixels of the input image. This directly tells us, in a first-order approximation sense, for each pixel in the input image: if we wiggle that pixel a little bit, how much will the classification score for the class change? And this is another way to get at the question of which pixels in the input matter for the classification. When we compute a saliency map for this dog, we see a nice outline of a dog in the image, which tells us that these are probably the pixels the network is actually looking at for this image. And when we repeat this process for different images, we get some sense that the network is looking at the right regions, which is somewhat comforting. Question? The question is, do people use saliency maps for semantic segmentation? The answer is yes. Yeah, you guys are really on top of it this lecture. That was another component, again, in Karen's paper.
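A minimal sketch of that plain gradient saliency map, assuming a trained classifier `model` in eval mode and a preprocessed `image` of shape (1, 3, H, W); reducing over color channels with a max of absolute values is the common convention, not the only option:

import torch

def saliency_map(model, image, target_class):
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]   # unnormalized class score
    score.backward()                        # gradient of the score w.r.t. pixels
    # Collapse the (1, 3, H, W) gradient to one value per pixel.
    return image.grad.abs().max(dim=1)[0].squeeze(0)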
The idea there is that maybe you can use these saliency maps to perform semantic segmentation without any labeled data for the segments. Here they're using the GrabCut segmentation algorithm, which I don't really want to get into the details of, but it's an interactive segmentation algorithm. When you combine the saliency map with this GrabCut segmentation algorithm, you can in fact sometimes segment out the object in the image, which is really cool. However, I'd like to point out that this is a little bit brittle, and in general it will probably work much, much worse than a network which did have access to supervision at training time. So I'm not sure how practical this is, but it is pretty cool that it works at all; it probably works much worse than something trained explicitly to segment with supervision. Another related idea is guided backpropagation. Here we still want to answer a question about one particular image, but now, instead of looking at the class score, we want to pick some intermediate neuron in the network and ask which parts of the input image influence the score of that internal neuron. You could imagine computing a saliency map for this as well: rather than computing the gradient of the class score with respect to the pixels of the image, you could compute the gradient of some intermediate value in the network with respect to the pixels of the image, and that would tell us which pixels in the input image influence the value of that particular neuron. That would be using normal backpropagation. But it turns out there's a slight tweak we can make to this backpropagation procedure that ends up giving slightly cleaner images, and that's this idea of guided backpropagation, which again comes from Zeiler and Fergus's 2014 paper. I don't really want to get into the details too much here, but it's kind of a weird tweak where you change the way that you backpropagate through ReLU non-linearities: you only backpropagate positive gradients through the ReLUs, and you do not backpropagate negative gradients through them. So you're no longer computing the true gradient; instead you're only keeping track of positive influences throughout the entire network. Maybe you should read through the referenced papers if you want more detail about why that's a good idea. But empirically, when you do guided backpropagation, as opposed to regular backpropagation, you tend to get much cleaner, nicer images that tell you which pixels of the input image influence that particular neuron. So again, we're seeing the same visualization we saw a few slides ago of the maximally activating patches, but now, in addition to visualizing these maximally activating patches, we've also performed guided backpropagation to tell us exactly which parts of those patches influence the score of that neuron. So remember, for this example at the top, we thought this neuron might be looking for circly type things in the input patch, because there are a lot of circly type patches.
Well, when we look at guided backpropagation, we can see that intuition somewhat confirmed, because it is indeed the circly parts of that input patch which are influencing that neuron value. So this is a useful tool for understanding what these different intermediate features are looking for. But one interesting thing about guided backpropagation, or computing saliency maps, is that they're always a function of a fixed input image: they tell us, for a fixed input image, which pixels or which parts of that input image influence the value of the neuron. Another question you might ask is whether we can remove this reliance on some input image, and instead just ask what type of input, in general, would cause this neuron to activate. We can answer this question using a technique called gradient ascent. So remember, we always use gradient descent to train our convolutional networks by minimizing the loss. Now, instead, we want to fix the weights of our trained convolutional network and synthesize an image by performing gradient ascent on the pixels of the image, to try and maximize the score of some intermediate neuron or of some class. In this process of gradient ascent we're no longer optimizing over the weights of the network; those weights remain fixed. Instead, we're trying to change the pixels of some input image to cause this neuron value, or this class score, to be maximized. In addition, we need some regularization term. Remember, before, we saw regularization terms used to prevent the network weights from overfitting to the training data. Now we need something kind of similar to prevent the pixels of our generated image from overfitting to the peculiarities of that particular network. So here we'll often incorporate a regularization term: we want a generated image with two properties. One, we want it to maximally activate some class score or some neuron value; but we also want it to look like a natural image, to have the kind of statistics that we typically see in natural images. So this regularization term in the objective is something to enforce that the generated image looks relatively natural, and we'll see a couple of different examples of regularizers as we go through. The general strategy for this is actually pretty simple, and again you'll implement a lot of things of this nature on assignment three. We start with some initial image, initialized to zeros or to random noise; but initialize your image in some way. Then repeat: forward your image through the network and compute the score or neuron value that you're interested in; backpropagate to compute the gradient of that score with respect to the pixels of the image; and then make a small gradient ascent update to the pixels of the image itself, to try and maximize that score. Repeat this process over and over again until you have a beautiful image. Then we talked about the image regularizer. A very simple idea for an image regularizer is simply to penalize the L2 norm of the generated image. This is not so semantically meaningful, it just does something, and it was one of the earliest regularizers we see in the literature for these image-generating types of papers.
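A minimal sketch of that gradient ascent loop with the simple L2 image regularizer, assuming a frozen, pretrained `model` and a chosen `target_class`; the step size, regularization weight, and iteration count are illustrative:

import torch

for p in model.parameters():
    p.requires_grad_(False)            # weights stay fixed; only pixels change

img = torch.zeros(1, 3, 224, 224, requires_grad=True)
optimizer = torch.optim.SGD([img], lr=1.0)

for _ in range(200):
    score = model(img)[0, target_class]        # class score, before the softmax
    loss = -score + 1e-3 * img.pow(2).sum()    # ascend the score, penalize L2 norm
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()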
And when you run this on a trained network, you can see that now we're trying to generate images that maximize the dumbbell score, in the upper left-hand corner here, for example. You can see in the synthesized image, it's a little bit hard to see maybe, but there are a lot of different dumbbell-like shapes all superimposed at different portions of the image. Or if we try to generate an image for cups, then we can maybe see a bunch of different cups all superimposed. The Dalmatian is pretty cool, because now we can see this black-and-white spotted pattern that's characteristic of Dalmatians; or for lemons we can see these different kinds of yellow splotches in the image. There are a couple more examples here; I think maybe the goose is kind of cool, or the kit fox actually maybe looks like kit foxes. Question? The question is, why are these all rainbow colored? In general, getting true colors out of this visualization is pretty tricky, because any actual image will be bounded in the range zero to 255, so this really should be some kind of constrained optimization problem; but if we're using these generic methods for gradient ascent, then it's going to be an unconstrained problem. So maybe you use something like a projected gradient ascent algorithm, or you rescale the image at the end. So you can't take the colors that you see in these visualizations too seriously. Question? The question is, what happens if you let the thing loose and don't put any regularizer on it? Well, then you tend to get an image which maximizes the score, which is confidently classified as the class you wanted, but usually it doesn't look like anything; it kind of looks like random noise. That's an interesting property in itself, and we'll go into it in much more detail in a future lecture. But that's why it doesn't help you so much for understanding what things the network is looking for. If we want to understand why the network makes its decisions, then it's useful to put a regularizer on there so the generated image looks more natural. A question in the back. Yeah, so the question is that we see a lot of multi-modality here; are there ways to combat that? And actually yes, we'll see that; this is kind of the first step in a whole line of work on improving these visualizations. So one angle here is to improve the regularizer in order to improve our visualized images. There's another paper from Jason Yosinski and some of his collaborators where they added some additional, improved regularizers. In addition to this L2 norm constraint, we also, periodically during optimization, do some Gaussian blurring of the image; we also clip small pixel values all the way to zero, and we clip pixel values with low gradients to zero. So you can see this is kind of a projected gradient ascent algorithm, where periodically we're projecting our generated image onto some nicer set of images with nicer properties, for example spatial smoothness with respect to the Gaussian blurring. When you do this, you tend to get much nicer images that are much clearer to see. Now these flamingos look like flamingos, the ground beetle is starting to look more beetle-like, and this black swan maybe looks like a black swan.
These billiard tables actually look kind of impressive now; you can definitely see the billiard table structure. So you can see that once you add in nicer regularizers, the generated images become a little bit cleaner. And we can perform this procedure not only for the final class scores, but also for these intermediate neurons as well. So instead of trying to maximize the billiard table score, for example, we can maximize one of the neurons from some intermediate layer instead. Question. So the question is, what's with the four images here? Remember, we're initializing our image randomly, so these four images are different random initializations of the input image. And again, we can use this same type of procedure to synthesize images which maximally activate intermediate neurons of the network, and then you can get a sense of what some of these intermediate neurons are looking for. So maybe at layer four there's a neuron that's looking for spirally things, or there's a neuron that's maybe looking for something like chunks of caterpillars; it's a little bit harder to tell. But in general, as you go higher up in the network, the receptive fields of these neurons get larger, so they're looking at larger patches of the image, and they tend to be looking for larger structures or more complex patterns in the input image. That's pretty cool. And then people have really gone crazy with this, improving these visualizations by tacking on extra features. So this was a cool paper explicitly trying to address the multi-modality that someone asked a question about a few minutes ago. Here they were trying to explicitly take this multi-modality into account in the optimization procedure: for each of the classes, you run a clustering algorithm to try to separate the class into different modes, and then initialize with something that is close to one of those modes. When you do that, you account for this multi-modality. For intuition, on the right here, these eight images are all of grocery stores, but the top row shows close-up pictures of produce on the shelf, and those are labeled as grocery store, while the bottom row shows people walking around grocery stores, or at the checkout line or something like that, and those are also labeled as grocery store, but their visual appearance is quite different. So a lot of these classes end up being multi-modal, and if you explicitly take this multi-modality into account when generating images, then you can get nicer results. When you look at some of their example synthesized images for classes, like the bell pepper, the cardoon, the strawberries, the jack-o'-lantern, they end up with some very beautifully generated images. Now, I don't want to get too much into the details of the next slide, but you can go even crazier and add an even stronger image prior and generate some very beautiful images indeed. These are all synthesized images that are trying to maximize the score of some ImageNet class. The general idea is that rather than optimizing the pixels of the input image directly, they instead optimize the FC6 representation of that image.
They use a feature inversion network to do that, and I don't want to get into the details here; you should read the paper, it's actually really cool. But the point is that when you start adding additional priors towards modeling natural images, you can end up generating some quite realistic images that give you a sense of what the network is looking for. So that's one cool thing we can do with this strategy, but this idea of synthesizing images by using gradients on image pixels is actually super powerful, and another really cool thing we can do with it is this concept of fooling images. What we can do is pick some arbitrary image, say a picture of an elephant, and then tell the network that we want to change the image to maximize the score of koala bear instead. So we're trying to change that image of an elephant to cause the network to classify it as a koala bear. What you might hope is that maybe the elephant would sort of morph into a koala bear, maybe it would sprout cute little ears or something like that. But that's not what happens in practice, which is pretty surprising. Instead, if you take this picture of an elephant and change it so that it's classified as a koala bear, what you'll find is that the second image on the right actually is classified as a koala bear, but it looks the same to us. So that's pretty fishy and pretty surprising. Also, on the bottom, we've taken this picture of a boat, schooner is the ImageNet class, and told the network to classify it as an iPod. The second example still looks like a boat to us, but the network thinks it's an iPod, and the differences in pixels between these two images are basically nothing. If you magnify those differences, you don't really see any iPod-like or koala-like features; they're just random-looking patterns of noise. So the question is, what's going on here, and how can this possibly be the case? Well, we'll have a guest lecture from Ian Goodfellow in a week and a half, two weeks, and he's going to go into much more detail about this type of phenomenon, and that will be really exciting. But I did want to mention it here because it is on your homework. Question? Yeah, so the question is, can we use fooled images as training data? I think Ian's going to go into much more detail on all of these types of strategies, because that's really a whole lecture unto itself. Question? The question is, why do we care about any of this stuff? Basically... okay, maybe that was a mischaracterization, I'm sorry. The question is, how does understanding these intermediate neurons help our understanding of the final classification? So this whole field of trying to visualize intermediates is kind of a response to a common criticism of deep learning. A common criticism of deep learning is: you've got this big black box network, you trained it with gradient descent, you get a good number, and that's great, but we don't trust the network, because as people we don't understand why it's making the decisions that it's making.
So a lot of these types of visualization techniques were developed to try and address that, and to try to understand, as people, why these networks make their various classification decisions. Because if you contrast a deep convolutional neural network with other machine learning techniques: linear models are much easier to interpret in general, because you can look at the weights and understand how much each input feature affects the decision; and something like a random forest or a decision tree, some other machine learning models, end up being a bit more interpretable just by their very nature than these sort of black-box convolutional networks. So a lot of this is in response to that criticism, to say that yes, these are large complex models, but they are still doing some interesting and interpretable things under the hood; they are not just going out and randomly classifying things, they are doing something meaningful. So another cool thing we can do with this gradient-based optimization of images is this idea of DeepDream. This was a really cool blog post that came out from Google a year or two ago, and the idea, well, we talked about scientific value; this is almost entirely for fun. The point of this exercise is mostly to generate cool images, and as a side effect you also get some sense of what features these networks are looking for. What we do is take our input image, run it through the convolutional network up to some layer, and then backpropagate, but set the gradient at that layer equal to the activation value, and then backpropagate back to the image, update the image, and repeat, repeat, repeat. This has the interpretation of trying to amplify existing features that were detected by the network in this image. Right? Because whatever features existed at that layer, we set the gradient equal to the feature, so we're telling the network to amplify whatever features it already saw in that image. And by the way, you can also see this as trying to maximize the L2 norm of the features at that layer. When you do this, the code ends up looking really simple; the code for many of your homework assignments will probably be about this complex, or maybe even a little bit less so. But there are a couple of tricks here that you'll also see in your assignments. One trick is to jitter the image before you compute your gradients: rather than running the exact image through the network, you shift the image over by two pixels and wrap the other two pixels around. This acts as a kind of regularizer that encourages a little bit of extra spatial smoothness in the image. You also see that they use L1 normalization of the gradients, which is a useful trick sometimes in these image generation problems, and you see them clipping the pixel values once in a while. Again, images actually need to be between 0 and 255, so this is a kind of projected gradient descent, where we project onto the space of actual valid images. And when we do all this, we might start with some image of a sky, and then we get really cool results like this.
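A minimal sketch of one DeepDream step along those lines, assuming `model_up_to_layer` is a truncated network (for example a slice of `model.features`) with frozen weights; the jitter amount and step size are illustrative, and the L1-normalized update and roll-based jitter mirror the tricks just mentioned:

import torch

def deepdream_step(model_up_to_layer, img, step_size=1.5, jitter=2):
    # Jitter: shift the image by a few pixels (wrapping around) before the pass.
    ox, oy = torch.randint(-jitter, jitter + 1, (2,))
    img_j = torch.roll(img, shifts=(int(ox), int(oy)), dims=(2, 3))
    img_j = img_j.detach().requires_grad_(True)
    acts = model_up_to_layer(img_j)
    # Setting the backward "gradient" equal to the activations is the same as
    # maximizing 0.5 * ||activations||^2 at this layer.
    acts.backward(acts.detach())
    g = img_j.grad
    img_j = img_j + step_size * g / g.abs().mean()   # L1-normalized gradient step
    # Undo the jitter so the image stays aligned across iterations.
    return torch.roll(img_j.detach(), shifts=(-int(ox), -int(oy)), dims=(2, 3))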
And we can see things like these different mutant animals starting to pop up, or these kinds of spiral shapes, or different kinds of houses and cars. So that's all pretty interesting. There are a couple of patterns in particular that pop up all the time, which people have named: there's this admiral dog that shows up a lot, there's the pig snail, the camel bird, the dog fish. These are kind of interesting, but the fact that dogs show up so much in these visualizations actually does tell us something about the data on which this network was trained. This is a network trained for ImageNet classification; ImageNet has a thousand categories, but 200 of those categories are dogs. So it's not surprising, in a sense, that when you do these kinds of visualizations, the network ends up hallucinating a lot of dog-like stuff in the image, often morphed with other types of animals. When you do this at other layers of the network, you get other types of results. Here we're using one of the lower layers in the network; the previous example was relatively high up in the network. Again, we have this interpretation that lower layers may be computing edges and swirls and things like that, and that's borne out when we run DeepDream at a lower layer. Or if you run this thing for a long time, and maybe add in some multiscale processing, you can get some really, really crazy images. Here they're doing a kind of multiscale processing where they start with a small image, run DeepDream on the small image, then make it bigger and continue DeepDream on the larger image, and repeat with this multiscale processing; and then maybe after you complete the final scale, you restart from the beginning and just go wild on this thing. And you can get some really crazy images. These examples were all from networks trained on ImageNet. There's another data set from MIT called the MIT Places data set; instead of 1,000 categories of objects, it has about 200 different types of scenes, like bedrooms and kitchens and stuff like that. And if we repeat this DeepDream procedure using a network trained on MIT Places, we get some really cool visualizations as well. Now, instead of dogs and slugs and admiral dogs and that kind of stuff, we often get these kinds of roof shapes from Japanese-style buildings, or different types of bridges, or mountain ranges. They're really cool, beautiful visualizations. The code for DeepDream is online, released by Google; you can go check it out and make your own beautiful pictures. Sorry, question? The question is, what are we taking the gradient of? Well, if your objective is one half of x squared, then the gradient of that with respect to x is x. So if you send back the volume of activations as the gradient, that's equivalent to taking the gradient of half the sum of the squared activation values; it's equivalent to maximizing the norm of the features at that layer. But in practice, many implementations you'll see don't explicitly compute that objective; they just send the gradient back. So another useful thing we can do is this concept of feature inversion. This again gives us a sense of what types of elements of the image are captured at different layers of the network.
What we're going to do now is take an image, run that image through the network, record the feature values at one of those layers, and then try to reconstruct that image from its feature representation. Based on what that reconstructed image looks like, we get some sense of what type of information about the image was captured in that feature vector. So again we can do this with gradient ascent and some regularizer: rather than maximizing some score, now we want to minimize the distance between this cached feature vector and the computed features of our generated image, to synthesize a new image that matches the feature vector we computed before. Another kind of regularizer that you frequently see here, and that you'll also see on your homework, is the total variation regularizer. The total variation regularizer penalizes differences between adjacent pixels, both left-to-right neighbors and top-to-bottom neighbors, to again encourage spatial smoothness in the generated image. So now if we do this idea of feature inversion: in the visualization on the left we're showing some original images, the elephants or the fruit, and then we run each image through a VGG-16 network, record the features of that network at some layer, and then try to synthesize a new image that matches the recorded features at that layer. This gives us a sense of how much information is stored in the features at different layers. For example, if we try to reconstruct the image based on the relu2_2 features from VGG-16, we see that the image gets almost perfectly reconstructed, which means that we're not really throwing away much information about the raw pixel values at that layer. But as we move up into the deeper parts of the network and try to reconstruct from relu4_3 or relu5_1, we see that the reconstructed image has kept the general spatial structure of the image, you can still tell that it's an elephant or a banana or an apple, but a lot of the low-level details, exactly what the pixel values were, exactly what the colors were, exactly what the textures were, are lost at these higher layers of the network. So that gives us some sense that as we move up through the layers of the network, it's throwing away the low-level information about the exact pixels of the image, and is instead keeping around a little bit more semantic information that's invariant to small changes in color and texture and things like that. So we're building towards style transfer here, which is really cool. But in addition to feature inversion, to understand style transfer we also need to talk about a related problem called texture synthesis. Texture synthesis is an old problem in computer graphics: the idea is that we're given some input patch of texture, something like these little scales here, and we want to build some model and then generate a larger piece of that same texture. For example, we might want to generate a large image containing many scales that looks like the input. And this is a pretty old problem in computer graphics.
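Before the texture synthesis part, here is a minimal sketch of the feature inversion objective just described, with a total variation term; `feature_extractor` is assumed to be a truncated network returning the features of the chosen layer, and the weight is illustrative:

import torch

def tv_loss(img):
    # Penalize differences between vertically and horizontally adjacent pixels.
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).pow(2).sum()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).pow(2).sum()
    return dh + dw

def inversion_loss(feature_extractor, img, target_feats, tv_weight=1e-6):
    feats = feature_extractor(img)
    return (feats - target_feats).pow(2).sum() + tv_weight * tv_loss(img)

# Minimizing this loss by gradient descent on the pixels of `img` (exactly as in
# the earlier sketches) reconstructs an image whose features match the target.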
There are nearest neighbor approaches to texture synthesis that work pretty well, with no neural networks at all. This is a kind of simple algorithm where we march through the generated image one pixel at a time in scan-line order, look at a neighborhood around the current pixel based on the pixels we've already generated, compute the nearest neighbor to that neighborhood among the patches of the input image, and then copy over one pixel from the input image. You don't really need to understand the details here; the idea is just that there are a lot of classical algorithms for texture synthesis, it's a pretty old problem, and you can do this without neural networks. When you run this kind of classical texture synthesis algorithm, it actually works reasonably well for simple textures. But as we move to more complex textures, these simple methods of copying pixels from the input patch directly tend not to work so well. So in 2015, there was a really cool paper that applied neural network features to this problem of texture synthesis, and ended up framing it as a gradient ascent procedure, similar to the feature matching objectives that we've already seen. In order to perform neural texture synthesis, they use this concept of a Gram matrix. What we're going to do is take our input texture, in this case some pictures of rocks, pass it through some convolutional neural network, and pull out the convolutional features at some layer of the network. This convolutional feature volume might be C by H by W at that layer of the network. You can think of it as an H by W spatial grid, and at each point of the grid we have a C-dimensional feature vector describing the rough appearance of the image at that point. Now we're going to use this activation map to compute a descriptor of the texture of the input image. What we do is pick out two of these feature columns from the input volume, each a C-dimensional vector, and take the outer product between those two vectors to give us a C by C matrix. This C by C matrix tells us something about the co-occurrence of the different features at those two points in the image: if element i, j of the C by C matrix is large, that means both element i and element j of those two input vectors were large. So this captures some second-order statistics about which features in that feature map tend to activate together at different spatial positions. Now we repeat this procedure using all the different pairs of feature vectors from all the different points in this H by W grid, average them all out, and that gives us our C by C Gram matrix. This is then used as a descriptor of the texture of that input image. What's interesting about this Gram matrix is that it has thrown away all the spatial information that was in the feature volume, because we've averaged over pairs of feature vectors at every point in the image; instead, it's just capturing the second-order co-occurrence statistics of the features. And this ends up being a nice descriptor for texture. And by the way, this is really efficient to compute.
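A minimal sketch of that computation, using the reshape-and-multiply trick described next; normalizing by the number of spatial positions is a common choice rather than a requirement:

import torch

def gram_matrix(feats):                # feats: (C, H, W) activation volume
    C, H, W = feats.shape
    flat = feats.reshape(C, H * W)     # C x (H*W)
    return flat @ flat.t() / (H * W)   # (C, C) feature co-occurrence matrix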
If you have a C by H by W three-dimensional tensor, you can just reshape it to C by H times W, multiply that by its own transpose, and compute the whole thing in one shot, so it's super efficient. You might be wondering why we don't use an actual covariance matrix or something like that instead of this funny Gram matrix. The answer is that using true covariance matrices also works, but it's a little bit more expensive to compute, so in practice a lot of people just use the Gram matrix descriptor. So now, once we have this neural descriptor of texture, we use a similar type of gradient ascent procedure to synthesize a new image that matches the texture of the original image. This looks kind of like the feature reconstruction we saw a few slides ago, but instead of trying to reconstruct the whole feature map of the input image, we're just going to try to reconstruct this Gram matrix texture descriptor of the input image instead. In practice, what this looks like is: you download some pretrained model, and as in feature inversion, people often use the VGG networks for this. You take your texture image, feed it through the VGG network, and compute the Gram matrix at many different layers of the network. Then you initialize your new image from some random initialization, and from there it looks like gradient ascent again, just like the other methods we've seen. You take that image, pass it through the same VGG network, compute the Gram matrices at various layers, and then compute the loss as the L2 distance between the Gram matrices of your input texture and your generated image. Then you backprop, compute the gradient with respect to the pixels of your generated image, and make a gradient step to update the pixels of the image a little bit. You repeat this process many times: forward, compute your Gram matrices, compute your loss, backprop to get the gradient on the image, and repeat. Once you do this, eventually you'll end up generating a texture that matches your input texture quite nicely. This was from a NIPS 2015 paper by a group in Germany, and they had some really cool results for texture synthesis. Here on the top we're showing four different input textures, and on the bottom we're showing the results of this texture synthesis approach by Gram matrix matching, computing the Gram matrix at different layers of this pretrained convolutional network. You can see that if we use the very low layers of the network, we generally get splotches of the right colors, but the overall spatial structure isn't preserved so well. As we move further down in the figure and compute these Gram matrices at higher layers, you see that they tend to reconstruct larger patterns from the input image, for example whole rocks or whole cranberries. So this works pretty well: we can synthesize new images that match the general spatial statistics of the input but are quite different pixel-wise from the actual input itself. Question? The question is, where do we compute the loss? In practice, to get good results, people typically compute Gram matrices at many different layers, and then the final loss is a sum of all of those, potentially a weighted sum.
But I think for this visualization, to pinpoint the effect of the different layers, they were doing reconstruction from just one layer. So then they had a really brilliant idea after this paper, which is: what if we do this texture synthesis approach, but instead of using an image like rocks or cranberries, we set the input texture equal to a piece of artwork? If you do the same texture synthesis algorithm by matching Gram matrices, but take, for example, Vincent van Gogh's Starry Night or Picasso's Muse as the input texture, and then run this same texture synthesis algorithm, the generated images tend to reconstruct interesting pieces of those artworks. And something really interesting happens when you combine this idea of texture synthesis by Gram matrix matching with feature inversion by feature matching. This brings us to a really cool algorithm called style transfer. In style transfer, we take two images as input: a content image that guides what we generally want our output to look like, and a style image that tells us what general texture or style we want the generated image to have. Then we generate a new image by jointly minimizing the feature reconstruction loss of the content image and the Gram matrix loss of the style image. When we do these two things, we get a really cool image that renders the content image in the artistic style of the style image. And this is really cool; you can get these really beautiful figures. So again, what this looks like is: you take your style image and your content image and pass them into your network to compute your Gram matrices and your features; you initialize your output image with random noise; then you go forward, compute your losses, go backward, compute your gradients on the image, and repeat this process over and over, doing gradient descent on the pixels of your generated image. After a few hundred iterations, you generally get a beautiful image. So I have an implementation of this online on my GitHub that a lot of people are using, and it's really cool. This gives you a lot more control over the generated image compared to DeepDream. In DeepDream, you don't have much control over exactly what types of things are going to come out at the end; you just pick different layers of the network, maybe set different numbers of iterations, and then dog slugs pop up everywhere. But with style transfer, you get much more fine-grained control over what you want the result to look like. By picking different style images with the same content image, you can generate completely different types of results, which is really cool. You can also play around with the hyperparameters here. Because we're jointly minimizing this feature reconstruction loss of the content image and this Gram matrix reconstruction loss of the style image, if you trade off the weighting between those two terms in the loss, you can control how much you want to match the content versus how much you want to match the style. There are a lot of other hyperparameters you can play with.
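As a rough sketch (not the exact loss from the paper or the implementations mentioned above), the combined objective can be written as a weighted sum of the content and style terms, reusing the `gram_matrix` helper sketched earlier; the dictionary-of-layers layout and the weights here are illustrative assumptions:

def style_transfer_loss(gen_feats, content_feats, style_grams, content_layer,
                        content_weight=1.0, style_weight=1e3):
    # gen_feats: dict mapping layer name -> (C, H, W) features of the generated image
    # content_feats: features of the content image at `content_layer`
    # style_grams: dict mapping layer name -> Gram matrix of the style image
    content_loss = (gen_feats[content_layer] - content_feats).pow(2).sum()
    style_loss = sum(
        (gram_matrix(gen_feats[name]) - style_grams[name]).pow(2).sum()
        for name in style_grams)
    return content_weight * content_loss + style_weight * style_loss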
For example, if you resize the style image before you compute the Gram matrix, that gives you some control over the scale of the features that you want to reconstruct from the style image. So here we've done the same reconstruction, and the only difference is how big the style image was before we computed the Gram matrix; that gives you another axis along which you can control these things. You can also do style transfer with multiple style images, by matching multiple Gram matrices at the same time, and that's a cool result as well. Another cool thing you can do: we talked about multiscale processing for DeepDream and saw how it can give you really nice high-resolution results, and you can do a similar type of multiscale processing in style transfer as well. Then we can compute images like this, a super high resolution, I think 4K, image of our favorite school rendered in the style of Starry Night. But this is actually super expensive to compute; I think this one took four GPUs. So, a little expensive. We can also use other style images and get some really cool results from the same content image, again at high resolution. Another fun thing you can do is joint style transfer and DeepDream at the same time. Now we have three losses: the content loss, the style loss, and this DeepDream loss that tries to maximize the norm of the features. And you get something like this: Van Gogh with the dog slugs coming out everywhere. [laughing] So that's really cool. But there's kind of a problem with these style transfer algorithms, which is that they're pretty slow. You need to compute a lot of forward and backward passes through your pretrained network in order to generate these images, and especially for the high-resolution results we saw on the previous slide, each forward and backward pass of a 4K image takes a lot of compute and a lot of memory. If you need to do several hundred of those iterations, generating these images can take many minutes, even on a powerful GPU. So it's really not so practical to apply these things in practice. The solution is to train another neural network to do the style transfer for us. I had a paper about this last year, and the idea is that we fix some style that we care about at the beginning, in this case Starry Night. Now, rather than running a separate optimization procedure for each image that we want to synthesize, we train a single feed-forward network that can take in the content image and directly output the stylized result. The way we train this network is that we compute the same content and style losses during training of the feed-forward network and use that same gradient to update the weights of the feed-forward network. This thing takes maybe a few hours to train, but once it's trained, producing stylized images only requires a single forward pass through the trained network. I have code for this online, and you can see that in some cases it ends up looking relatively comparable in quality to the very slow optimization-based method, but now it runs in real time; it's about a thousand times faster. So here you can see a demo of it running live off my webcam.
This is not running live right now, obviously, but if you have a big GPU you can easily run four different styles in real time, all simultaneously, because it's so efficient. There was another group, from Russia, that had a very similar paper concurrently, and their results are about as good. They also had a tweak on the algorithm. This feed-forward network that we're training ends up looking a lot like the segmentation models that we saw: in those semantic segmentation networks we do downsampling, then many layers, then some upsampling with transposed convolutions, in order to downsample and upsample and be more efficient. The only difference is that the final layer produces a three-channel output for the RGB of the final image. Inside this network we have batch normalization at the various layers, but in this paper they swap out the batch normalization for something called instance normalization, which tends to give much better results. One drawback of these types of methods is that we're training one new style transfer network for every style that we want to apply, and that could be expensive if you need to keep a lot of different trained networks around. So there was a paper from Google that came out pretty recently that addressed this by using one trained feed-forward network to apply many different styles to the input image. Now they can train one network that applies many different styles at test time: it takes the content image as input, as well as the identity of the style you want to apply, and this one network applies many different types of styles, and again it runs in real time. That same algorithm can also do a kind of style blending in real time with one trained network: once you've trained this network on these four different styles, you can actually specify a blend of those styles to be applied at test time, which is really cool. These kinds of real-time style transfer methods are in various apps, and you see them out in practice a lot these days. So, to summarize what we've seen today: we've talked about many different methods for understanding CNN representations. We've talked about activation-based methods like nearest neighbors, dimensionality reduction, maximally activating patches, and occlusion images, to try to understand, based on the activation values, what the features are looking for. We also talked about a bunch of gradient-based methods, where you can use gradients to synthesize new images to understand your features, such as saliency maps, class visualizations, fooling images, and feature inversion. And we also had some fun by seeing how a lot of these same ideas can be applied to things like style transfer and DeepDream to generate really cool images. So next time we'll talk about unsupervised learning: autoencoders, variational autoencoders, and generative adversarial networks. That should be a fun lecture.
Medical_Lectures
23_Biochemistry_Glycolysis_III_Lecture_for_Kevin_Aherns_BB_450550.txt
Kevin Ahern: How's everybody doing today? Student: Wunderbar. Kevin Ahern: How was the exam? Student: Good. Student: It was fine. Kevin Ahern: Did I hear, "Good," "Good," "Good"? Is that what I heard? Wow! All right. Student: We'll see if numbers bear that out. Kevin Ahern: We'll see if numbers bear that out. Comments or questions or... ? Student: Good extra credit question. Kevin Ahern: You liked the extra credit question? [laughs] I thought you might like the extra credit question. No comments on the exam? Student: When are we getting it back? Kevin Ahern: The exam, as I said last time, will not be available until Monday. I apologize, but the TAs are just too busy with exams this week, themselves, to do that. I fully intend to make it available Monday morning first thing, and I have told the TAs that I expect that's how they're going to spend their Thanksgiving break, so not everybody has it as bad as they do, I guess. So you will have it back on Monday. I will put a note out when it's available, as I always do, but it will be available in the BB office, as before. Student: So extra points if it comes back with cranberry sauce on it? Kevin Ahern: Extra points gets what? Student: If it comes back with cranberry sauce all over it. Kevin Ahern: I may see cranberry sauce on the turn-ins, I don't know. You guys might see them as you pick them up. Actually, that reminds me. I always invite my classes over to my house, so if you guys are in town on Thursday and you would like to come over for turkey, seriously, give me a holler. I'd be happy to have you over. So if you're not going home and you have no other plans, give me a holler and I'll try not to food poison you with turkey or something. [laughter] I'm serious. I'm serious. I usually get one or two people who take me up on it. We have usually a big get-together of students and faculty at our house, so that'd be kind of fun. Student: Thank you. Kevin Ahern: Well, I guess without any comments for exams, we'll move forward. We are nearing the end, believe it or not. After today, there are only four lectures left in the term. Kevin Ahern: Where did our term go, right? It's hard to believe. Yeah! After today there's... are you upset? Student: Yes! Kevin Ahern: We need some counseling on the front row here, I think. [laughter] Try not to take it too hard, but I can promise you we'll have more of it next term. I guess we're going to have class in here next term, as well. We used to have it in Gilfillan but they decided to give us this classroom again, so we'll have to bear with it. I like this classroom, though, now. What's that? Kevin Ahern: 451, yes. I've had a couple of questions... 451 does not have a recitation. "Where's the recitation?" “Where's the recitation?" There's no recitation in 451 because we don't work through problems like we do in 450, so there's no calculations, as such, in 451. The first half of 451 is that of metabolism, the kind of things that we're doing right now, and the last half is on molecular biology—DNA synthesis, RNA synthesis, protein synthesis, gene expression and one brief thing on sensory, the senses, and one on the immune system. So that's what's there. The last thing I'll say is, I will tell you that for the final exam I allow you to have a note card. I'm kind of picky on that note card. You have to get the note card from me and you have to turn it in with your final exam. That's the two rules about the note card. Even if you don't want a note card, you have to turn in a note card that you get from me. 
So not having a note card is going to cost you points. Having a note card besides the one that I give you is going to cost you points, as well. So make sure you get a note card from me. I'll make those available next week. It's a fairly large note card, and, yes, you can use both sides and so on and so forth, the primary rule being that everything on the note card must be in your own handwriting. You cannot paste figures on there. You cannot print on there. You have to use it as it is. I had to institute that rule, I've told that story to a few of you, I think, but I had to institute the rule a few years ago when I had a young man who had the really brilliant idea that if he printed the note card in red and then printed over it in green, that if he used red-green glasses he would double the capacity of his card. And it worked! [laughter] It worked! Student: Wow. Student: Oh, my gosh. Kevin Ahern: Yeah, It was a clever idea. I said, well, maybe that's just taking it a little too far. I honestly think there's some benefit from writing things with your own hand, anyway. So that's the rule. It has to be in your own handwriting. No printed cards. No copy and paste. No figures pasted on there. The card has to be as it is. I'll say more about that when I hand the cards out on Monday. The final exam in here is on Monday at 9:30 a.m. It's the first day and it's one of the first finals. I've never had that happen with this class before, so we'll actually have that final done with fairly early. So that's good news, maybe bad news. I don't know. Last time I talked about... shh! People are doing a lot of talking. When you're talking, it's hard for people around you to hear. Last time I talked about my own, Kevin Ahern's, pet theory about why Americans are getting obese, and I hope that I made a reasonable case for you about why that's the case. The next thing I'm going to talk about is something that's going to reinforce something else I've been saying during the term and probably you never thought about it that way, either. I've been telling you all along that glucose is a poison, and, in general, sugars are poisons for cells. In general, sugars are poisons for cells. This next example I'm going to give you, you're going to see how it actually, this poison can manifest itself. Before I tell you about that, though, I have to tell you a little bit about how galactose is normally metabolized. Galactose is a sugar. It's a monosaccharide. It's very closely related to glucose. It's actually an epimer of glucose, and that epimer of glucose we get in our body by drinking dairy products. Galactose, as I maintained previously, is half of the disaccharide known as lactose, the other half being glucose. Our body has to deal with galactose because, like glucose, galactose is a poison and if we don't deal with it, we've got problems. Well, in our body, we deal with glucose being a poison by making glycogen. Question? Student: You said lactose was an epimer of glucose. Kevin Ahern: I'm sorry. If I said "lactose," I meant "galactose" is an epimer of glucose. Galactose is an epimer of glucose. Galactose we have to deal with, but we don't make polymers of galactose. That's one of the things we don't do. So let's look to see, first, how galactose is normally metabolized in our bodies. Galactose is first converted from the free sugar form to galactose 1-phosphate by this enzyme known as galactokinase. Galactokinase catalyzes the reaction here. 
It's rather similar to the reaction that you saw with hexokinase, the difference being that it's working with a galactose and it's putting the phosphate on position 1 instead of putting it on position 6, but pretty much everything else is the same. So this is the first step that we have in detoxifying galactose. It would be nice to be able to use galactose for energy and so forth, and it turns out that I mentioned last time that glycolysis is a very useful pathway because it allows us to metabolize many sugars. One of the ways that that happens is that these sugars can get converted into fructose or, more commonly, into glucose, and that's what we see happening on this figure. Now, I'm going to step you through this figure and try to hopefully ease some of the confusion or concerns that students have about what's happening in this process. It's not nearly as complicated as it looks. How many times have I told you that, this term? You're never going to believe me again, right? Here's our product of the last reaction, galactose 1-phosphate. Galactose 1-phosphate plus UDP-glucose—so that's just glucose linked to a UDP and we'll see that molecule is important in glycogen metabolism later but if I take these two and I combine them with this enzyme—whose name I'm not even going to mention, just simply because it's not really important for our purposes—what happens? Well I see that the glucose gets released, as glucose 1-phosphate, and the galactose becomes linked to the UDP. So instead of having UDP-glucose, I have UDP-galactose and I have some glucose 1-phosphate. Everybody see what's happened there, so far? We've just swapped the galactose 1-phosphate for a glucose 1-phosphate and we're left with UDP-galactose. Now, glucose 1-phosphate, as we will talk about next week, is readily convert into glucose 6-phosphate. There's an enzyme that's known as phosphoglucomutase that will convert glucose 1-phosphate into glucose 6-phosphate, and, of course, you know glucose 6-phosphate can get burned in glycolysis. So already we see how this pathway is contributing things that can be used in glycolysis. But, we're not done yet because we have to convert UDP-galactose into something useful. All we've done, so far, is just put the galactose onto a UDP. That comes up with the next reaction here, UDP-galactose 4-epimerase. Well, that last enzyme name should tell you something about what's going on here. An epimerase is going to make an epimer, and galactose is an epimer of glucose and guess what it does? It converts UDP-galactose into UDP-glucose. So, now, we're right back where we started. So, in essence, every time this wheel turns we're seeing a little wheel turning here, every time this wheel turns, a glucose 1-phosphate is kicked out. In essence, we're bringing in galactose and we're kicking out glucose... galactose 1-phosphate, kicking out glucose 1-phosphate. All right? So this pathway that you see on the screen is allowing us to metabolize galactose in glycolysis. Now, there's always confusion as to what's happening here, so I'll stop and take questions on that. Yes, Connie? Student: Could you go over how glucose 1-phosphate is used in glycolysis again? Kevin Ahern: Glucose 1-phosphate is not used in glycolysis, but glucose 1-phosphate can be converted readily into glucose 6-phosphate that is used in glycolysis. Kevin Ahern: We'll talk about the enzyme that does that next week. It's involved in glycogen metabolism, actually. But this is only one step away from glycolysis, basically. Yes, sir? 
Student: Is there a beta form of glucose 1-phosphate? Kevin Ahern: Is there a beta form of glucose 1-phosphate? There probably is, but the product here is an alpha. Enzymes are always specific for what they will make. So, in this case, you're only going to get the alpha out of this guy. That's it? Was I that clear or were guys that much asleep? Yes, sir? Student: Just [unintelligible] would it be glucose 1-phosphate [unintelligible] would it be a mutase, to change that from 1 to 6? Kevin Ahern: Yeah. The enzyme that will convert that from 1 to 6 is a mutase, and again, I'll talk about that next week. Kevin Ahern: It's a good question, but it is actually a mutase that does that. When I say "mutase," what comes to mind? What's the intermediate? Student: [unintelligible] Kevin Ahern: Glucose 1,6-bisphosphate, right? [laughing] Not 2,3-bisphosphoglycerate. Not every intermediate is 2,3-BPG. Shannon? Student: So I think, I don't know if you said this before, but what's the difference between a mutase and isomerase? Kevin Ahern: What's the difference between a mutase and isomerase? Isomerase simply does the rearrangement by moving one piece to another piece. A mutase has an intermediate where we add and then we subtract. In the case of 2,3-BPG, we started with 3-PG, then we had 2,3-BPG, and then we took off the phosphate to make 2-PG. In this case, you guys are getting ahead of me, but since you've asked the question, I'll answer it, in this case, we have glucose 1-phosphate. The mutase puts a phosphate on, so we have glucose 1,6-bisphosphate, and then it takes off position 1 and we're left with glucose 6-phosphate. So the mutase, that name "mutase" will always tell you it's putting both of them on in the process of doing what it does. Okay? So, this is a very useful pathway. It's a very important pathway for us because it allows us to metabolize galactose if we are, like most of us are, fairly, relatively enriched, our diets are relatively enriched in dairy products. We're getting plenty of lactose and if we don't have a way of metabolizing that galactose, we've got a problem, and that comes up next. So if we have, for example, a genetic problem where we're lacking either of these enzymes, galactose 1-phosphate uridyl transferase, whose name I don't expect you to know, or UDP-galactose 4-epimerase, if we're lacking either one of these enzymes, we can't do this cycle. Well, if we can't do this cycle, what's going to happen? Galactose 1-phosphate is going to accumulate, and when galactose 1-phosphate accumulates, then, as this accumulates, so, too, is free galactose going to accumulate. Now, free galactose is the problem, as I said. It's a poison. When it accumulates, one of the problems that we experience is a product of this reaction. Our body recognizes, or our cells recognize, that galactose is a poison so it does something to convert it into something that's less poisonous. What it does is it reduces the aldehyde to an alcohol to make galactitol. If you recall, aldehydes are fairly reactive, so this is a way of making this molecule much less reactive. It's a protective mechanism. Unfortunately, certain places in our body, this guy is a problem. It turns out that in the lens of our eye this will form crystals. Galactitol will form crystals in the lens of our eye, and can lead to the formation of cataracts. It's not the only cause of cataracts, but it's a potential cause of cataracts. 
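To keep the galactose-handling steps described above straight, here they are written out compactly. This is just a summary sketch using the enzyme names as given in the lecture, with the last line showing the galactitol shunt (the reduction of the aldehyde) that operates when the cycle is blocked.

```latex
\begin{align*}
\text{galactose} + \text{ATP} &\xrightarrow{\text{galactokinase}} \text{galactose 1-phosphate} + \text{ADP}\\
\text{galactose 1-P} + \text{UDP-glucose} &\xrightarrow{\text{galactose 1-phosphate uridyl transferase}} \text{glucose 1-P} + \text{UDP-galactose}\\
\text{UDP-galactose} &\xrightarrow{\text{UDP-galactose 4-epimerase}} \text{UDP-glucose}\\
\text{glucose 1-P} &\xrightarrow{\text{phosphoglucomutase}} \text{glucose 6-P} \;\;(\text{enters glycolysis})\\
\text{galactose} &\xrightarrow{\text{reduction of the aldehyde}} \text{galactitol} \;\;(\text{when the cycle is blocked})
\end{align*}
```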
So if you're not metabolizing galactose properly, you make this compound here that becomes a crystal in the lens of your eye. So, again, reinforcing the poison nature of galactose. Yes? Student: Are people who are lactose intolerant... Kevin Ahern: Yes. Student: ... lacking these enzymes? Kevin Ahern: Are people who are lactose intolerant lacking these enzymes? It turns out, no. So you're anticipating my next thing, which I'll talk about in just a second, but lactose intolerance involves something else and other problems. Student: So do people with diabetes get cataracts? Kevin Ahern: Do people with diabetes get cataracts? People with diabetes get a lot of things, but I'm not aware of them getting cataracts at any higher level than anybody else, no. I always tell this story at this point, actually, because I watched young people do some things that are really stupid. As my capacity as advisor, or as instructor, is I see people do things that are stupid. There's one that people do that they don't realize that it's stupid, and I'll tell you about it because, when I was your age, I did the same thing. How many of you have ever popped something in the microwave and gone and just watched that little puppy cook in there, right? Have you done this? You want to do something really fun, take an egg. Have you done this? You take the egg and you put it in the microwave and, at some point, it goes, kapoof! Yeah, right? Now, I'll tell you a statistic that will surprise you. One of the greatest incidences of cataracts happens among people who work in the fast food industry. Did you know that? You know why? The thinking is that, because of all the microwaves that are used in food prep, that they're getting exposed to microwaves and that's causing some crystals of some sort to form in their eye and leading to cataracts. You should reduce your exposure to microwaves. Just because it's got a screen on there doesn't mean that there aren't microwaves that are coming out there. And, yes, cell phones use microwaves, too. Whatever you do with microwaves, you should be reducing your exposure to them, not increasing them. So don't go press the eyeball up against the microwave oven, watching that thing happen. When I use the microwave at home, I usually stay at least six to eight feet away from it at all times. Seriously. Yeah. "Whoa, look at Kevin, he's over there!" Right? But, seriously, why should you increase your exposure to it? Just like I wouldn't go and expose myself to X-rays any more than I need to, I wouldn't expose myself to microwaves any more than I need to, either. All right, last thing on lactose. The question about lactose intolerance always comes up at this point, and you guys are on top of this, as well. Lactose intolerance arises from a different problem. Lactose intolerance arises because we lack the enzyme... I shouldn't say we “lack." Our body changes over time the amount of the enzyme called "lactase" that it synthesizes. When we're young, when we're an infant, we're drinking milk, we need a lot of lactase because that's the primary source of sugars and so forth that we're getting in our diet. It's coming through the milk. Over evolutionary time, what's happened is, we look at animals drinking milk or we look at humans drinking milk, over evolutionary time, the only times that really we needed to make that enzyme is during that infant stage. 
As humankind has developed organized farming and we've had availability of dairy products, and so on, and so forth, we have tended to eat those dairy products longer in our lives. While our body does continue to make some lactase as we get older, it varies considerably from one culture to another, from one ethnic group to another, in terms of how much lactase is made. If you don't make sufficient lactase, what will happen is you're left with lactose. You don't break it down into glucose and galactose, and, as a consequence, lactose is metabolized by bacteria in your stomach in a different way than these guys are which get dumped into your bloodstream. So the bacteria get a hold of this guy and they go crazy. Of course, one of the byproducts of metabolic action is carbon dioxide. Well, gas, some severe problems relating to discomfort and so forth, happen as a result of a deficiency of lactase. It's happening because, again, as we're getting older, we're making less of that enzyme. There are commercial versions of lactase that are available for people who are what are called "lactose intolerant," that they can actually just simply swallow when they have dairy products and help to relieve that problem. Other questions, comments? Yes? Student: So what happens to people who are, like, deathly lactose intolerant, like, they can't stand the smell of cheese, it just makes them feel bad? Kevin Ahern: What happens to people who are, what she says, "deathly" lactose intolerant? I've never heard of such a thing, so I don't know about that. A lot of people have either mental notions of problems or there's other things that may relate. So people who have issues with gluten intolerance and so forth sometimes are diagnosed mistakenly with lactose intolerance, and that may be a problem. But I don't know of any deathly afflicted with lactose intolerance. Do you? Student: Yes. Kevin Ahern: You do? Again, it could be an allergy or something else. It's not directly lactose intolerance. Lactose intolerance mostly comes about as a result of the discomfort, the most common thing that's happening is the discomfort that's there. Student: I guess as long as that is related in some form to, like, celiac disease? Kevin Ahern: Celiac disease is related to, actually, gluten intolerance, and there are some connections, although they are distinct diseases, but there are some connections between those two, yeah. But, again, lactose intolerance, different area, different area. I had a student who was originally diagnosed with lactose intolerance and then she just got the paranoia. "I'm absolutely not going to have anything," “I can't be around milk," and so on and so forth ane discovered that it wasn't lactose that she was intolerant to. It actually was she was exquisitely sensitive to gluten and was getting that in a variety of ways. She could actually get gluten through milk, that the cow was eating. So she was very sensitive. So when they got diagnosed they realized what it was that was causing her problem. Now we've talked about the metabolism of the sugars. We've talked about how glucose gets broken down. I want to spend a little bit of time talking about regulation of the pathway and then, when we come back on Wednesday and I know you all are going to be here on Wednesday when we come back on... [scattered laughing] Kevin Ahern: You're not all going to be here on Wednesday? I know I'll be here on Wednesday. When we come back on Wednesday I will then talk about regulation again, in view of gluconeogenesis. 
So we're going to get sort of a cursory look at the regulation of glycolysis and then on Wednesday we'll see how that ties to the regulation of the synthesis of glucose, as well, and the two are actually coordinated. Let's start talking, first, about a really, really interesting enzyme [coughing] excuse me, that, as I said before, is the most important regulatory enzyme that we see in glycolysis. This is the enzyme phosphofructokinase, or as you probably memorized it, PFK. PFK is molecule that has a, I'm sorry, is an enzyme that has a very interesting structure. You see it is actually existing here as a tetramer, and that enzyme has a very unusual behavior. Let's take a look at this figure. What we see is we're plotting, on the y-axis, the velocity of the reaction that it catalyzes. Now, you may not remember the reaction, so I will tell you. The reaction that PFK catalyzes is as follows. Fructose 6-phosphate plus ATP makes fructose 1, 6-bisphosphate plus ADP. So it's using the energy of ATP to put the phosphate onto fructose 6-phosphate to make fructose 1,6-bisphosphate. Okay, simple enough, right? If we do this reaction and we're plotting V versus S, where the substrate that we're using is fructose 6-phosphate, we see something very odd happening. Now this is one of the few enzymes I know that behaves in this way. You might say, "What's so unusual?" We look at this and we say, "Okay, so it's got sigmoidal nature." There's sort of an S shape there, and in the presence of ATP, it's sigmoidal. So ATP is an allosteric effector, right? Well, ATP is a substrate, kind of like we saw with ATCase. Remember, aspartate affected the enzyme, right? Very similar. However, if we do the same reaction with a small amount of ATP, look how much the velocity goes up. Now, remember, ATP is a substrate. How do we increase the velocity by decreasing the amount of a substrate? We haven't seen that happen before. So the enzyme is getting turned on by having a less amount of one of its substrates. That's very odd. How would that manifest itself? Any thoughts? Yes, sir. Student: Possibly the concentration is less important than the effect of feedback inhibition from high ATP concentration? Kevin Ahern: He says possibly the effective concentration is less important than the feedback inhibition resulting from ATP concentration. Well, yes, that's true, but that doesn't tell us how that can happen. You're right. Yes? Student: Is it possibly due to the Gibbs equation? Kevin Ahern: Like what? Student: Does it have to do with the Gibbs free energy? Kevin Ahern: Does it have to do with the Gibbs free energy? No. Remember that enzymes never change the overall Gibbs free energy. So that's not it. Yes? Student: How many sides does the enzyme have? Kevin Ahern: How many sides does the enzyme have? The enzyme has, it's a tetramer, so there are four different subunits that are there. Student: Then the other subunits would be available if low ATP was there. Kevin Ahern: Well, you're getting there. You're getting there. She says the other subunits would be available if low ATP was there. Student: Are there two sites to bind, for ATP? Kevin Ahern: Ahhhhh!! Over here! He's hit it on the nose. It turns out the enzyme has two places to bind ATP. One is an allosteric site and one is a catalytic site. Which one do you suppose has the higher Km? Which one is the enzyme going to have greater affinity for? Student: Catalytic? Student: Allosteric? Kevin Ahern: You two want to duke it out? [laughing] Let's think about this. 
When we have low amounts of ATP, the enzyme is more active. What does that tell us? Something. So ATP is inhibiting the enzyme, right? We all agree on that, right? That's what we see here. ATP is inhibiting the enzyme in some way. Student: It's allosterically regulated, right? Kevin Ahern: It's allosterically regulated. So I'm asking you, does the allosteric site have a higher affinity or does the catalytic site have a higher affinity? Student: The catalytic site. Kevin Ahern: The catalytic site's got to have a higher affinity. Right. Because only when the ATP concentration is high does it start banging into the allosteric site and turning it off. That's a really cool enzyme, a very cool enzyme. Student: [inaudible] Kevin Ahern: Yeah. It's responding to two signals. This turns out to be really important because, why do we want the enzyme turned off if there's high ATP? Well, think about it. Do we want to be burning gasoline, do we want to be burning our furnace when it's summer? No. When we have plenty of energy, do we want to be burning our glucose? No, we don't, and we have plenty of energy when we have high ATP. High ATP should be turning this enzyme off, and that's basically what we're seeing. It's turning the enzyme, it's turning the volume of that enzyme down. That's really important. On the other hand, what happens if we don't have much ATP? Well, you betcha we want this enzyme going, because we want to burn glucose so that we can get pyruvate and we can get ATP and we can get the citric acid cycle and we can get all these things going. Low ATP is turning that enzyme on. Really cool! It's a very cool thing. Now, phosphofructokinase turns out to be affected by several things, as we shall see. PFK gets affected by several allosteric effectors. One of them is ATP. Another very important one is this molecule called F2,6BP, which stands for "fructose 2,6-bisphosphate." One of the things you're going to see as we talk about the regulation of the metabolic pathways relating to sugars is, many of the names are going to sound similar. "Fructose 2,6-bisphosphate" sounds an awful lot like "fructose 1,6-bisphosphate" and because the numbers ain't equal, you know that ain't the same thing. Right? You're going to have to spend some time getting straight numbers and names, and you're going to see enzyme names are going to overlap with these, as well. For the moment, we're going to focus simply on this molecule, fructose 2,6-bisphosphate. Look what happens with this molecule. Relative velocity, and we're doing the same plot that we did before. Here's the enzyme with no fructose 2,6-bisphosphate. Here's the enzyme in the presence of 0.1 micromolar. That's a very, very tiny amount. Here's the enzyme in the presence of 1 micromolar. Look at that! Bang! It's on! Fructose 2,6-bisphosphate is present in very vanishingly tiny quantities in our cell, such vanishingly tiny quantities, it wasn't even discovered until like the 1980s. But in very tiny amounts, it turns this enzyme on superbly. Yes, sir? Student: Should the x-axis also be labeled "F2,6BP" instead of "F6P"? Student: No. Kevin Ahern: This is a substrate. No. Student: Okay. Kevin Ahern: This is an allosteric effector. Student: Gotcha. Kevin Ahern: Now, a very, very sensitive switch to turn that enzyme on. If we do the same plot and, instead of measuring the substrate in fructose 6-phosphate, we measure it with ATP we know ATP's got some weird relation with this enzyme. We see that this activates the enzyme, even in higher concentrations of ATP. 
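To make the shape of those velocity curves concrete, here is a toy numerical model. The Hill-style rate law and every constant in it are made up purely for illustration and are not the lecture's data; the point is only that ATP acting at the allosteric site raises the apparent K for fructose 6-phosphate, while fructose 2,6-bisphosphate lowers it and switches the enzyme on.

```python
import numpy as np

def pfk_velocity(f6p, atp, f26bp, vmax=1.0, k0=1.0, n=4,
                 ki_atp=2.0, ka_f26bp=0.1):
    """Toy sigmoidal rate law for phosphofructokinase.
    f6p, atp, f26bp: concentrations in arbitrary units.
    ATP binding at the lower-affinity allosteric site raises the apparent
    K for F6P (inhibition); F2,6BP lowers it (activation)."""
    k_apparent = k0 * (1 + atp / ki_atp) / (1 + f26bp / ka_f26bp)
    return vmax * f6p**n / (k_apparent**n + f6p**n)

f6p = np.linspace(0, 5, 6)
print(pfk_velocity(f6p, atp=4.0, f26bp=0.0))   # high ATP: curve shifted right
print(pfk_velocity(f6p, atp=4.0, f26bp=1.0))   # add F2,6BP: enzyme switched on
```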
In other words, fructose 2,6-bisphosphate is a more important regulator than ATP, itself, is. Cells use fructose 2,6-bisphosphate as a way of controlling this enzyme at very, very, very sensitive levels, and we will see next week, when I talk about glycogen metabolism, how this ties into all of this. It's a master picture that we think about with respect to sugar metabolism that is a really interesting and elaborate control. But fructose 2,6-bisphosphate is probably the most important regulator of this enzyme. Now, the synthesis of fructose 2,6-bisphosphate is a little complicated, and I'm going to save that until I talk about gluconeogenesis, but suffice it to say, cells have interesting ways of making this molecule and it doesn't take very much to get glycolysis going. For those of you who wonder about structure, there's the structure of fructose 2,6-bisphosphate. We look at this and we remind ourselves because I had to, myself, this morning, when I looked at this, that we always want to number our carbons. That will help us keep track of things. Carbon 1... 2... I'm sorry. Carbon 1... 2... 3... 4... 5... 6. That's carbon-2 with the phosphate on it, right there. Now, I've described situations to you where we have muscle and where we have liver, and we have to think about the different conditions that the body has to respond to, relative to those situations that muscle and liver find themselves in. This schematically shows us the overall pathway of glycolysis. I told you that there are three enzymes that play important regulatory roles. The first is hexokinase and, as I said, its regulation is a little odd. Its product actually helps to turn it off. That's not totally surprising. It's known as "substrate-level regulation." But that product affects this enzyme, hexokinase. So hexokinase is the first regulated enzyme. PFK is the second regulated enzyme, and we see, for example, that ATP can turn it off. This is a little confusing. AMP can actually turn it on. That makes sense. AMP is an indicator of low energy inside of cells. Low energy, you want to turn this enzyme on. We also see that ATP, of course, as I mentioned earlier, turns this enzyme off. Fructose 2,6-bisphosphate turns the enzyme on. Pyruvate kinase is the third regulated enzyme, and I'm going to show you something about that enzyme in a minute. But I want to sort of remind you of something that I talked about earlier, but I haven't had a chance to finish the story on it. Pyruvate kinase is regulated, there goes our bouncing ball, pyruvate kinase is regulated in two ways. One, by phosphorylation. Phosphorylation tends to turn it down or turn it off. The other is by allosteric regulation, and the allosteric regulation involved in controlling pyruvate kinase is fructose 1,6-bisphosphate. Fructose 1,6-bisphosphate will activate that enzyme. It activates that enzyme. And it activates that enzyme in a mechanism I refer to as "feed forward activation." Feed forward, that's the opposite of feedback inhibition. Feedback inhibition said that the last molecule turned off the first enzyme in the pathway. Feed forward activation says a molecule in the pathway turns on an enzyme further ahead. Let's think about how this actually works in our body, because it's kind of cool. Let's imagine that we are sitting here and, all of a sudden, the fire alarm goes off. I've actually had this happen in this class. The fire alarm goes off and we have to all go racing outside. When that happens, the first thing that we have to do is we have to get out. 
We had to run a long ways, not in this building, but we had to run a long ways to get out. We need energy to get out, right? What's going to happen? Well, if I'm sitting here resting, what's happening with my pyruvate kinase reaction? Very little. It's not doing much. I'm not burning energy. I'm not going through glycolysis much. Things are just kind of sitting there doing their thing, right? However, when I get up and I start running, my body starts, through epinephrine, dumping glucose into my bloodstream. My muscles take up that glucose and now their glucose concentration is high. What's going to happen with glycolysis? It's going to start. Great! Glycolysis starts. Glucose goes to glucose 6-phosphate. Glucose 6-phosphate goes to fructose 6-phosphate. Fructose 6-phosphate goes to fructose 1,6-bisphosphate. And then we hit a wall. Anybody remember what the wall is? There is one enzyme that had a high positive Delta G. Aldolase. Aldolase is a wall. We've got a high positive Delta G zero prime at the aldolase and, all of a sudden, what starts accumulating? The substrate for aldolase. Look what it is, fructose 1,6-bisphosphate. As fructose 1,6-bisphosphate accumulates, trying to get over that energy hump, what do you suppose it's going to do? It's going to start binding to pyruvate kinase and what's pyruvate kinase going to do? It's going to take whatever phosphoenolpyruvate that's there and convert it into pyruvate. Now, because there's none of this, there was less of this, this reaction goes forward. This reaction goes forward. This reaction goes forward. And, finally, we've decreased the product of the aldolase reaction, which is glyceraldehyde 3-phosphate [stuttering] and DHAP and G3P. [laughing] We've decreased the products of the aldolase reaction. We've increased the reactants, we've decreased the products, and that's how we get over the energy hump! Feed forward activation is important for helping us to get over that aldolase barrier. So this occurs in the phenomenon I think I've talked about previously, called "pushing and pulling a reaction." We push a reaction when we increase substrate. We pull a reaction when we decrease product. The feed forward activation is decreasing product. That's what it's ultimately doing. As a result of that, this now goes all the way down to pyruvate and we start making those ATPs we need to run away. Student: You said decreasing the product is the pull part? Kevin Ahern: Is what? Student: The pull part? Kevin Ahern: The pull part is decreasing the product. That's right. So, if we wanted to get our automobile out of the street, if we had us pushing it, that's one thing. But if there's only one or two of us pushing it, that's not so good. But if we had somebody on the other side with a truck and a cable pulling the car and us pushing at the same time, it's much more likely to move. Same thing occurs with reactions. Questions about that? Let's go and take one quick look at I think, yeah, let's not mess with that, talk about pyruvate kinase. Pyruvate kinase is the last enzyme in the glycolysis pathway. As I said, it's catalyzing the big bang...the big bang. So we want to have this enzyme under control because, if we don't, as soon as we have PEP it's going to be going straight to pyruvate. One of the ways that we regulate this enzyme is with phosphorylation. Phosphorylation occurs as a result of action of our friend protein kinase A. 
Protein kinase A will catalyze the addition of a phosphate to pyruvate kinase and cause it to become into the less active state. Pyruvate kinase is also regulated allosterically. It's regulated by alanine, for one, alanine. Alanine's an amino acid. Why is alanine important? Well, alanine is actually a good measure of pyruvate, because pyruvate is readily converted into alanine. When we have high alanine, we have high pyruvate, we don't want this enzyme working, we will actually allosterically turn this enzyme off with alanine. ATP will also turn this enzyme off. We have too much ATP, what's going to happen? Well, we don't want to keep dumping more and more of this stuff. We're going to stop making pyruvate and we're going to clog up the enzyme. The allosteric activator, of course, as I said earlier, is fructose 1,6-bisphosphate. So we can turn the enzyme off, we can turn the enzyme on, allosterically and by phosphorylation, and, of course, dephosphorylation, to turn it on. Now, this theme of allosteric regulation superimposed on covalent modification is one that we will see a lot of in glycogen metabolism. So this is a little complicated, but the important thing is understanding what all the pieces are, not how they all play together. What if I have a phosphorylated enzyme and I have fructose 1,6-bisphosphate? Well, it's kind of hard to mentally balance all of that. I'm not going to go through and do that. But I do think that you should know the effects that phosphorylation has. You should know the effect that fructose 1, 6-bisphosphate has, and you should know the effects that ATP have on this enzyme. Student: So phosphorylation, the alanine and the ATP are offs? Kevin Ahern: ATP and alanine are offs. Student: And phosphorylation? Kevin Ahern: And phosphorylation, too. Right. That's a lot of stuff with regulation there. I want to say a little bit about GLUTs. We talked a little bit about GLUTs earlier. You may not remember, but when I talked about the insulin signaling pathway, I told you that the way that the body deals with high blood glucose is by synthesizing insulin. insulin went through that mulit step pathway that involved the Hulk, that you recall. The end result of the insulin signaling pathway, at least, one of the end results, was that the movement of GLUT proteins from the cytoplasm to the cell membrane was affected. It favored the movement of those GLUTs and the "GLUTs" stands for "glucose transport proteins." There are many different GLUTs, as you can see on the screen, and they're located in various places in our body. These GLUTs turn out to play some very important roles, from a human health perspective, and they may have some very, very important considerations with respect to treating cancer. So with that introduction, I want to tell you about a very interesting phenomenon. When we think of the development of a tumor, as many of you have noted in here, there's a multi-step process it takes to get to being a tumor, and that multi-step process is probably different for every different tumor that forms. There's no one way of making a tumor. There are many ways of making a tumor, but there's no one single step that gets us to a tumor. What people have noted about tumors is the following, that tumors do tend to grow more rapidly than do other cells. Tumor cells tend to grow more rapidly. They are not organized. They grow as a clump. So what happens is, the needs and demands of a tumor cell are greater than those of a non-tumor cell. 
What happens in the process of that is really interesting and cool. It turns out that tumors, because they are just growing in a place, they don't have good access to blood supply, normally. One of the things that tumor cells probably have to do in order to survive, is they have to stimulate the growth of blood vessels to supply them with blood. There's a protein known as angiogenin that stimulates the growth of blood vessels, and many tumors will, in fact, activate or synthesize or accumulate angiogenin and favor the growth of blood vessels to supply them with blood. Yes, sir? Student: Is the angiogenin a direct result of VEGF? Kevin Ahern: I do not know. Don't know. Student: Because that's one thing we went over in cell biology, the vascular endothelial growth factor. Kevin Ahern: I don't know the answer to that question. I can look it you up for you, though. Now, angiogenin is favoring the growth of blood vessels supplying tumors with blood. Blood contains glucose, blood contains oxygen, all the things that tumor cells, or any cell, would like to have. One strategy for treating tumors is to inhibit the growth of the blood vessels and there are some promising drugs that appear to do very well at basically starving tumors to death. That's kind of cool. What is interesting is, well, if the tumor, as it's growing, it takes a while to get these things synthesized, it takes a while to get these blood vessels there. As it's growing, if this tumor cell is growing faster than its surrounding tissues, its energy needs are greater. There's not a blood supply. There's not an oxygen supply. These tumor cells are going to be what we call "hypoxic." They're going to be low in oxygen. I've already told you oxygen is necessary for rapidly metabolizing cells. We think of this tumor cell as a rapidly metabolizing cell. It's taking up oxygen. It's taking up oxygen faster than it's getting it from the environment in which it finds itself. [student sneezes] Gesundheit! Now, hypoxia is a normal phenomenon. Our body goes through hypoxia all the time. Our body has a response to hypoxia. When we are hypoxic, we make a protein called "hypoxia-induction factor." It's a protein. It's a protein that is a transcription factor. A transcription factor activates transcription of certain genes. I want you to look at the genes that hypoxia-inducing factor—"hypoxia-induction factor" is what I call it—makes. Look at this! It's making GLUTs. It's making hexokinase. It's making phosphofructokinase. That's PFK. It's making aldolase. It's making... look at all these enzymes of glycolysis! This makes a lot of sense. Let's think about what we've learned about glycolysis. I told you that when we had plenty of oxygen we produced a heck of a lot more ATP, right? When we don't have plenty of oxygen, we've got to burn more glucose to get the same amount of ATP. These hypoxic cells are recognizing that. They're making more glycolysis enzymes so they can take in more sugar so they can keep that cell alive. What you know about glycolysis says, "This makes perfect sense!" And it also means maybe I've got a strategy for how I might stop a tumor from growing. If I can find ways to starve it to death, by perhaps stopping this transcriptional activity, affecting any of the enzymes preferentially in tumor cells, I got a very cool way to knock out cancer. Almost at the end. Any questions about what I've just told you? Question, yeah? Student: What's the difference between a benign and a malignant tumor? 
Kevin Ahern: A malignant tumor is growing uncontrollably. It will metastasize and kill you. A benign tumor is growing controllably. Other questions? Was there another hand back there? Okay, so I'll see all of you on Wednesday, I know. Captioning provided by Disability Access Services at Oregon State University. [END]
Medical_Lectures
Hemostasis_Lesson_1_An_Introduction.txt
[Music] Hello, I'm Eric Strong, a clinical assistant professor at Stanford University, and this is the first video in a course on hemostasis. The overall learning goals for this course will be: to understand the normal process of hemostasis; to identify likely disorders of hemostasis; to use diagnostic tests appropriately in order to diagnose these disorders; and last, to know the various medications used in the treatment of hemostatic disorders, including their mechanisms, indications, and side effects. The primary target audience for the series is graduate students in the health sciences, including medical students, as well as medical and pediatric house staff. The general public may find the course interesting and helpful as well, though some of the videos, particularly the second and third, will be relatively technical. For the course I'll be assuming some knowledge of college-level biochemistry and very basic cell biology, but not assuming any specific knowledge of physiology or medicine. Essentially, if you've completed undergraduate premedical training, that would be more than adequate background. The learning objectives specifically of this first video, covering the course introduction, will be: to describe the main learning goals and organization of this 14-video course; to list the major challenges students encounter while trying to understand hemostasis; to list the major stages of hemostasis and describe their relationships to one another; and last, to contrast typical presentations of patients with platelet disorders versus coagulation disorders. Here's the outline for this course. As you know, this first video is the introduction. Lessons 2 and 3 will cover the normal physiology of hemostasis. Lesson 4 will cover tests of hemostasis, such as the PTT and INR, along with many other less well-known tests. Lessons 5 through 7 will cover the various classes of medications relevant to the topic. Lessons 8 through 10 will discuss platelet disorders. Lessons 11 and 12 will cover disorders of the coagulation cascade and fibrinolysis. And finally, in lessons 13 and 14 I'll go over several example cases, starting with a clinical vignette of a patient with a disorder of coagulation and then working through what the appropriate diagnostic tests would be for that individual and how to interpret a hypothetical set of results. So, with all of those logistics out of the way, let me ask: why is learning hemostasis so hard? When I was in medical school, as well as in residency, this was among my least favorite topics. It seemed like no matter how much I studied it, I could never fully understand it, and I forgot the details so quickly it made me wonder why I bothered to learn those details in the first place. Looking back on my training now, many years later, I think there are five basic reasons why hemostasis is so difficult to learn. First, the most obvious: hemostasis is a very complex process. There are countless receptors, enzymes, and cofactors involved, each of which is given a numerical designation or acronym, making it hard to even remember their names, let alone their relationships to one another. Second, a relatively large amount of this complexity is relevant for the routine care of patients with hemostatic conditions; without a firm grasp of the fundamental biology, it's impossible to treat hemostatic conditions appropriately. So, in other words, there's no CliffsNotes version of the topic. The third reason why learning hemostasis is hard is that different resources provide information which superficially appears inconsistent with one another.
Consider the general topic of platelet activation, which we'll talk about in the next video. If one performs a Google image search for "platelet activation," you'll find dozens of diagrams similar to these. Some of the same acronyms will show up on different diagrams, but if you look more closely, there are many differences in how they illustrate what is supposed to be the same phenomenon. Fourth, many clotting factors, receptors, and even processes have more than one name. Here's a table of the body's clotting factors, all of which are assigned a Roman numeral designation, but which also have one or more older historical names. In common practice, clinicians generally default to one choice only; for example, I've literally never heard someone refer to fibrinogen as factor I, or refer to factor X as Stuart-Prower factor. Nevertheless, the alternative names are still out there in the literature and in textbooks. Worse than the factors, however, are platelet receptors, with two completely different nomenclatures that are used interchangeably in clinical practice. The last reason hemostasis is hard to learn is that our collective understanding of it has evolved significantly in recent years. With most areas of physiology, the details necessary to know for routine clinical medicine have been well established for a generation; however, with hemostasis this is not the case. Consider this diagram outlining the intrinsic and extrinsic pathways in the coagulation cascade. I'll talk much more about this in the third video, but if you've ever looked at a review book to study hemostasis, or found another video online that discusses it, you almost certainly came across a picture similar to this one. Unfortunately, the parallel intrinsic and extrinsic pathways don't represent actual physiology; rather, their existence seems to be a consequence of the limitations of lab testing methodology. Now, this is a fairly recent revelation, and although modern medicine understands this and it's well explained in recent review articles, most textbooks that are even just 5 or 10 years old likely fail to describe this problem and also fail to describe coagulation in the most accurate way possible. At this point I'm going to provide an extremely general overview of hemostasis, as a taste of what's to come in lessons 2 and 3. This varies depending on how one chooses to divide it up, but I think of hemostasis as having, overall, three separate phases. After vascular injury, the very first response, which is essentially instantaneous, is local vasoconstriction. This is immediately followed by something called platelet activation, which consists of a series of changes in the expression and activity of certain receptors on the platelet membrane, as well as changes in platelet shape. A few minutes later, the coagulation cascade kicks in, simultaneous with antithrombotic control mechanisms that prevent coagulation from running away too far. Since they occur extremely rapidly, the processes of vasoconstriction and platelet activation are sometimes lumped together under the term "primary hemostasis." The coagulation cascade is referred to as "secondary hemostasis." Last, there is the phase of fibrinolysis, when the body breaks down the blood clot and normal vessel patency is reestablished. This next diagram will go through those components in slightly greater detail and will introduce the seven key proteins involved in hemostasis. So, once again, the very first reaction to vascular injury is vasoconstriction, followed by platelet activation, which is largely mediated by exposure of collagen
to receptors on the platelet surface. Platelet activation results in a change in platelet shape and allows platelet aggregation, which is mediated largely by proteins called von Willebrand factor and fibrinogen. The end result of platelet activation is something called the platelet plug, which is a short-lived, temporary patch over a defect in the vessel wall; it's sufficient for stopping bleeding from very minor injuries. The second phase of hemostasis is largely triggered by exposure of tissue factor during vascular injury, which triggers the coagulation cascade, the end result of which is thrombin's conversion of fibrinogen to fibrin. Fibrin polymerizes, generating fibrin strands which are superimposed on the platelet plug and trap red blood cells to form a blood clot. This is secondary hemostasis. There are many critical points at which the process of platelet activation and the coagulation cascade rely on one another. In addition, there are important antithrombotic control mechanisms, as mentioned before, which prevent both spontaneous intravascular coagulation as well as runaway coagulation in response to actual injury. Finally, the enzyme plasmin is responsible for cleavage of the fibrin strands and eventual clot degradation. So, in addition to our four to five phases, we now see the seven key proteins: collagen, von Willebrand factor, tissue factor, thrombin, fibrinogen, fibrin, and plasmin. There are certainly many other proteins on which hemostasis depends, and which I'll be talking about a lot more in the next two videos, but these are the seven proteins you absolutely cannot forget. An important concept when thinking about hemostasis is that of hemostatic balance. Imagine a seesaw in which one side is occupied by procoagulant forces and the other is occupied by a combination of anticoagulant forces and profibrinolytic forces. In order for hemostasis to work successfully, in which the body clots in a rapid and limited manner when injured but does not clot at all when uninjured, there must be a perfect balance between these. If the procoagulants are too active, the patient will start to develop spontaneous thrombosis, such as DVTs and pulmonary emboli when affecting veins, and heart attacks and strokes when affecting arteries. If either the anticoagulants or the fibrinolytics are too active, or if the procoagulants are too inactive, the patient will suffer from bleeding problems, which can be equally life-threatening. I'll end this introductory video with a quick overview of how to clinically distinguish platelet disorders from coagulation disorders, specifically those which cause bleeding. For platelet disorders, which may be either a functional platelet defect or a deficiency: petechiae, which are very small, non-blanching red spots on the skin, are common; this is a classic physical finding in thrombocytopenia, which is another name for a low absolute platelet count. Ecchymoses, which is a fancy term for bruises, are usually small. Excessive bleeding after minor trauma or with menstruation is common. Bleeding during or after surgery is usually immediate. And last, spontaneous hemarthrosis, which is bleeding into a joint space, and soft tissue hematomas are rare. To contrast that, for coagulation factor defects or deficiencies: petechiae are uncommon, though ecchymoses may be very large. Excessive bleeding after minor trauma or with menstruation is uncommon, unless the problem is specifically with excessive fibrinolysis. Bleeding after surgery may be either immediate or delayed by as much as a day. And spontaneous hemarthrosis and soft tissue hematomas are common
when the disorder is severe, most notably occurring in hemophilia, which is a deficiency in one of the several proteins of the coagulation cascade. So that concludes this first video, an introduction to hemostasis. The next two videos will cover normal physiology in much more depth.
Medical_Lectures
20_Biochemistry_Metabolic_ControlsEnergy_Lecture_for_Kevin_Aherns_BB_450550.txt
[students groaning] Happy Friday. I will schedule a review session, as I have done previously for the first exam. I will announce that next week sometime. I would guess it would likely be on Tuesday evening. With respect to where the material will go on the exam, it will likely go through Monday. So we won't finish glycolysis, but we will start glycolysis. So however far we get, likely, I haven't decided for sure, but likely we will finish with where I finish the material on Monday. Today I'm going to talk most of the period about things relating to energy and metabolic control and, there's yet another level of control that exists in cells. You've been seeing things like covalent modification of enzymes. You've seen the allosteric regulation of enzymes. But yet another consideration we have with metabolic processes and controlling reactions is the concentration of the substrate and the product. Those have an enormous influence because those are things that we have no way around. Cells have to work within the confines of the concentration of materials that they have. The reason is because the concentration of those things determines the favorability of a reaction. If a reaction is not favorable, it's not going to go. So even if we control the enzyme and have the enzyme turned on, if the concentration of substrate and product isn't sufficient to allow the reaction to proceed in the desired direction, the cell has no control over that. It's important, therefore, that we understand the role of energy, the Gibbs free energy, relative to the control of metabolism, ultimately. Well, metabolism is, as we will see starting on Monday, a very sequential process. We describe metabolic processes as occurring in pathways. You've seen a little bit of a couple of pathways, already. We're going to see them up close and personal on Monday. This schematically shows us the process by which glucose is converted from a six-carbon molecule into two three-carbon molecules known as pyruvate. That process is a pathway we call glycolysis. We see that pyruvate has two possible fates in our cells, and those two possible fates depend upon the conditions in which the cell finds itself. Is there plenty of oxygen or is there not plenty of oxygen? So we see in this very simple schematic two things. First of all, that there's a series of steps that gives us a product, and we also see that pathways have forks. They have different directions that they can go. That's not unlike what we would see if we had a road map. When we see a road map, there are several ways of getting to Portland. The easiest way is probably I-5, but if there's congestion on I-5, we could go 99 and do some zigzag way to get up to Portland and we would get there. So having alternate ways to get places, or having alternate responses that cells can have relative to the conditions that they find themselves in, are very, very important. So that's just a very general thing relating to pathways. Yeah! You guys want to learn all that before the end of the term? No, no. We're not going to. But this is actually a very nice schematic of metabolic pathways that occur in almost every cell on the face of the Earth. We don't really see giant fluctuations in this schematic that's there.
What you see at each place on there, each place you see a little node, a little knob there, that's an enzyme, an enzyme catalyzing a reaction, and so every place on there we see the complexity of metabolic pathways and we realize that what cells have to do in controlling metabolism is extraordinarily complicated, extraordinarily complicated. We've been tackling it a piece at a time, regulating the enzymes, now we're going to talk about how the concentration of products regulates those things. But I want you to have a feeling that, "Wow, this is really complicated stuff in terms "of how coordination of all this happens." The individual reactions you'll see are not complicated. We're going to take them slowly and we're going to take them one at a time. But coordinating this to get a response, overall, of the cell, is greater than we can model on the computer right now. We can't model the complexity of this system adequately in a computer at the present time. That tells you a little bit about how complicated this process is. Well, when we think about energy, one of the first things that comes into people's minds is ATP. I refer, and I've referred to this in the past, of ATP being the sort of gasoline of the cell. That's one way people refer to it because it powers many things that cells could not otherwise do. What does that mean? Well, some reactions, on their face value, are energetically not very favorable. The first reaction of glycolysis, for example, putting a phosphate onto glucose, if we just take phosphate and glucose, and we take an enzyme and we try to put them together, what we discover is that energetically it's not very favorable. It doesn't go very far forwards. In fact, it mostly goes backwards. But if we take that same enzyme and we use ATP instead of just phosphate by itself, and we use the energy of ATP to put that phosphate onto glucose, what we discover is that that reaction becomes much more favorable. So ATP, the energy that's stored in ATP is used by cells where it's necessary to drive reactions. I want to say just a little bit about that to hopefully give you the right impression about how this works. One of the ways that we frequently envision ATP working is as follows. Well, here's a reaction. I need to make this reaction go, so I'm going to go light some ATP on fire, just like I would light a candle on fire, and the energy coming off of this ATP magically makes a reaction happen. That's decidedly what does not happen. It decidedly does not happen. Energy released from ATP, if all we do is release energy from ATP, all we will get is heat. We will get nothing happening. There's nothing to capture that energy. So when we look at how the energy from ATP is used to make a process occur, we see that the hydrolysis of ATP is coupled to the desired reaction, meaning that the enzyme that's catalyzing this reaction is binding both to the molecule that the reaction is being catalyzed on and to ATP at the same time. Both of these are binding. The hydrolysis of ATP then can cause a change in the enzyme. It might cause something to be transferred, as in the example I gave you with the phosphate moving onto glucose. In this case, ATP is transferring a phosphate onto glucose. But whatever the mechanism is, the important thing is that the hydrolysis of ATP is coupled to the undesirable reaction. It's coupled. They're both occurring in the same place at the same time. 
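As a worked example of that coupling for the glucose-phosphorylation case mentioned above, here are the commonly quoted, approximate textbook standard free energies; the unfavorable phosphorylation plus the favorable ATP hydrolysis sum to a favorable overall reaction:

```latex
\begin{align*}
\text{glucose} + \mathrm{P_i} &\rightarrow \text{glucose 6-phosphate} + \mathrm{H_2O}, &\Delta G^{\circ\prime} &\approx +13.8\ \mathrm{kJ/mol}\\
\mathrm{ATP} + \mathrm{H_2O} &\rightarrow \mathrm{ADP} + \mathrm{P_i}, &\Delta G^{\circ\prime} &\approx -30.5\ \mathrm{kJ/mol}\\
\text{net: glucose} + \mathrm{ATP} &\rightarrow \text{glucose 6-phosphate} + \mathrm{ADP}, &\Delta G^{\circ\prime} &\approx -16.7\ \mathrm{kJ/mol}
\end{align*}
```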
When I say "hydrolysis of ATP," I'm talking about breaking that phosphodiester bond between the third phosphate and the second phosphate. That hydrolysis yields ADP. So when we say ATP goes to ADP, we've just described a hydrolysis reaction. That's very important to understand. Using energy from ATP, we can make unfavorable reactions become favorable, and that's a very, very powerful thing for cells, as we will see as we get into metabolic pathway, because we will see that metabolic pathways are grouped into two categories. One category involves what we call "catabolism." Catabolism is the breakdown of bigger things into smaller things. Catabolism usually involves energy release. It usually involves oxidation and usually doesn't need assistance of ATP. We'll see minor exceptions to that, but that's basically what catabolism is involved in. Anabolism, on the other hand, is a process where smaller molecules are made into bigger ones. It usually involves reduction and it usually requires input of energy of some source, ATP being a very common one. We'll talk about that as we get going along. What I want to focus on now is the Gibbs free energy that your TAs have been talking to you about in recitations. In some cases, I'm probably going to be repeating things that you already know, but I want to make sure that everybody's on the same page with what I have to say here. I trust everybody has learned from freshman chemistry, or if not from freshman chemistry, from the recitations, that Delta G is the ultimate determinant of the direction of a reaction. Yes, ma'am? Student: I'm sorry, could you go backwards a little bit? Kevin Ahern: Yeah. Student: Could you say what anabolism is again? Kevin Ahern: Anabolism is the taking of small molecules and building bigger molecules. It involves reduction and it usually requires input of energy. Other questions on that, yeah? Student: Does it usually need ATP also? Kevin Ahern: ATP can be one of those sources of energy. It doesn't have to be, but it can be one of those sources of energy. The change in the Gibbs free energy for a reaction is the ultimate determinant of the direction of a reaction. We can say there's nothing else that's going to get around that. Nothing, zero, will get around the Delta G. If the Delta G for a reaction is negative, the reaction will go forwards, as written. If the Delta G for a reaction is positive, the reaction will go backwards, as it's written. Those are rules that we're not going to be able to change. If the Delta G for a reaction is zero, the system is at equilibrium. As I mentioned in class before, you've got to get out of your head that equilibrium means equal concentrations. It doesn't. It means that the concentration of product and reactant over time does not change. The forward reaction is going at the same rate as the backwards reaction when we're at equilibrium. No change in concentration of products and reactants. Now, those three principles right there are things that we will use to understand metabolism. Because one of the things that a cell can do to make a reaction go forwards or make a reaction go backwards is catalyze reactions that increase or change the concentrations of products and reactants. Cells can do that by controlling their enzymes. They can actually manipulate concentrations of products and reactants for a given reaction by controlling enzymes being on or off. That's a very powerful thing. 
When a cell needs to make glucose, for example, being able to manipulate those concentrations makes the synthesis of glucose a favorable process where otherwise it might not be. A very, very powerful thingfor a cell to be able to do. Well, Delta G, we recall, is defined by this reaction, and I've simplified the equation. Delta G is equal to the standard Gibbs free energy, also known as Delta G zero prime, plus RT times the natural log concentration of products divided by the concentration of reactants. This equation has parallels, as I've mentioned before, to the Henderson-Hasselbalch equation. Delta G zero prime is a constant, just like pKa was a constant. It's a constant for a given reaction. So if I'm talking about glucose going to glucose 6-phosphate, the Delta G zero prime for that reaction will always be the same at the conditions that we use, always the same, and we're going to assume we're using the same conditions all the time. RT, R is the gas constant. T is the temperature. For our purposes, we're going to assume constant temperature to keep things simple. We're a warm-blooded organism. Our temperature is pretty much constant. Times the natural log of the concentration of products over reactants. Concentration of products and reactants are variables there, and we can see how the ratio of products to reactants can change. We remember from the Henderson-Hasselbalch equation the log term. If we had more salt than acid, we had a ratio greater than 1, meant the log term was positive. Same is true of natural logs. If the concentration of products is greater than the concentration of reactants, that term is greater than 1, it means the log term is positive. If, on the other hand, the concentration of reactants is greater than the concentration of products, then that ratio is less than 1 and the log term is negative. So we see this positive and negative nature of this log term will have a pretty good effect on the overall Delta G. But remember that the overall Delta G is the sum of two things. It's the sum of a constant term, the Delta G zero prime, and a variable term, the RT times natural log of products over reactants. Connie? Student: What happens if you have more than one product or more than one reactant? Kevin Ahern: What happens if you have more than one product or more than one reactant? We actually have to take that into consideration. I'm going to keep it simple here, so we're basically going to work with simple considerations of this. But if we had more than one product or reactant, we would have to take the concentration of each one into consideration in this equation. Student: Okay. Kevin Ahern: Okay? Yes, sir? Student: On the exam, will you provide us with numeric values for R and T, or just have R and T in the appropriate places? Kevin Ahern: It's a good question. Will I give you, on the exam, values for R and T, or let you just use that as a constant? If I remember, I will give you values, but the important thing is just recognizing that they're constant. So if I forget, for some reason, just assume they're just a plain constant. Okay? Yes, ma'am? Student: I should probably remember this, but G naught prime, is that room temperature? Kevin Ahern: G naught prime is defined for a specific set of conditions, yes, and it's not room, it's 25 something, 25 degrees, whatever that turns out to be in Kelvins. Student: But it's not zero T. Kevin Ahern: It's not zero, no. Student: Yeah. 
Kevin Ahern: But, again, we're going to assume that we've got everything at that one set of conditions, just to keep it simple, but, yes, Delta G zero prime is a constant, but it's a constant for a given set of conditions. That's important to recognize. Yes? Student: So kind of, with the solving of any Gibbs equations on the test, is that going to be very much like solving our pH/pKa equations where...? Kevin Ahern: So her question is, is the solving of Gibbs free energy equations going to be like Henderson-Hasselbalch equations? I would be amazed if it weren't, because, again, what I want you to get is the big picture. I'm not having you chase numbers. I'm not expecting you're going to memorize logarithms or any of that, but you should know how that log term's going to change and how that's going to affect the value of Delta G. Yeah, absolutely. Student: Just quick question. Is temperature in Kelvin? Kevin Ahern: Temperature's in Kelvin, yeah. I'm not going to trip you up and say, "Here is it in centigrade, "and snicker, snicker, snicker, "You didn't put it in Kelvin and now you're wrong "and now you're stupid." Okay? [laughing] The important thing is getting the big picture, not tricking students. I really don't want to trick anybody. I could lie, then I'd really trick you! Ha-ha-ha-ha-ha! I'm not going to do it. Student: Is there only [unintelligible] for the second midterm? Kevin Ahern: His question is, is there only one calculation question for the second midterm? The answer is, I'm not going to say anything about it. I'm not going to tell you what the second midterm is. The format will be exactly the same as the first midterm. You saw there was a section that related to calculations and that section had a certain number of points. I can't tell you how many calculations will be there. You should know how that equation alters as the concentration of products and reactantsóor as I put on the thing there, B over Aóactually changes. I think, again, you learned that, hopefully, with Henderson-Hasselbalch. It's fairly straightforward to understand. One of the places where students trip up is they forget that Delta G zero prime is a constant and Delta G is not a constant. So remember that. Delta G is a variable, and it varies as the concentration of products and reactants change. But the constant in the equation is the Delta G zero prime for the given set of conditions that we're going to be using. Now, look through the problems that I've put online for you. The TAs have worked through some of those problems and there's also problems in the book. If you're confused or you have issues or questions, as always, please feel free to come and see me. Believe it or not, my schedule, which has been pretty impossible to catch me, is actually lightening up next week, so I should have more time available to meet with anybody if you have questions. Anytime you have questions and I'm not available at the times it's convenient for you, you're always welcome to send me an email and I will schedule a time to meet with you. So I want you to have opportunity to connect, as necessary. Let's take a little diversion here, thinking about energy in another way. Molecules we can sort of think of as having a sort of inherent energy associated with them. We sort of intuitively think of this. We think of, again, gasoline. Gasoline has a fair amount of energy in it. When we oxidize that gasoline, we generate heat. 
In an automobile, of course, that heat is used to move cylinders and to, ultimately, give motion to the vehicle. In our muscles, ATP has a lot of energy, and the energy of ATP is actually used to favor muscular contraction. It's because of that type of gasolineóin this case, ATPóthat we're able to move. We will see as we move through biochemistry that there are molecules that have various energies. This is not a great example, but it's an example of a molecule that has a phosphate on it. As I've sort of alluded to in class, molecules that have phosphates on them tend to have more energy in them than the same molecule without the phosphate. So if I take off the phosphate here, I'm left with glycerol, and glycerol 3-phosphate has more energy than glycerol by itself does. Well, if I want to make a more energetic molecule, I have to take that into consideration, starting with a glycerol. So one of the ways I can do that, in making a glycerol 3-phosphate, is by coupling the addition of a phosphate to glycerol by the hydrolysis of ATP, kind of like I described earlier. And that energy of hydrolysis of ATP will favor the putting of the phosphate onto that glycerol and making a glycerol 3-phosphate. You might wonder how the energy arises from ATP. While I'm not totally fond of this figure, we notice that when we think of ATP, we think of the fact that we've got this adenosine molecule and on its 5-prime end we have three phosphates. one attached to the other, attached to the other. So we've got phosphate, phosphate, phosphate, and these three phosphates are all negatively charged. They really don't like each other. So we could imagine that, given the choice, if they have the opportunity, they will, in fact, repel each other and get away. It's that repulsive nature of the negative charges within those phosphates that ultimately give rise to the energy of ATP. ATP doesn't go flying apart because the electrons that are found in the phosphates can be rearranged in a resonant fashion, as you see on the screen. So they can sort of swap the electrons back and forth, and because of that, the triphosphate bond is not going to fall apart automatically. But when it is hydrolyzed, that repulsive nature of the phosphates is going to yield energy. This is a table. I'm not going to expect you to memorize this or anything, but I just show you this to show you the various energies associated with some high energy molecules inside of cells. The energies associated here may surprise you a little bit. There's the energy of ATP. That is, the hydrolysis of ATP to ADP gives this much energy in terms of kilojoules per mole, or this much energy in terms of kilocalories per mole. They're just difference of units is all they are. That negative number tells you that, first of all, it's a favorable reaction. This is the Delta G zero prime, the standard free energy I may have said "Delta G, I meant, Delta G zero prime" of the hydrolysis of these. What we see is that there are molecules in the cell that have a higher energy of hydrolysis than ATP. Now, if ATP is the gasoline that powers the cell, how in the world does a molecule like ATP, which doesn't have that much energy in it, favor the synthesis of molecules that have even more energy inside of them? Well, you might look at this and say, "Well, maybe they use two ATPs or three ATPs," and the answer is, cells don't have that option. 
Cells have to make molecules that have high energy, higher energy than ATP does, and to do that, they have to be able to take other things into consideration. The number one thing they'll take into consideration, as we will see, is the Delta G equation itself. Look at this reaction. This is an interesting reaction. Here, this reaction is showing us how the body synthesizes one of those high energy molecules. If you go back and you look at that table I just showed you, there's creatine phosphate, right there. It's got more energy in it than ATP itself does. How do we put that in there? Well, here's the reaction that the cell goes through. We can see ATP is, in fact, an energy source for this reaction, and we can see that the overall Delta G zero prime for this reaction is positive. What that means is, if I start with equal concentrations of products and reactants... let's plug the numbers in here. If I have an equal concentration of B, which is creatine phosphate, and I have an equal let's say creatine phosphate and ADP, and I have an equal concentration of creatine ATP, this value is 1. The log term is zero, right? That means that the Delta G will equal the Delta G zero prime, which is a positive number, which means the reaction goes which direction? Student: Left. Kevin Ahern: Backwards. How do I make that reaction go forwards? How do I make the overall Delta G be negative? The only thing I can change is change the concentrations of the products and reactants. So if I dump in a bunch of reactant, then I make the reaction go forwards, because that's going to make this log term up here be more negative, and if I make it negative enough, the overall Delta G is going to be negative. Now, this turns out to have great physiological relevance. The great physiological relevance is this, okay? So creatine phosphate is used in our muscles. I'm going to bitch about creatine in a minute, but creatine phosphate is used in our muscles, and it's used kind of like myoglobin is used for oxygen. Remember I said myoglobin was a great way of storing oxygen? And when the oxygen concentrations got low, what happened? That's when myoglobin gave up its oxygen, but only when it got very low. It turns out creatine phosphate is used to make ATP when cells run out of ATP. Let's think about this. I am going out and I am going to run a 100-yard dash. I get to the starting line. The gun goes off, and I take off and I start running, as fast as I can. What's going to happen? Well, muscular contraction requires ATP. I start out, my ATP concentration is fairly high, so this reaction has been driven fairly far to the right, but I've got enough ATP to get started. I go a few yards and, before metabolism starts kicking in and epinephrine starts flowing and all that adrenaline starts running, before all that can happen, my ATP levels inside of my muscles fall very quickly, because I'm burning it as fast as I can run. Which, for me, isn't very fast, but I can still burn it fairly fast. What happens when my ATP concentrations go down? What happens to the Delta G of this reaction? It starts going more positive, and it starts going more positive, the reaction starts going back to the left, and when it goes back to the left, look what we make, ATP. We don't have to do anything. Our cells don't have to have any controls. They don't have to have any brains. 
All they have to have is this equation, right here, such that when I take off and I start unbalancing this equation by running, this equation rebalances itself by making ATP, because it uses this stuff right here and this stuff right here to drive it backwards. That's really cool! Then, when I finish my race and I grab that piece of pizza and mug of beer to celebrate the fact that I just won that race, I'm not burning ATP anymore and I'm putting all kinds of energy in my body that's going to make ATP. ATP concentrations start going high. What's going to happen? Well, the reactants are going to get larger in concentration and this reaction is going to move to the right, and I will go back and I will store creatine phosphate. Thus, cells can make a high energy molecule that's higher than the energy of ATP simply by concentration. It tells us that concentration is absolutely critical for making molecules, absolutely critical, and it's magical enough that we can actually make higher energy molecules simply by altering concentrations of products and reactants. That's a really phenomenal thing. Now, I promised I was going to bitch about creatine, so I will. One of the most common questions I get is, "So, creatine, I hear about that I can really improve "my athletic performance by taking creatine and all this! "And all my friends are taking this stuff and it's really great! "They say it makes them feel really like they've "got a lot of energy!" I say, "Okay, well, let's think about this. "I'm going to go run this race, so, in about an hour, "so I'm going to go take a whole bunch of creatine. "Wow, man! "I'm going to be so winning this race, right?" Well, let's look at this equation. When I start taking a whole bunch of creatine in my system, which way is this reaction going to go? [laughing] Student: Right! Ahern: And if I've got a whole bunch of creatine sitting there, is it going to go back? No! Duh-uh. Then I get the second question. "Well, what if you took a whole bunch of creatine phosphate?" [laughing] Well, if you took a whole bunch of creatine phosphate, wouldn't you ultimately be increasing your concentration of creatine over here, as well, so in the longer term you're going to have more of a problem? Yes. Should you be playing with Mother Nature here? No! Might you feel differently? Probably. The brain's a very easily malleable thing. If you think that something is going to happen, you may very well feel that happen. Does it alter athletic performance? It probably does, to some extent. Is it good for you? I would probably say "no." But you could certainly see, in the context of this reaction, that taking creatine just before a race might not be the smartest thing for you to do. It just might not be the smartest thing for you to do. Questions about that? I'm rambling and griping and all that sort of stuff. Connie? Student: Okay, so you have increased amounts of creatine ATP despite the fact that [unintelligible] it will go in that direction? Kevin Ahern: If I make enough of this stuff, it's going to favor it going to the right. Absolutely. Student: But you also said, earlier, that creatine phosphate has more energy than ATP, and you can't go, maybe you'll use two ATPs, so where does that extra energy come from? Kevin Ahern: Where does the extra energy come from? It comes simply from the concentration. That's what this reaction, that's what this equation is actually telling us, that we gain energy as a result of concentration. There's energy from concentration. Yes, sir? 
Student: What about when you add creatine to your body, like, way before you exercise, say you've just... Kevin Ahern: See, this is a guy that's been talking to these people. "What happens if you do it way before? "Hey, I'm going to figure out the right time to take "this stuff and it's going to go." As I say, you probably do have, I'm not trying to pick on you, here. It probably does have an effect on performance, but it's hard to predict where you're going to get that sweet spot and where you're not going to get that, and that's why I'm saying it's probably not a good idea to mess with it, but you're right. There are considerations with that, and there are some studies that suggest that you may increase it somewhat. The thing that I say to those is, you know, there's all kinds of things that you can do for athletic performance, but increasing athletic performance does not mean increased health. If you look at the lifespan of professional athletes, it's lower, on average, than that of nonprofessional athletes. Students frequently have the notion that maximizing athletic performance is the best thing that you can do for yourself, and it's not. It's only good for running footballs, and it's only good for running races, and it's good for hitting baseballs, but it may not be good for health. So that's important to keep in mind. That's why I gripe about it. Other comments or questions? Everybody's going to go see if they can find some creatine phosphate now, I can see. Student: Can I ask you a question? Kevin Ahern: Yeah. Student: On that list you showed us of high energy molecules, one of them was a 1,3-bisphosphoglycerate. Student: Is that related to the 2, 3- bisphosphoglycerate we had earlier? Kevin Ahern: Good eyes! His question was, I showed this table that had 1,3-bisphosphoglycerate in it. Is it related to 2,3-bisphosphoglycerate that I talked about before and where did i show it there? It's right there. It turns out that it is indirectly related to it, yes. And we'll see, when we watch how this guy is metabolized, actually, not this guy, but the product of this guy is metabolized, we'll see how 2, 3- bisphosphoglycerate comes about. It doesn't come directly from this, no, but it is related. Student: Is there any known enzyme that transfers one of those phosphates from a 1 to a 2 position, or vice versa, so it could be used as an energy source? Kevin Ahern: His question is, are there enzymes that convert 1 to 2 to make a 2,3-bisphosphoglycerate from 1,3-bisphosphoglycerate? There are some people who say that's the way that it actually forms, and so there are enzymes that may be invovled in that, but I will show you a much more important consideration when we look at glycolysis itself. Because of this consideration in glycolysis, you'll see that you don't have to worry about this enzyme converting 1 into 2. Kevin Ahern: Oh, there we go. I've gotten pretty good at recognizing the damn thing. All right, other questions? How are we doing on time? There's our reaction I've just finished there. What I've been telling you all along are the considerations that we have for our body. This, in summary, this is what our body is always concerned about. The cells of our body are always concerned about this. Our body has to do certain things and these things that it has to do require input of energy. They include moving, in the form of muscular contraction. They include active transport, as we'll talk about next term, where they're moving things across a membrane. 
Biosynthesis, if we want to make glucose from simple starting materials, we've got to put energy in to do that. And signal amplification, we're transmitting information down a nerve cell, we have to have energy to be able to do that. Ultimately, the energy for all of these processes is coming from ATP going to ADP, or something equivalent. Well, how do we get ATP? We get ATP from the processes on the bottom, and these largely involve oxidation or photosynthesis. Since we don't have the option of photosynthesis, we're stuck with oxidation, which means that we're eating things that plants have made, ultimately. So oxidation makes ATP. These processes use ATP. We have to balance these if we hope to be effective. When I say "oxidation" it's important to understand what that means. Oxidation means the loss of electrons. The process of losing electrons is the process of oxidation. Now, as you'll hear me say many times, electrons don't just disappear. In chemical reactions, we can't create or destroy matter. So when I say "loss of electrons," I'm not talking about them evaporating. Those electrons have to go somewhere. We'll see cells have some very, very cool ways of handling those electrons. The handling of those electrons turns out to be very critical for making ATP. Some very cool ways that cells do it, but, for the moment, we're just going to concern ourselves with the loss of those electrons. If I go from methane to methanol, I've gone through an oxidation. By the way, "oxidation" doesn't equate with "oxygen." In this case, we see an oxygen getting put on. But we don't see another oxygen getting put on here, yet it's an oxidation. Oxidation simply means, as I said, the loss of electrons. In going from here to here, this carbon has lost electrons. Losing electrons. If I go from here to here, I've lost electrons. I go from to here, I lose electrons. I go from here to here, I lose electrons. And, at this point, I have carbon at its highest oxidative state, meaning I can't oxidize this guy any further. The Delta G zero primes for each of these reactions goes from minus 820, telling me there's a lot of energy in here, minus 703, there's a lot of energy in here, minus 523, there's a lot of energy in here. Each time we go down, we see there's less energy because what's happened? Some of that energy was given up in making this. Some of that energy was given up in making this. Some was given up in making this, and, finally, some was given up in making this. Carbon dioxide is the ultimate oxidation product of metabolic processes... the ultimate oxidation product of metabolic processes. That's why we exhale carbon dioxide. It's of no more use to us, folks. It does us no more good. We can't get any more energy out of it, so let's get it out of our system and get something else that's going to get us some more energy. An alcohol is at a more reduced state. As we go from right to left, we're more reduced; as we go from left to right, we're more oxidized. Methanol is more reduced than formaldehyde, but it is more oxidized than methane. What you see on the screen are two of the most important energy sources for cells: glucose and fatty acids. Fatty acids, of course, are stored in fats. These guys have very, very different ways of being handled in our body. They both get oxidized. They both get oxidized. In fact, fatty acids have more energy in them, per carbon, than glucose does. If we calculate energy per carbon, there's more energy in fatty acids than there is in glucose. 
You could look at this and think, well, that sort of makes sense. Most of the carbons here are carbon hydrogens. Most of the carbons here are carbon hydroxides. This is starting out at a higher oxidized state than this one is. But I said they're handled in the body very differently than that is, the two are handled very differently from each other. Why is that? Well, it turns out glucose is water soluble. Our body can dump glucose into our bloodstream and do nothing more with it. It dissolves. It flows in the blood nicely and everybody's happy, and since the blood is flowing through our body rapidly, it gets to its targets very quickly. and we need to escape from that grizzly bear that's chasing us, our muscles have that glucose in seconds. Fatty acids, on the other hand, aren't very water soluble. They're usually tied up with glycerol to make fat. Fat is completely water insoluble. But fat, also, if it wants to give us the energy that we need, it has to travel through our bloodstream. But moving something through our bloodstream that is not water soluble is a real problem. Fat has to be packaged up into bundles. You've heard of LDLs and HDLs? These are the bundles that fat and fatty acids are carried in in our body. It takes a while to make those. Fatty acids are not very good sources of quick energy. Glucose is a wonderful source of quick energy. So our body burns glucose very readily. I'll talk about that later. There's the 1, 3- bisphosphoglycerate. This is a really interesting example about how cells are combining a couple of things in one process. I'll talk about it when I talk about glycolysis on Monday, but suffice it to say that this reaction is a very important reaction in our cells because it involves oxidation. We see this going from an aldehyde to an ester. That oxidation transfers electrons to an electron carrier, known as NAD, to make NADH, and, yes, electron carriers are the magic that cells have for dealing with those electrons. The energy of this oxidation is used to put a phosphate, all by itself, onto this molecule over here. Those three things really turn out to be very cool when we talk about how glycolysis works. Because we've made this molecule, right here, that has high energy, this guy, now, you saw in that table I showed you before, had more energy in it than ATP did, it becomes really easy for this guy to transfer its phosphate onto ADP and make ATP. This is one of the ways in which we make ATP in the cell. It's not a common way, but it's one of the ways in which we do it. More importantly in our cells, and we'll talk about this next term, the way that we make ATP is by the use of mitochondria and gradients of protons. It's a phenomenon known as "electron transport" and "oxidative phosphorylation." The best analogy I can give you for this is that of charging a battery. We'll see that cells use the process of oxidation to charge, literally, charge a battery. Student: That's electron transport and what else? Kevin Ahern: Oxidative phosphorylation. So we charge the battery and then the charge of that battery is used to make ATP. Electron transport is the process whereby we charge the battery. Oxidative phosphorylation is the process where we use the charge of that battery to make ATP. That's how the vast majority of ATP in our bodies is actually made. I talked about catabolism briefly earlier in the lecture. Suffice it to say that catabolism involves breaking down large molecules into smaller molecules. Here are some metabolic pathways in the process. 
The upshot of all of this is we get ATP out. In general, catabolic processes, as I said, take large molecules, break them into small molecules. It involves oxidation and it releases energy that's captured in the form of ATP. Anabolism is the opposite of this. We take small molecules. We build them into larger molecules. It requires reduction and it requires input of energy. Now, let's spend a minute talking about electron carriers. Cells are set up in a very interesting way so that the oxidations that occur in cells are fairly small in nature, relatively small. What does that mean? It means that the energy released in any given oxidation in a cell doesn't give up too much energy. Another consideration is, if oxidation involves loss of electrons, handling those electrons is critical, because if the electrons are simply lost, they go onto molecules and make very reactive molecules that may cause problems reacting with things that we don't want. Cells are control freaks. They don't want to have molecules reacting on their own, so rather than letting those electrons go onto whatever the first thing happens to be that gloms onto them, cells transfer electrons to specific carriers that hold onto those electrons and keep them from creating other reactive molecules. That's a very important consideration. The electron carriers that cells use, there are three main ones that we will talk about. One of these I just showed you. It's known as "NAD." No, you don't have to know the structure. NAD is the oxidized form, meaning it is lacking a couple of electrons. If I transfer two electrons to NAD, I usually transfer one proton, as well, and that gives me NADH. When you see NADH, you're seeing the reduced form. It's already gotten a proton and two electrons. There's a related molecule known as NADP. NADP is the oxidized form and when it gets two electrons and a proton, it becomes NADPH, NADPH being the reduced form, NADP being the oxidized form. The third category of molecules involved in carrying electrons are the flavins, FAD, flavin adenine dinucleotide. FAD is the oxidized form, lacking those two electrons. If I put two electrons onto FAD, I usually put two protons, as well, and I make FADH2. So FADH2 is the reduced form and FAD is the oxidized form. We'll see next term what happens to those molecules once they've gotten the electrons in them. There's actually energy being stored in these molecules by holding onto those electrons. One of the ways in which those molecules can use those electrons is to reduce something else. Look at this reaction here. Here is an alcohol. An alcohol is being oxidized to a ketone. That involves loss of electrons. Where are those electrons going? Well, they're being put onto NAD+ and making NADH. This is the most reduced form of the carbon is here. The most oxidized form is here. The most oxidized form of the carrier is here, and the most reduced form of the carrier is here. I always like to say that for every oxidation there's an equal and opposite reduction. It's true. This guy's getting oxidized. This guy's getting reduced. What if I go backwards? Can I use those electrons of NADH to make this? I certainly can. So one of the things I can do is use this as a repository for holding onto electrons. That's a good place to have a song, I think, and call it a week. What do you guys say? What's that? Student: [unintelligible] good part. Kevin Ahern: The good part. So this is a very short song. It's about Delta G. It's to the tune of "Danny Boy." 
[all singing] Lyrics: Oh, Delta G, the change in the Gibbs free energy, can tell us if a process will advance. 'Cause if the value's less than naught it translates that reverse reactions haven't got a chance. But when the sign is plus, it is the opposite, and then the backwards happens all the time. A factor is the standard Gibbs free energy. So don't forget about the Delta G naught prime. See you guys on Monday. [indistinct conversations] [no audio] [END]
Medical_Lectures
12_Biochemistry_Catalytic_Mechanisms_I_Lecture_for_Kevin_Aherns_BB_450550.txt
Kevin Ahern: Exam prep's coming along? I've got two announcements, well, actually maybe three. So we'll get everything in order here. First, I do have a review session scheduled. It will be in ALS 4001 on Saturday at 3:00 p.m. I will videotape that. We're set there. That was number one. I said "three," didn't I? Number two, I've had several people comment about the extra credit question that I have thrown out at you and, yes, it will be on the exam. There is a sort of a solution to it in the book, but I'm looking for more than what that book says. The book talks about how the change in structure that can occur is similar for allosterism as well as what's happening in cooperativity. But there's actually a little bit more to it than that, and it's that little bit more that I'm looking for. So there's something else that's similar that is important for that phenomenon that I described to you. So think about that. The last thing is the logistics of getting it set up are such that there are 250 of you. We aim, as best we can, to get the exam in your hands as quickly as possible. To do that, we need your cooperation. I'm going to tell you how I want you to sit when you come into the room, okay? Starting with this aisle, right here, you are Number 1, on this far edge. I want everybody sitting at least two away from everybody else. In fact, I want everybody sitting in the odd-numbered ones, so 1, 3, 1, 3, okay? For this one over here, it starts 1, 3, 5, 7, 9, et cetera. Student: This is 11. Kevin Ahern: I'm sorry? Student: This is 11. Kevin Ahern: Well, I'm just saying, count from the end is all you have to do. Just count from the end, and count in an odd number, okay? And that is the end I want you to count in from. Same thing here. So start 1, 3, 5, and 7. Everybody got that? So if you come in and you do that, and I won't have to get you up and moving you around and so forth, we can get the exam out quicker. The same things hold up there. So 1 over there, 1 there, and then 1 over here. Everybody clear on that? So it's important to do that. I will also tell you that you'll find that I'm very picky about time on the exam. I do that to give everybody an equal chance. I can't have some people taking two minutes to fill out their exam while they're waiting in line to come up and turn their exam in after I've called time off. So when I call time off, I will expect everybody will immediately stop writing. If I see anybody writing after I say "stop," then I will take points off. I'll warn you about that during the exam, but it's important that you stop when I say "stop." I don't want anybody having a time advantage over others. The first exam there are sometimes time issues. Don't spend too much time on any one question. I've tried to make it shorter so that you won't have the issues with time. But, nonetheless, problem solving sometimes takes some people longer than others. So don't spend too much time on any one question. Again, I want time to be equalized for everybody as much as I can, and that's why I do it. I don't do it to be mean, much as you might think I do. What else can I say? Be sure, obviously, one of the biggest recommendations I have is put your name on your exam as soon as you get it. This happens every year. I get somebody and they've got to write their name on their exam after I've said time to stop. "It's just my name!" Well, I can't tell if you're writing your name or you're writing answers. 
So if I see you writing, even if it's your name, after time has expired, you're going to lose points. So get your name on there. Don't wait to do that. Yes? Question? Student: Well, I was going to ask if we're required to use pen and then I realized it might be on the syllabus. Kevin Ahern: Can you use pen? You can use pen. You can use pencil. You can use crayon, as far as I'm concerned, okay? As long as... [laughter] Student: Yes, crayon! Kevin Ahern: Yes, you're welcome to use crayon. We may get a good laugh out of it as we're grading it, but as long as we can read it. The biggest issue that we have is reading what you've written. If we can't read your name and we have to figure out what your name is, you'll lose some points. I tell everybody every time, "Print your name clearly," and I still see these scrawls people put on there for their name. I'm thinking, "You want a zero?" You don't want a zero, right? Make sure we can read your name. Put 'em in big block letters. I mean, that's just a no-brainer. So that's pretty much the stuff for the exam. As I say, come in, get seated where I told you to get seated appropriately. If you get down here and there's no seats down here, then the logical thing to do is [whispering] go upstairs. [loudly] "Where should I sit?" Well, [unintelligible] it out up there, okay? It's pretty straightforward. So that's the logistics for the exam. Hopefully everybody aces it. I'll be delighted if that happened, absolutely. I want to finish up the stuff on enzymes. I went through that Lineweaver-Burk business with the inhibition and so forth last time, so I've only got a couple of things to talk about and they actually relate to our very next topic, anyways. So this breakpoint for the exam was a very good breakpoint that I made there. First of these is a chemical modification that can be done to proteins. You've seen a couple of chemical modifications already. You saw cyanogen bromide, for example, could cut a polypeptide at methionine residues. That was a covalent bond that was broken. You saw that mercaptoethanol could reduce disulfide bonds between cysteines, and that was a covalent bond that was changed. Well, the next one I want to describe to you is a covalent bond in which something is added to a protein. This addition of things to proteins can serve some useful purposes. This compound here is called DIPF, and it's got a longer name that I'm going to require you to know, since I can't even recall it off the top of my head, as it is. Diisopropyl fluorophosphate? Phosphofluoro... I don't know. Who knows. DIPF, right? This is the compound, right here. You don't need to know the structure, but the important thing about this compound is it reacts with the side chains of serines in proteins. It reacts with the side chains of serines in proteins. Since the side chains of serines are OH's, this is what we end up making, over here. Well, why do we care about this? As we will see, sometimes serine plays a very important role in catalysis of an enzyme. There's a whole class of enzymes known as serine proteases. If that serine plays an essential role, that is, the OH plays an essential role, and I basically destroy the OH, you could imagine I would have a pretty serious effect on an enzyme that used or that had that serine, if I destroyed it by adding DIPF. One of the ways that I can tell, or one of the easy tests I can do about does this enzyme have any serines that are important for catalysis is I can take an enzyme and I can treat it with DIPF. 
Then I can use that enzyme in a reaction and say does it still work. Is its activity affected? If its activity is affected, then I've got some kind of evidence that serine may, in fact, have some important role in the catalysis, catalytic action of the enzyme. Yes, sir? Student: Would you be able to tell if it's a competitive inhibitor or if it's a non-competitive inhibitor with just this? Kevin Ahern: So your question is, I'm not sure. An inhibitor is something different, now. I'm actually modifying the enzyme here. So I think you're asking me if DIPF is an inhibitor of some sort. Is that what you're asking? Kevin Ahern: Okay. DIPF, per se, is not an inhibitor. Anything that would covalently bind to an enzyme would not fit into either of those categories because, remember, that competitive and non-competitve inhibition are both reversible things. They're not covalent bonds. When we have covalent bonds, we have other things going on, and that actually leads me to my next topic, in just a second. But everything I've talked about in terms of inhibition, so far, are reversible reactions. And as I say, that's important to have reversible because remember the example I gave where I treat a cancer patient with methotrexate. That goes onto the enzyme. It inhibits the enzyme, but if I don't flush that out, I'm going to kill the patient. The fact that it's reversible I can flush it outóallows the patient to live. If I can't flush that out, as I would have with a covalent reaction, then the patient's going to die with the treatment. So I don't want to have that. So these are reversible inhibitions that we do, at least what I've talked about so far. There's one that's not. But, anyway, to answer your question, this is not a specific inhibitor. It might end up inhibiting the enzyme, but not by a competitive or non-competitive mechanism. I'm just using this now as a tool. This will work on many enzymes because if the enzyme has a serine that's essential for activity and I cover that hydroxyl group up, the enzyme isn't going to work. Oh, yes. Question? Student: Just a question... even if we know that it reacts at that particular side chain, how do we know that it's the inactivation of the side chain and not the creation of stereo hindrance at the active site that deactivates the enzyme? Kevin Ahern: His question is a little bit more detailed, which is, how do I know that it's the inhibition of the serine side chain and not some other secondary effect, like maybe it's blocking access to the active site or the binding site of the substrate and so forth? We don't. So it's only evidence, is all it is. It's not proof that we have that. So when we go study an enzyme, we get many, many pieces of things together before we make a decision in terms of what actually has happened with that. Good question. Yes? Student: With the process you just talked about, with the serine modification, is that reversible? Kevin Ahern: No. Remember, this is a covalent bond. So when we've got covalent bonds, we're essentially talking about irreversible processes. Speaking of covalent bonds, that brings us to our last type of inhibition. The last type of inhibition is known as "suicide inhibition." It's kind of a very visual image that you get with suicide inhibition. Suicide inhibition occurs, first of all, when the substrate makes a covalent bond to the enzyme. In this case, the substrate is this. The difference between suicide inhibition and the DIPF inhibition is that a suicide inhibitor resembles the substrate. 
it resembles the substrate. The enzyme binds it as if it's the substrate, and only after it has bound this suicide inhibitor does the covalent bond occur, and the covalent bond links the molecule to the enzyme. Now, suicide inhibitors will be specific for specific enzymes. DIPF will work with any enzyme that's got serines. Alright? Suicide inhibitors will be specific for specific enzymes. This inhibitor right here, bromoacetol phosphate, will only inhibit this enzyme. It's not going to affect other enzymes. And the reason? It has a specific shape, and that specific shape fits in the active site of the enzyme, and only that will fit. It's called "suicide inhibition" because it's not reversible. Once we've made that link, here, this enzyme is dead in the water. So people say, "Well, suicide inhibition, is it non-competitive or is it competitive?" And the answer is, it's neither, again, because those are reversible processes. We can wash it away, we can get it away, and we can see the phenomenon that we see because they're reversible. We can't see it when we've got a covalent suicide inhibitor a suicide inhibitor, in general, period. Everybody with me? Yeah? Student: So what happens to the enzyme after it's no longer useful? Kevin Ahern: What happens to the enzyme after it's no longer useful? Cells have a garbage cleaning mechanism, as it were, that will take non-functional proteins and break them down. There's a structure called a proteasome. We don't talk about it much in this class, but a proteasome is designed to basically recycle your proteins. It will take those proteins, digest them with proteases to recover the amino acids back out of them. So cells are pretty efficient. Yes, sir? Back in the back. Student: What happened to the inhibitor? Is it destroyed when it destroys the enzyme? Kevin Ahern: What happens to the inhibitor? It's all going to depend on the chemical stability of the inhibitor itself, and I don't have an answer for that. It's going to depend from one to another. Yes, sir? Student: So will the net effect just be moles of inhibitor versus moles of enzyme? Kevin Ahern: The net effect? Oh, yeah, actually, that's a good question, also. So the net effect of this, will it be, if I have excess inhibitor compared to enzyme, that's going to be the maximum effect I'm going to have, and the answer is, yes. So it's just a concentration phenomenon, that's all it is. A really good example of a suicide inhibitor is penicillin. Penicillin works by inhibiting an enzyme that bacteria need to make their cell wall. It's a suicide inhibitor. It binds to that enzyme and bacteria got no chance. Bacteria make more enzyme, as long as you have excess penicillin which is what he's talking about here as long as we have excess penicillin, it binds to that, too. Bacteria can't make their cell wall, they can't divide, bacteria will die. So penicillin is the prime example of a suicide inhibitor that I think about when I use it to describe the material here. I thinkóthere it is. Well, unfortunately, in the old book, they had the structure of the penicillin where you could see the bonding form and they don't have it here, so you can't see it. Penicillin has an odd four-member ring that is very reactive, and it binds very much like the natural substrate to the enzyme, but it makes a covalent bond to the enzyme once it is bound. That's pretty much what I want to say about suicide inhibitors and the very last about what I want to say specifically about enzymes, in general. 
I will turn to mechanisms of catalysis if there are no other questions. Okay, let's do that. You saw up close and personal how hemoglobin worked, and we've talked, in general, about how enzymes work. Now we're going to spend a couple of lectures looking up close and personal at how a couple of enzymes work. This means that we're going to get a little mechanistic, and I'm usually the first to say I'm not overly fond of being mechanistic, but there are some mechanisms that we need to go through in order to understand at least general principles for how enzymes work. So that's what I'm going to do here. I'm going to spend most of the mechanistic considerations in today's lecture. I'm going to give you most of those today, so bear with me. I will try to give you a very clear overview of what, out of the mechanisms, I think are clearly important. I start by talking about this class of enzymes we've mentioned a few times already, and these are the proteases. Proteases are enzymes that break peptide bonds in other proteins. There's a class of proteases that has a very, very similar mechanism from one protease to the other, and this class of protease is called "serine protease." As its name would suggest, serine proteases have a very important serine residue, and, as we shall see, the serine proteases have this serine residue play a role in the catalytic process. It's actually in the active site and doing its thing. What we're talking about in any kind of proteolytic degradationóthat is, breaking peptide bonds is we're breaking bonds between amine groups and carboxyl groups making the peptide bond. These are hydrolysis reactions. When I say a hydrolysis reaction, I'm talking about a reaction in which water is added across the bond to break it. So water is being added across the bond to break it. You will notice that adding water across the bond recreates the carboxyl and it recreates the amine. Those were joined together and we don't have a free carboxyl when we made the peptide bond, but when we break it with a protease we go back to the carboxyl and the amine. That's an alpha carboxyl and an alpha amine. The enzyme that we're going to focus mostly on, at least upfront, is called "chymotrypsin." It's one of the enzymes I mentioned earlier, in a table. I said you didn't need to know the specificity of it because it actually can work on a variety of different enzymes. So, for example, chymotrypsin will cut this polypeptide right here. It will also cut it over here. So it will cut, in this case, adjacent to phenylalanine and adjacent to methionine. For our purposes, if we think about these as being mostly hydrophobic side chains, we'll be in good shape, and I'll show you a little bit more detail about that later. So this guy will cut in both of those places. Did you have a question? Student: Oh, I was thinking, it can cut on either side, right? Kevin Ahern: It cuts on the carboxyl side whenever it cuts. So you see the carboxyl side. There's the carboxyl group. It's not cutting here, but it's cutting on this side. We know that chymotrypsin has a serine that plays a very important role in the catalytic process, partly because if we treat chymotrypsin with DIPF its activity essentially goes away. So this is one piece of evidence that this serine residue is very, very important in the catalytic process of chymotrypsin. There's other things that we know, but that was one piece of evidence that that was the case. We can see that same reaction going on that we talked about before. 
There's that covalent intermediate, and chymotrypsin is knocked out of action as a result of this. I always like to stop at this point and say something about my profession. I'm a biochemist and I can say, with all honesty, that biochemists are lazy people. We're really lazy people and most scientists are lazy people. We like to do things the easy wayóI think it's human natureóif we can, compared to the hard way. If I try to study the breaking of a peptide bond by an enzyme, it's very difficult to do. There it is, but I don't have any easy way of determining that peptide bond got broken. Maybe I have to run a gel and see that I create fragments or something when I treat it with the enzyme, but that takes hours and so forth to do. I want a very simple assay to tell me in seconds, literally, is the enzyme working, is the enzyme doing its thing, because if I can do it in seconds, then I can do a lot more analyses very quickly. So what biochemists have come up with in the case of this enzyme is they've created an artificial substrate that chymotrypsin recognizes. Chymotrypsin will actually act, right here, on this bond. Even though that isn't exactly a peptide bond, it fits in the enzyme's active site well enough that the enzyme will actually break that bond. When it does, it creates these two molecules right here: this guy and this guy. You notice this guy is written in yellow, and the reason is because this guy makes a yellow color when it's freed from the other guy. So by measuring how much yellow color I get and how fast I get it, I can study the kinetics of chymotrypsin's action. Otherwise, I've got to spend days doing a single data point and it's really a pain to do, and I don't want to have to do that if I don't have to. Yes, Shannon? Student: So if you did a spectrophotometric assay of that, then it could tell you how much it was working? Kevin Ahern: Yes. So her question is, if I did a spectrophotometric assay which is basically just measuring the amount of yellow color producedóI can measure the reaction, and that's exactly what I do. So it's very simple for me in the lab to measure how much color I produce. I do that. So that's what happens and that's what people did when they started studying the mechanism of chymotrypsin's action. When they did that and they looked at a very short time scaleólook at this, milliseconds, thousandths of a second they discovered something very odd. The odd thing that they discovered was that this is product, its absorbance, so in this case it's basically the concentration of product that's being made, so we're thinking about velocity going up on the y-axis and we're thinking time going on the x-axis they see this two-phase curve. There's two things that happen here. They see, first of all, that the reaction occurs in what they call a "burst phase," meaning that a lot of product is produced very quickly. You see the steepness of this line. And then it sort of bends over and goes at more of what we call a "steady state," meaning, well, it's going up but we're just not seeing that rate kind of like we saw at the start. When scientists saw this for the first time, they recognized that there'd have to be two things happening in the catalytic action of this enzyme: a slow phase and a very fast phase (the fast phase, of course, coming first). So there's a fast phase to the catalysis and a slow phase. This tells us that the catalytic process has more than one step. To have a fast phase and a slow phase, I have to have at least two steps. 
We'll see there's actually seven or eight steps. But the fast phase and the slow phase... so that was a very important step in beginning to understand how it is that chymotrypsin does what it does. Now, we know today that chymotrypsin does something and many enzymes do this, as we shall seeóchymotrypsin does something very interesting. I've talked already about how enzymes are flexible and that flexibility gives rise to extreme increase in catalytic action. That's one way that we can explain that enzymes accomplish the magic that they accomplish. I'm getting ready to show you another. Enzymes, remember I said, when we talked about the Koshland induced fit model, I said that the substrate transiently changes the enzyme? And then it goes back? Well, this is a very important transient change. We see in the catalytic action of chymotrypsin that it becomes covalently attached to part of the substrate during its catalysis. It's transient. It gets released later, but that is an important step. If we look at what's happening here, here's that artificial substrate that we had. I'm sorry, it's actually over here, the artificial substrateóback up. The artificial substrate is right here. The enzyme acts on it and it spits out this yellow thing very quickly. The enzyme gets trapped in a covalent bond with the rest of the molecule, and only more slowly gets released. So now we see fast step, slow step. While the enzyme is waiting to get rid of this, it's not catalyzing anything, it has to come back over here. So this accounts for the slow part, the regeneration of the enzyme. Yes? Student: So this is a covalent bond... Kevin Ahern: Yes. Student: ...forming? So how is that different than a suicidal? Kevin Ahern: How is it different than a suicidal? It's transient. So the enzyme, as part of its action, gets released from that. I guess it was a little confusing when I said a covalent bond you don't get released from, right? But those are chemical changes. That is that there's not a catalytic action that's involved here. As we'll see, this is a part of a big catalytic action, so it's only a very transient thing, but it's a good question. Student: Okay. Kevin Ahern: Okay? So I guess this is the exception. This and many enzymes that use this as their catalytic action have this exception. That is, they will become transiently covalently linked, but they get released. DIPF, it's not part of a catalytic action, so it can't get released. Yes, sir? Student: The first portion looks like an SN1 nucleophilic attack, but what about the deescalation of the second part that's slower? Kevin Ahern: Let me go through the mechanism. I'll just talk about the mechanism. So this is just a general scheme of the overall reaction. Before I show you the mechanism, I've got to tell you a little bit about the active site of chymotrypsin. Chymotrypsin has three amino acids that play very important roles in the catalytic process... very important roles in the catalytic process. They're called the "catalytic triad." They are aspartic acid, histidine and serine. The numbers that you see by these are their position in the primary structure. This is amino acid number 102, this is amino acid number 57, and this is amino acid number 195. The fact that they're all three brought into close proximity to each other, that happens because of folding, right? Folding brings together things that aren't close in primary sequence, and they're brought into very close proximity to each other. 
We'll see that all of the serine proteases have, in fact, this catalytic triad. All the serine proteases have the same catalytic triad. They have the same basic geometry. In the catalytic action of this enzyme, there are several things to consider. Notice what's happening here. We have the enzyme in its starting state. Here's the OH of serine. Here's histidine. Here's aspartic acid. Over here, this histidine has pulled the proton off of this serine and created a nucleophile. This guy over here, this O that has lost its proton, still has electrons. It's negatively charged. It is a nucleophile. It seeks a nucleus. This is extraordinarily reactive and it's called the "alkoxide ion." Well, how did we get from here over to here? What's the difference? The answer is the binding of the substrate... the binding of the substrate. When the proper substrate binds in the enzyme, what's going to happen to the shape of the enzyme? It's going to change very, very slightly, right? These slight changes make big differences, because that slight change in position now repositions these guys so that this guy, the aspartic acid gets closer over here. This pushes electrons over to this half of the ring, as it were. This is slightly more negative, which is the mechanism for pulling this proton off. So this slight rearrangement has changed the geometry of these very slightly so that this proton on the hydroxyl can get pulled. Everybody with me? The reason that that happens and I get asked this question all the time, "Why does it happen?" it happens because the proper substrate has bound in the active site. Once that happens, that slight change happens, and bang! Yes, Connie? Student: What was the nucleophile's name? Kevin Ahern: The nucleophile is called the alkoxide ion. Let's now look at the overall mechanism and I'm going to step you through it. There. Bear with me. I'm going to go through it. Then I'll come back and I'm going to go through it again, okay? So let me go through it and tell you where we are with this stuff. In this case, the proper substrate has bound into the active site. This change has started to happen and we see that this guy here is going to grab this proton and make the alkoxide ion. We're looking right in the middle of that process as we look right here. Now, the proton here has been removed, and they haven't even shown you the intermediate of the alkoxide ion, which I think is unfortunate. But there's the alkoxide ion, I told you, is a nucleophile. That nucleophile seeks a nucleus and the nucleus it seeks is the carbon of the carbonyl group. It literally attacks that carbon. That creates this unstable intermediate, right here. It actually has a tetrahedral structure, which isn't really important for our purposes, but this unstable intermediate falls apart in the next step. It falls apart. Do I want to have an unstable intermediate in my enzyme? Well, I may want to be careful about that, because if I'm not careful, that unstable intermediate may react with my enzyme. So the enzyme protects itself with something called an oxyanion hole. It's basically a stabilizing structure that keeps the intermediate from reacting with itself and allows the intermediate to fall apart on its own. So the oxyanion hole is doing that. One of the questions I get is, "Where's all this happening?" This is happening in the active site. The active site is a chamber of the enzyme. Active site. Oxyanion hole is a little room off the active site. It's right there at the active site. 
At this point, we have very quickly broken this bond. So we've just broken the peptide bond. You see what's left right now, which is one half of that peptide is linked covalently to that hydroxyl. The other one is only hydrogen bonded and leaves. The hydrogen bond is not strong enough to hold it, so one half of it is gone. The other half is stuck to the enzyme. And the question about why this is the slow step: it's the slow step because we've got to go back and we've got to regenerate another nucleophile and we've got to wait for water to get in there. And that's what's happening here. So now we've got to get this guy released. Water has to diffuse into the active site. When it comes in, here's our nitrogen that likes pulling protons. Guess what it does? Grabs a proton off of water, makes a reactive hydroxide, which is also a nucleophile, attacks this guy, and look what happens. It breaks the bond. We go back to our starting material. This step takes a while because we have to get the water in there and everything positioned appropriately. Those are the general steps that are happening in that process. I'm going to go through them again, but before I do that, I'll take questions. Yes, sir? Student: The oxyanion hole, is it just a portion, like a clamshell top that comes down, and it's got a positive to stabilize the negative charge on the carbonyl carbon? Kevin Ahern: It is there, yeah, basically to stabilize it. That's correct. You're right. I'll show you an example of an oxyanion hole in just a little bit. So let me step through it one more time. "What do I need to know about this?" I'm not going to ask you to draw this whole structure. There's a relief. There's a big sigh of relief that goes through the room. Put this on the exam for Monday, right? Extra credit, draw the... Student: Oh, god. Kevin Ahern: No, no. I would get killed if I did that. So let's think about this. There are several key steps that I think are important. First of all, we have the catalytic triad. The catalytic triad consists of aspartic acid, histidine and serine. They work together to create a nucleophile. The way they do that is that the proper binding of the substrate causes the histidine, ultimately, to pull the proton off of the serine. That creates this O-minus. The O-minus is very reactive. It attacks the carbonyl carbon. We can think of that as sort of Step 1 or 1.5. That attack on the carbonyl carbon creates an unstable intermediate that is stabilized by the oxyanion hole. Because it is not now going to react with the enzyme, the instability it takes out on itself and it cuts its own leg off. Right? The leg goes flying away and we're left with the other half stuck there. We can think of that as Step 2 or 2.5. What's that? Student: The intermediate is stabilized by the hole? Kevin Ahern: The intermediate is stabilized in the hole, that's correct. So we've just finished the fast step of the process. Now we're in the slow step. The slow step, we've got to get water in here, we've got to get it oriented, and we've got to get it activated. That happens here, as you can see. Water comes in. Here's our proton puller. There's the nucleophile we create. And, again, the carbonyl carbon (you think you get picked on... think about the carbonyl carbon, it's getting picked on twice here), it's getting picked on here. That creates an unstable intermediate that, in this case, falls off of the enzyme. When it falls off, it's released and we're back where we started.
Mechanisms are things that you can spend thousands of words on, but the reality is, if you sit down and analyze what's going on with them, they will, hopefully, make much more sense than words can tell you. So look at the major features that I've talked about here: the catalytic triad, the binding, the alkoxide ion creation. What did the oxyanion hole do? What was the fast step? What was the slow step? What role does water play in that process? And basically, you've got the mechanism. That's basically what happens here. Yes, sir? Student: I hesitate to ask you another question, but... Student: ...at the same point, it looks like with the dependence on [unintelligible] like a nucleophilic attack with the different steps, this could really be screwed up by pH going plus or minus in either direction. So what is an approximate given range? Kevin Ahern: That's a very good question. For chymotrypsin, its range, as I recall, is fairly physiological, and I don't know how wide that range is, but that's a very good question. There are serine proteases, for example, trypsin, that can work in a fairly acidic environment, and chymotrypsin in a not-so-acidic environment. So there are ranges that this mechanism will work in, but I can't tell you the full range. I don't know that. So I want you to sit down and look at that. Write things out. I find it's really helpful to write things out. These are all just individual steps, up close and personal, so I'm not going to go through each one of those. I think that's a bit of overkill. Here's the oxyanion hole that's there, and you can see this is sort of a representative structure. There's some stabilization of this negative ion right here by these protons, but it's not an overly charged structure, no. There's something else that's important for us to understand about this enzyme. It's kind of cool and it's not difficult to understand. I said that these changes happen when the proper substrate binds to the enzyme. How does the enzyme know the proper substrate? Well, you have this idea in your head, and it's correct, that the enzymes have a specific shape and they will only accommodate certain shapes properly into there. In the case of this enzyme, it has something called the "S1 pocket." Like the oxyanion hole, the S1 pocket is right there at the active site. The S1 pocket is the one place that different serine proteases differ from each other. They have different shapes and will bind different molecules, as a result. You're looking at one of those pockets, right here, and this pocket is kind of nice and deep. What you see is the side chain of a phenylalanine, I think, that's in here. This will accommodate phenylalanine very nicely. It may not accommodate something in here that's very charged. This is a relatively non-polar environment. That is, there's no plus or minus charge in there. You saw some serines, but no plus or minus charges in there. Trypsin has a very different S1 pocket, as I will show you in a bit, but it uses the same catalytic mechanism. It's a serine protease, just like chymotrypsin is, but instead of cutting next to hydrophobic amino acids, trypsin will cut next to lysine and arginine, as I hope you got from my lecture before. How is that difference accomplished? Well, when we look at that S1 pocket that trypsin has and we compare it to the S1 pocket that chymotrypsin has, trypsin has a carboxyl group at the base of it.
The negative charge attracts the positive charge and they make a nice little bond, and that's what determines that it's got the right thing bound. Now, I show you this structure to show you the important structural similarities of these two enzymes. I mean, that might look like there's some difference. There's a difference out here. There's a little bit of a difference there. But, overall, those two structures are not very different from each other. That's not totally surprising because they're going to have similar mechanisms of action. Structure makes function. So if they have similar mechanisms, it's not surprising they would have very similar structures. The place where we would expect that they would differ would be in the S1 pocket, and that's indeed exactly what would happen. Somebody's going to ask me where is the S1 pocket on here and I don't know off the top of my head. I think it's down here, but I'm not sure of that. Student: This is chymotrypsin and trypsin overlapping? Kevin Ahern: That's right, chymotrypsin and trypsin overlapped. This shows the S1 pockets, chymotrypsin, trypsin, elastase, showing them very schematically, very schematically. Chymotrypsin, fairly non-polar, no pluses or minuses down here, so it accommodates phenylalanine nicely. It'll accommodate methionine nicely. It doesn't like pluses and minuses down there. Here's trypsin. It's got a carboxyl group at the bottom. It really likes if it's got a positively-charged side chain like lysine or arginine has. Elastase is sort of like chymotrypsin, except it's got some things jutting into it that keep big side chains from fitting in. So this guy here can't take a phenylalanine. It won't cut next to a phenylalanine because these things are blocking the access of the ring. This guy likes to cut next to alanines, a very tiny hydrophobic group that fits in there. So the S1 pockets really help us to understand how the specificity of an enzyme is set up. Questions about this? Okay. So that's cool. Kevin Ahern: Put it back up there? Yeah, sure. I'm going to tell you one more thing of medical implication, then I've got a song that I think you'll enjoy. Actually, no, I don't have that. Maybe we'll just do the song. We'll finish early one day, how about that? So I have a song. This song, I have never sung to a class before. So it's a brand new song about serine proteases and I hope that you will sing loud because, I will tell you what. I'll make a deal. If I hear you sing loudly today, we will have a second, not just one, but a second extra credit on the exam. Student: Whoo! Kevin Ahern: Are we set? Okay, let's go then. It's to the tune of "Rudolph the Red-Nosed Reindeer." [singing "The New Serine Protease Song"] Lyrics: All serine proteases work almost identically, using amino acid triads catalytically. First they bind peptide substrates, holding onto them so tight, changing their structure when they get them in the S1 site. Then there are electron shifts at the active site. Serine gives up its proton as the RE-ac-tion goes on. Next the alkoxide ion, being so electron rich, grabs peptide's carbonyl group, breaks its bond without a hitch. So one piece is bound to it. The other gets set free. Water has to act next to let the final fragment loose. Then it's back where it started, waiting for a peptide chain that it can bind itself to go and start all o'er again. Kevin Ahearn: Okay. Have fun. [applause] Captioning provided by Disability Access Services at Oregon State University [END]
Medical_Lectures
The_Medical_H_and_P_Comparative_Examples.txt
Hello everyone, this is a by-request supplemental video to the previous two-part series on the medical H&P. The learning objective here is simply to demonstrate the subtleties of presentation in order to elevate a merely adequate medical H&P to a great one. I'll first be demonstrating a presentation that is at the level of what I would expect from a good preclinical medical student, or possibly from a clinical student on his or her first couple of days during their rotation in internal medicine. I would encourage viewers to make note of the various suboptimal features of the presentation; some of these suboptimal features will be obvious and some will not be. I'll then play the same presentation again, this time with annotations to the side indicating what should be changed in order to make the presentation much better. Finally, I'll play a revised version of the presentation incorporating the changes, so viewers can get a sense of how an adequate presentation sounds different from a great one. This revised presentation is what I would expect from an above-average student near the end of his or her medicine rotation. Because the point of the video is a comparison of presentation skills between levels of training, and not a comparison of clinical reasoning skills, the presentations will only extend through the impression, or through what I refer to as the linking statement, for the chief complaint. Ms. Brown is a 58-year-old woman presenting with shortness of breath for three weeks. So, Ms. Brown has a history of hypertension, type 2 diabetes and hyperlipidemia, who was brought to the ER at two o'clock in the morning, this, this morning, by ambulance because of sudden onset of severe shortness of breath. She first noticed she was having a problem three weeks ago, when she noted increasing shortness of breath when completing household chores and walking up steps, and began sleeping on two to three pillows at night. This continued until 2:00 a.m.
on the day of admission, when she woke up after it acutely worsened. Her husband noted that she was breathing rapidly, was only comfortable sitting upright or standing, and was slightly blue in color, so naturally he called 9-1-1. She received oxygen in the ambulance along with some nitroglycerin, aspirin and Lasix in the ER when she got here, and now she's feeling a little bit better. Ms. Brown denies a history of asthma. She denies pleuritic chest pain. She does not have, has not had, any fever or weight loss. She has noticed a non-productive cough for several weeks, though. She has a prior smoking history but stopped in 2006. Her father had an MI aged 54. She denies sick contacts or unusual travel. She was diagnosed with type 2 diabetes ten years ago and is currently on metformin and glyburide. Her past, past medical history is notable for hypertension diagnosed in 2004, diabetes, which I already mentioned, diagnosed in 2003, and hyperlipidemia, 2004. Her surgical history includes an appendectomy in 1977 and a cholecystectomy in 1998. Her OB/GYN history is notable for her being G3P3, and she is status post a hysterectomy in 2008. Her psych history is notable for multiple episodes of depression, but she currently denies any symptoms of that. Medications include HCTZ, lisinopril, metformin, glyburide, Zocor and aspirin. She's allergic to penicillin and iodine. Her social history, it's kind of, she, she's notable, was, she quit smoking six years ago. She's drinking one to two alcoholic drinks per day, sometimes a glass of wine, sometimes a mixed drink. She's married, mother of two. She completed her associate's degree at De Anza College and works full-time as an administrative assistant for Cisco Systems. She's got two dogs at home but no other strange animals or animal exposures. She enjoys movies and knitting. Her dietary habits are pretty normal; she cooks for herself, doesn't eat out all that much. She's sexually active with one partner, her husband. No travel except a recent trip to Disneyland with her children. On her review of systems, she denies fever, chills, weight loss or change in appetite. In the respiratory system, as already mentioned, she reports the shortness of breath with exertion and lying down. Cardiovascularly, she denies chest pain, palpitations, lightheadedness. In the abdomen she has no pain, nausea, vomiting or changes in her bowel habits. No dysuria or hematuria. No joint pains or swelling, but she has had some swelling in her ankles during the past three weeks. No skin rashes or bruises, and no neuro symptoms. In general, on exam she is overweight and in modest respiratory discomfort. Her pulse is 100 and regular, respirations are 24, blood pressure 130 over 105, temperature 37.1. Now, HEENT exam shows her pupils are equally round and reactive to light, sclera are anicteric, she had normal extraocular muscles, oropharynx is clear with good dentition. Funduscopic exam was unremarkable. Pulmonary, she, her lungs have occasional crackles but no rhonchi or wheezes. On heart exam she's of regular rate and rhythm, with a displaced PMI and normal S1, S2. She had a loud murmur that was systolic; I'm not sure how to grade it, but I suppose since I could hear it, it's probably at least a 2 or 3 out of 6. Abdomen is protuberant but soft, without tenderness, no organomegaly, and, sorry, normal bowel sounds. Her skin shows no rashes, ecchymoses or other lesions. Extremities with one-plus pitting edema, with no cyanosis or clubbing. I did a really thorough neuro exam, wanted to kind of practice that today. So she's, she's oriented times three, cranial nerves 2
through 12 are intact. Her motor strength is 5 out of 5 in all extremities, in all major muscle groups. Sensation to light touch, and proprioception and vibration, all that's intact in all extremities. Her upper extremity reflexes are 1-plus and lower extremity reflexes are 2-plus. I didn't get her up to walk around because she was looking kind of uncomfortable and stuff from the breathing issue, and a lot of wires on her and stuff. So, moving on to her labs: her white count was 8.7, H and H 12 and 37, with an MCV of 85. Her platelets are 202. Her chem-7 shows a sodium of 141, potassium 3.8, chloride 102, bicarb 28, BUN 24, creatinine 1.1 and glucose 186. Her AST, ALT are in the 40s, her total bilirubin was 1.2, alk phos was 110. Troponin is still pending, I'm not sure why the labs are taking a while on that one, I know at one point [unintelligible]. The UA has no white cells, no red cells, no leukocyte esterase, trace, 2-plus protein, no ketones, 2-plus glucose and no bacteria. Um, her chest x-ray shows cardiomegaly, pulmonary vascular redistribution, she has some fluid in the right costophrenic sulcus and possibly some Kerley B lines, but, you know, I'm not really sure. Her EKG shows sinus tach, occasional PVCs, left axis deviation and LVH. So, in summary, Ms. Brown is a 58-year-old woman who has a history of hypertension, diabetes and hyperlipidemia, who is now presenting with shortness of breath for three weeks and some lower extremity edema.
For the chief complaint: Ms. Brown is a 58-year-old woman with diabetes, hypertension and hyperlipidemia who presents with shortness of breath for three weeks. Ms. Brown reports being in her usual state of average health until three weeks ago, at which time she developed a gradual onset of shortness of breath while walking up the stairs and completing some routine housework. Over the next three weeks her shortness of breath gradually worsened, such that its onset required less and less exertion. During this time she also began experiencing trouble breathing while lying down at night, prompting her to begin sleeping on two to three pillows instead of her usual one. Finally, on the day of admission, she abruptly woke at 2:00 a.m.
with severe breathing difficulties and was noted by her husband to be slightly blue, and he then immediately called 911. She currently describes her shortness of breath as constant and severe; she describes it, kind of in her own words, as, quote, "I can't get enough oxygen in fast enough," and she's most concerned that she may be having a heart attack, because that's what her father had at approximately the same age as herself. Her other cardiovascular risk factors include a prior smoking history. Relevant review of systems is notable for ankle swelling and a non-productive cough for the past two to three weeks, but she otherwise denies chest pain, palpitations, lightheadedness, fever, changes in weight or hemoptysis. She has had no sick contacts or unusual travel. Her past medical history is notable for the aforementioned hypertension, diabetes and hyperlipidemia, all of which were diagnosed 10 years ago and all of which she's getting treatment for currently. She has a surgical history of an appendectomy in 1977, a cholecystectomy in 1998 and a hysterectomy in 2008 for dysfunctional uterine bleeding. Psychiatric history is notable for several episodes of depression, but no reported history of psychiatric admissions or suicidal ideation. Her medications at home include HCTZ 25 milligrams daily, lisinopril 20 milligrams BID, metformin 850 TID, glyburide 10 BID, simvastatin 40 daily and aspirin 81 milligrams daily. Her compliance is reported to be very good. She has an allergy to penicillin, which causes a rash, and radiocontrast, which causes urticaria. Social history is notable for a 30-pack-year smoking history; she quit in 2006. She consumes 1 to 2 alcoholic drinks per day. She is currently married and works as an administrative assistant. Her family history is notable for her father who, as I mentioned, had an MI when he was 54, and he died at 69 from a stroke. Aside from what was reported in the HPI, a complete review of systems was otherwise unremarkable. On physical exam she is overweight and appears in modest respiratory discomfort. Her vitals at the time of my exam showed a pulse of 100; she had a respiratory rate of 24, blood pressure 130 over 105 and a temperature of 37.1. Focusing on just the pertinent positives and negatives of her exam: her pulmonary exam revealed bibasilar crackles, left greater than right, with no wheezing. Her cardiac exam was notable for a regular rhythm, a PMI in the fifth intercostal space in the anterior axillary line, a normal S1 and S2, with a three out of six systolic crescendo-decrescendo murmur best heard at the second right intercostal space with radiation to the carotids. Her JVP was 8 centimeters. Abdominal exam was normal. Extremities with mild bilateral and symmetric pitting edema to the ankles, without calf tenderness or erythema. A thorough neuro exam was notable only for 1-plus biceps and triceps reflexes bilaterally. Regarding her labs, her chemistry panel was notable for a BUN of 24 and creatinine of 1.1. CBC had a white count of nine and a hemoglobin of 12. Her troponin is still pending at this time. A UA showed only two-plus protein and two-plus glucose. Her chest x-ray revealed cardiomegaly, cephalization of vascular markings, subtle Kerley B lines and a small right pleural effusion. Her EKG showed sinus tach and occasional PVCs; she had a little bit of left axis deviation, and LVH was suggested by conventional voltage criteria in the precordial leads, but she had no ST or T-wave changes of any kind on any of the leads. In the ER, prior to my examination, she had
already received oxygen therapy via face mask, sublingual nitroglycerin, 325 of aspirin and 40 milligrams of IV furosemide, because the ER was concerned about pulmonary edema and possible acute coronary syndrome. Afterwards, after she received this treatment, she reported feeling slightly improved. So, in summary, Ms. Brown is a 58-year-old woman with multiple cardiovascular risk factors with a subacute presentation of progressive dyspnea, mild symmetric lower extremity edema and a non-productive cough. She has objective evidence of modest volume overload and LVH, along with a murmur consistent with aortic stenosis. So those are the two versions of an H&P. I hope you found the comparison helpful. If I were to summarize the comparison in two points, I would first say that although both presentations conveyed most of the same facts about the patient, the second version was able to say a little more while simultaneously using fewer words. And second, the improved impression in the second version will greatly aid the viewer when trying to understand the subsequent differential diagnosis and plan of care for the presenting problem.
Medical_Lectures
09_Biochemistry_Hemoglobin_IIEnzymes_I_Lecture_for_Kevin_Aherns_BB_450550.txt
Captioning provided by Disability Access Services at Oregon State University. Kevin Ahern: Okay, folks, let's get started! I can't hear, there I go! You guys are the quickest-to-quiet-down class I've ever had, and that's good. Just think how much more biochemistry we can squeeze in now when you quiet down quickly. I hope everybody's doing well and ready for a big weekend. We will have an exam in here a week from Monday, not today. For those of you who thought we were having it today, I know you'll be disappointed. I will announce later where that stopping point will be. I sort of decide on, it depends on where I get in my lectures, and it varies a little bit from term to term. Today I'm going to finish up hemoglobin and start talking about enzymes. I hope I gave you the impression or the understanding that hemoglobin is a remarkable protein. It has a remarkable number of functions built into it, and those functions are directly related to the structure of the protein. We saw structural things about proteins, in general, when I talked about primary, secondary, tertiary, et cetera. But here's a protein where you get a real live, up close and personal look at how those structures that we see in proteins give proteins specific functions. We'll see more of that when we talk about enzymes. I want to emphasize, speaking of enzymes, that hemoglobin is not an enzyme. People commonly think that, but it's not. Enzymes catalyze reactions and hemoglobin isn't catalyzing anything. So it's an oxygen-carrying protein. That's really its only function. We'll see that the way that it binds oxygen is not unlike the way that enzymes bind their substrates, but hemoglobin is not an enzyme. Last time I finished by talking about fetal hemoglobin and fetal hemoglobin has the very interesting property of having the slightly different subunits. It has the two gammas instead of the two betas, and that sort of makes a structure that doesn't have that doughnut hole that fits the 2,3BPG in the same way. As a result of that, as I had noted, the fetal hemoglobin stays in the R state almost all the time, and that's why the fetal hemoglobin has greater affinity for oxygen than adult hemoglobin. There are yet other things that we need to understand about hemoglobin. I always like to think about hemoglobin as having, obviously, structures that give it the functions that it need, that those structures correlate very well with the needs of the body. They correlate very, very well with the needs of the body. You saw 2,3BPG was being produced by cells that were actively metabolizing, and it was providing a signal that, "Hey, here's the place where I need the oxygen. Let go of the oxygen." And they cause hemoglobin to let go of the oxygen. There are other signals that hemoglobin can respond to with respect to actively respiring cells. One of these is pH. When we talk later about, it'll actually be next term, but when we talk about actively respiring cells next term, one of the things we'll discover is that actively respiring cells have a higher concentration of protons around them than non-actively respiring cells, which means that the pH around an actively respiring cell is lower than that of a non-actively respiring cell. Again, when I think of an actively respiring cell, you can almost always think of muscle. Muscles really change a lot. When muscles are contracting, they're really needing energy a lot, and things that need energy need oxygen. They go hand in hand. 
A scientist named Bohr, and no, that's not B-O-R-E, that's B-O-H-R, made a very interesting observation about hemoglobin many, many years ago. The observation was, if he took hemoglobin and he did this oxygen-binding curve that we did before, where we see the percent of the hemoglobin saturated with oxygen and the concentration of the oxygen on the x-axis, what he saw was, he did the plot and he saw that nice sigmoidal plot that we did before, and if he dropped the pH, the curve actually shifted downwards. Well, that shift downwards corresponds to less affinity, meaning that the hemoglobin is releasing oxygen, and it's releasing oxygen as a function of the pH environment in which it finds itself. Now, again, this is a functionality built into hemoglobin that is directly responding to the body's needs: pH drops around actively respiring cells, hemoglobin will tend to give up more oxygen around actively respiring cells. It's a very cool phenomenon. Now, the chemical basis of the effect is not too surprising. When we look inside of the hemoglobin molecule, we see that there are various amino acid side chains that can be charged. In this case, we see a lysine up here that is attracted to a portion of a histidine, and here's the functional part of the histidine where it can gain or lose a proton. If it gains a proton, it is positively charged and it will attract a negatively charged side chain, in this case, of an aspartic acid. If that proton is off of there, then it will not attract that, and we could imagine there would be some slight shape changes that would happen, whether it's being attracted or not being attracted. And that, actually, that very subtle difference there, is the molecular basis for the change in affinity of hemoglobin for oxygen. So a slight shape change happening according to whether or not we put a proton onto a histidine, and that histidine changes its interaction with another side chain of aspartic acid, and, as a result of that, causes the protein to actually change its configuration and its affinity for oxygen. Another thing that we see around actively respiring cells, and it's actually one of the causes of the drop in pH, is carbon dioxide. Carbon dioxide is the final oxidative product of metabolism. When we go and we burn sugar, or we go and we burn fat (I'm on a diet right now, so that burning fat is really on my mind; if you see me exhaling a lot of carbon dioxide, that's good, I wish I could do that), carbon dioxide is an end product of those kinds of processes. So, not surprisingly, if we examine the environment around actively respiring cells, we discover there's more carbon dioxide there. It turns out that carbon dioxide also affects hemoglobin. I thought I had a graph there. That's not the thing. If we look at this now, what we see is, now we're going back to these plots that we did before, the first plot was the, this is not getting my voice. The first plot was the pH 7.4, no CO2. Then if we take no CO2 and that same hemoglobin and we drop the pH, this is what we saw before, the affinity for oxygen drops. But now look at the bottom line. If we have a pH 7.2 and we add carbon dioxide, the affinity drops even more. That means, therefore, that hemoglobin is releasing oxygen in response to both protons and to carbon dioxide. Again, these are things that are both present in higher concentrations around cells that are actively respiring. Questions about that? Question over here?
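A rough sketch of that downward shift (again not from the lecture: this assumes a simple Hill model with a pH-dependent P50, and the particular numbers are only typical textbook-style values chosen to show the direction of the effect) makes the point numerically; at lung-like oxygen pressures both curves are nearly saturated, but at tissue-like pressures the lower-pH curve sits lower, meaning more oxygen has been let go.

import numpy as np

def hill_saturation(pO2, p50, n=2.8):
    # Fractional O2 saturation from a simple Hill model (n of about 2.8 for hemoglobin).
    return pO2**n / (pO2**n + p50**n)

pO2 = np.array([20.0, 40.0, 100.0])        # torr: working tissue, resting tissue, lungs
# Assumed P50 values: a higher P50 at lower pH means lower affinity (the Bohr shift).
for pH, p50 in [(7.4, 26.0), (7.2, 31.0)]:
    sats = hill_saturation(pO2, p50)
    print("pH %.1f -> saturation at 20/40/100 torr: %s"
          % (pH, ", ".join("%.2f" % s for s in sats)))
# Both curves are nearly full at 100 torr, but at 20 to 40 torr the low-pH curve
# sits lower, i.e. that hemoglobin has handed more of its oxygen to the tissue.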
Student: Could you go back to the figure real quick with the, where you had the histidine structure? Kevin Ahern: Okay. Student: I had a question about that. Kevin Ahern: Okay. Here, yeah. Student: When you say "add a proton," Kevin Ahern: So, as the pH changes, protons will come on or come off. So that's the variable that's there. That's always true, yeah. Student: So it's the pH drop, then? Kevin Ahern: A pH drop, so a pH drop would be more likely to put a proton on there. Exactly. Okay? Yes, sir? Student: When we're talking about CO2 contributing in the same way that a more acidic environment around the respiring cells contributes, is that actually in the form of CO2 or as carbonic acid? Kevin Ahern: Good question. His question is, "How does CO2 manifest its effect?" I'm going to show you that in just a second. Is that your question? Student: My question was similar. Kevin Ahern: Okay, so CO2 exerts its effect. How does CO2 exert its effect? Well, one of the ways it exerts its effect is by forming a covalent bond with amine side chains. CO2 can be carried in the blood in two ways. One is it can actually be dissolved in the blood, and we'll talk about that later. The other way it can be carried is by this covalent bond to hemoglobin. We could imagine, looking at this structure here, that we have a carbon dioxide. We've got an amine, in this case that is shown with no charge on it. We put a CO2 on it and we develop something that has a negative charge. Again, we're introducing a charge where there wasn't one before. We could expect that we would, in fact, see some changes that would happen in structure of hemoglobin. Again, that's the basis for the change in affinity. Very, very subtle changes that are happening to the protein, but they're having big effects on its affinity for oxygen. In this case, it actually, you'll notice it releases a proton, and that actually enhances the effect that we saw before, because, more protons, of course, now we're going to affect hemoglobin in its own way, as well. So these really work together to make hemoglobin give up oxygen at the places where it's needed. So that's what I want to say about the Bohr effect. The last thing I want to talk about are some genetic considerations. It's a disease that we hear a lot about, and there are some interesting, or at least one very interesting aspect of it, and that disease is known as sickle cell anemia. Sickle cell anemia is a disease. It's a genetic disease where there are mutations in one or more of the subunits of hemoglobin, and there are different forms of sickle cell anemia that correspond to, of course, different mutations. Some may affect alpha, some may affect beta, et cetera. So sickle cell anemia is not just one genetic mutation. What happens when an individual has sickle cell anemia is that the hemoglobin, and by the way, the changes that can happen, the mutations that happen, can change a single amino acid, and if that single amino acid is in the appropriate place, it will cause the disease. So why do the cells get sickle shaped? The reason that the cells get sickle shaped is, under low concentrations of oxygen, the hemoglobin inside the cells will actually form a polymer. Multiple subunits will start joining, joining, joining, joining together. Now, normally hemoglobin doesn't do that, and, in fact, regular hemoglobin, that is, unmutated hemoglobin, does not form polymers like that. 
But sickle cell, people who have sickle cell anemia will have their hemoglobin do that, and the polymers actually cause the shape of a blood cell to change. Normal blood cells look like this. Sickled cells look like this. Now, I want to emphasize, if you have sickle cell anemia, all of your blood cells don't look like this. They only look like this when the cell encounters low concentrations of oxygen. So if you are, for example, exercising heavily and you have sickle cell anemia, you will find, people who do that find that their muscles will get excruciatingly sore. In some cases, it can be life threatening, because what's happening is, the regular blood cells are starting to sickle. They get in the muscle, in the tissues, where they're dumping all their oxygen. They get in the muscles and when these are dumping their oxygen they're actually in little, tiny capillaries. Most of the exchange of oxygen occurs in capillaries of the body. Normally the rounded blood cells go through those capillaries very smoothly like Teflon and don't have any problem. But when they form sickle cells, they don't, and they get stuck there. Not only do they get stuck there, but they stop the blood flow in that capillary, which is one of the reasons that people get this very intense pain, because their muscle cells are starving for oxygen and they can't get any because it's all blocked there. You might wonder why it's called "anemia." The reason it's called anemia is because our body has a way of recognizing damaged blood cells, and when it sees misshapen blood cells it takes them out of action. So even though we might be able to get this to revert to some extent, and by the way, I can't tell you that that happens, but if we might get this to revert to this form, before that happens our body can take this out of action and say, "That's a damaged blood cell. "I don't want to have that floating around. "It's going to cause a problem." So the more your blood sickles, the more blood cells you lose, and of course that's exactly what anemia is, a lowered concentration of blood cells in your body. People have studied sickle cell anemia for a long time. Sickle cell anemia actually has a very interesting historical component. It was the first disease that was proposed as a genetic disease, the first disease proposed as a genetic disease. Does anybody know who made that proposal? Student: Linus Pauling. Kevin Ahern: Linus Pauling made that proposal, a very cool thing. So there's an Oregon State connection there, again. One of the questions people ask when they see a genetic disease that persists in the population for a long time, and there's been evidence that this has been around for a long time, is that we think that there must be a reason why it persists. Why doesn't it just die out? Why don't people who have sickle cell anemia eventually have trouble reproducing or don't reproduce as well, and it would die out of the population? But sickle cell anemia persists and it has persisted over human evolution for a long time. There's a very interesting observation that people made about the distribution of the genes in sickle cell anemia. If you overlay the incidence of sickle cell type with the prevalence of malaria in the world, you'll see a disproportionate amount of sickle cell genes present in locations where there's very high incidence of malaria. 
People have done epidemiological studies and found that, in fact, there is an advantage for survival for people in malarial infected areas to have the heterozygous form of sickle cell anemia, that is one normal and one mutant. They have an increased incidence of survival compared to people, for example, who have both wild type or both mutant. So there is, apparently, a genetic basis for why sickle cell anemia persists in the population. That's the next-to-last thing I want to leave you with. The last thing I want to leave you with is, one of the things that we're interested in with sickle cell anemia is what kind of treatments can we offer. It's a disease that is being investigated very intently. I've had actually several of my own students who've gone off to summer internships working on sickle cell anemia. One of the interesting treatments that has been experimented with and I think is still being experimented with, is actually trying to get around the mutant component of hemoglobin, whether it's the alpha or the beta. What they do, this thing keeps popping out on me, what they do is they treat patients with a drug that will induce the fetal hemoglobin to start being expressed. Now, the fetal hemoglobin, of course, didn't have that mutation. The fetal hemoglobin is perfectly good, and so by doing this, they flood the blood or the blood cells with a normal hemoglobin and in some cases it appears to help alleviate the disease. That fetal hemoglobin, of course, normally stops being made around the time we're one or two years old. But with proper drug treatment you can actually induce it to be made again and for some people that provides relief. So it's, again, another connection that we have to one of the hemoglobin genes. That said, I will take any questions that you might have. Yes, Shannon? Student: So I'm not sure if I understand the correlation between someone having one allele for sickle cell and their survival in malaria areas. Is that implying that someone with sickle cell survives malaria better, or... Kevin Ahern: So her question is, does the person who's heterozygous for sickle cell anemia, are they more resistant to malaria? And the answer is exactly that. They are. Yes. Yes, sir? Student: Does the increased affinity of fetal hemoglobin affect the person? Kevin Ahern: That's a really good exam question. Did you hear what he said? He said, "Does the presence of that fetal hemoglobin "change anything for that person?" What do you guys think? Describe to me what you think might happen. Student: They will have a lower net oxygen capacity, as far as the ability to dump it off. Kevin Ahern: Yeah. Student: But their overall capacity, because the functional gamma units will probably be increased as compared to anemia. Kevin Ahern: Yeah. So they'll have less capacity, basically, is what'll happen, because, if we think about it, the more of that fetal we have, the more hemoglobin is going to be in the R state. And the R state's really good for binding oxygen, but it's not so good for giving it up. But it's probably better than not being able to get any at all. But you're exactly right. Maybe we should sing a song to summarize all of this. Okay? Let's do that. Lyrics: Oh, isn't it great what proteins can do, especially ones that bind to O2, hemoglobin's moving around. Inside of the lungs, it picks up the bait, and changes itself from T to R state. Hemoglobin's moving around. The proto-porphyrin system, its iron makes such a scene, arising when an O2 binds, pulling up on histidine. 
The binding occurs cooperatively, thanks to changes qua-ter-nar-y. Hemoglobin's moving around. It exits the lungs, engorged with O2, in search of a working body tissue. Hemoglobin's moving around. The proton concentration is high and has a role, between the alpha betas it finds imidazole. Kevin Ahern: That's histidine. Lyrics: To empty their loads, the globins decree, "We need to bind 2,3BPG". Hemoglobin's moving around. The stage is thus set for grabbing a few cellular dumps of CO2 Hemoglobin's moving around. And then inside the lungs it discovers oxygen, and dumps the CO2 off to start all o'er again. So see how this works, you better expect to have to describe the Bohr effect. Hemoglobin's moving around. [applause] Kevin Ahern: Thank you. Okay, So you better expect to have to describe the Bohr effect. That's a hint there, right? We turn our attention from hemoglobin now to enzymes. Enzymes, of course, are proteins that catalyze reactions. Hemoglobin, as I said, didn't catalyze any reaction, but enzymes do. We're going to spend a fair amount of time thinking about how enzymes act as catalysts and what they do. Enzymes are remarkable, and they are remarkable especially when we compare them to chemical or other chemical catalysts. If I have a chemical catalyst that I use to catalyze a reaction, it might not be unreasonable for me to expect a hundred or a thousandfold enhancement using a chemical catalyst. When I use an enzyme, an enzyme can provide up to 10 to the 17th enhancement. I believe that's 170 quadrillion. Now we start to see that enzymes are catalysts, but they're really incredible catalysts, absolutely incredible catalysts. How in the world can something work like that? Well, let me just give you some, maybe some things that you can think about or feel the magnitude of this. If we look at this top enzyme we'll actually talk about this next term, the enzyme has a half-life, meaning if... not the enzyme, the reaction that this enzyme catalyzes has a half-life of 78 million years, meaning that if we took a mixture of it and we let it sit in a test tube, it would take 78 million years for half of it to react. If I treat this with an enzyme, I make this enzyme catalyze the conversion of 39 molecules per second per molecule of enzyme. Now, that's pretty incredible. The rate enhancement, if you do the math, corresponds to this 140 quadrillion that I told you about here. Now, that's mind boggling, okay? That may sound very rapid, and that is, in fact, very rapid. But there are other enzymes that are even more incredible in terms of what they do. We're going to spend a fair amount of time talking about this enzyme, right here, carbonic anhydrase. Carbonic anhydrase does something that, to me, I can't get my head around. It only does things by about 7.7 millionfold greater. That's nowhere near the 140 quadrillion. But when I look at how many molecules of product each molecule of enzyme makes per second, it's mind boggling. Each molecule, I take one enzyme one, a single enzyme, and I put it with its substrate the substrate's what an enzyme acts on, and I discover it makes one million molecules of product per second! Now, I don't know about you, but I can't think of something happening that rapidly. One million molecules of product per second every enzyme is making. Imagine that I was running a factory, and a factory has an assembly line, and the assembly line is putting products out the end. 
I don't care how fast or how many people you have working in that factory, you are not going to make a million products per second. This tells us that the nanoscale world, "nanoscale" being the level of molecules, the world that exists at the level of molecules is very different than the world we know out here. The nano world is very different than the macro world. There's no way I can do a million things a second. No matter how hard I try, I'm not going to do that. Yes, sir? Student: As long as it's not high enough to denature them, would a higher temperature increase these reaction rates, as in inorganic chemistry? Kevin Ahern: His question is, will temperature affect enzymatic rate? And the answer is, yes, it will, to a point. You could imagine that if we raise the temperature we may favor the reaction, and then we'll actually see it fall off rapidly. Any ideas why it falls off rapidly? We denature the enzyme. Yeah. Yes, sir? Student: Is this one million per second only when the substrate is in excess? Kevin Ahern: So, yes, and it's a good question. His question is, does the substrate have to be in excess? And the answer is, yes, it does. So you're actually getting a little ahead of me, but I will address that directly in what I'm going to say in just a little bit, but it's a very good question. So, pretty cool stuff. This sort of sets the stage for enzymes. Enzymes are proteins, and what you've seen in a protein so far is its structure is critical. Proteins have very, very specific structures, and, consequently, they have at least fairly specific molecules that they will bind to and catalyze reactions on. Notice I said "fairly specific." Some enzymes are more specific than others. Some are really rigid, they only want one thing and that's it. But enzymes have a specificity. They don't catalyze a reaction on everything because they can't bind to everything, and the reactions that would be catalyzed would differ from one molecule to another. This shows a reaction that we're going to spend a fair amount of time talking on, and it's a reaction that involves the cleavage of a peptide bond. To cleave a peptide bond, you have to add water across it, and that adding water causes the bond to split. It's called proteolysis, P-R-O-T-E-O-L-Y-S-I-S. Proteolysis breaks peptide bonds, and enzymes that catalyze proteolysis are called proteases, P-R-O-T-E-A-S-E-S, proteases. We're going to spend some time talking about those, but before I talk too much about those, I'll come back to that later, I want to say a few words about energy. I'm going to talk about delta G. You guys have heard the change in Gibbs free energy in your basic chemistry classes. I'm going to introduce it here only in very general terms, and I will tell you right now that you're not, underline "not," going to do delta G calculations on this exam. Student: Yay! Kevin Ahern: You will, later. [laughing] Students: Ohhh. Kevin Ahern: But I'm not, I figure you've got enough for this exam. They're actually not very relevant for us right now, except for to understand the beginnings of enzymes. But later we'll see that they actually will be important. But on this exam you will not have to do delta G calculations. But I'm going to say a few words about delta G because it's relevant for understanding how enzymes do what they do. We know that there's a standard Gibbs free energy. The change in the standard Gibbs free energy is known as "delta G." Delta G tells us the direction of a reaction. 
If the delta G is negative, the reaction proceeds forward as it's written. If the delta G is positive, the reaction proceeds backward as it's written. If delta G is equal to zero, the reaction is at equilibrium. Equilibrium does not mean equal concentrations of products and reactants. Get that in your head, okay? That's the number one mistake that students make. They didn't learn what equilibrium meant, back when. It does not mean equal concentrations of products and reactants. It means that they're unchanging over time. You might have ten times as much of one as the other. But it doesn't mean equal concentrations. There's a related quantity called "delta G zero prime." Delta G zero prime is called the "standard Gibbs free energy." The prime is on there for biologists, like biochemists, because, for a regular delta G zero that one would calculate, that would correspond to everything being present at a concentration of 1 molar. Well, if we have a reaction that involves protons, we don't want that being at 1 molar because it'll kill our enzyme. Right? It would have a pH of zero. That would not be good. So the delta G, the prime on there indicates that everything is at 1 molar, except for the protons. So we've got a pH 7, basically, that we're doing this at. So what is the standard Gibbs free energy? The standard Gibbs free energy is the standard Gibbs free energy change under standard conditions. That's all it is. Under standard conditions, that's what it is. So delta G tells us the Gibbs free energy change under any conditions. The delta G zero prime is what corresponds to standard conditions. There's a calculation that we're not going to do. We will do it later. But just to remind you from your delta G equation, delta G equals delta G zero prime plus the gas constant R, times the temperature in Kelvin, times the natural log of the concentration of products over the concentration of reactants. That's a simplification of the actual equation, but for our purposes that's fine. At equilibrium, delta G equals zero, so the delta G zero prime and the RT ln term must be the opposite sign of each other, so they cancel out. That's not really essential for us to understand enzymes. But we do understand and recognize that delta G is a very important parameter to understand for directions of reactions. It tells us some very important things. So I want you to keep that in mind and I'll show you a couple of things here. Enzymes, no surprise from that first table I showed you, speed reactions and they can speed reactions immensely. They're very, very important for speeding reactions. In this case, we're calculating, we're determining the concentration of product versus time on the x-axis. We see there's more product being made with time. When we measure velocities of reactions, we measure the concentration of product per time. That's what velocity is. When we measure the velocity of a car, we measure distance per time. When we measure a reaction rate, we calculate concentration of product per time. Okay, everybody got that? So this should be concentration of products going up, and that's concentration of product over time. This schematic introduces a sort of unusual delta G, and we can think of it as an activation energy. It doesn't really relate to the equation that we had before, but we can think of the importance of this activation energy. Activation energy is an energy that has to be put into a reaction before the reaction will proceed.
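To make that delta G equation concrete, here is a small worked sketch (the delta G zero prime value and the concentrations are made-up illustrative numbers, not tied to any particular reaction from the lecture):

import math

R = 8.314e-3      # gas constant in kJ/(mol*K)
T = 298.0         # temperature in K

def delta_G(dG0_prime, products, reactants):
    # delta G = delta G zero prime + RT ln([products]/[reactants])  (the simplified form above)
    return dG0_prime + R * T * math.log(products / reactants)

# Assumed example: a reaction with delta G zero prime of +5 kJ/mol.
print(delta_G(5.0, products=1.0, reactants=1.0))    # +5.0: unfavorable under standard-ish conditions
print(delta_G(5.0, products=0.01, reactants=1.0))   # about -6.4: favorable once product is kept low
# And at equilibrium delta G is zero, so delta G zero prime = -RT ln(Keq).

The point of the two printed values is simply that the same reaction can run forward or backward depending on the concentrations, which is exactly what the sign of delta G is telling you.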
Now I'll show you a couple of things here. Enzymes, no surprise from that first table I showed you, speed reactions, and they can speed reactions immensely. They're very, very important for speeding reactions. In this case, we're plotting the concentration of product versus time, with time on the x-axis, and we see there's more product being made with time. When we measure velocities of reactions, we measure the concentration of product per time. That's what velocity is. When we measure the velocity of a car, we measure distance per time. When we measure a reaction rate, we measure concentration of product per time. Okay, everybody got that? So that plot is concentration of product going up on the y-axis, against time on the x-axis. This next schematic introduces a sort of unusual delta G, and we can think of it as an activation energy. It doesn't really relate to the equation that we had before, but we can think about the importance of this activation energy. Activation energy is energy that has to be put into a reaction before the reaction will proceed. If we look at an uncatalyzed reaction, we see here is the free energy of the starting materials and here is the free energy of the product. The change in the Gibbs free energy is the difference between this and this, and we see that this will give us a negative delta G. This reaction is favorable and it will go forwards. Now, on the x-axis we're plotting what's called "reaction progress," and we're just sort of seeing what's happening to the energy of this reaction as it goes along. Okay, well, the top line corresponds to an uncatalyzed reaction. For an uncatalyzed reaction, I don't have an enzyme that's helping me out. I've got a molecule A over here, and I've got a molecule B over here. They're in solution, and they're bouncing around and they're bouncing around. All of a sudden, they bounce and, if they hit the right way, they will, in fact, react and give a product. Alright? Uncatalyzed. We could imagine that with these two things bouncing around in here, if there's only one here and one over there, the likelihood they'd bounce and hit each other in the right orientation is low. If we increase the concentration, we've got a bunch of these. The more concentrated it is, the more likely it's going to go. Right? If we measure the energy it takes to put these guys together, when they do work right, that's what's being plotted here. It's called the "transition state." It's also called the "activation energy." I'll take either one on an exam. Once we get to that hump, the reaction can either fall backwards to where it started from, or it can fall forwards and go down this hill. There's our delta G for the reaction. Now, what happens if I put an enzyme in there? Well, I'll tell you something very important to remember. Enzymes do not change the delta G for a reaction. They do not change it. Notice, the enzymatic reaction starts with a substrate at the same energy and it creates a product with the same energy as the uncatalyzed reaction. The delta G is exactly the same. So what did the enzyme do? Well, the enzyme lowered the activation energy. It lowered the activation energy, and it made it much more likely that when two molecules hit each other they would have enough energy to make this thing go forward. Now, we'll see enzymes do some other tricks as well, but the number one thing that enzymes do to speed a reaction is they lower the activation energy. Now, the analogy I like to give for this is, if I took the class and I said, "Okay, I'm going to give you guys extra credit. We're going to go out here and we're going to go up to this giant steel ball-bearing." And we're going to go out the door, and Corvallis is at about 250 feet above sea level, and we're going to push it towards the ocean. In theory, it should go right over to the ocean, because 250 feet higher, there's Corvallis, there's the ocean. Duh, right? Well, of course, it's not going to go. Why isn't it going to go? There is a coastal mountain range between us and the ocean. There's our activation energy. So we say, "Okay, well, we've got a whole bunch of us, and we want to make sure it gets there, so let's take this ball-bearing and push it to Marys Peak." Marys Peak, of course, is the tallest peak in the Coastal Range. It's just to the southwest of Corvallis. We work and we struggle very hard to get that up there.
I'm assuming there's no trees in the way, and, given clear cutting, that's not an unreasonable thing to think, anymore. [class laughing] Bad joke, huh? Bad professor! We push this ball-bearing to the top of Marys Peak. And we say, "Well, it's going to have some ups "and downs along the way, but it's going to have enough "energy to make it to the ocean." And it will. Again, assuming we do enough clear cutting, right? Okay, well, then the smart person says, "Wait a minute, this is the dumb way to go. "We really don't have to go to Marys Peak to make it go. "All we have to do is make sure we get over "the highest pass." Right? As long as we get it to the point of the highest pass, then it's still going to have enough energy to get down because all of the other passes will be lower. The enzyme is helping you to find the pass. That's what it's doing. The enzyme has found the pass. It's found that. You're not going to have to put as much energy into getting it all the way up to Marys Peak. You've only got to get it up to this pass to get it across, over to the ocean. So that is my metaphor of the day. Questions on that? You guys are looking tired. Student: It's Friday. Student: Yeah, it's Friday. Kevin Ahern: You're still looking tired. Yes, sir? Student: Are there enzymes that can bring that down to where there is no positive activation energy and just make it spontaneous? Kevin Ahern: Enzymes will help this reaction go. The spontaneity of a reaction is really determined by the delta G. But the enzymes can lower that significantly. They can lower the activation energy significantly, yes. Question over here? Student: You said earlier that there's molecule A and molecule B and they bump into each other? Kevin Ahern: Bang! Student: Does the enzyme grab A and B? Kevin Ahern: Oh, good question! Does the enzyme grab A and B? In fact, that's one of the other tricks the enzyme does. It has a specific binding site for A, it has a specific binding site for B, and the enzyme is positioning them in exactly the right way so we don't have to worry about them hitting the right way and bouncing off. It positions them in exactly the right way so they bond. Kevin Ahern: So, what? Student: [unintelligible] lowering the activation energy? Kevin Ahern: That would also contribute to lowering the activation energy. That's correct. Yes, Connie? Student: What usually contributes to that activation energy? Like, is it the heat of the general surroundings? Kevin Ahern: What contributes to activation energy? Well, activation energy is a function of the temperature, certainly the heat of the surroundings. It's a function of the concentration. So those are two variables that can really affect what's there. There are other things that can play into it, as well. Concentration is a very important one. Concentration is very, very important, because the more concentrated something is and this was the question that he had over here earlier today when I want to see a million molecules of product per second, do I have to have the enzymes saturated with substrate? You betcha! That really works well. So I'm going to get to that in just a second. But before I do that, why don't we stand up and stretch? Just stand up and stretch. You guys will get some oxygen in your system. [indistinct talking] Now you look alive, alert, refreshed, and ready for more, yes, sir, more biochemistry. Kevin Ahern: What's that? Student: [unintelligible] Kevin Ahern: Oh, yeah, it is distracting. 
I wish I could just make this thing go away, but it's part of the security system and they will not let us fire that thing. But whenever you see it, let me know and I'll be happy to turn it off, yes. About every hour it starts up. Let's think about concentration. This is a plot that you're going to see a lot of. It's called a "velocity-versus-substrate concentration plot." It's also called a V-versus-S plot. So I want to introduce it to you and describe what it tells us. What does this plot tell us? (I think the batteries in this thing are just going bad.) Let's imagine that, instead of having an enzymatic reaction, I have this factory that I talked about. The factory is full of workers and the factory is trying to make, let's say, automobiles. To make an automobile, you have to have a lot of materials, they have to be assembled, they have to be stuck together. This group of workers in the factory assemble the automobile from the parts, and that's their job every day. If they have very, very few parts, or the parts come in very, very slowly, what happens to their ability to make automobiles? Well, it's going to go down. They're not going to make automobiles so fast. They're going to be spending a fair amount of their time waiting on parts, waiting on parts, waiting on parts, right? Their velocity is going to be low when their amount of raw materials is low. As they get more and more raw materials, they start making more and more cars, and we see that go up fairly rapidly, at least at first. And now, all of a sudden, the factory is starting to get more and more raw materials, and the employees are going, "Oh, wow, I'm going as fast as I can, going as fast as I can. Whoa, there's more! I'm going to go as fast as I can!" Eventually, we get to a point where the employees, no matter how hard they work, can't work any faster. They reach a maximum velocity of making cars. In this case, the enzyme is reaching a maximum velocity of making product, exactly the same thing. Not surprisingly, enzymes have a maximum. Now, I take my factory and I say, "Whoa, this factory is working at the maximum output of cars that it can manage." This is 400 cars a day. But I can sell 1,000 cars a day. Well, it doesn't matter how much more I pour into that factory in terms of raw materials, the employees can only do so much, right? So what do I decide to do? Well, the smart thing to do would be to build another factory, right? If I build another factory identical to the first one, with the identical abilities of the workers to work, instead of having 400 cars per day, my two factories can make 800 cars per day. You with me? Now, I'm illustrating a point about factories that's important for understanding how enzymes work. The maximum velocity an enzyme can work at is called Vmax, velocity maximum, Vmax. Vmax is reached when an enzyme is saturated with substrate. Substrate is the stuff that the enzyme works on. It doesn't matter if I keep increasing the substrate anymore. I'm not going to get any more product made per time. Just like the factory, the enzyme is working as hard as it can; it can't put out more. But if I add twice as much enzyme, what's going to happen to Vmax? It's going to double, right? I double the enzyme, I double the velocity. That tells us something very important. Vmax is an interesting quantity, but it's not a characteristic of an enzyme, because Vmax depends on how much enzyme I use in my reaction.
If I use twice as much enzyme, I'm going to get twice as much Vmax. That make sense, no, yes? Student: Well, so, in theory, proportionally speaking, it still could be a characteristic quantity, then, right? It could be unique to an enzyme in terms of... Kevin Ahern: Vmax is not a characteristic of an enzyme. No, okay? Vmax is not a characteristic. And you will talk to many people who don't know that. They'll go, "Oh, yeah, the Vmax of this enzyme is blah, blah, blah." And I'd say, "And how much enzyme did you use?" [mumbling] Now, Shannon is sort of thinking ahead. There's gotta be something characteristic about the enzyme that's here. What could it be? Well, I told you that the Vmax depended upon the concentration of the enzyme. I double the enzyme, I double the velocity. What if I took Vmax and I divided it by the concentration of the enzyme? Now I've taken the concentration of the enzyme out of the equation. What do I get? I get something called "Kcat," K, with a lowercase c-a-t subscript. Kcat is equal to Vmax divided by the concentration of the enzyme. Now I've gotten rid of the concentration out of the equation, and I get something that's really interesting. It's called Kcat, "Cat," c-a-t, yes. It's also called "turnover number." And you've already seen Kcat. That very first table I showed you that had those big numbers? When I said that carbonic anhydrase had made a million molecules of product per molecule of enzyme per time? That's Kcat. So Kcat is a measure of the number of molecules of product an enzyme makes per time. A Kcat of one million means it makes a million molecules of product per molecule of enzyme, per second, in this case. Remarkable. That is a characteristic of an enzyme. We can compare Kcats between two enzymes. We can't compare Vmaxes between two enzymes. Everybody got that? If you get that, you will know as much as many people know about enzymes, and I'm very pleased, in this class, that students usually take that message away with them, and I think it's a very important message. Kcat is a characteristic of an enzyme. Vmax is not. It depends on how much enzyme I use. So if I want to compare two enzymes, I've got to compare their Kcats, not their Vmaxes. That's a lot of stuff for one day. Let's finish a few minutes early. I have a new clock to celebrate, over here. It tells me I'm finishing early. So let's do that and I will see you guys on Monday. So, is that where you were headed? Student: Yeah. Kevin Ahern: I figured it was. Student: I guess it must remind me of something. So on Monday I have to miss class. Kevin Ahern: Okay. I'll give a pop quiz. Student: Oh, okay. Well, of course, I'll stay on top of the reading and so forth. Is there anything else I ought to do? Kevin Ahern: You'll be fine, you'll be fine. Kevin Ahern: Take care, Shannon. I've have nobody to pick on in the front row. Hi, how're you doing? Student: I have a question for you. Kevin Ahern: Yes, sir. Come on back. [END]
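To make the Vmax-versus-Kcat distinction from this lecture concrete, here is a minimal numeric sketch; the enzyme concentrations and velocities are invented for illustration, not values from the lecture.

# Kcat (turnover number) = Vmax / [enzyme]; it does not change when you add more enzyme.
def turnover_number(vmax, enzyme_conc):
    # vmax: product formed per second; enzyme_conc: enzyme concentration in matching units
    return vmax / enzyme_conc

# The same hypothetical enzyme assayed at two different enzyme concentrations:
print(turnover_number(vmax=50.0, enzyme_conc=0.05))    # 1000.0 per second
print(turnover_number(vmax=100.0, enzyme_conc=0.10))   # 1000.0 per second: Vmax doubled, Kcat did not

Doubling the enzyme doubles Vmax but leaves Kcat unchanged, which is exactly why Kcat, and not Vmax, is the number you can compare between two enzymes.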
Medical_Lectures
Medical_Video_Lectures_RenalKidney_Anatomy_Physiology_By_NEPHROLOGIST.txt
hi everybody, welcome to the section of nephrology. We will begin our discussion with the anatomy and physiology of the kidneys. This is a very important topic, because if you are thorough with the anatomy and physiology of any organ system, it will help you understand the disease processes very well, and also the management of particular diseases. So let's begin by taking into account the anatomy and the physiology of the kidney. The urinary system basically consists of the organs that produce, store, and carry the urine. As we all know, there are two kidneys, which produce the urine; it travels down to the urinary bladder through the ureters, is stored in the bladder for a good amount of time, and is then ultimately excreted through the urethra. On average a human produces about 1.5 L of urine in a 24-hour period. It varies depending on your fluid intake, and also on the other routes that contribute to fluid losses from the body, basically the respiratory system, that is, through your breathing, and through perspiration. These two account for the insensible losses, so whenever you are calculating intake and output in patients who are sick, especially in the intensive care unit, you should also take the insensible losses from respiration and perspiration into account. Certain medications will also affect urine output, especially the diuretics. If you look into the functions of the kidney, these are the basics: excretion of the waste products; maintaining homeostasis, and volume status maintenance is a very important part played by the kidneys; acid-base balance; and the secretion of certain hormones which are vital. The excreted products are basically the products of metabolism, water as part of water regulation, as we talked about, certain hormones, vitamins, and toxic substances which in excess can damage the remaining organ systems of the body. Coming to the anatomy of the urinary system, let's go into a little detail with the kidneys first. The kidney is a paired organ, together weighing about 300 g, and each is composed of two parts: the cortex and the medulla. As a side note, if you took out the fluid, meaning the filtrate, and analyzed it, in the cortex it would be isotonic and in the medulla hypertonic; once we discuss the physiology and the functions of the nephron we will see how that happens. The cortex contains the glomeruli. The medulla is further divided into the outer and the inner medulla. Each kidney consists of about 1 million filtering units, called nephrons, and they play a very important role in regulating the electrolytes in the human body, especially sodium, potassium, bicarbonate, and also calcium. The kidney also helps in clearing urea; this is one of the ways of excreting the end product of amino acid metabolism. Human beings form urea, which is less toxic. Birds excrete the nitrogen component of amino acids in the form of uric acid, so they are uricotelic animals, and aquatic animals excrete it in the form of ammonia. Let's go into a little more detail with the renal structure itself. If you look at the renal anatomy, the major function of the renal pelvis, which forms the beginning of the ureter, is to act like a funnel for urine flowing to the ureter. It represents the dilated proximal portion of the ureter, and it is the main convergence of two or more major calyces.
These, in turn, are formed as follows: each renal papilla is surrounded by a cup-shaped branch of the renal pelvis called a calyx, so a renal papilla drains into a calyx, and multiple calyces combine to form the renal pelvis. The renal pelvis represents the beginning of the ureter: urine is collected in the renal pelvis, which then connects to the ureter and delivers it to the bladder. The ureters are about 200 to 250 mm long and are made up of smooth muscle in their walls, which peristaltically forces the urine downwards. The urine is not simply flowing passively through the ureter; it moves toward the bladder as a urinary spindle, or bolus. It begins with the ureter drawing up urine as the collecting system above closes, and then the peristaltic movements push that bolus down into the bladder. So it is not a passive movement; it is an active force that pushes the urine into the bladder. A small amount of urine is emptied into the bladder from the ureters about every 10 to 15 seconds. Going further down, the urinary bladder is a hollow muscular organ shaped like a balloon. It is located in the pelvic fossa and held in place by ligaments. The bladder stores up to about 500 mL of urine comfortably for about 2 to 5 hours, and that is the point at which people start having the urge to micturate. The sphincters are what regulate the flow of urine out of the bladder. There is an internal urethral sphincter at the beginning of the urethra, made of smooth muscle and not under voluntary control; this is what one should know. The external urethral sphincter is skeletal muscle, which we can control. So you see, the internal sphincter is an important thing to remember because it is not under voluntary control, whereas the external sphincter is. There is also the detrusor muscle, as we all know from anatomy. It is a layer of the urinary bladder wall made up of smooth muscle fibers arranged in inner and outer longitudinal layers and a middle circular layer. Its contraction causes the bladder to expel the urine; that is the importance of the detrusor muscle. You see diseases that affect the detrusor muscle especially in the elderly population with an overactive bladder; that is why they have incontinence and frequent passing of urine, and there are certain medications that can be used to block this muscle and help prevent incontinence. Coming further down, the urethra has an excretory function in both sexes, passing the urine to the outside, and in males it has another function as well: it is part of the reproductive tract and serves as a passage for sperm. The external urethral sphincter is striated skeletal muscle that allows voluntary control over urination, and we just learned about the internal and external sphincters. In males the internal and external urethral sphincters are more powerful, able to retain urine about twice as long as in females. The urethra itself is shorter in women and longer in men; the significance of this is that women tend to develop more urinary tract infections because of the shorter urethra. The urethra in men has four different parts: the intramural (pre-prostatic), prostatic, membranous, and spongy portions. Now, the process of excreting the urine, the whole act of urination, how does that happen? The process of expelling the urine from the urinary bladder through the urethra to the outside of the body is what we call micturition, or urination, and it is usually under voluntary control. Incontinence, on the other hand, is the inability to control urination, and there are different types of incontinence.
We recognize stress incontinence, paradoxical (overflow) incontinence, and urinary retention, which refers to the inability to void urine, and enuresis nocturna, which is nothing but incontinence during the nighttime and can be influenced by emotions. And what is the micturition reflex? It is activated when the urinary bladder wall is stretched, so the patient feels the urge to micturate. The center for this is in the spinal cord, especially in the sacral region, and it is modulated by higher centers in the brain, in the pons and the cerebrum. The presence of urine in the bladder stimulates stretch receptors, producing action potentials carried through sensory neurons to the sacral segments; that is where the spinal cord level center sits. From there, parasympathetic fibers traveling in the pelvic nerves carry the signal back and cause the detrusor of the urinary bladder to contract. The pressure in the urinary bladder increases rapidly once its volume exceeds approximately 400 to 500 mL. So far we have talked briefly about the anatomy of the kidneys and the path of urine that has already been formed, and how it is carried from the kidneys and the bladder to the outside. Now let's talk about the kidney proper. We mentioned the term nephron earlier; it is the functional unit of the kidney. Its chief function is to regulate the water-soluble substances by filtering the blood, reabsorbing what is needed, and excreting the rest as urine. Each nephron is composed of an initial filtering component, the renal corpuscle, and a tubule specialized for reabsorption and secretion. The renal corpuscle filters out large solutes from the blood and delivers the water and small solutes to the renal tubules for further modification. Each nephron has these parts, and one should be aware of them: the glomerulus, which is the initial filtering unit; the PCT, the proximal convoluted tubule; the loop of Henle, an interesting part which has a descending limb and an ascending limb, and in the ascending limb there is a thin and a thick part; the distal convoluted tubule; and the collecting ducts. There is a significance to knowing these things: the medications belonging to the diuretics come in different classes, and each class acts at a different level in the nephron, so if you know the structures of the nephron in detail it will be a lot easier to understand the effects of those drugs and to remember them. So: glomerulus, proximal convoluted tubule, loop of Henle with a descending limb and an ascending limb (the ascending limb having a thick and a thin part), a distal convoluted tubule, and the collecting ducts. Depending on the location of the glomerulus and the rest of the tubules, you can classify the nephrons into types: cortical nephrons, intermediate nephrons, and juxtamedullary nephrons. The way they differ is that in a cortical nephron the glomerulus sits near the surface and the loop of Henle reaches only into the outer part of the medulla; remember, we said the medulla has an outer and an inner part. If the loop of Henle just takes its turn in the outer medulla, it is a cortical nephron; if it goes a little deeper, it is intermediate; and if it goes all the way down into the inner medulla, it is a juxtamedullary nephron. The renal corpuscle is the nephron's initial filtering component, and we just discussed the glomerulus: it is a capillary tuft that receives its blood from the afferent arteriole, passes it through the capillaries, and sends it out through the efferent arteriole. From the efferent arteriole the blood enters the peritubular capillaries, which then drain into veins.
That venous blood ultimately reaches the renal vein and the inferior vena cava, back to the heart. The efferent arterioles of the juxtamedullary nephrons, and we just learned about the juxtamedullary nephrons, the ones whose loops of Henle reach the inner medulla and which make up roughly 15% of all nephrons, send straight capillary branches that deliver isotonic blood into the renal medulla alongside the loop of Henle. These are the vasa recta, which are nothing but the capillaries running alongside the tubules, and they have a very important function called the countercurrent exchange system: they help maintain the tonicity of the urine, whether it should be hypertonic or hypotonic, depending on the volume status of the individual. That is the vasa recta. Then the Bowman's capsule is the structure that surrounds the glomerulus, and it is composed of a visceral inner layer and a parietal outer layer. The glomerulus has several characteristics that deviate from the features of most capillaries in the body. Why is that? Remember, whichever organ system you take in the body, be it heart, lungs, liver, or brain, each has a unique function, and each has its own type of capillaries. So what is unique about the capillary structure of the glomerulus? Number one, the endothelial cells of the glomerulus have numerous pores, which are called fenestrae. Number two, the glomerular endothelium sits on a very thick basement membrane, and on its surface it carries negatively charged glycosaminoglycans, chiefly heparan sulfate. This is very important, because the clinical significance is that in certain conditions affecting the glomerulus, the diseases grouped as glomerulonephritis, this negatively charged property of the membrane can be lost. What is special about this negative charge? Remember that albumin, an important protein in the blood, is itself negatively charged, and when two negatively charged things come close to each other they repel. So in glomerulonephritis, when there is damage to this layer, there is nothing left to repel the negatively charged proteins in the blood, and hence they can easily pass from the blood into the glomerular space; that is why you tend to see proteinuria. This holds true especially for nephrotic syndromes like focal segmental glomerulosclerosis, hypertensive nephropathy, and diabetic nephropathy, which we will discuss when we come to the pathology. Third, blood is carried out of the glomerulus by an arteriole instead of a venule, unlike what is observed in most capillary systems. So those are some unique features of the capillaries of the glomerulus. Now, if you take the filtration as such, the filtration surface is about 1.5 square meters, the amount of fluid filtered through the glomeruli is around 180 to 200 L per day, and about 25% of the cardiac output goes to the kidneys. Yet the final urine that is formed is only about 1.5 L at the end of the day, even though 180 to 200 L of filtrate is produced. So what happens to the rest, which is more than 99%? It is reabsorbed in the tubules back into the body, and that adjusts the final volume of the urine. As for the glomerular filter, we know at this point that there is a capillary endothelium, a basement membrane, and the epithelium of the Bowman's capsule, which is made of podocytes. The podocytes are special cells with numerous foot processes, the pedicels, which interdigitate to form the filtration slits along the capillary wall.
Glomerular filtration depends on four important things that you should remember, and if you remember the two phrases hydrostatic pressure and oncotic pressure, your life will be easy. First, the pressure gradient across the filtration barrier determines the driving force for filtration, and that gradient is set by the hydrostatic and oncotic pressures; we will discuss this in a little more detail in the subsequent slides. Second, blood flow through the kidneys matters as well, because that determines the hydrostatic pressure. Third is the permeability of the filtration barrier; again, we just learned that there are glycosaminoglycans on the basement membrane which repel the proteins in the blood. Fourth is the filtration surface itself; if you reduce the filtration surface area, that is obviously going to reduce your GFR, the glomerular filtration rate. The fluid after filtration is very similar to plasma, but it should be without proteins. The ability of the kidneys to clear the plasma of different products is what we measure, and this is a very important thing to know, as the glomerular filtration rate, GFR. It is the way to look at the kidneys in terms of how they are functioning; with any organ in the body you look at it anatomically, is there any damage, and functionally, is there any damage, and that holds true for every organ system, be it heart, liver, or kidneys. How do you measure GFR? It can be measured from the urinary excretion and the plasma level of a substance that is freely filtered through the glomerulus and is neither secreted nor reabsorbed; inulin is the best one for determining GFR. How do you calculate it? It is U times V divided by P, where U stands for the concentration of inulin in the urine, V for the urine flow rate, and P for the concentration of inulin in the plasma. What is the normal GFR? It is around 125 mL per minute, which comes to about 7.5 L per hour, or roughly 180 L per day.
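A quick numeric check of that clearance formula; the inulin concentrations and urine flow below are made-up round numbers chosen only to reproduce the textbook value.

# GFR from inulin clearance: GFR = (U x V) / P
urine_inulin = 125.0    # mg/dL in urine (hypothetical)
plasma_inulin = 1.0     # mg/dL in plasma (hypothetical)
urine_flow = 1.0        # mL/min (hypothetical)

gfr = (urine_inulin * urine_flow) / plasma_inulin
print(gfr)                         # 125.0 mL/min
print(gfr * 60 / 1000)             # about 7.5 L filtered per hour
daily_filtrate_L = gfr * 1440 / 1000
print(daily_filtrate_L)            # about 180 L filtered per day
# With roughly 1.5 L of final urine per day, more than 99% of the filtrate is reabsorbed:
print(1 - 1.5 / daily_filtrate_L)  # about 0.99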
As we go through this discussion, we talked about the glomerulus, the first part of the nephron; attached to it there is an important structure called the juxtaglomerular apparatus. This is nothing but a specialized group of cells that synthesizes, stores, and secretes a very important enzyme called renin. Renin ultimately drives the secretion of aldosterone, and this renin-angiotensin-aldosterone axis is involved in very important diseases like congestive heart failure, hepatorenal syndrome, and cirrhosis of the liver, so one should be thorough with this system. The juxtaglomerular cells are specialized smooth muscle cells in the wall of the afferent arteriole where it comes into contact with the distal tubule, and they act as mechanoreceptors for blood pressure. The macula densa is an area of closely packed specialized cells lining the distal convoluted tubule where it lies next to the juxtaglomerular apparatus; these cells have very prominent nuclei compared with the surrounding cells, and they are very sensitive to the concentration of sodium ions in the tubular fluid. Next, the PCT, the proximal convoluted tubule: it is around 15 mm long and about 45 micrometers in diameter, with an epithelium that has a brush border, projections that enlarge the surface for reabsorption. What are its functions? Basically, the largest volume of the filtrate is reabsorbed in the proximal tubule: 70 to 80% of the filtered water, most of the ions in the body (sodium, chloride, bicarbonate, potassium, calcium, magnesium, and phosphorus), and the glucose are all absorbed in the PCT. So if you see a disease affecting the proximal convoluted tubule, you tend to see loss of a significant amount of water, loss of multiple electrolytes from the body, and glucosuria, and one such condition is Fanconi syndrome. If you look at the tonicity of the fluid here, it is still isotonic. Of the filtrate entering the PCT, roughly two-thirds of the filtered salt and water and essentially all of the filtered organic solutes are reabsorbed into the vasa recta and peritubular capillaries. The PCT has another important feature, the Na+/K+-ATPase lying in the basolateral membrane, which pumps sodium out of the tubular cell; that keeps the intracellular sodium low, so sodium keeps entering from the lumen down its gradient. Much of the mass movement of water and solute occurs between the cells, through the tight junctions, and as we said, the fluid stays isotonic. Glucose and the amino acids are reabsorbed actively via cotransporters driven by the sodium gradient. Next, moving further down, is the loop of Henle. We learned earlier that it has a descending limb, which is thin, and an ascending limb, which has a thin and a thick part. It begins in the cortex, receives fluid from the proximal convoluted tubule, extends into the medulla, and returns back to the cortex. Its primary role is to concentrate the salt in the interstitium, the tissue surrounding the loop. As for the specific functions, the descending limb is permeable to water and salt, and this indirectly contributes to the concentration of the interstitium: as the filtrate descends deeper into the hypertonic interstitium of the renal medulla, water flows freely out of the descending limb by osmosis until the tonicity of the filtrate and the interstitium equilibrate. Longer descending limbs allow more time for water to flow out of the filtrate, so longer limbs make the filtrate more hypertonic than shorter limbs. This concept matters because of the different types of nephrons, cortical, intermediate, and juxtamedullary, and the juxtamedullary nephrons are the ones with very long limbs; now you know the role of those structures. The ascending limb, on the other hand, is impermeable to water, remember this, but permeable to salt, whereas the descending limb is permeable to water and salt both; remember that first part, which is why it is highlighted. The ascending limb actively pumps sodium out of the filtrate, generating the hypertonic interstitium that drives the countercurrent exchange. And what kind of fluid does that leave in the tubule? It should be hypotonic, right? Because the limb is permeable to salt, the salt leaves the tubular fluid and the fluid becomes hypotonic. This hypotonic filtrate then goes on to the DCT. If you look at the anatomy of the DCT, as we just learned, when the tubule comes back up into the cortex it touches its own glomerulus before making the turn toward the collecting duct, and where it touches the glomerulus it forms the juxtaglomerular apparatus, containing the specialized macula densa cells; they are associated with renin secretion and are very sensitive to salt, that is, to sodium, to be specific. The DCT also helps in reabsorption of water and sodium, and ultimately the hypotonic fluid that came out of the ascending limb of the loop of Henle becomes close to isotonic once again. By the time the fluid reaches the DCT, only about 30% of the filtered water remains, and the remaining salt content is negligible.
About 97% of the water in the glomerular filtrate ends up being reabsorbed, largely by osmosis, by the time fluid has passed through the distal convoluted tubules and the collecting ducts. The distal convoluted tubule, the DCT, is similar to the PCT in structure and function. The cells lining the tubule have numerous mitochondria, so they can support more active transport, because the mitochondria's function is to generate ATP, and ATP is nothing but the money of the cell: to be involved in any active process the cell has to spend ATP, and to generate ATP you need mitochondria. So the DCT has many mitochondria, generates a lot of ATP, and is therefore involved in a lot of active transport. Much of the ion transport taking place in the DCT is regulated by the endocrine system, and there are three important regulators that you should know. Parathyroid hormone acts here and causes more reabsorption of calcium and more excretion of phosphate; that is why in chronic kidney disease you see parathyroid hormone levels going up, something called secondary hyperparathyroidism, and why that happens we can take up separately. When aldosterone is present, more sodium is reabsorbed and more potassium is excreted. And ANP, the atrial natriuretic peptide, causes these distal tubules and collecting ducts to excrete more sodium; it is secreted when the atria are stretched, which happens when the intravascular volume is high, so too much volume stretches the atrium and causes release of ANP. Brain natriuretic peptide, BNP, is similarly secreted when there is excessive stretching of the ventricles. In addition, these tubules also secrete hydrogen and ammonium ions, which helps in the regulation of pH. Coming further down to the collecting ducts: roughly ten distal tubules drain into each collecting duct, which continues down through the medullary pyramids, so each duct ends up serving a large number of nephrons, close to 2,700, and the final adjustment of the urine happens here. Remember the sequence: the fluid was isotonic in the PCT, became hypertonic in the descending limb of the loop, hypotonic in the ascending limb, roughly isotonic again in the DCT, and by the time it leaves the collecting duct it can be hypertonic. Each DCT delivers its filtrate to a collecting duct, most of which begin in the renal cortex and extend deep into the medulla. As the urine travels down the collecting duct, it passes through medullary interstitium that has a high sodium concentration as a result of the loop of Henle. The important thing about the collecting duct is that it is normally not permeable to water, but the hormone ADH, antidiuretic hormone or vasopressin, makes it able to absorb water, so that as much as three-quarters of the water in the urine can be reabsorbed by osmosis as it passes through the collecting duct. The level of ADH is what determines whether the urine will be concentrated or dilute, and this is an important concept: when somebody is dehydrated, the body tries to retain more fluid, so it secretes vasopressin, ADH, and that causes fluid retention, and the urine becomes concentrated. The lower portions of the collecting ducts are also permeable to urea, allowing some of it to enter the medulla of the kidney, and hence the solute concentration of the interstitium there goes up. At last the urine leaves the collecting duct at the renal papilla, enters the calyces, and travels further down, as we learned earlier. Because of its different embryonic origin from the rest of the nephron, the collecting duct arising from the ureteric bud while the nephron arises from the metanephric mesenchyme, the collecting duct is usually not considered part of the nephron proper.
However, when describing the anatomy of the nephron, you generally just include it as well. Now, a word about the transport of the different substances, sodium and the organic substances. Some of this transport is passive and some is active. Some solutes move together with sodium by cotransport, for example chloride, glucose, and amino acids, and some move by antiport, with sodium coming in while hydrogen or calcium goes out, as we learned for the distal convoluted tubule, and the active transport is ultimately powered by the Na+/K+ pump. For some of the organic substances, such as glucose, the kidney has a threshold for reabsorption; once the filtered load goes beyond that threshold, the excess is excreted. Secondary active transport is the way glucose is reabsorbed: it is secondarily dependent on ATP, meaning that glucose is transported together with sodium by a cotransporter, and it is the ATP-dependent Na+/K+ pump that keeps the intracellular sodium low and maintains the gradient. Regarding excretion, we talked about how the kidneys excrete a variety of waste products of metabolism: urea from protein catabolism and uric acid from the nucleic acids. Urea is filtered at the glomerulus; its concentration rises along the PCT because of the reabsorption of water; some of it is reabsorbed, and urea recycled back toward the loop of Henle helps to keep the medullary interstitium concentrated before the rest leaves through the collecting ducts. Acid-base is a very important function of the kidney as well: it helps regulate the pH by secreting hydrogen ions and reclaiming bicarbonate, and in that way it strictly maintains the blood pH. The urine, on the other hand, can become either acidic, at a pH around 5, or alkaline, at a pH around 8. You should remember this because it is important when you are talking about renal stones, and it also becomes significant in certain conditions like rhabdomyolysis and certain poisonings. Water balance: we learned about aldosterone and its effect on sodium reabsorption; water moves along with the sodium, and hence aldosterone helps in water reabsorption and water balance, while the plasma volume and tonicity are regulated by ADH. So we have now seen the different hormones, aldosterone, parathyroid hormone, and ADH, which act at different levels in the nephron and help in the regulation of the functions of the kidneys. Here we come to the end of the discussion on the anatomy and physiology of the kidney. Thank you.
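As a closing reference to the lecture's point that each diuretic class acts at a different level of the nephron, here is that conventional mapping; it comes from standard pharmacology teaching rather than from the lecture itself.

# Conventional site of action of the major diuretic classes along the nephron.
DIURETIC_SITES = {
    "proximal convoluted tubule": "carbonic anhydrase inhibitors (e.g., acetazolamide)",
    "thick ascending limb of the loop of Henle": "loop diuretics (e.g., furosemide)",
    "distal convoluted tubule": "thiazides (e.g., hydrochlorothiazide)",
    "collecting duct": "potassium-sparing diuretics (e.g., spironolactone, amiloride)",
}

for segment, drug_class in DIURETIC_SITES.items():
    print(f"{segment}: {drug_class}")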
Medical_Lectures
How_to_Create_a_Differential_Diagnosis_Part_2_of_3.txt
This is part 2 of a guide to clinical reasoning, or how to create an accurate differential diagnosis from a patient's presentation. In the first part I reviewed a practical, five-step bedside approach to clinical reasoning; here it is to remind you. In this part I will demonstrate how to use this approach with an actual patient case at the student level. I present this patient to you the same way an intern might present the patient to his or her attending on rounds, or to their colleagues during a morning report or teaching conference. As I present the case I'll keep a running list of the key features of the presentation, so it will seem like I am doing step 1, acquire data, and step 2, identify key features, concurrently rather than sequentially. In reality most trainees would probably wait to identify those key features until after the presentation was completely over and the complete case was known, but as a trainee gains experience he or she will improve their ability to do all these steps simultaneously. If you are looking to practice, try listening to the presentation without watching it and writing down what seem like the key features to you as you hear them. For the chief complaint, the patient is a 75-year-old woman presenting with epigastric pain for four hours. Mrs. Smith is a 75-year-old woman with a history of smoking and moderate alcohol use who was in her usual state of health until four hours prior to presentation, at which time she developed the onset of abdominal pain. The pain is relatively well localized to the midline, in the region between her umbilicus and xiphoid process. There did not seem to be any particular trigger, and the duration from initial onset to its maximal intensity of 8 out of 10 was about 45 minutes. The pain is constant, does not radiate, and is neither exacerbated nor alleviated by anything. She has moderate nausea and has refused to attempt to eat or drink anything since the pain's onset because she is concerned it will cause her to vomit, which she has not yet done. She denies changes in her bowel habits, shortness of breath, chest pain, or changes in her skin or eye color. Her past medical history is notable for hypertension and diabetes. She has had no surgeries. Her medications include hydrochlorothiazide 25 milligrams daily, amlodipine 5 milligrams daily, metformin 500 milligrams BID, omeprazole 20 milligrams daily, thiamine 100 milligrams daily, and folate 1 milligram daily. In her social history, she currently lives with her husband in a small community in the Santa Cruz Mountains. She is a retired horse trainer. She drinks three to four alcoholic beverages a night but denies any history of withdrawal symptoms. She has smoked one pack per day for 40 years. For her family history, her father died at 85 from a heart attack and her mother at 68 from breast cancer; she has no siblings. When asked if she has any particular theories or concerns as to what is causing her symptoms, she says she has no idea, but her husband is concerned about contamination from a new well that was drilled on their property for drinking water one week ago. On exam she is an elderly woman appearing her stated age, in moderate discomfort secondary to abdominal pain. Her temperature is 99.8, heart rate 110, blood pressure 132 over 80, respiratory rate 26, and oxygen saturation 98% on room air. HEENT exam reveals a normal oropharynx but poor dentition. Her neck is supple, without bruits and without lymphadenopathy. Her chest is clear to percussion and auscultation bilaterally.
Her cardiac exam reveals a regular tachycardia. Her JVP is undetectable at either 45 degrees or when she is completely supine. On abdominal exam she is nondistended and there are no scars. She has normal resonance to percussion; the abdomen is soft and there is no rebound tenderness. She has marked tenderness, however, to light palpation in the epigastric region and mild tenderness in the periumbilical region. She declined deep palpation. There are no masses noted. The liver edge is 1 centimeter below the costal margin and nontender. She declined examination of the spleen due to fear of exacerbating her pain. Bowel sounds are unusually quiet but present. Rectal exam shows normal tone, no tenderness, and minimal guaiac-negative brown stool. Musculoskeletal exam reveals full range of motion in all joints without bony abnormalities. There is no edema or clubbing in her extremities. On neurologic exam she is oriented to person, place, time, and situation. She is fully conversant with appropriate speech and language and a normal thought process. Cranial nerves 2 through 12 are normal. Patellar and brachial reflexes are 1+ bilaterally. Thorough strength testing and gait were deferred due to patient discomfort. For her labs, her CBC revealed a white blood cell count of 15 with 85% neutrophils; her hemoglobin was 14.5 and platelets were 325. The basic metabolic panel was normal. LFTs are notable for a total bilirubin slightly elevated at 2.5, all of which was indirect. Amylase was slightly elevated at 135 and lipase elevated at 110. CK and troponin are normal. Urinalysis is normal. An EKG just shows sinus tachycardia with no other abnormalities. A CT scan of the abdomen and pelvis has been ordered but not yet completed.
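Before moving to the key features, the grouping of her vital signs and white count that comes up below, SIRS, can be checked against the conventional criteria. A minimal sketch, where the thresholds are the standard ones and the temperature conversion is approximate:

# SIRS: two or more of temperature >38 or <36 C, heart rate >90, respiratory rate >20, WBC >12 or <4 (x10^3/uL)
def sirs_criteria_met(temp_c, hr, rr, wbc):
    criteria = [
        temp_c > 38.0 or temp_c < 36.0,
        hr > 90,
        rr > 20,
        wbc > 12.0 or wbc < 4.0,
    ]
    return sum(criteria)

temp_c = (99.8 - 32) * 5 / 9  # about 37.7 C, so the "borderline" fever does not itself meet the criterion
print(sirs_criteria_met(temp_c, hr=110, rr=26, wbc=15.0))  # 3 criteria met, i.e., SIRS is present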
So let's review what I've identified as key features, remembering that these are the individual elements of the presentation that we expect will allow us to differentiate one diagnosis from another. The patient is a 75-year-old woman with abdominal pain for four hours; it is epigastric, well localized, progressed over 45 minutes, is constant with no exacerbating or alleviating factors, and is associated with nausea. From the rest of her history, she has hypertension, diabetes, moderate alcohol use, smoking, and is drinking water from a newly drilled well. On exam she is in moderate distress; she has a borderline temperature, tachycardia, and tachypnea; she has a nondistended abdomen that was soft with no rebound, severe epigastric tenderness, and is guaiac negative. The key test results are a white blood cell count of 15, a normal basic metabolic panel, a mildly increased indirect bilirubin, a modestly elevated lipase, normal troponin and CK, and an EKG with only sinus tachycardia. If we return to our five-step approach, we see that the next step is creating the problem representation. Remember that the problem representation is a one-to-two-sentence summary, using precise medical terminology, of the most highly relevant aspects of the patient's history, exam, and diagnostic tests; that is, a summary of the most important of the key features. Which of the key features seem absolutely most important? Obviously this is a matter of judgment and opinion, but here is mine. And to quickly review the structure of the problem representation, which, as noted in part 1 of this video, is also sometimes referred to as the summary statement or the impression: the problem representation starts with the age and gender, then the highly relevant past medical history, then the primary symptom using semantic qualifiers, and ends with the highly relevant diagnostic data, using clinical syndromes when possible. So for this patient we would start with "Mrs. Smith is a 75-year-old woman." Then, for the highly relevant past medical history, I would probably include the hypertension, diabetes, and smoking, which get summarized as multiple cardiovascular risk factors, and the alcohol use. The primary symptom using semantic qualifiers might sound something like "acute, constant epigastric pain." And finally, for the relevant diagnostic data, we would group together the white blood cell count, borderline fever, tachycardia, and tachypnea as SIRS, the systemic inflammatory response syndrome. We would mention the epigastric tenderness, since it was so prominent and was directly related to the chief complaint. The soft abdomen and lack of rebound would be grouped into the collective descriptor of absent peritoneal signs, and, given its high relevance to one of the leading diagnoses, we would also include the elevated lipase. So, altogether: Mrs. Smith is a 75-year-old woman with multiple cardiovascular risk factors and alcohol use presenting with acute, constant epigastric pain, who is found to have SIRS, severe epigastric tenderness without peritoneal signs, and an elevated lipase. The next time you are on rounds or in a teaching conference and the senior physician asks you to summarize the case, if a statement similar to this one effortlessly comes out of your mouth, I guarantee you everyone in the room will be impressed. So now step four: adopt a framework. As I discussed in part 1 of this video series, no problem representation has only one correct framework; the organization of a particular framework may just appeal more to some people than others. In general, when the primary problem is some form of abdominal pain, most people find an anatomic framework works best, particularly one which subdivides the abdomen into quadrants or regions, in which a disease of a specific organ is listed within the region under which it lies. So for this patient with epigastric pain, the epigastrium would obviously be the most critical anatomic region to include. The organs that physically lie directly underneath the epigastric area are the stomach, pancreas, and small bowel. Diseases of the stomach which cause acute abdominal pain are many, but most commonly are gastritis and peptic ulcer disease. In the pancreas we have acute pancreatitis, and diseases of the small bowel that can cause acute epigastric pain include relatively common and minor things such as food poisoning and gastroenteritis, and the uncommon but potentially lethal problem of bowel infarction. In addition to epigastric structures, structures that physically sit within the right and left upper quadrants can cause pain that appears to originate from the epigastrium; this is known as referred pain. Therefore we should also include the right and left upper quadrants in our anatomy-based framework. In the right upper quadrant the major structures are, of course, the liver and gallbladder, along with other components of the extrahepatic biliary system. The liver can cause pain with either hepatitis or a hepatic abscess. The biliary system can cause pain with inflammation or infection of the gallbladder, known as cholecystitis; inflammation or infection of the bile ducts, known as cholangitis; or intermittent obstruction of the bile ducts without inflammation or infection, a condition known as biliary colic. In the left upper quadrant there is really just one unaccounted-for organ, the spleen, which can cause pain with either a splenic infarct or a splenic abscess.
This particular list is not meant to be unusually comprehensive. For example, I have not included the transverse colon, which crosses through the epigastrium, though pain from the transverse colon more typically is localized to the periumbilical region. Also, because structures in the lower quadrants and retroperitoneal space can on rare occasion radiate pain to the upper abdomen, if one wanted to be unusually thorough with the framework one could also include structures and diagnoses such as appendicitis in the right lower quadrant, diverticulitis in the left lower quadrant, and even a dissection of the abdominal aorta in the retroperitoneum, though the presentation does not really suggest any of these. In the framework stage, the goal is to produce a structured list of diagnoses for the general problem representation without yet applying specific key features. One non-epigastric, and in fact non-abdominal, region that I would definitely also include in this framework is the chest. Referred pain to the upper abdomen from intrathoracic diseases is a common phenomenon. Organs in the chest to consider include the esophagus, from which pain classically radiates to the epigastric area, as in GERD; the heart, which can also refer pain to the epigastric region, as seen in acute coronary syndrome, that is, either unstable angina or an MI; and finally the lungs: pain from a pulmonary embolism can be referred to the abdomen, although more typically to the right or left upper quadrants than the epigastrium. Now that we have a framework, we move on to the final and hardest step: applying the key features to that framework. So here is our framework once again, and here are our key features. How does one start this process? The brute-force method would simply be to take each diagnosis listed, one at a time, and review each individual key feature to decide whether it impacted the probability of the diagnosis and what that impact was. That is obviously very tedious and time-consuming. An expert clinician would probably start with the key features, focusing on the ones that are most prominent, most unusual, or that clearly cluster together, then think about which specific diagnoses are impacted by those features. For the purposes of demonstration I'm going to take an approach somewhere in between, going through each diagnosis one at a time but only mentioning the key features that seem particularly relevant. So first up is gastritis. The patient has some risk factors for gastritis with her alcohol use and smoking, so these increase the probability of this diagnosis; also, gastritis can be associated with nausea. Arguing against this diagnosis are the borderline fever, which gastritis does not cause, along with the leukocytosis and the elevated lipase, and she simply sounds a little bit too sick for this diagnosis. Then there is PUD. In the absence of an ulcer perforation there is not a very reliable means of distinguishing PUD from gastritis on clinical grounds; in other words, the arguments for and against PUD are essentially the same as those for gastritis, with the exception that alcohol may be a less prominent risk factor for PUD than for gastritis. Pancreatitis: the alcohol use is definitely a risk factor here, which argues for the diagnosis. Pancreatitis is also usually associated with a systemic inflammatory response syndrome, with its elevated temperature, heart rate, and white blood cell count, along with severe epigastric tenderness. The elevated lipase is thought to be a relatively specific finding for pancreatitis, increasing the probability of the diagnosis, though the degree of elevation isn't so great as to eliminate other possibilities. Acute pancreatitis is almost always associated with nausea, and severe nausea at that, so the presence of nausea here definitely argues in favor of the diagnosis. Arguing just modestly against pancreatitis is the fact that the pain had no exacerbating or alleviating factors; the pain from pancreatitis classically improves with sitting up or leaning forward, which this patient does not describe. However, the sensitivity and specificity of this symptom feature have, to the best of my knowledge, never been studied, so we don't know exactly how much to weigh it in our assessment of the probability of pancreatitis. Using my experience and gestalt, I would place relatively little weight on the lack of a positional component to the symptom.
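To make that "how much should we weigh this feature" question concrete, here is a sketch of the arithmetic an expert does implicitly: converting a feature's sensitivity and specificity into a likelihood ratio and updating a pretest probability. All of the numbers are invented purely for illustration, since, as noted above, the test characteristics of positional relief in pancreatitis have never actually been measured.

# Bayesian updating with a likelihood ratio (made-up numbers for illustration only).
def post_test_probability(pretest_prob, sensitivity, specificity, finding_present):
    # Convert probability to odds, apply the appropriate likelihood ratio, convert back.
    lr = sensitivity / (1 - specificity) if finding_present else (1 - sensitivity) / specificity
    pretest_odds = pretest_prob / (1 - pretest_prob)
    posttest_odds = pretest_odds * lr
    return posttest_odds / (1 + posttest_odds)

# Suppose, hypothetically, positional relief had sensitivity 0.4 and specificity 0.7 for pancreatitis.
# Its absence would then carry a negative likelihood ratio of (1 - 0.4) / 0.7, about 0.86: a weak argument against.
print(post_test_probability(0.50, 0.40, 0.70, finding_present=False))  # about 0.46

With numbers like these, a 50% pretest probability barely moves, which is the quantitative version of "place relatively little weight on it."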
How about food poisoning and gastroenteritis? So we are on the same page: most American doctors use the term food poisoning to refer to consumption of food contaminated with preformed bacterial toxins, such as those from Staph aureus or Bacillus cereus, and we use gastroenteritis to refer to the condition in which the patient has actually contracted an active infectious disease of the gut, which in developed countries is usually viral, occasionally bacterial, and only rarely protozoan. Keep in mind that this distinction between the two diagnoses may be drawn differently in your geographic region. So, for food poisoning, the association with nausea is an argument in favor, although the lack of vomiting is atypical; unlike gastroenteritis, the lack of diarrhea is not inconsistent with food poisoning. Although one might speculate that the new well could be a risk factor for this diagnosis, well water does not become contaminated with preformed bacterial toxins the same way that food sitting out at a sketchy salad bar might. Arguments against food poisoning include her SIRS physiology: food poisoning can cause a number of these vital-sign findings, but usually as a consequence of severe dehydration from vomiting, which she has not been doing. It also is not typically associated with a high lipase. With gastroenteritis, once again the nausea argues in favor, but this time the lack of diarrhea argues against. Wells certainly can become contaminated with enteric bacteria, but for this to happen with a brand-new well would imply that they literally drilled it into a patch of pre-inoculated earth, which, without knowing more details about the well's location, seems quite unlikely. Depending upon the specific pathogen, her severity of illness could certainly be consistent with a severe form of this, but as with most of the diagnoses here, gastroenteritis is not thought to typically cause a high lipase. For a bowel infarction, her cardiovascular risk factors put her at risk. This would definitely result in constant pain, an appearance of distress, and SIRS, and it can even cause an elevated lipase. Really, a bowel infarction would explain her presentation quite well, except for the fact that bowel infarction is relatively uncommon compared with most of the other diagnoses listed. A key feature that would help further assess this possibility is the presence or absence of an elevated lactate, which was not mentioned in the presentation, though the normal basic metabolic panel suggests that there is no elevated anion-gap acidosis present, and that argues a bit against the bowel infarction diagnosis. I'll be pretty quick with the organs in the right and left upper quadrants, because her presentation doesn't particularly support any of them.
unremarkable LFTs in and of themselves rule out acute hepatitis. A hepatic abscess usually has symptoms localized to the right upper quadrant and is also associated with LFT abnormalities and a normal lipase. Cholecystitis also typically causes right upper quadrant symptoms and signs, particularly right upper quadrant tenderness, along with an elevated alkaline phosphatase. Although it's not enough to make the diagnosis likely enough to seriously consider at this point, acute cholecystitis has been described as a cause of elevated lipase. The same analysis for cholecystitis holds true for cholangitis, with the additional negative argument that the patient has no elevation of direct or conjugated bilirubin, that is, no evidence of jaundice. The patient is simply too sick-appearing for this to be biliary colic; plus, as its very name implies, the pain from this usually waxes and wanes instead of being constant. The patient has risk factors for a splenic infarct, though this usually does not radiate to the epigastric area and is a relatively rare diagnosis; the same goes for a splenic abscess. Moving into the chest, the acuity and severity of illness rule out GERD. For acute coronary syndrome, the patient does have numerous risk factors, and the combination of epigastric pain and nausea is not an uncommon way for ACS to present, particularly in either a woman or a diabetic. The major thing which does not fit is the severe epigastric tenderness: since the epigastric pain is referred from the chest, the abdomen itself should not necessarily be particularly tender. Finally, a pulmonary embolism can cause abdominal pain, but as with ACS it would not be expected to cause abdominal tenderness; also, the patient has no major risk factors for a PE and has no shortness of breath. While it's certainly possible to present with a PE with just pain, this case is just not what a PE looks like, either in its classic presentation or even its atypical variations. Once we have exhausted the framework, we should do one more step before creating the differential: we should ask if there are any key features which have not yet been incorporated into the framework, that is, are there any key features which don't seem to impact the likelihood of any of the diagnoses in the framework. In this case there are essentially three. The first is the normal troponin and CK. So why was this piece of information not incorporated into our framework discussion? It's because they were actually misidentified as key features; that is, there are no causes of four hours of epigastric pain in which we would expect the troponin or CK to be abnormal. Even in the event that the patient had an acute MI, it really is too soon for these enzymes to become elevated, and so therefore they probably should not be key features. The second unused key feature is the guaiac-negative stool. This key feature was critical in establishing the problem representation in step three, because had she been guaiac positive, the problem representation would not have been acute constant epigastric pain associated with SIRS but would have been something more like acute epigastric pain associated with guaiac-positive stool. This in turn would have dramatically changed the framework we came up with in step four and are subsequently using now. So even though we did not use the guaiac-negative stool here in step five, it was still an important element of the presentation and definitely worth mentioning. The last unused key feature is the most interesting: it's the history of the newly drilled well. Depending on where you are in your
training this probably did not occur to you but the combination of acute abdominal pain nausea and a possible contamination of well water is all consistent with heavy-metal poisoning specifically arsenic and lead this brings up an important point if an element does not fit into the framework yet still seems to be a key feature the framework must be incomplete in this case I would add another category of diagnosis to our four existing categories of epigastric right upper quadrant left upper quadrant and chest that fifth category is acute abdominal pain secondary to systemic toxic metabolic problems that is acute abdominal pain that does not cleanly map to any one specific organ the four major members of this group are heavy-metal poisoning a rare genetic disorder called acute intermittent porphyria another virgin etic disorder called familial Mediterranean fever and finally angioedema for any of these to be the final diagnosis this patient would need to have an atypical presentation of a rare disease and thus it would be a highly unlikely diagnosis except with the fact that the patient has this unusual key feature of the new well that suggests the first item on the list so should we keep heavy metal poisoning under consideration possibly at least for now you need to figure out more about this well and its relationship to industrial activity near her home in the mountains if there is or was an active mind nearby then I would definitely still consider it and would try to arrange for the water to be tested if an alternative diagnosis was not immediately secured however in the more likely case that there was no mind or other unusual industrial activity near her home this can probably be crossed off the differential at this point also if the patient's pending CT scan showed clear evidence of an alternative more prevalent diagnosis such as pancreatitis for example that would also be sufficient to cross off heavy metal poisoning in the interest of completeness I have not mentioned the diagnosis of diabetic ketoacidosis which can definitely present with acute abdominal pain and nausea in a diabetic however the normal metabolic panel which implies a normal glucose completely rules out this possibility so after all of this discussion what's our differential diagnosis to answer that we need to decide which combination of arguments for and against the diagnosis based on the key features results in the most likely possibilities remember in part 1 I mentioned that the skill needed here is the single aspect of clinical reasoning which I think most sets apart experts from novices because it requires more than just textbook knowledge it also requires knowledge of the scientific literature lots of first-hand experience with a wide variety of diagnosis and the ability to qualitatively apply the principles of biostatistics on the fly in my opinion the most likely diagnosis for this patient is acute pancreatitis if we bring back the key features for this we see that pancreatitis is associated with her heavy alcohol use it explains her nausea her distress the vital signs her tenderness on exam leukocytosis and is the best explanation for the elevated lipase the only key feature arguing against it is the fact that the pain is not made better or worse with changes in position which as I suggested earlier is not likely to be a highly sensitive or specific finding in addition pancreatitis is a relatively common diagnosis among patients presenting to the ER with abdominal pain overall this is a relatively classic 
presentation of a common disease and therefore our leading diagnosis also known as the provisional diagnosis in addition to the provisional diagnosis the other categories of diagnosis which belong in the differential are those conditions which are common and for which this could be either a typical or atypical presentation those diagnoses which are rapidly fatal if missed for which this could plausibly be a presentation and finally any diagnosis that is specifically suggested by an unusual element of the presentation even if the diagnosis itself is rare common disorders which this could be a typical or atypical presentation include gastroenteritis food poisoning and either gastritis or peptic ulcer disease which I have grouped together because I think it's very difficult to tell the difference between them without endoscopy don't miss diagnosis for this presentation would include a bowel infarction and acute coronary syndrome and finally an unusual feature diagnosis is heavy-metal poisoning related to contaminated well water remember that the differential should be listed in descending order of likelihood and I don't mean to suggest that the various categories of diagnosis always or even usually map out as neatly as this also this list only represents my opinion I have no doubt that if a dozen of my outstanding internal medicine colleagues were to watch the case presentation and were then asked to write a 6 to 7 item differential none of the lists would be completely identical for the most part the greatest degree of agreement involves the provisional diagnosis and as you move further down the list ideas start to diverge a little bit I would certainly expect most experienced doctors to identify pancreatitis as the leading diagnosis in this case so that concludes part 2 of 3 of this video series on the clinical reasoning I hope you found it interesting and useful while the approach I presented here is not the only one in use I guarantee medical trainees that if you consciously employ it while on the wards you impress your peers and evaluators and more importantly create a more accurate differential that will increase the likelihood of establishing the correct diagnosis sooner in outpatient hospital course you
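To make the mechanics of this step concrete, here is a deliberately small sketch in Python of the "key features applied to a framework" tally described above. The diagnosis labels, feature names, and numeric weights are invented for illustration only; they are not the lecturer's actual method, and real clinical reasoning weighs features qualitatively rather than with fixed scores.

```python
# Toy illustration of tallying arguments for and against each diagnosis.
# All feature names, diagnoses, and weights below are hypothetical values
# chosen to mirror the case discussion, not validated clinical data.
from collections import defaultdict

FEATURE_WEIGHTS = {
    "heavy alcohol use":       {"acute pancreatitis": 2.0, "gastritis/PUD": 1.0},
    "nausea without vomiting": {"acute pancreatitis": 1.0, "food poisoning": -0.5},
    "SIRS physiology":         {"acute pancreatitis": 1.0, "gastritis/PUD": -1.0, "GERD": -2.0},
    "elevated lipase":         {"acute pancreatitis": 2.0, "gastroenteritis": -1.0},
    "no positional relief":    {"acute pancreatitis": -0.5},
    "newly drilled well":      {"heavy metal poisoning": 1.0},
}

def rank_differential(features):
    """Sum the weighted arguments for and against each diagnosis, then sort."""
    scores = defaultdict(float)
    for feature in features:
        for diagnosis, weight in FEATURE_WEIGHTS.get(feature, {}).items():
            scores[diagnosis] += weight
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    case_features = list(FEATURE_WEIGHTS)  # every key feature present in this case
    for diagnosis, score in rank_differential(case_features):
        print(f"{score:+.1f}  {diagnosis}")
```

The only point of the sketch is the shape of the computation: each key feature raises or lowers the probability of specific diagnoses in the framework, and the differential is the ranked result, with the provisional diagnosis at the top.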
Medical_Lectures
Immunology_Lecture_MiniCourse_8_of_14_DevelopmentSurvival_of_Lymphocytes.txt
Okay, so this, uh, the final lecture for today is going to be answering the question of exactly how we eliminate self-reactive B cells and T cells. And I will start out by saying that we probably know a lot more about T cell selection than we do about B cell selection, so therefore I'm going to do B cells first, because I want to end on a strong note showing how much we know. Okay, so questions to consider. The first question is: how does the immune system provide a high degree of sensitivity and specificity to the broad array of pathogens, making a very diverse range of antibodies, and yet not attack self? So how do you screen out those self-reactive B cells and T cells that, as you now know, we generate randomly; how do we eliminate those that recognize self antigens? Another question is: how is T cell and B cell maturation different? What do T cells have to do during the maturational process that is different from what B cells need to do, and vice versa? And clearly that's going to have an impact upon the method by which we both allow maturation to occur and prevent the attack of self. And another important question is: how does a T cell know whether it's going to be a CD4 cell or a CD8 cell? How is that decision made? You know, maybe the cell intuitively wants to be a killer cell and yet we say, I'm sorry, you have to be a CD4 cell, or vice versa, it wants to be a boy scout helper cell but we say, I'm sorry, you know, you're genetically fated to be this killer cell. How does that occur, then? What is the basis for this differentiation process? So I'm now just going to kind of step back again, and again for this lecture I have to thank Barbara Burstein, because this is based on a lecture that she gives at Einstein. I'm going to step back a little bit and just go back to what the immune system is supposed to do and how effectively it does it. So just to think in terms of the big picture: the immune system has to basically clear a wide range of pathogens, viruses, bacteria, and fungi, and in order to do that it has the capacity to call upon a broad array of immune response cells, myeloid cells such as neutrophils and macrophages, which you've heard a lot about, and basophils and eosinophils, which we really haven't talked a lot about yet, but they play a more critical role in parasite immunology. And within the immune response we can break it down into the innate immune response, which I discussed in the second lecture, which is mediated mostly by preformed receptors and factors, utilizing pattern recognition receptors that recognize motifs that are common in pathogens and that stay the same, as opposed to adaptive immunity, which is mediated by T cells and B cells that recognize a wide range of antigens with a high level of specificity. Now, the beauty of the innate immune system is that you don't have to worry about self-reactivity, because all those pathogen motif receptors are recognizing motifs that are time-tested, pathogen-specific motifs that have lived through X number of years of evolution. So you're not going to all of a sudden mutate, you know, your mannose receptor to start attacking self, because mannose is always going to be bacterial, and we're not going to spontaneously start expressing those glycan residues on our own cells such that we have to worry about self-reactivity in the innate immune system. But the adaptive immune system is going to be the critical concern; that is the trade-off we have, the balance of a more broad and more specific immune response versus the capacity of that response being targeted against our own self. And the innate
immune again as I had mentioned before antigen nonspecific pattern recognition receptors they don't change but they don't have the high level specificity of the adaptive antigen immune specific immunity the antigen recognition is by b-cells making antibodies t-cells MHC restricted and antigen presentation again is MA restricted now where do these cells come from and again some of this is reviewed but I'm sure this will this is like a secondary or tertiary response sometimes when you hear things the more it sticks so where do these cells come from well everything really comes from the bone marrow that's the source of all automatic poétique cells within the bone marrow we have the pluripotent hematopoietic stem cells that can now differentiate into precursor cells and one lineage are the lymphoid lineage and this lymphoid lineage will differentiate into NK cells T cells and B cells and these T cells and B cells form again the core or the adaptive immune response there is also a myeloid lineage which are myeloid precursor cells that can mature into monocytes macrophages osteo class actually which play a critical role in bone metabolism and neutrophils eosinophils and basophils again this is the backbone of the innate immune system and in between bridging this are the dendritic cell lineage and up until a few years ago the dogma was that dendritic cells actually would only derived from myeloid cells but it actually turns out that there are some dendritic cells that can be derived from lymphoid lineage so they're kind of like the in-between lineage dendritic cells but they also is appropriate because the dendritic cells are the bridge between the innate and the adaptive immunity and if you recall when there's an infection you need to recruit the adaptive immune system what cells you use to do that are dendritic cells because those dendritic cells as Langerhans cells are located in the skin they pick up antigen bring it into a lymph node and use that to activate and recruit the b-cell and t-cell response so this is truly the bridge between the innate and adaptive immunity and it's actually kind of appropriate that then vidiq cells can be derived both by from the lymphoid lineage as well as by the Milo lineages there are differences in terms of function of these subpopulations of dendritic cells but again it's beyond the scope of the core to go into details how they specifically differ okay is that clear pretty straightforward okay and this is kind of put it into greater detail again pluripotent metabolic stem cell the pluripotent two generator and again the cob with lymphoid progenitor be solved easily NK cells sub dendritic cells can come from that common myeloid progenitor all the myeloid lineage cells and again dendritic cells have come and here again same concept you have committed four generators these can only go in this differentiation pathway a common myeloid progenitor it cannot give you T cells and B cells a common Lucroy progenitor it cannot change its mind and give you my Lloyd cells but the pluripotent hematopoietic progenitor can go in either direction now this is due to again you've seen the slide several times before this is to come again reiterate the process of b-cell and t-cell differentiation and in terms of also the process of clonal deletion so again this is very easy to review your pluripotent metaphoric stem cell it undergoes a whole range of genetic rearrangements and now you know what those rearrangements are for the T cell receptor vdj recombination for the beta chain 
the dump and beta and VG v j rearrangement for the alpha chain and for the immunoglobulin molecule DDJ rearrangement for the heavy chain VJ for the light chain and that generation of diversity of course molecularly occurs there for yielding millions and millions and millions of D cells that each one has unique antigen specificity based on the immunoglobulin molecule expressed in the surface for the T cell millions of millions and the specificity is based on the T cell receptor has on the surface at this stage what is the iso type for example of these cells that's expressed on the surface of b-cells what type of antibody molecule is expressed on b-cells services at is early-stage i GM absolutely it's never seen T cells never seen antigen so the first immunoglobulin as I'll show you in a few minutes is up is going to be IgM during this process at this stage you have a mixture of cells the overwhelming majority of these cells probably recognize foreign antigen but still a sizable number of these cells recognize self antigen why is that because this molecular rearrangement of vdj has no idea what antigens are out there since it doesn't know what antigens are out there it can't possibly distinguish between whether it's antigen receptor recognizes the self antigen or whether it recognizes a foreign antigen in addition again every one of these cells will express thousands of receptors but every one of them recognizes exactly the same antigen again one antigen one cell of recognition capacity so now you need to call out any kind of self reactive T cells or B cells before you can allow them to get into the peripheral circulation and potentially attack self and therefore there has to be a stage which what I'm going to this lecture we're in potentially self-reactive immature lymphocytes are deleted by clonal deletion and this is a very delicate process because you don't want to go overboard if you over delete at this stage you run the risk potentially of deleting important effector cells that may be making antibodies or have T cell receptors that are going to be important in fighting infection so you can't be over exuberant in terms of just saying well whatever in doubt eliminate and before you know it you may be eliminating very critical cells in mounting a good immune response so that's a very delicate balance that you have to have finally you end up having a pool of mature naive leukocytes and again as I discussed in detail today it's encounters antigen binds it undergoes clonal amplification and it allows itself to eliminate antigen but we're going to focus on participation both for b-cells and and 40 cells and this maturational process that's occurring at this early stage okay any questions okay so now let's look at start with these cells and again this is going to review a lot of concepts the B cell development that occurs in the bone marrow is where the B cell basically generates its immunoglobulin gene that it expresses on the surface and as you know very well now what's going on in the bone marrow the first thing that has to happen is vdj recombination the immunoglobulin gene responsible for antigen specificity the vgj sequence has to occur for the heavy chain and light chain in order to provide antigen specificity that's step number one it also has to occur both for two molecules it has to occur for the heavy chain of the immunoglobulin molecule and it also has to occur for the light chain so you have to undergo vdj recombination for the heavy chain you also have to undergo vdj 
recombination for the light chain and then when both of them get expressed that's when you have a complete immunity Lagavulin expressed now in the bone marrow the only two isotypes that can possibly be expressed because there's no T cell help or antigen presentation occurring is going to be as you said before IgM and also IG d why is that because these are the first genes in line therefore when you make the transcriptions vdj and then you make a transcript with the I put the IgM and the IG D and then there's a splicing process that occurs which allows some IgM to be made and then it also allows I GD to be made without requiring any kind of genetic rearrangement in order for itg IgE iga to be made you have to physically rearrange the DNA of the chromosome and in order for that to happen you need to have T cell help Koston Vittori signal and antigen so therefore in the bone marrow in the absence of any antigen the only type of b-cells that you're going to be seeing are those that has IgM on the surface or IgM and IgD on the surface okay is that clear any questions now the immune system is actually very very tough and it basically says to b-cells it's like standards so there there are a lot of students here right and you're come here and you're told you know welcome to school we love to have you here we want you to be successful but we have standards right you have tests you have to pass the tests if you don't pass the tests well I don't know what do they do here hmm Phil you keep going it's like okay you know if you fail but you know that's fine we know that's fine you try what do they do so you know so they'll you know may failure they say you know maybe a different career choice may be the case you know the if you did a lot of the molecules exactly the same way be cells basically you're told you have a job to do your job is to make immunoglobulin if you can't make you mean a blob one you're not going to be very good as a d-cell obviously so therefore those cells are basically allowed to pass away but in addition there's another important concept which is that as I said one cell one antigen how many chromosomes do you have how many how many peers I mean how many how many alleles do you have for immunoglobulin region not a trick question how you know you have one chromosome from the mother one from the father right so how many genes do you have that could potentially encode in Novato molecules - right which means that potentially you could rearrange your maternal IgM vdj sequence and you could rearrange your paternal one right does that make sense well if we would allow that to happen how many different antonin specificities could it be still have up to two well we don't want that to happen because that would violate the rule of one antigen to one b-cell now again why that should be a very strict rule one of the reasons is in order to prevent the expression of self reactive cell or elimination of a important antigen specific cell because the other antibody recognize itself so we basically have very very focused maturation so in order to guarantee that that happens is a process called allelic exclusion which basically means that each b-cell only expresses a single immunoglobulin gene allele so even though you have to you only can express one and a simplistic view is this is a amount that that has an A and D gene and I guess this is represented by the yellow stripe and by against the grey stripe and this is like kind of a rugby kind of look and but now if you look at the B cells even though it 
could potentially have either the yellow allele or the gray allele rearranged if you look at the DES cells they will either express in a globular molecule that came from one chromosome or the other chromosome never from both chromosomes okay that that's very straightforward so apparently what seems to happen is that when you undergo to be arrangement you first try one chromosome and you can imagine how complicated the process of edj rearrangement is it's hard to understand imagine what it's like to do it so clearly things going wrong so it may be that the first time you do it in your first chromosome it doesn't work so the same way that if you failed test once you get a second chance so to be cell say Oh you got a second you got your safety plan B chromosome you can try to rearrange that immunoblot one as well if you fail for the second one that's it you're out but you could use the first one maybe but now if you successfully rearrange for the first immunoglobulin molecule it seems that factors are generated that shut off the rearrangement process so it won't happen on the second one so then the way to think about it basically is you try to basically rearrange one chromosome if it's successful it stops and inhibits rearrangement of the second chromosome if the first chromosome is not successfully rearranged then you're allowed to rearrange your second chromosome you get a second chance and then you can express an integral and molecule and both times fail then the B cell basically is no longer allowed to continue to mature okay is that clear now it's a it's a it's actually has a very high wastage process in that a lot of cells don't make it so if this is this is basically showing an illustration of what the bone marrow looks like and you have the microenvironment morgan progenitor cells are proliferating but not differentiating these progenitor cells generate the most immature B cells called pre B cells these pre B cells are starting to initiate the process of rearrangement of their heavy chains the ones that are successful can go on to the next step of rearranging their light chains the ones that are unsuccessful end up getting deleted and that's then you have these phagocytic cells that are waiting on the sides to phagocytosed these and get them out of the bone marrow and only these can go on to the next step and then once they successfully go on and rearrange their life chain now they will be immature b-cells and now they can undergo the process of eliminating any self reactive V soul so this is just showing at anatomical level how this is stratified so it's specific location specific processes are going on okay is that clear so the first step in terms of the rearrangement is going to be to read change the heavy chain of the immunoglobulin molecule and in this case you have the DJ region the first step basically would be to have a rearrangement of the D and the J and then and then following that you have d DJ arrangements again at this early pro be these are the most immature b-cells there still is no functional protein expressed because this has not yet been associated with the visa region only a DG recombination has occurred but now as it matures it now makes the appropriate variable region rearrangement so now you have the VD j full VG J variable region this canal form of transcript off of the constant region and now you have expression of the appropriately rearranged heavy chain that recognizes whatever antigen specificity is encoded in this particular video sequence however it has 
not yet rearranged the light chain so since it hasn't rearranged this light chain theoretically a heavy chain alone can't do anything but in order for it to be expressed it has to express a light change so this is like kind of paradox how are you going to deal with that you need to light chain in order to be expressed but yeah it rearranged the light chain so in order to do that the b-cells have this unique protein called a surrogate light chain that is not a real light chain it's like a fake light chain but it's similar enough in structure to allow this b-cell to be this heavy chain to be expressed on the surface in a relatively normal fashion and while this is occurring this b-cell now can send a signal saying I successfully rearranged my heavy chain now it's time to turn on the Machinery to start rearranging the light chain and now and stop rearranging heavy chain so that if this is from the first chromosome it says to the second chromosome you do not need to rearrange your IgM genes I've already successfully done it so it needs to sire your chain to be expressed on the surface to somehow generate the appropriate signals both to stop heavy chain rearrangement as well as to tell the B cell to progress into light chain gene rearrangement and this is by virtue of being expressed together with this surrogate light chain the stake light chain but it serves the purpose of providing it with it's appropriate harder okay is that clear so I mean again are you going to see any B cells with a surrogate chain in the peripheral blood what do you think who says no raise your hand the absolutes it's just a temporary place holding type of a maturation process and again at this day it's called a pre B cell so you know nomenclature again in immunology is always very can be complicated but again just to review a pro B cell is a V cell that's expressing your functional protein on the surface and I apologize because this is actually counterintuitive we think of a pro is like professional as sometimes really knows what they're doing and actually ironically for D cells pro is like the least b cell in terms of maturation so again the way I remember it is the opposite of the way normally we think about it because B cells kind of like are the opposite so the pro B cell is really the amateur B cell in a sense it has no receptor at all the previous L is right before it's a B cell right before it's a B cell it's expressing the heavy chain and this fake surrogate light chain that's the previous cell once now you express the light chain and now you're able to express a normal appearing immunoglobulin molecules now you rearrange the light chain it now seeds with the IgM you've got rid of the surrogate chain now this is called an immature V cell and it's a little B cell because it has the forward immunoglobulin molecule but it's not yet a mature B cells and what market is lacking in terms of it being a mature B cell is the IgD immunoglobulin so it defines an immature B cell is its expression of IgM alone without also expressing IgD so whatever splicing mechanism arrabal to allow i GD to be made is not yet occurring and therefore this is called an immature b-cells so just to review a b-cell that has no immunoglobulin molecules on surface what is it called a priori cell in a being blog on the molecule that has a heavy chain and a surrogate light chain what does that hold free it's before Apple B cell and a B cell that expresses IgM heavy and light chain but no IG d what's it called immature it's a little derogatory and 
once it expresses IgD what is that call a mature B cells now some of you may be saying who cares what is the difference between an immature B cell and immature B cell what is the difference between expressing IgD in terms of the ability of these cells to mature it turns out that this is really the critical checkpoint in terms of eliminating self-reactive these cells because this is the stage where the antigen receptor of the B cell is wired exactly the opposite of the way it normally is wire so normally you bind to the antigen to the immunological molecule it turns to B cell on however before idds expressed the exact opposite happens that if you bind to the immunoglobulin molecule you turn to be so off so in this case this particular immature B cell is expressing IgM it's in the bone marrow and it's looking around if there's any antigen that binds to it it sees no antigen that it binds to therefore it doesn't get turned off so what happens to this B cell is permitted to migrate to the periphery where it can now express IgD or called Delta positive and now become a fully mature B cell once I GD is expressed now the wiring is completely Swift again now keep lying to the min Blokland molecule now it turns the B cell on now the respond to antigen if in the bone marrow the IgM binds to a non porous linking self molecule and again a recurrent motif in signal transduction is if you cross link multiple surface molecules you generate a signal a very strong signal if you only bind to a single immunoglobulin molecules such as an antigen that really doesn't have that has this single Velen it's not strong enough to give a signal to tell this B cell to basically die in the bone marrow however this apparently generates a B cell that maybe not as responsible called clonal ignorant but but but the most dramatic effect in what happens in the bone marrow when the B cell immature B cell is exposed to a multivalent self molecule in this case is able to cross link the immunoglobulin molecule the immunoglobulin molecules send a signal and this basically causes the cell to die and not leave the bone marrow because we made a assumption if you've seen self in the bone marrow we want to get rid of you that's why the immunoglobulin molecule in the immature B cell is wired to give a negative signal okay is that clear however the b-cells want to provide a second chance so basically we're saying well you know you are binding to self antigen and I really should to lead you but there's still a possible way of saving you from this fate and if you recall the antigen specificity is a combination of the heavy chain and the light chain well let's say you change the light chain do you think a self reactive b-cell can be converted into a non-self reactive b-cell what do you think who says yes raise your hand why not makes a lot of sense so in essence what the body says to the b-cell is is that I am going to turn on a process to allow you to basically get a second chance to rearrange a new light chain and see if that light chain is self reactive so it's almost like a take a test you fail the test there's a makeup exam this is the makeup exam for b-cell antigen recombination so basically what happens is now the process gets reopened and it doesn't happen in the heavy chain it only happens at the light chain but now you can rearrange another VJ sequence at the light chain even the third VJ sequence expressing that light chain again if it recognizes self molecule you keep can turn on the process again obviously only have so many 
chances but again this is a way of rescuing these cells now it makes a lot of sense because you've gone to all the effort of rearranging the IgM heavy chain why should you waste all that work if the B cell can still be saved so this is kind of like a oh you know the green approach of immunology of conserving all the rearrangements that you've done and recycling it however if you keep on making life chains because maybe your heavy chain has such a high affinity for the self antigen that D'Lite jainism contributing as much ultimately again that B sub will not be allowed to leave the bone marrow and this is the mechanism by which we illuminate self-reactive these cells okay is that clear and again as I said some b-cells become an urgent where they interact with self reactive molecules in the bone marrow but did not line cross-linking a sufficient number of immunoglobulin molecules the cell self doesn't die it migrates the periphery and actually becomes anergic vsauce so it functionally becomes inactive and therefore even though it may be self reactive it won't make antibody and therefore won't cause any kind of autoimmune diseases okay so now just to kind of put the big picture of D cell maturation once you the B cells require certain growth factors bind to receptor undergo signaling and then tissue specific transcription factors are present that requires the B cell maturation to occur and again clearly I don't expect you to memorize what they are but just to make you aware the very very immature multipotent progenitors use a growth factor called flit 3 and people who are doing research may be familiar with flit 3 because that the growth factor we use one of them for growing progenitor cells and as it matures into the common lymphoid progenitor it now expresses the interleukin 7 receptor which uses these particular transcription factors pu.1 and ETP 2a as it matures further and becomes a specified B lineage cell it utilizes an EB F a new transcription factor and then as it matures further to a pro B cell now would it be for bit before discuss IgM it uses growth factor acts 5 so you can actually determine the maturational state of a b cell by looking at what transcription factor is expressing at any given time and this is just showing the temporal timeframe in terms of the specific state through maturation and one particular proteins and growth factors of being made the one I just highlighted here is called vtk b-cell tyrosine kinase or bruton's tyrosine - and the reason this is important is that there's a disease called a gamma globin evo for individuals that have a mutation in this BTK lack the ability to make any kind of inter globulin which makes sense because this plays a critical role and this whole probably throw b-cell even a lot in IgM heavy chain and as well as light chain differentiation if this can happen you won't win any functional V cells and patients with a gamma of anemia as I'll discuss in the immunodeficiency lecture this is the specific defect that they have okay any questions so just to kind of put things together after the vote b-cells like the bone marrow again in the bone marrow the labelling gene I mean globin molecule is wired so if it binds to something that recognizes it gets turned off either killed or allow the second chance of light chain rearrangement this is where you basically eliminate any self-reactive b-cells in addition there could be some peripheral tolerance and these some b-cells can't enter lymphoid follicle so the B cell does not get into a lymphoid 
follicle within a few days it basically will die however if it gets into the lymphoid follicle then it has the capacity to circulate waiting for antigen to be exposed to it to cause it to be activated and this can last for several for several weeks if it does come in contact with the antigen then it will obviously proliferate and then go on to become some of them to memory these cells class wishes cetera et cetera as we discussed earlier today and again I'll be discussing this process tomorrow and when I discuss the activation steps okay so that's it for b-cells any questions yeah everybody has of water antibodies that are around anyway so somehow this process allows for some water and twins to be generated it's an excellent point this process does allow water antibodies we generated what ISIL type do you predict those Motum antibodies would be IgM because the only way you're going to be getting IgG Auto antibodies if there's another t-cell out there that's also border reactive that can now cause that V cell to class switch because of the fact that my GM is low affinity and most of our self antigens are Univ eylandt therefore the IgM is there but it really probably doesn't do much in addition as I mentioned earlier IgM doesn't get into the interstitial tissue very very efficiently another reason why it will probably sequestered away from most of the potential exposure so it's actually 100 percent correct maybe IgM specific Auto antibodies but one of the checks that we have in the system is requiring t-cell help to class which the IgG early IgG antibodies really going to get you into trouble okay now we'll discuss T cell maturation and this is basically a picture of a transparent kid that so this is an artist rendition of using Photoshop I guess actually this is just pre Photoshop because in Life magazine of five is inside of a baby and one thing that all pediatricians know is that babies have huge sinuses because pretty much the overwhelming majority of T cell maturation occurs in the first years of life as we get older pretty much we slow down in our T cell maturation and our thumbs to shrink very very true matically this is a slide showing histology where the thymus looks like and basically consists of the cortical region which are tightly packed because it's a tremendous amount of cellular proliferation occurring and the medullary region which is a little less densely packed and again the classic hassles corpuscle that is associated with thymic tissue if you remember Scala G that was always when you look at a slide doesn't you had a hone in on today we're looking at items again just to review to the antigen presenting cell has MHz plus peptide either class 1 or class 2 cd8 specific for class 1 cd4 specific for class 2 and the T cell receptor is recognizing peptide but what this slide is teaching us is that T cells have to do something that these cells don't because all these cells need to do is generate in a blogroll molecule and that's it you don't even care what antigen and recognize it as long as it's not self but the T cell has to do something very very critical it has to rearrange his T cell receptor that it has to be able to recognize MHC molecules because if you recall it from yesterday the only way T cell is going to see antigen is in the context of an MHC molecule it has to recognize our piece of that MHC molecule if you have a t cell they have T cell receptors that don't recognize your MHC molecules it's totally worthless to you because it's never going to be able to 
functionally C antigen so therefore a critical step in the maturational process is that you have to make sure that you randomly generated t-cell receptor has to be able to recognize MHC sequences your MHz sequences do what in your own body that clear now why is that critical in this step because now the t-cell receptor has two chances to recognize well actually has you know a help if you're heterozygous right you're expressing at least six class one MHC molecules and at least six class two MHC molecules your body doesn't care which MHC molecule your t-cell receptor recognizes as long as it recognizes one of them and also doesn't care where you recognize a class 1 or class 2 as long as you recognize one of them so in essence during the process of item maturation the t-cell receptor is query in the thymus where it's exposed to all the MHC molecules and so what do you recognize if you don't recognize any you're out of luck you're not going to be allowed to proceed and you're basically deleted if you recognize a class 1 MHC molecule region then now a single gets sent into the T cell and saying you're going to become a CDA cell because that's what you recognize if the red rearrangement was one that permitted the T cell receptor to recognize an MHC class 2 sequence then your T cells tall congratulations you're going to be a cd4 T cell and now so this is how the specificity of cd4 vs cd8 is generated solely based on what MHC sequences are randomly generated TCR sequences are randomly generated in order to enable to recognize a piece of the MHC molecule okay now another important point is is that all of us have different MHC s so there may very well be that T cell receptors sequences that can't in my MHC because they don't recognize my MHC making sure very well on your MHC because it's a different sequence or vice versa so t-cell receptors that be mature very well in your body because ma see it recognizes in my body may not mature well because two different MHC so this means that even though then people TTC our repertoire were very very different depending upon what MHC molecules that they have and again as a lot of you will learning different MHC molecules can be associated with different disease courses and different autoimmune diseases or capacity to fight infection okay so the job of the thymus I'd our T sub maturation is twofold first as I said just now it's called positive selection this is something B cells don't have to undergo it has to undergo the ability to recognize one of at least these six MHC molecules that you have in your body either plus 1 or plus 2 if you fail positive selection you basically fail being a functioning T cell and you get deleted but even if you've been proud enough now to pass ma C recognition if your T cell receptor turns out to recognize some antigen that as in the V sub maturation is also considered a lethal event that the T cell is eliminated okay well you can imagine that again the wastage rate and T cells is very high and the estimate is only 10% or less of final size actually end up successfully passing for the positive selection and negative selection and leaving the thymus so if we look upon this now at the same way with the B cells in terms of what's happening to the rearrangement of the immunoglobulin molecule dude we're being 500 thousand divided into three different groups double negative and this refers to expression of cd4 and cd8 so if with finest type does not express cd4 or cd8 it's called level negative the next day this called double positive 
these are famous sites that expose both cd4 and cd8 well why do you think the thymus ID is expressing bottled cd4 and cd8 at this stage exactly it hasn't been positively selected yet we don't know what t-cell receptor MHC specifiy is going to be it could be class 1 MHC it could be class 2 MHC but therefore it has the option to go either way the t-cell receptor undergoes positive selection and if it recognizes a class 2 MHC molecule then it turns off the cd8 and becomes single positive cd4 if the t-cell receptor recognizes a class 1 MHC molecule it turns off cd4 and becomes a single positive cd8 and then now is ready to undergo negative selection to see it if it can come if it recognizes self peptide so again the double positive stage is where the positive selection is going to occur to determine what cd4 or cd8 is and what is able to recognize antigen so now if we look at genetic rearrangement the double negative t-cell is the point where the t-cell receptor is being generated and again here it's the same kind of steps you have an output chain you have a beta chain you undergo too deep the DJ beta is the first step that vdj beta and then after the beta chain rearranges here is actually expression of kind of like a surrogate alpha chain and now during this double positive differentiation the Alpha chain rearranges and now you have a fully functional t-cell receptor at during this double positive phase that can now determine whether what MHC molecule it represents able to bind to and recognize okay is that clear so clearly then if you're looking at a thymus sin did now this is divided the double negative stage is divided into four different stages in terms of expression of C 25 is the aisle to receptor of beta chain state 1 stage 2 stage 3 now expresses P TCR and then stage 40 undergo proliferation and then in the dull positive phase you can start having full TCR as well as positive selection now if you look at where this is happening again here the cortex has them adult remember the medulla is very loosely packed and the cortex is tightly packed why because that's where the proliferation is occurring so the double negative one free pressure comes out of a venial from the peripheral oils you will be integrating from the Baldomero it comes in encounters dendritic cells other cells starts maturing it encounters critical epithelial cell continues to mature now it starts rearranging its beta chain the double negative 3 now the double negative 4 as the beta chain Boyka range and now it comes into the the middle cortical area where now it's an immature double positive thymocyte undergoes a high level of proliferation expressing a mature t-cell receptor and this is where it's undergoing positive selection is also densely packed because this is where a lot of the proliferation is occurring as an L migrate back into the medullary reason it comes in contact with introduce those epithelial cell will undergo negative selection and then after it successfully passes through that it gets into the peripheral circulation leaves and then becomes a mature circulating t-cell and to kind of put the positive and negative selection into context if the T cell basically does not express any T cell receptor that's obviously a negative event because you need to have a vulnerable signal transducer in order for this cell to survive TCO is with no recognition of self-mhc so it doesn't recognize either plus 1 or class 2 MHC in a person's body again it basically gets no signal and it dies so that's basically if it has weak 
recognition of self MHC + peptide which means that the TCR is recognizing the self MHC region but not recognizing the peptide it undergoes positive selection and then survives and now it's able to now pass on into negative selection to say it is the TCR recognizing any self peptide if it recognizes strongly self MHC + cell peptide it also his quote is stimulated dying and just like the D cell the TCR is wired that a single strong signal is a negative factor as it matures and leaves and thymus it rewires the t-cell receptor that now becomes the positive signal it is in the periphery where it encounters antigen so this is basically the and again obviously if it passes both negative and positive selection immature and then can become about T helper cells CD a suppressor cytotoxic cells and Margaret to peripheral Luke look at organs it's functional because a recognized MHC is not deadly because it doesn't recognize any self-reactive antigens okay and again basically T cell development alpha beta T cells cd4 cd8 I'm not going to go into Delta Gamma and also there's a subpopulation of T cells some of you are familiar with called regulatory t-cells and I discussed that a little bit in detail in a later lecture so I'm not going to talk about it much now but sometimes things can go wrong and if there's a syndrome called Bell cardiac facial syndrome as well as DiGeorge syndrome some of you may have seen that they don't they lack of thymus therefore have no T cells there's also mutation of the common gamma chain of receptor interleukin-2 which has basically scared development some individuals lack class 1 MHC I mentioned some people like tab for example they won't have CD AIDS if you lack CD class 2 MHC s you have no cd4 expression and one that I want to talk about a little bit now is called Erie and a question that some of you may be thinking about is in order for you to delete t-cells that a self-reactive you have to see all the self proteins and peptides right well how is the thymus going to have all the possible proteins that you're going to have in your body you know myosin in the thymus there's no muscle in the thymus so how is it possible therefore at best you're only going to be exposed assignment proteins well when you leave the thymus and you get into the periphery you know you're going to see all those other proteins you've never seen and you haven't been screened against them and theoretically you should now recognize them and you should start tacking them it doesn't happen and this was really a major question of how finest maturation can do that how could it delete a t cells that recognize proteins that are not normally made in the thymus so immunologist basically a made of a story they said well magically the thymus must have this ability to express all sorts of different proteins in order to allow this to occur okay that's a nice explanation but you know this is science you have to prove it and it turns out that in fact there's a gene called a IRD called border immune regulator which actually is a protein that allows ended epithelial cells in the thymus as well as dendritic cells to express other proteins that they normally don't Express and again you have to realize that cells and the thymus have the entire genome they have no control the entire chromosome that every cell in the body has the reason that you have some specific protein expression is because of more events such as for example methylation of promoter regions or genes to turn them off but theoretically every cell the body 
as we now know can be reprogrammed to mate with step so you can take a skin cell now treat it with your protein growth factors and convert it into a pluripotent stem cells how to make all the genes possible so air does is air takes those cells in the thymus that are involved in the negative selection permits them to express genes that they normally would not Express at variable levels but an ethical level to allow peptides derived from those other proteins from the body muscle proteins cardiac proteins etc said to be expressed at high enough levels to allow T cell to see them and therefore to undergo negative selection and the proof of it is is that individuals that have mutations in air have terrible immune diseases because they have not successfully been exposed to all the self proteins and therefore now in those diamonds when those thymus had to leave as T cells they haven't been completely undergone negative selection and now they have a lot of autoreactive diseases so again this is a this is a really nice because it actually able to demonstrate the real basis for something that we knew had to happen but really couldn't explain how it was happening and regulatory t-cells are also generated the thymus they're characterized by the expression of the interleukin 2 beta receptor as well as the Fox p3 transcription factor and again I'll be discussing this a little bit more detail in a subsequent lecture but it gets to make you realize the individuals that lack Fox p3 lack T regs the T regs function as the kind of the suppressor cells to kind of damp down the immune response if you lack that they have dysregulation of your immune system and a significant amount of autoimmune because you have an unchecked immune response and also just to kind of conclude is that if t-cells sometimes develop malignancies and they can develop maluma C's in any different stage of the maturational process so developing t-cells can become malignant at the stage that their lymphoid progenitor cells and this then would give you chronic acute acute lymphoblastic leukemia v extraña cells conformed by moments and much more commonly more mature thymocytes can form the acute lymphoblastic leukemia and again this is very important because there are unique cell surface markers associated and individual they can take care of patients with leukemia routinely phenotype what the Leukemia is in terms of the t-cell maturational stage and that has very important implications in terms of staging in terms of prognosis and in terms of therapy so again this is you know the hematology taking every patient with cancer actually have a very good understanding of the t-cell maturational pathway similarly these cells also could undergo a malignant transformation at different stages so for example there's a kind of leukemia called pre b-cell leukemia where it affects the B cell stage the pre b-cell where you're only expressing the heavy chain receptor acute lymphoblastic leukemia is the same as that shown here is affecting the blue tafolla Jenner before it's become either a t-cell or ESL and and so basically that's just to make you aware that million C's you'll hear described based on what stage in the maturation process the malignancy occurred and it raishin occurred so basically just want to end by saying it's a job of the immune system B cells and T cells to develop a primary repertoire the primary location for that in terms of the maturation and elimination of what are reactive B cells and T cells or B cells in the bone marrow T cells in 
the thymus and these are the antigen specific binding for be salt even water molecule for the T cell T cell receptor plus c4 in CDA and again either they survive or they undergo apoptosis and the goal is that you don't want to be too strict because if you're too strict you can end up with a weaker immune response on the other hand you have to be strict enough to eliminate mold self-reactive cells and nipple to avoid alum unity and clearly you can't have too robust proliferation in these tissues because you want to avoid bullying and transformation so basically on what so how does he use it to provide a high degree of sensitivity specificity again by undergoing the BGA recombination and proliferation Howard T so be sold maturation different will clothe these cells use immunoglobulin molecule as receptor T cells use T cell receptor but in addition the T cell has to undergo a set another step it has to determine whether his t-cell receptor recognizes self MHC class 1 or class 2 and that determines whether and and how many powders the T cells know whether to be a cd4 or cd8 because if when undergoes the positive selection is TCR recognizes an NB C class to domain it becomes cd4 if it's TCO are randomly generated now recognizes MHC class 1 now becomes a cd8 positive T cell and that's it okay again thank you very much for your attention and let's see we have eight lectures down and we have a six more to go so it passed half Waypoint thanks a lot see you tomorrow you
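As a way to summarize the selection logic just described, here is a minimal Python sketch of thymic positive and negative selection reduced to a decision rule. The three-way "signal strength" categories and the example cells are simplifications assumed for illustration; this is a cartoon of the outcomes described in the lecture, not a quantitative model of avidity-based selection.

```python
# Toy decision rule for thymic selection, following the lecture's summary:
# no self-MHC recognition -> death by neglect; strong self-MHC + self-peptide
# recognition -> deletion; weak self-MHC recognition -> survival, with the
# MHC class recognized fixing the CD4 vs CD8 lineage.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Thymocyte:
    mhc_class_recognized: Optional[int]  # 1, 2, or None if the TCR sees no self MHC
    self_peptide_signal: str             # "none", "weak", or "strong"

def thymic_selection(cell: Thymocyte) -> str:
    # Positive selection: a TCR that recognizes neither class of self MHC
    # receives no survival signal.
    if cell.mhc_class_recognized is None:
        return "dies by neglect (fails positive selection)"
    # Negative selection: strong recognition of self MHC plus self peptide
    # while still in the thymus is a deletion signal.
    if cell.self_peptide_signal == "strong":
        return "deleted (fails negative selection)"
    # Weak self-MHC recognition without a strong self-peptide signal survives,
    # and the MHC class recognized fixes the lineage.
    lineage = "CD8" if cell.mhc_class_recognized == 1 else "CD4"
    return f"matures as a single-positive {lineage} T cell"

for cell in (Thymocyte(None, "none"),
             Thymocyte(2, "weak"),
             Thymocyte(1, "strong")):
    print(thymic_selection(cell))
```

Run on the three example thymocytes, the sketch prints death by neglect, maturation as a CD4 cell, and deletion, matching the three fates discussed above.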
Medical_Lectures
11_Biochemistry_Enzymes_III_Lecture_for_Kevin_Aherns_BB_450550.txt
Captioning provided by Dissability Access Services at Oregon State University. Dr. Ahern: Okay, folks, let's get started. We are rapidly making our way through enzymes and we're in very good shape with stuff so I will finish where I finish in the lecture is where the material will stop. That will likely be down in here. I'll repeat that. Where I stop today, that's where the material for the exam 1 will stop. I have tentatively put in a request for a review of session time for ALS4001 for Saturday at 3:00. That's my tentative time. I see frowns. I always see frowns. I don't see smiles but I see frowns. As I said, I will videotape that so if you can't make it, hopefully you'll be able to watch the videotape. As always, I recommend you come get your questions answered. That's why I have a review session. Watching other people's questions I don't think is necessarily the best way to review. If you have questions, come see me. I'll be more than happy to meet you and hopefully try to answer your questions. Last time I finished talking about perfect enzymes. Perfect enzymes are pretty remarkable things. We see, at least I hope you see why not all enzymes are perfect, that driving too fast can cause problems and that's certainly the case. We can imagine the perfect enzymes. We will see increasingly that enzymes are such powerful catalysts that cells really have to put a throttle on them. Cells need to control enzymes. We can't just let enzymes just go crazy and there are some very interesting strategies that cells use to control enzymes. We're going to mention one of those today. We'll see some others as we get going through the term. But enzymes, because they're so fast and they are so powerful, cells really do need to keep a handle on them. That's important for them to do. Turn that off, alright. So the first thing I want to talk about today is again related to some mechanisms of enzymatic action. These are for enzymes that bind to multiple substrates. Not all enzymes bind to multiple substrates. So a prime example is there's an enzyme in glycolysis called phosphoglucose isomerase and it binds glucose six phosphate and it rearranges it and makes it fructose six phosphate. No other factors involved. It's only one substrate bound by that enzyme. There's a substrate that comes in, there's a product that's released, but it doesn't have two things. Many and in fact most enzymes have at least two substrates that they bind to. A + B goes to C + D. We'll see some examples of those today. So when we have multiple substrates, one of the questions that arises is, "well is there an order or is there a specific way "in which these substrates bind?" So I will give you some examples of some interesting strategies that exists for enzymes. The first of these are what are called sequential displacement. Sequential displacement is a mechanism that really can be divided into two categories. One in which the order of binding of the substrates matters. The other the order doesn't matter. They're both called sequential displacement. So let's look at an example of an enzyme for which the order in which the substrates bind matter. We can see here that here's an enzyme that binds these guys over here. We'll specifically focus on pyruvate and NADH and if the enzyme doesn't bind the substrates in the proper order, the reaction will not go. My first question to you is can you imagine a scenario why that might be, based on things we have talked about with respect to enzymes? 
Why would this class of enzymes need to bind things in order? What have we learned so far about enzymes that would suggest that order might be important for an enzyme? Student: When it binds one specific molecule, it changes the shape so that it can bind to the other one. Ahern: His answer is exactly right. The model in which the binding of one changes the shape of the enzyme so that now a binding site for the second one becomes possible. What was the model that we talked about that had that enzyme changing shape like that? The induced fit, right? So this model is very consistent with the induced fit. When we had the induced fit, we had A, B and C. We saw A binding and that created a binding site or a better binding site for B and then for C and the enzyme could do its thing. Well in the case of this particular enzyme, and by the way, I'm going to show you a couple of diagrams. You're not responsible for the diagrams. I only show them to you so you can see something about how the enzyme works. So in this case, the enzyme binds to NADH first. The binding by the enzyme of NADH causes a shape change that now favors the binding of pyruvate. And after pyruvate has bound, the reaction is catalyzed. So very, very simple example of an induced fit happening when we have ordered binding of sequential displacement. The other mechanism that's involved, or the other possible mechanism of sequential displacement, is called random binding. And as its name would suggest, it doesn't really matter in which order these bind. So if I had this reaction, creatine + ATP, it means that ATP can bind first, or creatine can bind first. It doesn't really matter which one is the first one to hit the enzyme. Now you might look at that and say, "does that mean the sequential displacement doesn't happen?" The answer is no. Does that mean that induced fit doesn't happen? And of course the answer is no. Induced fit happens essentially with any enzyme, but the induced fit doesn't have to solely involve the binding sites of the substrates. In the first example, it did involve the binding sites for the substrates. You might say, well, how does the induced fit work on an enzyme like this? And my answer to that is that virtually every catalytic action is the product of induced fit. So when we see catalysis happening, that alone is pretty good evidence that the induced fit is occurring. So, random binding: there are many enzymes that have random binding, and it doesn't matter the order in which they bind. We'll talk about this enzyme later in the term. It's very interesting. It's a very important enzyme in our muscles. It's called creatine kinase. Creatine kinase makes this very high energy intermediate called phosphocreatine that, as we will see, has implications for us as we exercise. Yes? Student: Can you say one more time the relationship of the random model in regards to induced fit? Ahern: So how does induced fit work with the random model? The induced fit works with the random model as it works with any enzyme model, in the fact that catalysis is occurring. Catalysis depends on that induced fit. There are slight changes produced in the enzyme by the binding of the substrate. This may cause, for example, the active site to have a slightly different configuration. And that slightly different configuration might put different molecules together. I'll give you an example. So here's an enzyme that we'll talk about later in the term called hexokinase.
Hexokinase, I believe, is a random one. I'm not sure about that, but we'll say for the moment it is, because it gives a good example. Hexokinase is the first enzyme in the pathway of glycolysis and it's a really cool enzyme when we look at its structure. So hexokinase takes two molecules. It takes glucose and it takes ATP. You don't need to know that right now. So it has to bind two substrates. It binds one substrate up here, let's say this is glucose. It binds the other substrate down here. We'll say that is the ATP. And the binding of the two of them causes a conformational change in the enzyme such that these jaws literally close. So either one can bind. It didn't matter, because binding one didn't actually change the shape of the other's binding site. But what changed was the shape of the enzyme. And so those jaws closing now bring the ATP very close to the glucose that's in the enzyme and a phosphate is able to move from ATP onto the glucose. That change then induces the enzyme's jaws to open up and let go. So now instead of having glucose up here, I have glucose six phosphate. And instead of having ATP down here, I have ADP. So we see that the shape of the enzyme changed as a result of the binding of the substrates. The binding of one substrate didn't affect the binding site of the other substrate, but it did affect the enzyme. Does that make sense? So we'll see other examples of enzymes changing shape as a result of binding of substrates and, as I say, for virtually every catalysis we can think of, there are some changes happening in that enzyme that are making that possible. And again, this is consistent with the idea that this is how enzymes are able to catalyze reactions so much faster than chemical catalysts can, because the enzymes are flexible. So that's two models that are fairly straightforward to understand for binding of two different substrates. There's a third model and it doesn't really fit into sequential displacement. It also involves binding of two substrates, but in this case, it binds the substrates separately, one after the other. And in the process, the enzyme is continually changing its state. It's continually changing its state. What does that mean? Well, this class of enzymes is called double displacement enzymes. And in a double displacement enzyme, the enzyme is actually grabbing something from one of the substrates and taking it and exchanging it with another substrate. I'll explain this as I get going through it. But the enzyme exists in two states. So the enzyme that catalyzes this reaction is called by the general name of transaminase, transaminase. [spelling out transaminase] Transaminases are interesting enzymes in that they catalyze reactions that move an amine from one molecule and put it onto, in the place of an oxygen on, another molecule. And the oxygen on the other molecule is moved back onto the amine of the first molecule. So there's literally a swapping that's happening of the amine and the oxygen. So I need to explain to you now how this enzyme works. How does this enzyme accomplish this? Because it does not bind both substrates at the same time. It does bind both substrates, however. Well, let's start out with our enzyme in the state I'll call O. In the O state, the enzyme has bound to it an oxygen. It's just carrying it out here. So it's got this oxygen on itself. When it's got its oxygen on itself, it's looking to swap that oxygen for a nitrogen. So in this case, it binds to aspartate. When it binds to the aspartate, what the enzyme does is it catalyzes a swap.
The oxygen that it's carrying, it gives to the aspartate, and the amine, that is, the nitrogen that is on the aspartate, gets passed off to the enzyme. Now the enzyme is in a different state. It's in what I would think of as the N state. The enzyme lets go of the aspartate, except it's no longer aspartate; the aspartate has got an oxygen, and that turned it into oxaloacetate. So we've taken a nitrogen and we've made it into an oxygen. That's the first part of our reaction. Now the enzyme is in the N state and it wants to bind to an oxygen containing molecule. It binds to alpha-ketoglutarate and it takes that nitrogen, I'm sorry, that nitrogen that it has, passes it off and takes the oxygen away from alpha-ketoglutarate. That in turn converts alpha-ketoglutarate into glutamate or glutamic acid. So in essence what's happened is we have started with something that had a nitrogen, and it ended up with an oxygen. We started with something that had an oxygen and it became a nitrogen. Now what I've essentially done is I've taken aspartic acid and made oxaloacetate. I've taken alpha-ketoglutarate and I've made glutamic acid. So why would a cell want to do this? Well, a cell might want to do this, imagine if you will, if it has way too much aspartate but not enough glutamate. This provides a nitrogen source so that we can convert non-nitrogen-containing alpha-ketoglutarate into glutamate, and then the byproduct of that is oxaloacetate. I don't want to focus so much on the reaction as I do want to focus on the enzyme. The enzyme is existing in two states. It started in an O state, where it donated oxygen and grabbed nitrogen, and that converted it into the N state. And in the N state, it did the reverse. It grabbed an oxygen and gave up the nitrogen that it had and it moved back into the O state. So the enzyme is doing something that we call ping pong kinetics. It's going back and forth between two different states. Once it's in the O state, it's ready to go back and do another one of these. So it goes back and forth between N and O and N and O all the time. Does that make sense? Student: Ping pong kinetics. Ahern: Ping pong kinetics, yeah. Like a ping pong ball going back and forth. Student: Why doesn't it return the amine to the aspartic acid? Ahern: Why doesn't it do what? Student: Instead of carrying that ping pong ball over to the side, why doesn't it just... Ahern: Why doesn't it just put it right back on aspartic acid? Because once it's put on aspartic acid, it lets go of it. Aspartic acid goes out into the solution. So it literally will go from one molecule to the other molecule, back and forth and back and forth. Yes sir? Student: These productive reactants generally yield one specific chirality like in this case? Ahern: You always get the same chirality with this. All the enzymes that work on, essentially all the enzymes that work on amino acids are very chiral specific. You'll always get the same thing, yes. Yeah? Student: What about the transfer of the associated hydrogen. Is that actually... Ahern: Say that again now? Student: On the aspartate on the far left, on the reactant side, you have a hydrogen going back. Ahern: Yeah, I think that, that's a good question. That would be picked up from a section I haven't shown you, but that would be picked up from the solution. Student: So does it pull the oxygen or the amine group off right away? Or does it wait for the other partner? Ahern: I'm not sure I understand... are you saying... Student: As soon as it binds to an aspartate, does it pull off the amine group?
Ahern: Yes. So it's not waiting for alpha-ketoglutarate to come along. So that's the point here: it's not binding to both substrates. That's different from what we saw before. Because if it were binding both substrates, then we can imagine there might be an order or randomness to that. That's not what's happening here. So the enzyme is binding to one or the other. Once it's bound to one and does its thing, it lets it go. And that now leaves it a candidate to bind something else. These enzymes, by the way, tend to be very broad in their specificities. This could be aspartic acid, but this could also be asparagine. There's quite a wide variety of substrates they will accommodate, and the idea is that these are very important in passing nitrogens from one amino acid to another. And nitrogen balance and nitrogen movement in the body is very important. Nitrogen is a precious resource. Yes ma'am? Student: So is it possible for the enzyme to bind [inaudible] acetate and invert it back Ahern: Okay, her question basically is, "is this reverse reaction possible?" and the answer is absolutely. Any enzymatic reaction is reversible and what will be driving it is concentration. Student: Let's say an enzyme [inaudible] Ahern: It can happen with the same enzyme, yeah. It's independent of the enzyme itself. Student: What's the name of this enzyme again? Ahern: The name of this enzyme is called transaminase. Student: So just to make sure I understand, it binds to aspartate and removes the amine group. Ahern: Yup, binds to the aspartate and takes the amine off. Student: It kicks off what's left of the aspartate. Ahern: It puts the oxygen on aspartate. Student: Where did it get the oxygen from? Ahern: It's carrying it. That's in the O state. When the enzyme is in the O state, it has an oxygen on it. Student: which it got from ... Ahern: A previous oxygen. It's always going back and forth between N and O. Student: Is there not sort of a chicken-egg question a little bit? Ahern: Not really, because the first time the enzyme worked, it had to get one or the other. Then the rest of its life it's going to go back and forth, back and forth. So it's not really a chicken or egg question, though I see why you might think it is. We could debate that I suppose, right? Did the oxygen come first or the nitrogen come first? Extra credit, right? Other questions? Alright, so that mechanism is called, as I said, ping pong kinetics. It's also called double displacement and both are equivalent. Now I mentioned in the beginning of the lecture that enzymes, because they are so powerful and so able to make tremendous changes in a very short time, cells have to control them. And there are about three different control mechanisms that I will talk about this term, and we're getting ready to encounter the first of them, and the first one is called allosterism. Allosterism. [Spelling allosterism] An enzyme that exhibits allosterism is called allosteric. And that's what you see right there. Right there. Now, what does this mean? I need to tell you what this control mechanism is. We'll talk next term about the synthesis of cholesterol. Cholesterol synthesis is very complicated. There are about 25 enzymes that are involved in converting a very small molecule, ultimately by putting a lot of them together, into cholesterol. Making cholesterol is very energetically intensive. It takes a lot of ATP to do that, and so cells don't want to be making cholesterol if they don't need cholesterol.
So they have something called a feedback mechanism and we'll talk about another feedback mechanism next week. But they have a mechanism called feedback in which cholesterol which is the end product of this very long pathway, binds to one of the first enzymes in the pathway and when it binds to that enzyme, it causes the enzymes shape to change very slightly such that it's much less active. Now I'm going to give you a definition of what I've just told you. Allosterism is a property where a small molecule interacts with an enzyme and affects its activity. It's a property where a small molecule interacts with an enzyme and affects its activity. In the example I just gave you, cholesterol is the small molecule. An early enzyme in the pathway is the enzyme and the effect is to reduce the enzyme's activity. Some enzymes will have activity reduced. Some enzymes will have activity increased. And we talk about this, we commonly talk about it as if we are turning on the enzyme or turning off the enzyme. You need to realize that we don't have absolutes like that usually in the cell. Usually we think of it more like the volume. We're turning up the volume or we're turning down the volume. And that's what happens usually with allosterism. Did you have a question? Student: Naw. Ahern: Or you just got tired? Student: Well here's the question. So it doesn't have to be the product to allosterically interact with... Ahern: So his question is "does it have to be a product to allosterically interact?" The answer is no, it does not. We will see an example where another molecule affects an enzyme and it's not a product in any way. Yes? Connie Student: When you say turn off and turn on vs. turn the volume off and the volume on, is each enzyme completely turned off? Ahern: Her question is a good one, you're getting a little ahead of me. "Can each enzyme be turned on or off?" and the answer is "no." Most enzymes, this may surprise you based on what I've been telling you, but most enzymes are not controlled. "That doesn't make any sense. "You just said enzymes are powerful, "they can cause problems, they could do too much "and now you're saying that most enzymes are not controlled?" And the reason that most enzymes are not controlled is because cells are very picky in the controls that they have. When we study metabolism, what we will see is that metabolism occurs in what we call pathways. Enzyme A converts molecule 1 into molecule 2. Then molecule 2 becomes a substrate for enzyme B that converts it into molecule 3. We can see a connectedness of these pathways. 3 goes to 4, 4 goes to 5, 5 goes to 6, etc. Cells are efficient in that they will frequently control the first enzyme in the pathway. If they control the first enzyme in the pathway, it doesn't matter how much of the other enzymes that you have because there's not going to be hardly an 2, or 3 or 4 or 5 or 6 there. So by controlling the major enzymes of the pathway, which are usually the first ones, cells are efficient in being able to regulate what enzymes do. You had a question back here? Student: What does it mean to say small molecules? Ahern: "What does it mean when I say small Molecule?" It's a non-protein. Molecules, we can think of these as substrates. Substrates in general are small in comparison to proteins. Cholesterol is a pretty good sized molecule but it pales in comparison to the size of a protein. You have a question here? Student: [Inaudible] Ahern: The question about what I mean by control. 
The answer is I mean they are not being able to have their activity regulated. On or off, up or down, whatever we want to say. Student: So a cell can turn off like you said but it won't necessarily turn off the first one, it can also turn off the third one because it wants to make whatever the first two enzymes are making? Ahern: Basically, her question is "does it always have to be the first enzyme that's regulated?" and the answer is it doesn't always. It's not always that case. We'll see an example when we talk about glycolysis. Glycolysis is the break down of sugar. It's a very odd pathway in its regulation. There are reasons why it is that way. But it doesn't solely regulate the first enzyme in the pathway. Yes sir? Student: When talking about cholesterol regulating one of its earlier precursor enzymes, would the terminology be negative feedback loop or down regulation or how is that generally described? Ahern: For cholesterol regulation? Student: Yeah for the one you described. Ahern: For cholesterol's regulation, the term I use is called feedback inhibition. But we'll talk more, actually, let me say, just to say that, I will talk about a specific feedback inhibition next week that will be a bit more relevant for us because we haven't talked about cholesterol. Good questions. Good thinking about this. I bring this up because when we look at the kinetics of allosteric enzymes, we see something very interesting. If we plot V vs S and we have an allosteric enzyme, here's what we see. What does this look like to you? Have you seen a curve like this before? You saw it for hemoglobin, right? You saw it for hemoglobin and, "but Kevin, hemoglobin isn't an enzyme!" What were we plotting when we were doing hemoglobin? This is why the axes are important. What were we plotting when we were doing hemoglobin? We're plotting oxygen concentration so that would behave like a substrate. That's similar. What are we plotting on the Y axis? You won't remember, that's fine. We're plotting what percentage of the enzyme is saturated is what percentage of the enzyme is bound with oxygen. Here we're plotting reaction velocity. Those are very different things, so why does this curve look like the curve for hemoglobin? Is it a coincidence? I'll give you a hint. It's not a coincidence. Any thoughts? Elizabeth? Student: [Inaudible] Ahern: Is it similar to the cooperative binding? There is some parallel to the cooperative binding. That's not really the answer to the why it is sigmoidal. There is similar behavior to cooperative binding. We'll see shape changes, we'll see that next week. I tell you what I'm going to do. I'm not going to answer that question. I'm going to leave you guys to think about that and that will be your extra credit on the exam. What was it? Why does this plot resemble the plot of hemoglobin? And I'm giving you a hint. It's not just the cooperativity. So that will not be the answer. I'll give you another hint. The answer lies in what the Y axes are telling us. In this case, we're looking at velocity. In the case of hemoglobin, we're looking at percent bound with oxygen. I've already told you your extra credit question for the exam. You can consult. I will not tell you the answer if you consult with me. I would ask you to not consult with each other. You want to consult with people, that's fine. But I want each person doing it on their own. Not a group project for example. It's kind of a fun activity. 
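A minimal numerical sketch, not part of the lecture and not the answer to the extra credit question: it assumes the standard Michaelis-Menten rate law for an ordinary hyperbolic enzyme and a Hill-type rate law purely to generate a sigmoidal curve for comparison, with made-up values for Vmax, the half-saturation constants, and the exponent n.

def michaelis_menten(s, vmax=100.0, km=2.0):
    # Hyperbolic kinetics: v = Vmax*[S] / (Km + [S])
    return vmax * s / (km + s)

def hill(s, vmax=100.0, k_half=2.0, n=3.0):
    # Sigmoidal (Hill-type) kinetics: v = Vmax*[S]^n / (K^n + [S]^n)
    return vmax * s ** n / (k_half ** n + s ** n)

for s in [0.5, 1.0, 2.0, 4.0, 8.0]:
    print(f"[S]={s:4.1f}  hyperbolic v={michaelis_menten(s):5.1f}  sigmoidal v={hill(s):5.1f}")

# At low [S] the sigmoidal curve lags well behind the hyperbola;
# both approach Vmax at high [S].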
Now, the last thing I want to talk about relevant to our enzymes is a very important consideration for drugs and enzymes. So when we think about medical applications or medical implications of enzymes, we think about, "well how do I inhibit enzymes?" Because inhibiting enzymes is a way that many drugs will actually work. If we examine the different types of inhibition that can occur, there are three main ones that people talk about. And we're only going to talk about 2 of them in this class. We're going to talk about what's competitive inhibition and non-competitive inhibition. We're not going to talk about un-competitive inhibition because I find that students find it confusing I don't think we really need to go into that to understand the basic principles of inhibition. Let's think about what happens with competitive inhibition. The word competitive tells us something. Something is competing with something else. In the case of competitive inhibition, we have a substrate that looks like this in green. It's shown here in green. The competitive inhibitor is a molecule that in this case resembles the normal substrate but it has something different. It has something different about it. Now, it's similar enough to the normal substrate that the enzyme binds it without even thinking about it. It binds it. But when it binds it, it's just kind of stuck there. It doesn't do anything on it. Whereas the case of a normal substrate, the enzyme catalyzes a reaction on it. Now I want to emphasize that these two methods that you see on the screen are reversible. They are not covalent, they are reversible. That is the enzyme can bind the competitive inhibitor but it will also let go of it. If it bounded and did not let go of it, we would have a different kind of inhibition that I'll talk about later. But both competitive and non-competitive involve reversible inhibitors. The enzyme can bind them, the enzyme can let go of them. Let's think, so that's a general thing. I'll talk about the kinetics in a second. By contrast, there's a very different mechanism called non-competitive inhibition. And it's depicted on the bottom of the screen. In this case we have a, actually this is uncompetitive, in this case we have an enzyme that binds to a normal substrate, here's the same enzyme, but instead of the inhibitor resembling the substrate as it happened with competitive, the non competitive inhibitor binds to a different portion of the enzyme. And you can see what's happened with this non-competitive inhibitor. It is bound to a different portion of the enzyme. It didn't resemble the substrate at all and it caused the enzyme to change shape and the enzyme no longer functions. That's a very fundamental difference between competitive and non-competitive inhibition. In competitive inhibition, the inhibitors will almost always resemble the normal substrate. In non-competitive inhibition, there will almost always not resemble the normal substrate. And as a consequence, they work in different places. Competitive inhibitors work at the active site, non-competitive inhibitors work at other sites on the enzyme. Yes, sir? Student: Does the substrate have to be in the enzyme for the non-competitive inhibitor to work? Ahern: Does the what? Student: Does the substrate have to be in the enzyme for the non-competitive inhibitor to work? Ahern: Does the substrate have to be in the enzyme for the non-competitive inhibitor to work? No it does not. It may actually prevent in some cases. In this case, we see it bound. 
But it could for example prevent the normal substance from binding. Yes, sir? Student: [inaudible] Ahern: I'm not going to talk about the un-competitive because it actually is very different and involves action of catalysis. I can't do that here, no. I can tell you separately if you'd like to talk about. Student: Oh yeah, I had a question about it to. I wondered, does it require energy to reverse non-competitive than it would require to reverse competitive? Ahern: Her question is "does it require energy to reverse these?" And the answer is energy is not involved. Energy is not involved. So we're thinking of either competing for the active site or binding something else. That's the only two possibilities that we're going to consider here. Let's think about this. Here's an example of a competitive inhibitor. Methotrexate is a man-made drug. Dihydrofolate is a normal substrate for an enzyme that's involved in nucleotide metabolism. Our cells need to use dihydrofolate ultimately to make nucleotides. If I give cells methotrexate, which resembles dihydrofolate, that same enzyme that uses dihydrofolate will bind to methotrexate and will not function. They're competitive inhibitors. They both bind to the active site of the enzyme. The enzyme can catalyze the reaction using this guy. It cannot catalyze a reaction using this guy. If I treat cells with methotrexate, these cells will ultimately die. If I don't take it away. If I don't take it away, these cells are going to die. Methotrexate is used in some types of chemotherapy. Because cells that can't make nucleotides can't divide and die. If I just gave it to a person and I didn't give them anything else, what's going to happen? They're going to die and that's the end of it. However, if I give them methotrexate for a short period of time, ala chemotherapy, and then take it away by flooding the cells with a normal substrate, then what happens is the cells that divide the most rapidly are the most sensitive. They have the greatest need for nucleotide, this might be a cancer cell for example, and so I have selectively killed rapidly dividing cells like cancer cells. I may also kill other cells that are rapidly dividing like hair cells or intestinal cells. And so use of this drug may have very nasty side effects for people taking chemotherapy. There's many other examples of chemotherapy but you can see how and why people might lose their hair, why people might feel very nauseous because they're losing intestinal lining and it's not being replaced properly. But this drug is very effective on some types of rapidly growing cells. Methotrexate is used for other purposes as well so I don't want to say it's only a chemotherapy. Dose is going to be important. Dose is going to be important. Small quantities that's used to treat certain problems. That's a competitive inhibitor. Let's think about what happens with a competitive inhibitor. This figure I don't like. It's too confusing in my opinion, so we're going to make it simple. We're going to focus on this green line and this black line. And we're not going to focus on these other two green lines. We're also not going to pay any attention to this up here. I'm going to talk you through what happens with competitive inhibition. Alright? You've seen that enzymes that are not allosteric enzymes behave with a hyperbolic plot when I do V versus S. Velocity versus substrate concentration. Relative rate and velocity. 
If I take, and I'm going to go back and describe that reaction that I described to you before where I did my experiment. I said if I do a V versus S plot, what did I do? I took 20 tubes, I put the exact same amount of buffer, the same amount enzyme and I put varying amounts of substrate. Each tube had a different amount of substrate. And why did I put only the substrate varying? What did I say? I only want to have one variable. If I had 2 variables, I've got a problem. So I want to measure the inhibition of this enzyme using an inhibitor. Am I going to put varying amounts of inhibitor or am I going to put the same amount of inhibitor? I'm going to put the same amount of inhibitor, right? Each tube is going to get the same amount of inhibitor and the only variable is going to be my substrate concentration, right? Let's think about this for a second. Let's imagine when I start my experiment, my tube number 1 has, let's say, 10,000 molecules of substrate. And into each tube that I'm doing of my 20 tubes, I put in a hundred thousand molecules of inhibitor. That tube #1 was going to be the most likely thing that the enzyme is going to see. It's going to see the inhibitor because I've got 10 to 1 times the inhibitor, right? Tube #2 I increase the S. I increase the S and now I've got 20,000 molecules of substrate and 100,000 molecules of inhibitor again. It's going to be twice as likely as the first tube that the enzyme's going to find substrate but it's still going to be much more likely it's going to find inhibitor, right? I keep adding, I get to the 20th tube, and by the 20th tube, I've got 2 million molecules of substrate and 100,000 molecules of inhibitor. Now what's going to be the most likely thing the enzyme's going to find? It's going to be the substrate by a factor of 20 to 1. Right? If I go far enough out, I might have a thousand or ten thousand times as much substrate as I have inhibitor. 99.9% of the time, the enzyme, when I get to high substrate concentration, is going to be finding substrate and .1% of the time, it might be finding inhibitor. Everybody follow that? At very high concentrations, that's what's going to happen. Just a second, okay? What's going to happen to the V max? It turns out because I'm competing, when I'm competing, my substrate is winning the race. It's out competing the inhibitor because the concentration factor is beating it out. Everybody follow? In essence, 99.9% of the enzyme out here is active. I can't tell them apart from 100%. I can't measure that accurately. So when I'm measuring competitive inhibition, V max stays the same. It stays the same. It might take awhile to get out here but ultimately it will be out here. Did you have a quick question? Student: [Inaudible] Ahern: "Do we have to account for the different affinities between the inhibitor and the enzyme?" At one level yes, but not for a V vs S plot, no. If I said to you, if I look at tube #1, what percent of the enzyme would you say is inactive most of the time? I had 10,000 versus 100,000. I might only have about 10% of the enzyme active. Right? Because 90% of it's going to bound with an inhibitor and when it's bound to an inhibitor it's not doing anything. Everybody follow? I might expect that that velocity would be lower because velocity is going to depend upon how much enzyme they got active. Out here, I got 99.9% of the enzyme active. It doesn't matter. It's essentially 100%. I've out competed the inhibitor and that's the important part of competitive inhibition. 
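A minimal numerical sketch of this argument, with illustrative numbers only (the inhibitor concentration and Ki below are made-up assumptions): it uses the standard competitive-inhibition rate law, in which the inhibitor multiplies the apparent KM by (1 + [I]/Ki) while leaving Vmax untouched.

VMAX, KM = 100.0, 2.0   # made-up kinetic constants
I, KI = 10.0, 2.0       # made-up inhibitor concentration and inhibition constant

def v_uninhibited(s):
    # Michaelis-Menten: v = Vmax*[S] / (Km + [S])
    return VMAX * s / (KM + s)

def v_competitive(s):
    # Competitive inhibition: apparent Km = Km*(1 + [I]/Ki); Vmax unchanged
    km_apparent = KM * (1.0 + I / KI)
    return VMAX * s / (km_apparent + s)

for s in [1, 10, 100, 1000, 10000]:
    print(f"[S]={s:6d}  uninhibited={v_uninhibited(s):5.1f}  with inhibitor={v_competitive(s):5.1f}")

# At low [S] the inhibited velocity is far below the uninhibited one,
# but at the highest substrate concentrations both are essentially at Vmax.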
Competitive inhibition: the substrate outcompetes the inhibitor ultimately at high concentration. The substrate wins. So V max does not change if I compare uninhibited vs. inhibited enzymes. What happens to KM? Good thing to think about, KM. What's going to happen to KM? Is it going to be the same also? Nobody wants to stick their neck out for that one. Let's think about what this means. Here is our V max of 100, let's say. That means half of V max is going to be right here at 50, right? Because, yeah, half of V max is there because for KM, I have to measure at the velocity of V max over 2. So there's V max over 2 right there. What is the substrate concentration, what is the KM, for the uninhibited reaction? It's right here. What's the KM for the inhibited reaction? It's over here. What has happened to the KM? It's gone up. It's gone up. So when we have competitive inhibition, KM increases. V max does not change. Everybody with me there? Questions? With competitive inhibition, V max does not change. KM increases. Now I think you should be able to logic that out in your heads. I think that's important to be able to think through because it saves you having to memorize something else. Now, this stands in contrast to non-competitive inhibition. Let's look at non-competitive inhibition. Non-competitive inhibition: they're not competing. I told you in the case of competitive inhibition that the reason that the substrate won was because the concentration of the substrate was outcompeting the inhibitor. In non-competitive inhibition, the name tells us they're not competing. Nothing stops the non-competitive inhibitor from binding to the enzyme. In the first case, the substrate could outcompete the inhibitor; they were fighting for the same place. In non-competitive inhibition, we don't see that fight going on. Let's imagine, if you will, in non-competitive inhibition, that I have 100,000 enzyme molecules in each tube. Every tube has the same amount of enzyme. And I put in my 10,000 molecules of inhibitor. What's going to be the percentage of the enzyme that's going to be inhibited in tube A? What percentage will be inhibited? 10%, right? One in 10 will be inhibited. 10,000 of those are going to bind to those 100,000 enzymes. 10,000 enzymes are going to be inactive and 90,000 are going to be active, right? What's going to be the case in tube #2? Exactly the same thing. Because I had the same amount of enzyme, I had the same amount of inhibitor, and in tube #2, I've got 10% of my enzyme inhibited. And tube #3, tube #4, tube #5, and all the way up to the highest concentration of substrate that I use, I always have 10% of my enzyme knocked out. Now, we talked about V max. I said something about V max, that it wasn't specific for an enzyme. What did I say about V max? What did it depend on? The amount of enzyme that I had. Right? So if I did a reaction, if I did a series of V vs. S with a given amount of enzyme, I would get my plot. I would get a V max. If I did a different set of reactions where I only used 90% of the enzyme, would my V max be the same? It would be lower. Non-competitive inhibition: the V max is lower because you have a fixed percentage of the enzyme that is inhibited. The other side of this will surprise you. And I'm not going to go through it here, but the other side of it is that in non-competitive inhibition, the KM stays the same. How can that be the case? Well, let's think about it for a second. Do you suppose what I said about KM, is it a constant for an enzyme or not? Yeah it is. Yeah it is.
KM is a characteristic of an enzyme. For a given set of conditions, the KM will be the same independent of how much enzyme I use. That was one of the beauties of KM. It was a characteristic of an enzyme. I can compare KM values between enzymes. In fact, we did that the other day. We compared KM values of enzymes. So the KM I get if I have 100% of the enzyme will be exactly the same KM I would get if I had 90% of the enzyme. KM does not change in non-competitive inhibition. Now, the last thing I want to say, and I know it's a little rushed, but I want to get it in here for this exam. Student: On that graph... [inaudible] How would you... Ahern: Well, I can point, this one has got a lot of inhibition that's gone with it. So I don't like the way this graph is drawn. Come see me separately and I'll show you. But the argument is that the KM is independent of the amount of enzyme that I use. Student: Yeah, I understand. It was just the graph. Ahern: Now, you've learned about Lineweaver-Burk plots. I want you to think about what those mean relative to these types of inhibition. Here is a Lineweaver-Burk plot that shows an uninhibited reaction in black and an inhibited reaction in red. I've plotted the same data, taken the inverse of everything, and look what's happened. They cross at the Y axis. If you recall, the place where a Lineweaver-Burk plot crosses the Y axis is one over Vmax. We're looking at competitive inhibition because Vmax doesn't change. The inhibited and the uninhibited reactions will have exactly the same value of Vmax. What does change is KM. KM gets larger and that means minus 1 over KM gets closer to zero. This plot is what we would see for competitive inhibition if we did a Lineweaver-Burk plot. By contrast, if we did a non-competitive inhibition reaction and compared it to a normal reaction, we see that the two lines cross at minus 1 over KM. That's not surprising because the KM values for those two are the same. Why is the red higher? We're plotting one over Vmax, not Vmax. One over a smaller number gives you a larger number. So this is what the plot looks like when I do that reaction for a non-competitive inhibitor. Questions on that? Shannon? Student: Can you say where that crossover is? Ahern: Minus 1 over KM, right there. Because KM is the same for the two. That's where the material for the exam will stop and I will see you guys on Friday. I will announce the time of the review session on Friday for sure also. [END]
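A minimal sketch tying together the inhibition kinetics just described, with illustrative numbers only (Ki and [I] are assumptions): the Lineweaver-Burk transform is 1/v = (KM/Vmax)(1/[S]) + 1/Vmax, so the y-intercept is 1/Vmax and the x-intercept is -1/KM. Competitive inhibition raises the apparent KM and leaves the y-intercept alone; the simple non-competitive case described above lowers the apparent Vmax and leaves the x-intercept alone.

VMAX, KM, I, KI = 100.0, 2.0, 10.0, 2.0   # made-up values
factor = 1.0 + I / KI

def lineweaver_burk_intercepts(vmax, km):
    # Returns (y-intercept, x-intercept) of the double-reciprocal plot
    return 1.0 / vmax, -1.0 / km

print("uninhibited     :", lineweaver_burk_intercepts(VMAX, KM))
print("competitive     :", lineweaver_burk_intercepts(VMAX, KM * factor))   # same 1/Vmax, x-intercept closer to zero
print("non-competitive :", lineweaver_burk_intercepts(VMAX / factor, KM))   # larger 1/Vmax, same -1/Km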
Medical_Lectures
17_Biochemistry_Carbohydrates_II_Lecture_for_Kevin_Aherns_BB_450550.txt
Kevin Ahern: Should we just call off class today? What do you think? Student: Yeah! Kevin Ahern: And then go twice as fast on Monday? [laughter] Until we get to Monday, right? I'm going to get through party pretty much carbohydrates today. I'm not sure if I'll finish it completely, but if I do, great. If not, then we'll... go twice as fast on Monday! [laughter] Either way, you lose, right? Last time, I said just a few words about steric hindrance and its effect on the conformation that sugars may have. I talked about the boat form. I talked about the chair form. There are similar conformational considerations on five-membered rings, as well as for six-membered rings. You are not responsible for them—I'll show them to you because they're a little harder to conceptualize, but I think of them as sort of like an envelope. There's the top part of the envelope there, and where the top part of the envelope is up or down, but that's really, we need space models, I think, to have a better understanding of that, so we won't mess with those guys. As I alluded to earlier when I talked about carbohydrates, in general, we discovered that they come in a variety of forms and some of those forms include chemical modification to the carbohydrates. One of those common modifications is an oxidation that can occur. Oxidation is actually used in some assays to measure the presence of certain carbohydrates. One of these is an oxidation reaction, and the oxidation reaction is shown on the screen. This oxidation reaction happens in the presence of what are called "reducing sugars." So reducing sugars are sugars that are easily oxidized. Why do we call them reducing sugars? Well, because they are reducing something else. They are, in fact, reducing, in this case, a copper ion from the +2 state to the +1 state, and that means an electron has to have moved from the reducing sugar to the copper ion as we see here. That change of copper's configuration actually changes the color of the solution that it's in, so we can actually monitor the color and the amount of color change that occurs as a measure quantitatively of the amount of a reducing sugar that is present. Now, reducing sugars are, as I say, called that because they are easily oxidized. Notice this guy is an aldehyde, and I hope you remember from your organic chemistry that aldehydes are oxidized more readily than ketones are. In general, we'll see sugars like glucose will be much stronger reducing sugars than sugars like fructose, for example, which is a ketose. Fructose does or can, to a limited extent, act as a reducing sugar, but for all practical purposes it's not a reducing sugar, certainly not in comparison to glucose. The oxidation of an aldose, like glucose, creates a carboxyl group as a result of that oxidation and makes an acid where there was an aldehyde. That is obviously a fundamentally different structure than what we have with a standard aldose. Notice that in order for this oxidation to occur, the sugar must be in the straight-chain form because in the ring structure we don't have an aldehyde, we have a hemiacetal, and that hemiacetal is not oxidizable, whereas the aldehyde is. One of the things I did not tell you last time about the straight-chain form, I showed you how that a sugar could go from ring to straight-chain and back to ring and so forth, but I only very briefly mentioned the fact of something that prevents that conversion, and that things that prevent that conversion are alterations to the hydroxyl on that anomeric carbon. 
So if we alter this hydroxyl right here, in any way, then we will not be able to go to the straight-chain form, and, as a consequence, we would not have a reducing sugar. There are quite a variety of modifications that can happen to sugars, and, no, I'm not going to ask you to know all of these in any stretch of the imagination. But I do point them out to you because you'll see them in a variety of biochemical molecules. Fucose is a modified sugar and you'll noticing that it's lacking in OH or a CH2OH on this last terminal carbon, up here. You'll also notice that it's in the L configuration. I said that we predominately see things in the D, but we do see some things in the L configuration. Fucose is one of those that we find in the L configuration, and you'll notice that the L configuration means that that last carbon goes down instead of going up. So the D carbons are always going up, the L carbons are always going down, in that ring structure, as you see there. The addition of N-acetyl groups to the ring of sugars does, in fact, commonly occur, and we'll see a couple of examples of that today. There are polymers that we will talk about later that occur in nature of modified sugars, like N-acetylglucosamine, and these polymers have interesting properties. The exoskeleton of insects, known as chitin, for example, is a polymer of this guy right here. You'll notice that these alterations that have occurred in both of these cases did not affect the anomeric carbon. So if I were to ask you if this guy would be a reducing sugar, I would hope that you would tell me "yes," because the anomeric carbon is still intact and still could go back to the aldehyde form. Whenever we alter the anomeric carbon, in any way, we create a compound known as a glycoside. So the alteration of the anomeric carbon creates a glycoside, and glycosides have many shapes, many forms. They are natural compounds. Some of them are very nasty. Some of them are very innocuous. So the term "glycoside," per se, doesn't have any negative connotation, it's just that some really nasty compounds are glycosides and others are not. A common non-nasty compound that we see is when we link together sugar subunits. When we link together two different, or in some cases hundreds of different or thousands of different sugar subunits together, most commonly the linkage will occur through the anomeric carbon, creating a glycosidic bond. So a glycosidic bond occurs when we have a glycoside, and you'll notice that we call this an alpha-1,4-glycosidic bond. It's named from left to right. Here's the alpha configuration. This is, obviously, carbon number 1. So it's alpha 1, and the "alpha" means that that's in the down position, going over here to position number 4, over here. Another modification that we see on sugars, and we'll see this especially as we start talking about their metabolism, is that of phosphorylation. The addition of a phosphate to a sugar has the effect of actually increasing its energy. So sugars and modified sugars that have phosphates on them will have higher energies than those that don't. That's partly because the phosphate itself is fairly negative and really likes to be released away from all these hydroxyl groups. We'll talk more about those in a bit. So those are the modified monosaccharides. The disaccharides, as its name implies, contain two sugar residues. Your book has one of the most idiotic possible structures it could conceive of for sucrose, so don't even look at this sucrose. It will confuse you. 
I'll show you in a second a much better figure for that. The reason it's stupid is that they had to flip this guy out to actually draw it in this way. The normal configuration of this has, or the normal way to draw it, actually has the fructose underneath the glucose, but they're saving ink by doing this. It's a really dumb structure. Well, the important thing that we look at in this is we see, again, we have glycosidic linkages, in this case, of sucrose. Sucrose is comprised of one glucose unit linked to one fructose unit. You say, "Whoa! "Am I going to have to learn all that nomenclature?!" Well, not really. I mean, alpha-D-glucopyranosyl-1-2-beta-D-fructofuranose. Whoo! A mouthful, right? I think if you know that glucose and fructose are joined together to make sucrose, that's pretty good, and I think that you will need to know the structure of sucrose, so I'll show you a much better structure of that in a second. So sucrose is one of the sugars that you'll need to know the structure of, but this structure will definitely confuse you. There are other disaccharides that are important in nature. Lactose is one of them. Lactose is also known as milk sugar. It's also a sugar that's very commonly confused by students with a molecule known as lactate. Lactate is not a disaccharide. Lactate is a byproduct of metabolism. We'll talk about it later. Don't confuse lactose and lactate. They are very different molecules. Lactose is a disaccharide comprised of one unit of galactose, seen on the left, linked to one unit of glucose. We can see that the linkage here is a beta-1,4, meaning this bond would go up to an oxygen and then link to the down on the glucose, as shown over here. Would you say that lactose is a reducing sugar or not a reducing sugar? It is. One glycosidic bond, but one anomeric carbon remains open. This guy could still go to straight-chain, still get to the aldehyde form and still become oxidized. Another compound, maltose, is comprised of two molecules of glucose, linked alpha-1,4, as we can see here, and I do think that you should be able to at least illustrate an alpha-1,4, or an alpha-1, whatever, bond, if I ask you such a thing on an exam. Yes? Student: So the first alpha or beta refers to the linkage, and the second, in, like, in the really long word... Kevin Ahern: Mm-hmm. Student: so it's alpha-D-glucopyranosyl, so that first alpha refers to the bond going down, correct? Kevin Ahern: This is referring to the bond going down. That's correct, and, as I look at this, you know, there's another error? That is missing a hydroxyl. Student: Yeah, that's what I was wondering. Kevin Ahern: Yeah, yeah, yeah. Student: Is that supposed to be ... Kevin Ahern: That should be a hydroxyl, right there, instead of an H. Student: Which is what makes it a beta. Kevin Ahern: Yeah, yeah, yeah. Student: Okay, that's why I was confused. Kevin Ahern: So this is a beta. She's exactly right. This is a beta. That should be an OH there. The book is really bad. They decided to randomly change all your figures for you. That's why I say, don't even look at that structure! I hadn't even noticed that part. Let me show you a much better structure for sucrose... a very expensive drawing. [laughter] I didn't save any ink doing this, but I will tell you that I think it actually much better depicts the way that sucrose actually looks. Here is the fructose. There is the beta. Well, actually, no, I take it back. You know, that other structure is right. Hold on. Let me go back to... 
This is why that structure is stupid. The structure is right but I see why you're confused. Notice that this is 1, 2. This is not carbon-2. This is carbon number 6. So that's why, and you say, "Well, it's going down." Well, they had to flip it to make it go. So that's why I say, and you see the confusion that arises from that structure. This actually shows it much more accurately. It's beta because the OH is going up. This OH is going down. Now, sucrose is interesting. Is sucrose a reducing sugar or not a reducing sugar? It's not a reducing sugar. It's one of the very few sugars that I refer to as a diglycoside, meaning that both of the anomeric carbons are tied up in making this molecule. It's not a reducing sugar. So we see that, and we see versus that. You might wonder about the reducing sugar part, and you think about this, you think about, well, we sweeten drinks with sucrose, right? We sweeten drinks with sucrose. Sucrose is not a reducing sugar, meaning it's not readily oxidized. If I put sucrose in solution and I don't put a bunch of bacteria there to eat it, it'll stay there quite a while. It's not going to chemically oxidize. You think, "Oh, but if I put glucose in there, it will chemically oxidize," and the answer is, yes, it will. One of the reasons that soft drink manufacturers use high fructose corn syrup is because fructose is chemically stabler. It's not necessarily better for your body, but it's certainly chemically stabler. That's why you see high fructose corn syrup being used. So if you're going to draw the structure of sucrose, use the one I have on the board. You'll be much better off than if you use the idiot one that your book uses. So much for disaccharides... moving on to polysaccharides. As the name suggests, polysaccharides are molecules that contain many sugar subunits. One of these is glycogen. Glycogen we'll talk a lot more about in about the last week of the term. Glycogen is a very important polysaccharide. It is the primary storage form of glucose in your body. Your body doesn't store free glucose, as such. The reason? As I will say many times this term, glucose is a poison. Glucose is a poison. If you don't believe me, ask somebody who cans and makes preserves. Why do they put so much sugar and so much glucose, or sucrose, for that matter? Why do they do it? You can take a jar of jelly and leave it laying out on the counter for a long time before anything will happen to it, because it's a poison. That's why. So your body doesn't want to keep much free glucose around. It stores it in the form of a polymer. The polymer that we store it in the form of is glycogen. Our liver is full of glycogen. Our muscles are full of glycogen. Glycogen is not a poison. Our body tolerates glycogen very well. So we can release glucose in small quantities. In small quantities, glucose doesn't kill us. In large quantities, glucose nails our kidneys, glucose nails our eyes. That's why diabetes is such a nasty problem. Our body is telling us, "This stuff's a poison." So we've got to find a way to keep it. Now, this is a polymer of glucose subunits. It may have thousands of glucose subunits, and the linkages between the glucoses that we see is interesting. Let's not focus on the top sugar at the moment. Let's just focus down here at the bottom. If I take and I make a polymer of glucoses and I make them only with alpha-1,4 linkages, I create something that we call amylose. Amylose is a component of starch. So amylose is a polymer of glucoses with only alpha-1,4 linkages... 
just long, straight chains. Glycogen has a polymer of glucose alpha-1,4 chains, but about every 10 glucoses, we see a 1,6 branch. So now, instead of having something that's very long and linear, we have something that's forked, and forked, and forked, and forked. About every 10 residues, we find another fork. That fork turns out to have tremendous implications for life as an animal compared to life as a plant. Amylose we find in plants. Glycogen we find in animals. I'll talk about those later when we talk about glycogen metabolism, but I want to plant that idea in your head. Both glycogen and amylose contain only glucose. They contain only glucose. Actually, since I'm talking about this, let's go here. When you hear the term "starch," starch is a very common term used to describe carbohydrates in plants. Potatoes are full of starch. Corn is full of starch. Starch is really a mixture of compounds. It's a mixture of amylose, that I described to you, and a branched form of glucose of its own. So plants have a little bit of branched glucose molecules. It's not called "glycogen." It's called "amylopectin." A-M-Y-L-O-P-E-C-T-I-N. It's similar to glycogen, but instead of having branches about every 10 residues, which is what glycogen has, amylopectin has branches about every 30 to 50... not nearly as branched. So when we use the term "starch" that's a mixture of those two compounds: amylose and amylopectin. If we look only at the alpha-1,4 bonds, we see these guys can form sort of nice, circular helices. They're kind of cool looking. These are a little bit like a related compound known as cellulose. Cellulose is plant polymer. It's a polymer, again, only of glucose. But instead of having alpha-1,4 linkages, cellulose has only beta-1,4 linkages. That very slight difference, of having an alpha-1,4 linkage versus a beta-1,4 linkage, converts cellulose from being something we can digest into something that we can't digest. We can digest amylose. We can digest amylopectin very readily in our digestive system. We have enzymes that will break them down because they're full of alpha-1,4 bonds. Cellulose has beta-1,4 bonds. We can't touch 'em. The reason that roughage is roughage is because we can't digest the cellulose that they're full of. So we can't go out and eat grass. We can't go out and eat things like that and get energy out of them. To do that, we'd have to have an enzyme that allows us to break down beat-1,4 bonds. We don't have such an enzyme, but animals known as ruminants, like cows, don't contain the enzyme either. You thought I was going to say they had the enzyme, didn't you? They don't contain the enzyme either. They actually contain, in their rumen, which is a modified stomach, they contain a bacterium that has an enzyme that will do it. So the bacteria in the rumen of ruminants contains an enzyme known as cellulase. Cellulase breaks down beta-1,4 bonds between glucoses and converts grass, green things, et cetera, from just being roughage and things that kind of go shooting through you, into something that actually breaks down into glucose units and gives them energy. So that's why a cow's out there eating a field all day, because it tastes good and because, of course, they're getting energy from that. Those are the simple polysaccharides. There are many polysaccharides. As I mentioned, chitin is a polymer of N-acetylglucosamine and it forms the exoskeleton of insects, and there are other polymer examples, as well. There are some modified polysaccharides that we can take a look at. 
The first group is known as the glycosaminoglycans. Even though I stumbled on saying the word, it's not a difficult word. Glycosaminoglycan tells you what the structure of this guy is: "glyco" meaning "sugar"; "amino" meaning it has amine groups "glycan" referring to the polymerization of the sugar. So glycosaminoglycan is a polymer of sugars that contain at least one amine group. Now you can see by looking at the structures on the screen that, in addition to containing at least one amine group—there's an amine group for this guy, there's an amine group for this guy, there's an amine group for this guy, et cetera, et cetera, et cetera inaddition to containing these amine groups, we see oxidation. We see molecules or we see portions of molecules that have negative charges. Here's a carboxyl group. Here's a sulfate that's been put onto here. Here's another sulfate, sulfate, carboxyl. There's a sulfate on the amine. There's a sulfate there. There's a carboxyl. We see that every one of these guys has at least one negatively charged group. You're not going to have to draw the structures of these. Don't worry. You're not going to have to memorize which ones have which there. But you should know some common things about the glycosaminoglycans. They all contain amine. They all contain at least one negatively charged section and these are the repeating subunits. These are polymers. So the repeating subunit, in each case, is a disaccharide, and that repeating subunit may be repeated thousands of times. So because we have a polymer that goes on thousands of times, and each subunit contains at least a couple, or one or more of the negative charges, the glycosaminoglycans are what we refer to as "polyanionic," "anionic" referring to negative charge, "poly" referring to many. These polyanionic substances turn out to have some really interesting properties, chemically. We dissolve them in water. Our body uses them in several ways. One of the ways in which it uses them is as a lubricant. Hyaluronic acid, for example, or hyaluronate, is the lubrication material of our joints. Dissolved in water, these compounds get very slippery and slidey. Snot is full of glycosaminoglycans. It gives you an idea about the slippery, slideyness of it. These polyanionic substances don't like to interact with each other. They all repel each other because they're all full of negative charges. You put enough of them together in solution, you really will alter the chemistry of that solution and create something that's like we think of as an oil or something, in terms of how slippery it is, but in fact it's an aqueous solution. That's a pretty cool property of our system. Heparin up here is an interesting compound. Heparin is an anticoagulant. Student: Are most mucuses in biological organisms composed of glycosaminoglycans? Kevin Ahern: They contain at least a significant Component of that, yes. Heparin is a very powerful anticoagulant. Looking at this compound, any ideas how it might work as an anticoagulant? I'll give you a hint. It doesn't look like Vitamin K, so it has nothing to do with that. What are those negative charges? Do you suppose those might be useful for something? Do we have any negative charges we talked about in blood clotting that had any significant impact on a process? Kevin Ahern: Calcium! Why was calcium important? Calcium was important because it was what the prothrombin modified side chains grabbed ahold of at the site of the wound. 
If this guy is grabbing calcium, what do you suppose is going to happen to the availability of calcium at the site of the wound? It ain't gonna be there. Heparin is very useful in that sense, and these negative charges can help sequester calcium. Proteoglycans. What the heck's a proteoglycan? If I talk about a glycosaminoglycan, what's a proteoglycan? Let's imagine I take a glycosaminoglycan and I attach it to a protein or multiple proteins. That's what's happened here. When I do that, I create a proteoglycan, meaning that it's a combination between a protein and a glycosaminoglycan. I told you that the glycosaminoglycans were polyanionic. I said that altered their chemistry. I said they really didn't like each other. They repelled each other, and you can look and see how they are staying as far away from each other as they possibly can. A visual representation of what I just told you about the properties of the glycosaminoglycans. The protein is anchoring everything there, on the inside, but we see these guys, the polyanionic sides of these, are getting as far away from each other as they can. So a proteoglycan is a protein linked to a glycosaminoglycan. There's a schematic representation of what I just showed you. Saccharide hybrids. We'll actually talk about this later in the term. We sometimes see sugars linked to other things. I'm going to talk about them being linked to proteins more specifically in just a minute. Here we see a molecule of glucose being linked to a nucleotide. It's being linked to a nucleotide. UDP is a nucleotide. UTP, of course, being uridine triphosphate; UDP being uridine diphosphate. We'll see that in the process of making glycogen, cells make this intermediate, right here, UDP glucose. The reason they make this intermediate right here is this guy is full of energy. It's a very high energy intermediate that we call an "activated intermediate." I'm going to define that term for you. An activated intermediate is a molecule that has a high energy bond... so an activated intermediate is a molecule that has a high energy bond and it uses the energy of that bond to donate a part of itself to something else. So an activated intermediate is a molecule that has a high energy bond, and it uses the energy of that bond to donate a part of itself to something else. What we'll see when we talk about glycogen metabolism is that, in order to add a glucose to a growing glycogen chain, it takes energy, and the energy for doing that comes right there. This guy is an activated intermediate because part of itself is becoming donated to a growing glycogen chain. This guy down here will become donated to a growing glycogen chain. The glycogen chain will get one unit larger in the process of making that glycogen. Questions about that? I'm sailing through things today. Let's see, glycoproteins. Glycoproteins, as their name suggests, are combinations between sugars and proteins. You might say, "What's the difference between a glycoprotein "and a proteoglycan?" A glycoprotein versus a proteoglycan. Well, let's think back to proteoglycan. A proteoglycan was a protein linked to a glycosaminoglycan. The "glycan" names are common to the two. A glycoprotein doesn't have the glycosaminoglycan. It has a relatively simple sugar on it... or oligosaccharide, as we will see, meaning it only has a few sugars, and they're not glycosaminoglycans. So a glycoprotein is a protein that has a few relatively simple sugars on it. There are many examples of glycoproteins. 
One of the more common ones that we talk about are those that are the blood group antigens. We talk about the various blood types, O versus A, AB, et cetera, and these characteristic blood types arise from the presence of specific glycoproteins on the surface of blood cells. So a person who has O-type blood will have this, A-type blood will have this, et cetera. And there's my little bouncing ball. Get out of here, bouncing ball. We can see that the composition—here's the protein part down here, here's the carbohydrate part up here—the composition of these are not very different. Galactose, N-acetylglucose, galactose and fucose. This guy over here, to have an A antigen, has galactose, N-acetylgalactose up there. This guy up here has galactose up here. But the immune system recognizes these differently. The immune system recognizes these differently, so you have to careful, obviously, which blood you put into which person, because if you put blood into a person that their immune system recognizes as foreign, they will attack that blood and kill the person. So understanding what type of blood that one has is important, and that typing occurs as a result of these signatures that are on the surface of the red blood cells. The signatures themselves turn out to be quite important not just for blood typing but also for tissue typing. When we, for example, give a transplant, we transplant an organ—maybe we're transplanting a liver from one person to another—tissues have their own identity markers that are on them that say, "Hey, here's what I am," and the immune system says, "Oh, that's what you are. "You're part of me." You take a tissue from a person that has a different kind of marker on their liver than what the recipient has and you're going to create incompatibilities and you're going to see rejection of that organ. So a major consideration in success of transplantation is matching the antigens that are on the surface of donated tissues. That helps immensely in the immune system leaving it alone, not attacking it as a foreign invader. Again, these arise from the composition of oligosaccharides on the surface of cells. Student: So what's the O-positive and O-negative? What's the positive and negative? Kevin Ahern: The positive and negative refer to the Rhesus factor and that's something separate from this, so I won't go into that here. Student: Well, I see that we have an antigen for A and for B separately, but what about people that are AB blood type? Kevin Ahern: They're going to have an immune system that's going to recognize combinations. When we look at the glycoproteins, we see that they typically contain two classes of linkages... two classes of linkages. One group of glycoproteins are what we call N-linked, meaning that they have a link through the nitrogen. They have a link through a nitrogen that joins the carbohydrate to the protein. The O-linked have a link through the, in this case, through an oxygen between the protein and the carbohyd—I'm sorry. I've got it down here. It's up here—the nitrogen and the carbohydrate, or the oxygen and the carbohydrate. So N-linked have a link through a nitrogen. O-linked have a link through an oxygen. It turns out that—well, actually, I should also point out that the N-linked are linked through the side chain of asparagine, whereas the O-linked are linked through the side chain of a serine. So this is the protein up here. There's asparagine side chain. There's a linkage. Serine side chain, there's a linkage. 
Different glycoproteins are made in different places in the cell. N-linked glycoproteins are made starting in the endoplasmic reticulum, but they get processed further in the Golgi apparatus. So there's a transport. There's a travel that's happening in the cell. O-linked glycoproteins are made solely in the Golgi apparatus. Now, this license plate, if you want to think about it, of carbohydrate residues that are there on a protein, proteins have little identity markers on them, in the form of carbohydrates, that tell the cell where this protein's supposed to go. Is this protein destined to go outside the cell? Is the protein destined to get buried in the membrane? Is the protein destined to work in an organelle inside the cell? Different tags on these proteins will tell the cell where this particular protein is supposed to go. The endoplasmic reticulum and the Golgi apparatus play a role in putting those tags onto proteins appropriately. Now, when we look at the N-linked glycoproteins, we discover that they have a common core. This common core you're not going to have to memorize, but you can see that it consists of five modified sugar residues. There's N-acetylglucose, N-acetylglucose and three mannose residues there. The ones in boxes we find commonly among all of the N-linked glycoproteins. The things out here will vary from one protein to another, because, remember, the composition of what's out here may tell the cell, "Hey, I'm destined to go into the nucleus." "I'm destined to go out to the cell membrane." "I'm destined to get kicked out of the cell altogether." "I'm destined to go to the lysosome." So these guys out here vary from one protein to another, but these are the common core that we see of the N-linked glycoproteins, the portions shown in gray. Yeah? Student: How far beyond that on the chain do those variable units usually extend? Kevin Ahern: How long, he's asking, is that, commonly, is that oligosaccharide that's out there? It's not typically overly long. What we see more commonly is we see variation of different sugars that are placed in there, and that gives a lot of different combinations and possibilities, but they're not real long, no. They're not like a long polymer, no. There's some more examples, and we see some modifications occurring there: sialic acid, galactose, N-acetylglucose, et cetera. I talked about the endoplasmic reticulum. I talked about the Golgi apparatus. We see the endoplasmic reticulum, out here. We see that basically migrating to the Golgi, transporting proteins, and the Golgi buds off, and those buds of the Golgi then transport specific proteins to specific places. All of those that are destined to go outside the cell will bud off in a specific place and go out and get kicked out of the cell, for example. One of the things that happens in the synthesis of N-linked glycoproteins is interesting. As I said, it occurs in the endoplasmic reticulum and it involves a large molecule here, known as dolichol phosphate. It's interesting what happens with it. When we look at how these carbohydrate residues on a glycoprotein are actually made, they're actually built on dolichol phosphate. So the carbohydrate portion is built on dolichol phosphate. Where is dolichol phosphate? Well, dolichol phosphate, if you look at it, it's got a long nonpolar tail and it's got a polar end. When the cell starts making that carbohydrate forked thing that's going to be on the glycoprotein, this phosphate part is sticking out of the endoplasmic reticulum. 
It's facing the cytoplasm side. It's sticking out, so here's this hand sticking out. So it's on this hand sticking out that this initial portion of the glycoprotein is put on. So we put several residues on the outside, up here on this phosphate, and at some point, and I almost think it's magically, at some point, this molecule inverts and comes in. So instead of projecting outwards, now this guy that has these carbohydrates on it comes to the inside. It's actually a flip that occurs so that now this is on the inside. Well, it's on the inside of the endoplasmic reticulum where the protein is found. The protein then gets a hold of this targeted carbohydrate that's on there, gets put on there, and they get modified a bit more in that process. Once it gets taken off, the dolichol phosphate flips back out and starts the whole process again. So this dolichol phosphate's critical in the synthesis of that carbohydrate portion of the glycoprotein, and it's a combination of things occurring outside the endoplasmic reticulum and then flipping in and basically delivering it to the inside. Student: [unintelligible] through some unique [unintelligible]of some type? Kevin Ahern: As far as I know, and as far as is known, there's not a particular thing it comes through. I think it's thought that there actually is a protein that facilitates its movement, but it's not coming through anything. It's actually literally just flipping itself like that. And that's kind of odd, because when we talk about membranes, later, we'll see that it's very unusual for membrane lipids to do that flip. But this guy does it and it's kind of cool. In the case of membrane lipids, one of the things we see is there's enzymes called flippases that actually help that flipping process. So it may be that there's something like that that's helping this, as well, but it's not going through anything, no. You guys look tired. Shall we sing a song? Shall we sing two songs? Okay, I have two songs. They're both relevant. They're actually both easy to sing, too. You guys liked the extra credit on the last exam, right? Students: Yes. Kevin Ahern: What was the deal on the last one? Sing loud, okay? Let's sing loud, okay? We'll start with this guy. I've never sung it in class before. It's called "Hyaluronic Acid." ["Hyaluronic Acid" to the tune of "Rudolph"] Everyone: Hyaluronic acid, acting almost magically, placed just beneath the kneecap, lubricating the debris. Better than joint replacement, simple as 1-2-3, if it can stop the aching, you will get to keep your knee. When the pain is getting bad, try not to be sad. Just go out and have a talk, with your orthopedic doc. Beg him to use the needle. To not do so would be a crime. Hyaluronic acid, working where the sun don't shine. [laughter] The next one is also very easy to sing, and it is to the tune of "Hark the Herald." [All singing "Hark the Sucrose"] Everyone: Carbohydrates all should sing, glory to the Haworth ring. Anomeric carbons hide, when they're in a glycoside. Glucopyranose is there, in the boat or in the chair. Alpha, beta, D and L, di-astere-omer hell. Alpha, beta, D and L, di-astere-omer hell. Have a good weekend! [scattered applause] [END]
Medical_Lectures
Introducing_MRI_Review_of_Vectors_2_of_56.txt
The other piece of background, which I just want to make sure that we're all okay with, is that we're going to be dealing a lot with vectors. This is something I'm sure everyone is very familiar with, but I just want to make sure that we're all on the same page. So if we think about some coordinate system, right, where we have our x and y axes, and if I show you a vector in this coordinate system, this is telling us something about some phenomenon, whatever it happens to be. It might be a magnetic field; it might be something else. This vector tells us two features of whatever it is that we're talking about. One is that it tells us the orientation, which is the orientation of the arrow. The other one is that it tells us something about the magnitude. So if this vector were to represent velocity, this is telling us the direction in which whatever it is is moving, and the length of the vector is telling us how rapidly it's moving. The point that I want to be totally clear about is that we can take this vector and decompose it into two components, each one of these parallel to the axes of our coordinate system. And these component vectors, right, so if we're going to call this one V, then we could call these, let's say, Vx and Vy. So these component vectors, if we add them together, we get the blue vector V. We can take the resultant, or sum, of those, the blue vector V, and decompose it into these two. Regardless of which way I display this to you, as the two components or as the resultant or sum, it's exactly the same thing. That's the one thing I just want to be clear about: I can display what's going on with this vector with equal fidelity as these two components. Okay? Is everyone okay with this? Any questions? Okay, good. All right.
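To put the decomposition described above in symbols (the specific numbers at the end are purely illustrative, not values from the lecture): if a vector V makes an angle theta with the x axis, then

\[
\vec{V} = V_x\,\hat{x} + V_y\,\hat{y},
\qquad V_x = |\vec{V}|\cos\theta,
\quad V_y = |\vec{V}|\sin\theta,
\qquad |\vec{V}| = \sqrt{V_x^{2} + V_y^{2}},
\quad \theta = \arctan\!\left(\frac{V_y}{V_x}\right).
\]

For example, a vector with components Vx = 3 and Vy = 4 has magnitude 5 and points at about 53 degrees above the x axis; saying "(3, 4)" or saying "magnitude 5 at 53 degrees" describes exactly the same arrow.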
Medical_Lectures
04_Biochemistry_Protein_PrimarySecondary_Structure_Lecture_for_Kevin_Aherns_BB_450550.txt
Kevin Ahern: ... have a good weekend? Anybody remember it? Or did it just go [makes swooshing noise] and it was gone, right? You wake up and all of a sudden the weekend's gone. I can't quite get the volume what I want. Can you hear me up there okay? Okay. I have been pleased. I've been talking with quite a few of you that are working the buffer problems, and that's a good sign. In my experience, getting started early is an important component. And making sure that you understand those is important as well. They have different levels of complexity. And I will tell you that, when you get to the one that has the exceeding a buffer capacity, I will not expect you to calculate the pH after you exceed the buffer's capacity. That's been an anxiety for a few students. I'm not going to do that. The problem is there mainly for you to recognize when you have exceeded a buffer's capacity. So the TA's are going through the problems in the recitations and hopefully that is helpful. Is that helping or not helping? Or what's your experience? Student: Helping. Student: Helping. Kevin Ahern: Helping? Okay. Good. Not helping? Surely it's not unanimous. Nobody's going to say it, right? Okay, good, Alright. What I want to do today is dive more into protein structure. So I said some things about amino acids. The TA's will be going through some calculation of charge problems for you. And I would also tell you that there are videos of me solving problems of protein and amino acid charge that are on the class website. In fact, I will just show you that, since I've had a couple of questions. The videos for those are over here on the right side. They're different from the videos that are over here. So if you look over here, you'll see some videos of me working a bunch of problems on that. Okay. Well, last time I got started talking about primary structure of protein. And I will remind you that when we talk about protein structure, we can think of it as occurring at four different levels, primary, secondary, tertiary, and quaternary. And so today I'm going to talk about primary and I will also talk about secondary, and I don't know if I'll get into tertiary or not, but if I get through secondary today I'll be happy. The primary structure of a protein is the sequence of amino acids comprising a protein... the sequence. Now, the sequence is absolutely essential. As I mentioned last time, the sequence determines all of the other structures. The primary, that is, the sequence of the protein, the primary structure, determines what the secondary structure will be, the tertiary structure will be, and the quaternary structure will be. So that primary sequence of a protein is important. When cells have mutation and that mutation affects a coding for a protein, the mutations that affect the sequence of the amino acids will be the most important ones and will be the ones that have the most drastic effects. Okay? So that primary structure is very, very critical. Now, this shows, on the screen, a polypeptide. It is, unfortunately, not the best polypeptide they could have picked. But I guess for our purposes, for what we need at this point, it'll work okay. What we see is a polypeptide that has one, two, three, four, five amino acids in it. And we see the two ends that I talked about before. You remember that every polypeptideó by the way, I use the term polypeptide and protein interchangeably. Technically that's not right. But it's a fine line of distinction and I'm not going to make that distinction this class. 
If I say polypeptide or protein, We will use those terms interchangeably. Now, as I said at the very end of the lecture last time, there are ends of a protein. There's an amino end, which always has a free alpha amino group. We can see this is the amino end over here because it has a free alpha amino group. And we see this is the carboxyl group over here because it has a free alpha carboxyl group. That's the only place in a polypeptide where we will see a free alpha amino and a free alpha carboxyl. And the reason for that is because the peptide bond, which you see right there, gobbles up a free carboxyl and a free amino. So every time you have a peptide bond, we lose a free alpha carboxyl and a free alpha amino. So we see peptide bond, peptide bond, peptide bond, peptide bond. Okay? Now, another thing that we see in this schematic is that this actually is a nice simplification of the structure of a protein. This is an R group. This is an R group. This is an R group. This is an R group. This is an R group. You notice the pattern, up, down, up, down, up. So we see alternating sides of this structure that the R groups are on. Now, that's not totally surprising. Some of the R groups are rather large. Look at the size of the R group on tyrosine. Look at the size of the R group on phenylalanine. If we try to put them on the same side of the polypeptide, they're bulky, and we're going to run into problems with atoms that don't want to be close together. We've already seen the energy issue with that. Proteins arrange themselves by shifting bonds to keep those R groups, as much as it can, away from each other. So one of the ways it does it is based on what you see right here on the screen. As I will describe to you in just a little bit, that orientation gives us a configuration we refer to as a "trans," and I will explain why that's the case in just a second. The other thing I want you to notice about this is that these, uh... I guess I've said it. There are peptide bonds that are joining each of the five amino acids together. Now, there's a better schematic that I'll show you in just a second that will depict this for you a little bit more clearly. I do want you to notice right here the alpha carboxyl group... I'm sorry, the alpha carbon group. The alpha carbon group right here is the one that has the R group on it, the R group on it, the R group. So the alpha carbon is going to turn out to be an interesting carbon in this overall structure. Schematically, what I showed you on the last figure is this right here. And I told you in words. There is the first R group, there's the second, there's the third, there's the fourth, and there's the fifth. And the R groups have arranged themselves so that they're pointing away from each other. Peptide bonds are interesting structures. Peptide bonds are something that can form what's called a "resonance structure," and you learned in organic chemistry that resonance structures arise as equivalent electronic configurations for certain atoms. The resonance structure of a peptide bond, this structure is essentially equivalent to this structure on the right. Well, this structure on the right, as we look at it, has a double bond. And what we learned about organic chemi-, what we learned in organic chemistry about double bonds is the fact that there are some specific stereochemical orientations that can happen with those. Those specific orientations can create what we think of as cis bonds and what we think of as trans bonds. So here is an alpha carbon. 
Here is an alpha carbon. And you will notice that they are oriented, with respect to the double bond, in a trans configuration, this one being up, this one being down. Okay? Now that turns out to be very, very important for understanding the overall structure of a protein. So even though that resonance structure isn't a double bond all of the time, it behaves as if it is, almost all of the time. So this cis/trans nature of these alpha carbons are very, very important for us to understand protein structure. So I'll emphasize again that resonance structure gives rise to what are cis or transóand, by the way, cis or trans can exist. What we see when we analyze the structure of proteins is that the trans is very strongly favored. If the trans is not very strongly favored, we can imagine that, when we have a cis, we would have this bond going down. And as this bond goes down, the R groups may get into each other's face, and that's exactly what happens in some cases. So having that structure as a trans is important. This now shows the peptide bond, as you see here, the carbon to nitrogen, as if it were a double bond. Well, what you remember, again, from organic chemistry, is that that double bond defines a plane. Double bonds don't rotate. Double bonds are fixed, and that forms a plane. The plane of that double bond is shown, what you see on that blue in the screen. Okay? Again, we see the alpha carbon in that trans configuration relative to that double bond. Notice, now, again, the R group is sticking out. There's the R group sticking down. The R groups are oriented away from each other as much as possible. Okay. Now. This figure shows us what that structure will look like if we try to put that peptide bond into a cis configuration. If we try to put it into a cis configuration, now, here's our big bulky R group, here's our bulky R group. Look. They may run into each other. It's for this reason that we see the trans configuration favored something like 99.999% of the time. At least 99.999% of the time, we see the trans double bond, or the trans configuration favored, not the cis configuration. We do occasionally see the cis. Now, one of the amino acids actually favors, at least relatively favors, the cis configuration. It's the amino acid called proline. And when I described the structures of the amino acids to you, I neglected to point out something very important about proline. Proline is the only amino acid whose R group makes a bond with the alpha amino. We can see that right here. Here's the alpha carbon. Here is the R group coming off, and we see that the R group makes a bond with the alpha amino group. Now, the significance of that is because this is a bond to the alpha amino, there is less flexibility associated with a proline. Prolines do not have as much ability to rotate bonds as do the other amino acids. Further, we see proline has some things hanging off the end of it. Okay? And those things hanging off of the end of it, in either case, can get in the way of a trans or a cis. So proline is an oddball, as far as the amino acids go. And proline has a very strong effect on the structure of proteins in which it's found. It's not uncommon, where we find a proline in a protein, that we actually see something called a "bend." And I'll explain bends to you in a little bit. But bends arise because proline is not very flexible and it has some real structural limitations, and the rest of the protein has to go along with whatever proline defines. Now, this favoring the cis is only relative. 
The trans for proline is still strongly favored. Probably 99% are still in the trans. But about 1% of the time, it'll flip into the cis. The other ones won't have that happen nearly so frequently. So even though proline is more relatively favored, it's still probably 99% of the time hits the, has a trans configuration set for it. I'll have a lot more to say about proline as we get going further along, talking about structures of proteins. Questions about that? Student: Kevin? Kevin Ahern: Yeah? Student: So it only favors cis more than the other amino acids. Kevin Ahern: That's correct. Student: But it's still favored as trans. Kevin Ahern: It's still favored as trans. That's correct. Yes, sir? Student: Could you point to the alpha carbon group? Kevin Ahern: The alpha carbon is on this guy, right here. So you see the alpha carbon... uh, let me see... yeah, the alpha carbon's right there. So you see that its R group is bending back over on this guy. See it? It's making that bond with the alpha amino group. That's the only amino acid that does that, and that causes a structural limitation on the overall protein at that point. Yes, sir? Student: Could you go back to the web page that showed the amino acids in the sequence of the primary structure [unintelligible]. Kevin Ahern: Yeah. You're talking about back here? Student: Yeah. So are all the alpha carbons in the same configuration in terms of R and S? Kevin Ahern: Are they in the same configuration with respect to R and S? Student: Yes. Kevin Ahern: Uh, buh, buh, bah... I would have to sit down and do it. I don't know it off the top of my head. Yes, Janet? Student: So was this, in trans proline, it can rotate, so it is rotating for that, but it just isn't as flexible around the carbon-nitrogen? Kevin Ahern: It's not as flexible around there. So it's rotating. Remember, we've got a double bond. It can be in the cis or trans. That's the rotation that we're talking about. And so the flexibility I'm talking about refers to the rest of the molecule. The cis and trans of the peptide bond are still capable of flipping. And as I will show you in just a little bit, with proline, what happens is, because that alpha carbon is in a ring, we don't have the flexibility of the alpha carbon bonds to rotate in the same way that we have in the other amino acids. Okay. Where are we at here? Well, now, after I've said something about that - I've told you that the alpha carbons are very important for us to understand something about protein structure - it's important that now that we think about the alpha carbons in that overall scheme. Here's our peptide bond, right here. The peptide bond is behaving as if it's a double bond. There are three bonds that are of interest to us, however, in a given amino acid. Here's a bond between the alpha amine and the alpha carbon. Here's a bond between the alpha carbon and the alpha carboxyl. Alright? Only the peptide bond is capable of being a double bond. These guys are each capable, they're perfectly single bonds. And single bonds, you recall, can rotate. Rotation is very, very important for the overall structure of a protein, because rotation gives enormous possible structures that can arise. Alright? Now, I'm going to introduce a concept that I want you to have a general understanding of, but I'm not going to go into the specifics of the actual angles. Somebody's already asked me, "What's the zero point for the angles?" And that's not really what's important for us, okay?
Because we have the ability to rotate around these two single bonds, we could imagine that the structure of this protein will partly be a function of how those rotational angles are, in fact, set up. Let's think about this. This guy, right here, is part of a plane. Right? We can think about this guy as being the plane of one peptide bond. Alright? So here's my peptide bond on the left. On the right, I've got another peptide bond. It's also a plane. Okay? Everybody with me? Peptide bond on the left, peptide bond on the right. Okay. When I put my thumbs together, the place where my thumbs are make that alpha carbon. When I pull this up, it rings. The alpha carbon's in between my thumbs. Alright? Now, what happens is there's rotation that's possible. Those planes themselves can rotate around that alpha carbon. And now we start thinking, "Oh, wow. "These rotations can, in fact, also have limitations "in terms of the things that are out here." The things that are out here may start bumping into each other. So there's going to be some limits on the way that this guy can rotate and on the way that this guy can rotate. Those two bonds are called phi and psi. And phi and psi, specifically, are rotational angles around the alpha carbon. Phi is between the alpha amine and the alpha carbon. Psi is between the alpha carbon and the alpha carboxyl. Phi and psi. So you'll hear a lot, when you talk about protein structure, with respect to what phi and psi actually are. Everybody understand phi and psi? Now, keep in mind, they are rotational angles. We're not talking about this kind of angle. We're talking about rotation. Rotational angles have some very, as we will see, some very strict limits on them because of the spatial considerations that we talked about before. Those spatial considerations cause major limits for what things are actually stable. Now, there's a famous Indian scientist named Ramachandran. You don't need to know the name. But Ramachandran was very astute in recognizing the importance of these phi and psi angles, because he recognized that those were the primary variables in determining certain structures of proteins. And so he plugged it into a computer. And he plugs it into a computer and says, "Here's the geometry of the peptide that I've got. "Here are these groups that are floating out here in space. "Where are things going to be too close together? "Because I know if I get too close together, "the energy gets - whoa! - prohibitive." So he plugs it into a computer and just starts rotating through space and determining where the angles are that are stable, that is, that have relatively low energy, they have plenty of room, and where are the angles that are very unstable, where they have high energy and would come apart. So he created something we call a "Ramachandran plot." Ramachandran plots plot the angles of phi and psi. Now, I'm just showing you one. I don't want you to panic with this, okay? This is mostly informational. I'm not going to ask you to interpret a Ramachandran plot. Okay? However, Ramachandran plots are very interesting, because what we see when we look at a Ramachandran plot, what you see is exactly that. You see, on the y-axis, psi through 360 degrees of rotation, from +180 to -180. For our purposes, at the moment, it doesn't matter, and, as a matter of fact, it doesn't matter at all, for our purposes, where zero is. It's an arbitrary starting point for us right now. Similarly, phi, along the x-axis, goes from -180 to +180.
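For readers who want to make phi and psi concrete: each one is just a torsion (dihedral) angle computed from four successive backbone atom positions, and by the usual convention phi uses C of the previous residue, then N, CA, C of the current residue, while psi uses N, CA, C of the current residue and N of the next. Below is a minimal sketch of a generic dihedral calculation in Python with NumPy; the coordinates in the last line are made-up numbers purely for illustration, not taken from any real structure.

import numpy as np

def dihedral(p0, p1, p2, p3):
    # Torsion angle in degrees about the p1-p2 bond, defined by four points.
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    b0 = p0 - p1
    b1 = p2 - p1
    b2 = p3 - p2
    b1 = b1 / np.linalg.norm(b1)
    # Project b0 and b2 onto the plane perpendicular to b1, then measure the angle between them.
    v = b0 - np.dot(b0, b1) * b1
    w = b2 - np.dot(b2, b1) * b1
    return np.degrees(np.arctan2(np.dot(np.cross(b1, v), w), np.dot(v, w)))

# Illustrative coordinates only (four points at right angles); prints -90.0.
print(dihedral([1.0, 0.0, 0.0], [0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0]))

A Ramachandran plot is then nothing more than a scatter of these (phi, psi) pairs, one point per residue, over the -180 to +180 range described in the lecture.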
So he asked the computer, "Tell me where the regions are that will be the most stable, "that will be the lowest energy, "not the ones that are prohibitive." And what he found is that there were two major regions that were there. One was right here and one was down here. Okay? No, you're not going to memorize those angles or any of that sort of stuff. But this is interesting. A good deal of that space that was out there, that was possible for rotational angles, gave rise to structures that were not very stable. That meant that there were relatively limited amounts of angles that gave rise to stable structures, and they seemed to be clustered pretty much into two regions. We'll see that those two regions turn out to be very important for our understanding of the next level of protein structure. Everybody with me on this? Questions about Ramachandran plots? I'm going to say a little bit more about them in just a little bit, also. That's what I want to say about primary structure. And when I'm talking about Ramachandran plots, as we will see, we're starting to talk about the next level of protein structure. That's called secondary structure. Now, secondary structure - I'm going to give you a definition - secondary structure is the next higher level of protein structure, and it arises as a result of interactions between amino acids that are relatively close in primary sequence. I'll repeat that. Secondary structure arises as a result of interactions between amino acids that are close in primary sequence. They're close interactions. We don't see things very far away interacting in secondary structure. Now, technically, secondary structure involves a regular repeating structure, as well. I didn't include that in the definition, but technically, it does mean it's a regular repeating structure. I'm going to show you some regular repeating structures in just a bit. The regular repeating structures I'm going to show you arise because of those limitations that we saw in the Ramachandran plot. Well, let's think about this. Here is a regular repeating structure we commonly find in proteins. It's one of the structures for which Linus Pauling was recognized, ultimately with a Nobel Prize. It's called the alpha helix. The alpha helix, as you can see by the structure on the screen, is a regular repeating structure. It's a coil that goes on and on. You've seen DNA. Everybody's seen DNA. But DNA is a double helix. This is a single helix. Now, this shows three different views of an alpha helix. And from this perspective of the alpha helix, we can see, certainly, here, the helical nature of it here. It's not quite so easy to see the helical nature of it here. But what we see is that there are some hydrogen bonds. See those green dots right there, or the green dashes? Those are hydrogen bonds that are helping to stabilize the secondary structure, this structure for an alpha helix. Hydrogen bonds are stabilizing this structure. Very important point: the most important bonds stabilizing secondary structure are hydrogen bonds, and they're happening within a few amino acids of each other. Now we could go through and we could do all the business of how many amino acids they are apart and how many there are per turn, and so forth, and I don't think that really tells us anything that's important about the structure. The most important things about this structure are the regularity, the hydrogen bonds, and the last thing I'm going to mention, which is this thing, right here.
If we notice in that third panel, in C, we can look sort of down the barrel of the alpha helix. And when we do, look where the green groups all arise. How have they been arranged, guys? Outside the helix. Again, we start coming back to this important point about bulky molecules need their space, R groups are oriented in an alpha helix to the outside. Now, one of the things that we will see about alpha helices is that they are parts of the overall structure of proteins. Some proteins have almost exclusively alpha helix, and that's all they have. They just go on and on and on, kind of like the EverReady bunny. Ha-ha. Alright? In other cases, more commonly, we see that they go on for a ways and then we see another structure arise, etc. If I have a protein that really only has alpha helix, I have something called a "fibrous protein." A fibrous protein. A fibrous protein has primary structure, it has secondary structure, but that's primarily about all it has. And it goes on and on and on and on. Example? My hair. Hair, has keratin, that has a structure that just goes on and on and on and on. It's fairly boring, as proteins go. We'll see some much more interesting proteins than that. But fibrous proteins have that characteristic. Now, this shows you the orientationóand it's showing you on that schematic figure that you saw before where the hydrogen bonds are located. Notice that this hydrogen bond that is forming with this amino acid that's several amino acids away from it, this hydrogen bond couldn't interact like this unless there were a coil. It's the coil that allows the hydrogen bonds to form, and, conversely, it's the hydrogen bonds that help to stabilize the coil. There's a carbonyl. There's a hydrogen. And as we saw on the very first day, those are really good pairs for making hydrogen bonds. Here we go back to our Ramachandran plot. What do we see? There is where the alpha helix is found. The alpha helix, when we look at all the alpha helices that are out there, we see that all the alpha helices map in this region very, very tightly. And there are a lot of things with very similar angles to alpha helix, out here, for example, that have Ramachandran angles very much like it. The alpha helix is in a very, very stable region of the Ramachandran plot. That's not surprising, not surprising, at all. Makes sense? I'm going on and on and on. You guys want a joke? Student: Absolutely. Kevin Ahern: It's a little dull in here, right? So this is one of my favorite jokes. There's this little guy, named Artie. And he wants to be a hit man. His dream is that he can go and he can kill people for a living. Make a lot of money in this, right? He's got a career set for him in the mafia or something, right? He decides to go out and do this. So he figures, "Well, I gotta get started." So he goes out. He makes a little note. He tacks it up on the bulletin boards around town and he tacks it up on the telephone poles, and so on, and so forth. And it says, "Will kill someone for cheap." "Will kill someone for cheap. And so he's got his thing all up and this guy calls up and says, "Yeah," he says, "uh, I've got somebody I want you to kill." He says, "Oh, yeah?" He says, "Yeah." He says, "I'd like you to kill my wife." And he says, "Okay." He says, "How do you want me to kill her?" And he says, "I want you to strangle her." No problem. He writes this all down. "Where might I find her?" He says, "Well, as a matter of fact, she's at the grocery store right now." He says, "Okay." 
He says, "Can you kill her right now?" He said, "Yeah." And he says, "Well, how much would you charge?" And Artie says, "Well, you know, I'm getting started." He says, "I'll do it for a buck." [laughter] That's how you get started, folks, you know? He said, "I'll do it for a buck." The guy says, "That's great! Yeah!" So Artie trots off to the grocery store. He gets out there and he looks, and she's right there in the middle of the produce section. He looks around and there's nobody there. He goes up and he grabs her by the throat, strangles her, right there in the produce section. "Yes! I'm set!" Uh-oh. Somebody saw him. Somebody saw him. Alright? "Damn! This could be serious" He goes, he grabs this person and he strangles them! You can't have a witness, right? I mean, if you're going to get started, you can't have witnesses. So he goes and he strangles this person right there in the grocery store. He's all "uh-oh." There's a third one. He goes over. "My lucky day. This wasn't the way I envisioned this thing getting started." So he goes over, he grabs this person, he strangles them. He looks around and he goes racing out of the grocery store. And the police catch him. And the next day, the headline in the newspaper says, "Artie Chokes Three for a Dollar at the Grocery Store." [class laughing and groaning] Oh, that was bad, wasn't it? Artie chokes three for... [class laughing] I will tell you some jokes later in the term, and so I want you to remember, okay, that all I have to do is say, "Artie chokes three for a dollar at the grocery store," and you're going to laugh at those. You may not laugh at them, but if you do, then there's my punch line that works. Artie chokes. Here's an alpha helix. Here's a schematic representation of the structure of a protein, showing alpha helices. You'll notice that this is not a fibrous protein. This is a protein that has an alpha helix that goes for a little ways and then we have a, um, bend. And then it goes for a ways and then we have a bend. And then it goes for a ways and then we have a bend, etc. What we see about the structure in most proteins is that they have regular repeating structures for a certain region and then something kind of interferes with their ability to remain alpha helical in nature. I've mentioned one amino acid today that might interfere with that. What would you suppose that would be? Students: Proline. Kevin Ahern: Proline! Proline is going to have some limitations, in terms of angles, and proline may, in fact, interfere with the regular helical nature that we see here. Now, I'm going to show you an exception, actually, probably on Wednesday, to that, but proline is one that can really interfere with a regular repeating structure. A second structure that is a repeating structure that is, in fact, a secondary structure, is known as a beta strand. Let's, first of all, look where beta strands appear. Beta strands appear in Ramachandran plots, as you can see hereóno surpriseóagain, in the most stable region of a protein. You can see it's actually a bigger stable region of the Ramachandran plot. Student: Kevin? Kevin Ahern: Yeah. Student: There's right-handed and left-handed alpha... Kevin Ahern: I can't see you. Where're you at? Student: ... alpha helices, there's right-handed and left-handed? Kevin Ahern: Yes. There are right-handed and left-handed, but right-handed is, by far, the predominant, and, in fact, some people argue if left-handed even occurs. Student: Okay. Kevin Ahern: Yeah. But right-handed is the predominant form, yes. 
Was that the question? Or was there something else? Student: Sometimes I feel like if you flipped it over it'd be a left-handed one. Kevin Ahern: No. What you'll see is that the orientation, actually, if you want to come by my office, I'll show you an example of a right-handed versus a left-handed helix. And they do differ. And if you flip it upside down, it still remains a right-handed helix. It has nothing to do with the orientation. But come by, I'll show you, okay? Good question. Beta strands. Beta strandsóthis is a little harder to see in this imageóI'm going to show you actually a better image of that. It's going to look very much like that first one that I showed you, which was the up, down, up, down, up, down, right? That very first image, where I showed you the protein where the R groups were oriented up, down with respect to each other, is a very good model for what we call beta strands. And I call them beta strands because there is a strand. When I put them together in a bunch of strand, I make something called a "sheet." People commonly call beta strands "beta sheets" frequently, because they are arranged in sheets. Silk, for example, is composed of beta sheets. Now, look at the orientation here of the R groups. In this case, they're going out of the plane of the board, in. Out, in. That way, versus this way. That way, versus this way. And they're alternating as they are set up here. They are arranged so as to, again, space those groups out so that they're not causing problems energetically due to their close interactions with each other. Now, these are what are described as "antiparallel" and these here are described as "parallel." For our purposes, it doesn't really matter, but if you're curious, I will tell you, okay? This would be an exampleóthis one is parallel. That would mean we're going from alpha to carboxyl, and alpha to carboxyl in the same direction. Whereas, if they twist around like this, they're what's called antiparallel. That's all that that means. And strands can be twisted and turned, bent as appropriate, to form the structures necessary. Here is an example of a protein that has beta sheets, and they're arranged in the form of a barrel. We see barrel structures arising in proteins to help perform important functions. I'll talk about a couple of them later in the term. But these barrels are just like, literally, like a barrel is. So we see at the nanoscopic level that structures that we can recognize on a macroscopic level in the real world. Student: Kevin? Kevin Ahern: Yes. Student: Are there any beta sheets that have parallel and antiparallel structures? Kevin Ahern: Are there beta sheets that have parallel and antiparallel? I'm sure there are. I couldn't name one for you off the top of my head, but yes. Yes. So they're not exclusive to one way or the other. Now, I mentioned turns earlier today. Turns are very important because it's turns that interrupt secondary structure. Turns interrupt secondary structure. And when we see a turn, there's a variety of configurations it could have, but a common form has been identified that involves, in this case, four different amino acids. There are some that involve three. There are some that involve even more. But suffice it to say that this is a common structure that we see. Not surprisingly, one of these amino acids is proline, very commonly, proline. 
Now something that may surprise you is another one of the amino acids that's involved in this structure, commonly, and, again, this is not absolute, but commonly, is glycine. Glycine is the amino acid that has the smallest R group. It only has a hydrogen. And you would say, "Well, why would glycine be involved in a turn?" What do you think? There's this mumbling. What's that? Student: Because it's achiral? Kevin Ahern: Not because it's achiral. No. It's achiral because it has the small R group. But it's related to the small R group. Yeah. Student: Is it because it has reduced steric hindrance? Kevin Ahern: There's reduced steric hindrance. Glycine allows for a lot more flexibility. Glycine allows for a lot more things to happen. So it actually is favored, because we've got this limitation over here and we've got flexibility over here. Now, in fact, we may be able to do something that we couldn't do otherwise, and glycine will facilitate that. Again, it's not absolute. But we do commonly see that in turns. And I won't go through that. Now, I mentioned, with respect to alpha helices, that we see them in what are called fibrous proteins. We also see them in beta strands. As I said, silk is a protein that's comprised of beta sheets. That is a bunch of strands put together. Silk is also a fibrous protein. Your nails, your fingernails, are fibrous proteins. So fibrous proteins are comprised, as I said earlier, of primary structure and secondary structure, but they have very little tertiary or quaternary structure, as I will be talking about later. So here is an example of a fibrous protein. We see that, in this case, we have helical structures and the helical structures themselves are intertwined. This gives rise, as we will see, in some cases, to strength of structures. And I'll have an example for you, again, next time. This shows a very interesting characteristic of protein, or portions of a protein. These can be separate proteins or these can be portions of the same protein that are interacting with each other. Now, I'd like you to look at what's going on with this. When people first discovered these structures, they found something very interesting as they were examining amino acid sequences of proteins. By the way, when we find a new protein, we always want to determine its amino acid sequence, because, as we know, the amino acid sequence gives rise to everything else. And we know the amino acid sequence long before we know the overall structure of the protein. We can make some predictions, but our predictions, as I will tell you later, aren't as good as we would like them to be. So the first thing they noticed about this class of proteins when they discovered it was that it had a very interesting thing in its primary structure. Every seven amino acids or so, there was a leucine. And that was a little bit of a puzzle. Why is there every seven amino acids a leucine? And when they started determining structures, it all made sense. Leucine is one of the hydrophobic amino acids. It has a hydrophobic side chain. That hydrophobic side chain, as you recall, doesn't like water. When they examined thisóhere's one strand, with its every seven amino acids we see a leucine. Here's another strand, either part of the same protein or part of another protein, that every seven amino acids also has a leucine. And look what they are doing. They're interacting with each other. They're getting away from water by interacting with each other, and they're forming a structure that we call a "leucine zipper." 
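As a small illustration of the "leucine every seven residues" pattern just described, here is a minimal sketch in Python that scans a one-letter amino acid sequence for leucines spaced exactly seven residues apart. The sequence in the example is invented purely for illustration; it is not a real protein, and real zipper detection would of course also consider the helical context.

def heptad_leucines(seq, repeats=4):
    # Return lists of positions where 'repeats' leucines occur exactly 7 residues apart.
    hits = []
    for start in range(len(seq)):
        positions = [start + 7 * k for k in range(repeats)]
        if positions[-1] < len(seq) and all(seq[p] == "L" for p in positions):
            hits.append(positions)
    return hits

# Made-up sequence with leucines at positions 0, 7, 14, and 21:
print(heptad_leucines("LKQEDAALKNEVAALSKHYAAL"))   # prints [[0, 7, 14, 21]]

Because an alpha helix repeats roughly every 3.6 residues, two turns put residue i+7 almost directly above residue i, which is why a leucine at every seventh position lines up along one face of the helix.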
This leucine zipper arises because of the regular nature of the alpha helix. That regular repeating structure is placing that leucine at the same place out there in space every time, allowing those leucines to interact. And so we could imagine that if we wanted to peel these apart, we would do it just like a zipper. And that peeling apart is going to be relatively easy to do because these are only hydrophobic interactions that are actually helping to hold these leucines together. And we'll talk a little bit more about that in a bit, okay? So leucine zippers arise because of the regularity of the alpha helix. Now, the structure that you see on the screen actually is not a secondary structure. The alpha helix is a secondary structure, but now we're starting to see what we refer to as tertiary or quaternary structure. In this case, if it's in the same protein, it's tertiary structure. And I'm going to show you some other examples of that in a second. But when we see different regions of proteins that are not close to each other interacting, we see what we call tertiary structure. Kevin Ahern: A leucine zipper can be a part of a tertiary structure, that's correct. I said I was going to talk about it next time, but I guess I'll talk about it today. I told you I had an exception to the rule about proline forming structures that are helical in nature. One of the most important fibrous proteins in nature is the most abundant protein in your body. It's known as collagen. Collagen is literally the glue that sticks you together. It holds us together. Without collagen, we are in trouble, okay? Collagen, to give you an idea, has a structure that looks something... like... this: coils of coils, like we saw before. Those coils of coils, when we analyze the sequence of amino acids comprising them, we discover something very surprising. Look at this sequence. Look at every place where you see proline, proline, proline, proline, proline, proline. It's full of proline! Yeah, we have a regular repeating structure. You might wonder, "What's that H-y-p?", "Hyp" stands for hydroxyproline. So not only do we have proline, we have a modified form of proline in here called hydroxyproline, and this guy is just bursting with this stuff! Yet it forms a regular repeating structure. How is that possible? Well, the answer is in red, on the screen. Again, glycine is there. And glycine is giving the space needed to form this regular repeating structure. Essentially, everywhere we see a proline, we see a glycine, okay? This is facilitating formation of that regular repeating structure. So this is one of our exceptions. As I said, proline is not an absolute thing for a turn. But there's an interesting story that goes with this and it's that story that I'll finish with today. Hydroxyproline is a modified form of proline. When I told you about the 20 amino acids, I said there were amino acids that got modified after they were made in a protein, and hydroxyproline is an example of one of those amino acids. It's put into the protein as proline, but then it gets modified chemically. That chemical modification involves putting a hydroxyl group on it, and that hydroxyl group ultimately comes from Vitamin C. One of the few reactions where we actually have a vitamin that's playing an important role in a chemical process. Vitamin C is ultimately the source of this. We make hydroxyproline and one of the reasons that we have to have hydroxyproline is so we can make strong collagen, okay? 
Now, this was discovered partly, originally, back in the days of the old pirates. The pirates would go out. Big, honking, hairy, stinky, ugly guys going out conquering the world, killing and robbing and doing all kinds of nasty things, and they would go out on the ocean for months at a time and they ate salted meat, because they didn't have refrigeration. They didn't exactly have arugula. They didn't have any fruit desserts. They had no source of Vitamin C. And so they'd go out as these big, hulking, ugly guys, and they'd come back as these puny little wimps. They'd develop a condition called scurvy. And that scurvy arises from a lack of Vitamin C, and I'm going to tell you what is involved in scurvy. What's involved in scurvy? Well, these hydroxyl groups that we put onto proline are very important. Because it turns out that these hydroxyl groups are reactive with each other such that, when we start putting them together, they'll make bonds with each other. We can actually tie these strands together with covalent bonds that arise from those hydroxyl groups on proline. If you ever braid your hair, you've got to tie it off with something on the bottom, right? Otherwise, the braid falls out. These chemical bonds that form between the hydroxyls are actually linking those strands together, keeping them from falling apart, and giving strength to the collagen. As a result of Vitamin C, you have strong collagen. You don't fall apart. If you don't have Vitamin C, you develop scurvy, you have weak collagen, and, literally, you fall apart. That's what happened to them. I was going to sing a song, but I think we will call it a day and save that for another time. See you guys on Wednesday. How you doing? Student: Are there any other hydrophobic amino acids that form zippers? Or is it just leucine? Kevin Ahern: Leucine zippers are the best-known ones. Student: Okay. Kevin: That's a good question. [no audio] Kevin: [laughs] I should write one, I suppose, shouldn't I? Captioning provided by Disability Access Services at Oregon State University. [END]
Medical_Lectures
Review_Session_for_Final_Exam_for_Kevin_Aherns_BB_450550.txt
We have a chance to get dinner before coming? Good. I did too, so that was good. So the exam is, what, about 60 hours away? [Music] Actually, I think it's more like about 62, something like that, but who's counting, right? How's it coming? Is it good to have it that early or not? Good, but now you can focus on your other stuff, though, right? I've been teaching the class for about 10 years and I've never had one this early. In fact, I've been teaching for about 15 years and I've never, ever had a class with an exam on Monday at 9:30, so you're the lucky ones. I told you guys you were a special class. So, it was great singing today. I was very pleased. That was cool. It sounded pretty cool. It's no fun when you're up there singing and nobody else is singing along, so it's good to hear the songs. You knew the songs, though, right? Yeah, like one of the songs I didn't know. So how many had ever heard the Coke song before? Yeah, you had to be as old as I am to understand that. The Coke song was very popular in the 1970s, and I was just an infant at that time. All right. So you guys know how these go. I'll handle questions and then we'll put it out on TV Land for everybody. I haven't yet posted the video from earlier today. It's sitting on YouTube right now, but I haven't gotten it prepared for viewing yet, so it should be ready sometime later this evening, I think. And then we're done. All right, so what have you got? Yes? PFK? Sorry, PFK, phosphofructokinase. So F2,6BP activates it, but then it's also regulated by ATP? Yes. So actually the regulation... that's a good question, so let me just repeat what you said, since people without the microphone wouldn't hear that. She said that she's interested in PFK, and that it's regulated allosterically by F2,6BP and ATP, and she said the ATP regulation was through substrate, is what you said, and that's not completely true. The regulation is actually allosteric, and the substrate regulation... the substrate is just substrate; it's no different than any other substrate. So the key thing with ATP is you get two binding sites. But anyway, let me let you finish your question. Okay, well, so F2,6BP, does it just activate it after ATP has been bound, or how does that work? Okay, so the question is, how does F2,6BP regulate it? Does it happen just after it's bound ATP, or what? That's kind of what the question is, right? Okay. So ATP is a little confusing in this respect, and, to answer your question briefly, there's not an order to it. We could imagine that if I had F2,6BP present and it binds to the enzyme, what's going to happen to the enzyme? Good exam question. What's going to happen to the enzyme? It will change shape. What kind of shape is it going to change to? R. It's going to go to the R state, right? Because it's activating the enzyme, so the enzyme goes to the R state. What happens in the R state, from what we've seen with other enzymes? It binds the substrate better, right? So we can imagine an order with that. The enzyme is odd, and it's regulated allosterically by multiple things, so other things could be playing a role in that activation or inactivation, as the case may be. And like other enzymes, we see that the allosteric activation is not necessary for catalysis either, so the substrate can bind even without that F2,6BP, which is why I say there's not really an order to it. Does that confuse you even further? Uh, no.
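To put that R state / T state picture in rough quantitative terms, here is a minimal two-state sketch; it is not from the lecture and the numbers are made up. Suppose the free enzyme sits in an equilibrium between a less-active T form and a more-active R form, with L = [T]/[R], and suppose an activator A binds only the R form at a single site with dissociation constant K_A:

$$f_R = \frac{1}{1+L}, \qquad L_{\mathrm{app}} = \frac{L}{1 + [\mathrm{A}]/K_A}, \qquad f_R^{\mathrm{app}} = \frac{1}{1+L_{\mathrm{app}}}$$

With toy numbers L = 100 (only about 1% of the enzyme in the R state on its own) and [A]/K_A = 99, the apparent L drops to 1 and half the enzyme is now in the R state. The activator doesn't catalyze anything itself; it just pulls the conformational equilibrium toward the form that binds substrate better, which is the sense in which F2,6BP "activates" PFK.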
okay so the two binding sites that it has then one that um pfk has one is the aler site for f26 BP and the other is for ATP so our question is the what's the nature of the two aleric sites on pfk I think that's what that is right well it's actually even more complicated than that because remember there's other things that can affect uh fruit uh pfk allosterically right so we can see affecting it we can we can see um um I'm not remember off all top of my head but there's there's like three or four things that can that can affect that and they will interact differently with the enzyme okay so amp I believe actually overlaps with the ATP uh binding site and because it binds slightly differently it favors the r State instead of the T State okay because the ATP is favoring the T State um but f26bp has its own binding site so this enzyme has several places it can bind things and the effects are different depending upon which one is bound so I'm not sure if I'm answering your question or not but that's uh kind of was there Connie um can you say the sequential model works for this when it comes to RS andc can I say the qu sequential model works for this no I can't okay and the reason I can't uh say the sequential model works is this the sequential model relates to The Binding of substrates not of alisic vectors so the sequential model has to do with one thing followed by another thing binding and not uh different substrates binding okay yeah question so I know I'm sorry back here I'll come back to you so this it's still onk regulation yes in yourls it something about and sure so that's a very good question it's a very good exam question too so uh so the um in the in the highlights I talked about the km of The Binding sites of ATP now it may seem a little bit confusing and I and I apologize for that uh technically km only relates to The Binding of substrate okay because we can measure the km uh as a of the The Binding of the substrate as part of the the Vmax calculation V-Max over two the substrate concentration that gives that gives us the km but if we think of km in a very broader sense the way we've used the term km is to measure the Affinity that the enzyme has for The Binding of a molecule let's not call it a substrate at the moment okay real good example how about hemoglobin it's not a substrate because it's not an enzyme reaction but we can say that the Affinity of oxygen for sub of hemoglobin for oxygen changes as oxygen concentration changes right so we can think of KM as a concept in terms of the Affinity that a protein has for binding to another molecule and that's important because when we talk about The Binding of ATP we have both a substrate binding of ATP and a nonsubstrate binding of ATP okay so The nonsubstrate Binding of ATP occurs in the allosteric site and we can imagine that there is an affinity as as such for that right how much Affinity does that allosteric site have for ATP and as I've talked about in class it's important to recognize that that Al Eric site on pfk has less affinity for ATP than the active site does as the substrate binding site okay and that's important because only when the substrate concentration is high do we want that aleric site bound when the substrate concentration is low we want that substrate binding site to be having preference it's taking that ATP because we want that reaction to go when the ATP concentration is low so when I talk about two different km that's what I'm I'm referring to does that does that make sense yeah so does pfk have different 
like I don't know this is for enzymes but does it have to and or is it just different sites that I'm not sure what what could you repeat the question um does pfk have different subunits like regulatory oh I see does pfk have different in other words is pfk like atcas we in atcas we saw regulatory subunits and we saw catalytic subunits and not all enzymes have that setup so pfk does not have that setup but it still works in an r&t State because it those binding those bound substrates can in fact change the protein shape and that's what's doing it so good good question yeah yeah you has a okay so I'll repeat she want me to repeat the question repeat my answer about the the Affinity of the of the enzyme for ATP okay so we have two sites to think about we have an alisic site which is a control site and we have a substrate binding site which is a catalytic site they're both on the same protein it's not separate okay when we think about those two sites the one that we want to have the highest Affinity is the substrate binding site meaning lowest km is going to be where the substrate is bound because we really want that enzyme to be able to grab that ATP when necessary the highest km is the allosteric site and that's because we don't want this enzyme binding ATP at that site unless ATP concentration is high think about this physiologically when the ATP concentration is high we want to turn off glycolysis we don't want this reaction going when the ATP concentration is low we want this reaction going right now somebody said um uh in a emailed question to me the other day something about well low ATP turns the enzyme on and I said well that's not really accurate okay it's one thing to say that high ATP turns the enzyme off but it doesn't mean that low turns it on there's nothing to turn on it's just not turned off okay so that that's important to keep in mind does that answer your question okay yes con just double check a higher C value means lower Affinity High cam always means lower high cam low Affinity low km High eff inverse relationship between the two how many people have their note cards filled out how many people have started their note cards how many people have writing the smallest they've ever written in their lives yeah I love looking at the note cards I mean it's incredible how tiny some of the writing gets it's like it's really awesome works of art yes they're front and back they're side if you can write on the edge okay what they what they all have to be is they all have to be written in your handwriting they can't be run through a printer they can't have something taped onto them like a figure anything it's all going to be in your handwriting and you have to put your name on there too okay and then we just can write our name Sharie yeah when you're done write it and Sharpie across the front of it that's fine yeah we want make sure you turn your card in yes sir my question has to reaction ofis yes that's correct so he's talking about the pirate kese reaction with a with a very large negative Delta g0 Prime yeah so when you're talking about yep basically my my question is if the Delta G Prime is already very negative why isation necessary oh that's a very good question it's a very good question so let me just show you the the figure and then I'll come back and answer your question I like that question actually it's a very good one um if we look at glyos what is question concerns is feed forward activation and feed forward activation um is important when we um think of think about 
where it occurs where my good figure is I think it's here okay hit the S okay so what is question concerns is um frutos 16 bis phosphate activates pyrovate Kines and uh pyate kyes already has or catalyzes a reaction has a large negative Delta g0 Prime so his question is well why do I even need fructose one 16 bis phosphate if I've already got a reaction that's very negative like that okay anybody want to take a stab on it before I do hit me what's that regulation it is regulation but his question is why does why do we have to have a positive regulation when we have this going on doesn't run doesn't go well that actually is important for turning it off not for turning it on so pull the products from it's important for pulling the products but again the it's already pulling the products so why do we have to worry about that well okay so it's important for pulling for the Alay reaction but his question is it's a very good question his question is this reaction is already pulling it's already pulling all that stuff so why why do it more the there's something else about pyate Kines liz1 for the Alay reaction you're talking yeah so the Delta G Delta G Prime for the AL reaction is very positive but that's not his question his question is why why do we have to you know activate this inside well that's what it's trying to do it's trying to get over the Alay hump but that's not the question the question is you've already got a reactions very favorable why why do it yeah well no it's not it's not because you're trying to make glucose no yeah um is it because on the other at no no because that this one isn't affected by ATP so you guys are all focusing up here you're not focusing down here this is where his questions at all right I'll tell you the answer all right are there other things that are regulating this enzyme yes there are are there negative Regulators of this enzyme will they turn this enzyme off yes okay so as the f16bp concentration increases what's the likelihood it's going to bind to a positive activator instead of a negative activator what's the other regulation of this enzyme phosphorilation phosphorilation made it less active this could help to make it more active okay so it's a very very good question I've never heard that question before but it's a very good question why that's necessary so this the answer is it's not rooted solely in the Delta g0 Prime because obviously the alric factor is not affecting the Delta g0 Prime it's affecting what percentage of the enzyme is active makes sense good question I should have asked that on the exam I would have stumped the whole class you repeat that answer repeat the answer yeah well the answer is that there's other things that can inactivate the enzyme the more F16 BP we have present the more likely it's going to bind instead of the negative things we B at the same spot well think about what do they do they TR convert enzyme to T or we convert enzyme to R they don't have tobine to the same site but once they convert it to T is the enzyme going to be very active no if the enzyme's bound to f16bp is it going to be very active yes okay so a greater percentage of the enzym is going to be active Okay probability it's a probability exactly yes um what does ATP on what effect does ATP have on py Kines um well as you can see right here it has a negative effect um and that that sort of makes sense if we think about we don't want this reaction uh going forward any more than necessary if we have plenty of energy keep in mind that this pathway is only 
generating how many net atps two right so most of the ATP that's coming from glucose is not coming actually from the pathway it's coming from whatever happens to this we'll talk about this next term so next term this guy is going to go to actil COA actil Co is going to get oxidized to CO2 and that's going to generate a ton of ATP so if we can prevent the formation of this okay we stop that glut of ATP because cells only have a certain amount of ATP they can make if well there's another correlator to this so if ATP levels are high and we start making this I talked about in class what the consequences of that are anybody remember what they are it goes to Fat because we make acet koay we can't take that acet koay any further and the body says oh we got plenty ATP let's turn it into something we can store fat that's the fructose business that's right it's why high fructose corn syrup is a problem remember that Kevin ahern's pet theory of why everybody's getting fat overweight gaining weight I didn't say grossly overweight I hope everybody isn't grossly overweight okay so I hope I'm not yesly is anytra be froming gsis and going with hyn syrup you can say like many more molecules you can't really do that okay you'd like to are you are you a chemistry measure by chance no okay it sounds like a chemistry question that's why ask uh the um you would like to do that okay but you've got to realize that we're focusing on one thing and this tracing this one thing all the way through but the cell's needs are diverse and pyrovate can go to many things for example so even though could I trace it all the way to you know how many fat molecules if this only went to acetel COA and only did that yes I could do that but the reality is is in the mix of the cell there's no way that that only gets focused into one thing yep yeah how is glucose phos regulated in gluc how is glucose 6 phosphatase regulated in glucano Genesis I haven't told you would you isn't one of the regulator it is would you like me to tell you if I tell you then you have to know do you still want me to tell you uh it's it's regulation as far as I know is not allosteric and it's regulated in a manner similar to hexokinase which is a substrate level that's a little odd so we don't we don't worry about it I've tried to keep it as simple as I could for you yeah that we have that one enzy and we can get hangover from having that enzy do I drinking tonight or is that the of course or maybe Monday right so okay so her question had to do with um could I talk about alcohol De idrogeno and the um how we lack that step Etc so yes I'd be happy to do that that actually relates to the three fates of pyate and that is actually right here and um let's see there's a better figure here okay so we'll actually talk about this reaction a little bit next term also but um before I do before I talk about this reaction let me back up just a little bit and give it a little perspective so the perspective the reason that this reaction is important as I noted in class earlier are because cells have to do Redux balancing okay so Redux balancing is a very very important thing here's glycolysis from glucose down to pyate all right and a critical step in that process is that single oxidation there's only one oxidation step that occurs in glycolysis it occurs twice because we have two three carbon molecules but it's the same reaction that oxidation step requires NAD and NAD is a limiting substance in the cell so because there's there's a limiting amount of a of NAD in the cell 
we have to recycle whatever we uh convert into nadh now if we have plenty of oxygen present and we'll talk a lot about this at the beginning of next term if we have plenty of oxygen present nadh dumps its electrons into the electron transport system and regenerates NAD and we're right back here so as long as we have plenty of oxygen present we have plenty of NAD present when we start running out of oxygen we start running out of NAD and then fermentation reactions become important I noted there were two excuse me two sets of fermentation reactions that occur uh one that occurs in animals and one that occurs in yeast and bacteria so the one that occurs in yeast and bacteria is the one that's depicted on the screen here and that is the nadh which is produced in this reaction is converted back to NAD by this reaction and I'll show you that reaction just a second but this reaction right here regenerates NAD which can now be used to keep this process going well the more we keep turning the cycle the more ethanol we produce and as someone in class noted ethanol is a little toxic for cells kind of like glucose is toxic for cells you can't get cells to make too much ethanol because they die so if you're fermenting making beer or something like that you get about 12% or so ethanol like wine or something and the the yeasts pretty much poop out they just can't do much more than that okay so if you want to have liquor or something like that you have to distill it meaning you know get rid of the water and concentrate the alcohol but um to answer your question relating to the hangovers that actually comes from the next figure which is this guy right here which is showing you in more detail that reaction that's going from pyrovate over to ethanol right this what I'm going to tell you right now you don't need to know I'll talk about it next term but it's a kind of a cool thing this reaction right here pyate to acid alahh is a very unusual reaction in that it's a nonoxidative decarbox silation okay it's a nonoxidative decarbox silation we lack the ability to do that in our cells we combine the oxidation and the de carox in the same enzyme okay so we can't go to this intermediate from pyate bacteria and yeast can do that and so when they get to this intermediate then it's a single step for them to convert their nadh back to NAD now what I said in class was we've got this enzyme we can't go here but we can certainly catalyze this reaction of course this reaction in US is detoxifying ethanol and making something else that's fairly toxic but that gives us the hangover and that's acid Alid so when we go out and we have a a lot of alcohol our liver converts this back my thing is beeping at me I think my battery is ding in this it is okay um our liver converts ethanol back to this guy and it's this guy that gives the hangover yeah well again about all we can do is either go here it turns out there's there's some chemistry here so what's acid alahh why is acid alide make us kind of sick anybody know think about chemically what's what's what's its nature come on organic chemist what is this wild sta in the darkest file of some sort is it a nucleophile of some sort well it's not what I'm looking for chemically what is what kind of molecule is it an alahh it's name tells you right and what do we know about alahh they're not reduced this is a reduction okay chemically they're very unstable they're very readily oxidized so what happens when you oxidize acid alahh is that dehr no come on folks oxidation go from an aldah 
what does an alide get oxidized to acid we get acetic acid out of this okay so you're making vinegar and that doesn't even take an enzyme acid aldhy is very reactive and so once you start making up much acid alahh it rapidly goes to acetic acid and that's probably what actually makes you ill go drink a gallon of acetic acid and see if you have a hangover that's Kevin ahern's pet theory about hangovers by the way so it kind of goes along with Kevin's pet theory about why we get fat yes ma'am so you said that n just make come from yeah it's a good question so Ned is a limiting resource in the body does that mean that you know why can't does it mean we can't make anymore or whatever it means that cells are very efficient things and in general cells never make more of things than what they need and so NAD is one of those things that it makes a certain amount of and that's pretty much what a cell has because these processes work so well yeah if it didn't then it cells I think would make a greater stock pile of them but because they've got these backup mechanisms like fermentation and so forth to to recycle um they don't need to make more than that and whenever cells can save energy by not making too much of something they're better off because they can use the energy for something else I'm going to challenge you guys with something here I've never had a class do really well on two straight exams never okay if you guys do I don't know I go dancing down the street or something so with all blue hair what's that with all blue hair with all blue hair okay seriously I'd be very happy if you guys if you guys really nail this exam so let's see you do it yes um when G um it can go into three different yes places yep um you if you said or not whether what PH okay yeah we know so let me yeah I'll just say a word here so um her question has to do with glycer I'm sorry glycer glucose 6 phosphate which I showed a figure that shows it can go it can go three directions that was actually in the glycogen metab let me show you that real briefly and uh fates of g6p uh and may see it um yeah okay there we go yeah so glucose 6 phosphate all right so glucose 6 phosphate can go to pyate that would happen through glycolysis glucose X phosphate can go to glucose that would happen by Genesis it can go to nadph and ribos by the penos phosphate pathway so uh we'll briefly talk next time about the penos phosphate pathway no you're not responsible for it here and you're not even responsible for knowing this but briefly I'll tell you ribos of course is a sugar that's needed for making nucleotides so we'll talk about that a little bit then and nadph is related to nadh nadph is used mostly in um anabolic reactions making things so we'll see when we're making fatty acids for example dph is needed so cells tend to be kind of biased they use NAD in oxidation reactions and they use any dph in anabolic reactions so they have to have a source of that and this turns out that glucose 6 phosphate gets oxidized in this Pathway to make n adph yes Connie question about the structure of pyate carox pyate carox structure question um you said it has biotin it uh pirate carox lades has biotin that's correct okay uh where does the does the carbon that um the carox use attach to the biotin does the carbon that the carox use attached to the biotin mean the carbon dioxide yes yeah so the carbon dioxide the reason biotin is used in carox reactions uh is because biotin actually has very good affinity for carbon dioxide so um we almost always see it 
associated with a carboxylase if I can find the figure again I will show you uh that is um up here let's see no I'm sorry it's down here um there it is bitin biting all right so um biotin is a fairly straightforward molecule that you can see there it attaches to the lysine side chain on the pyate um carboxylase and I thought I had one showing the binal carbon dioxide also yeah so there's the carbon dioxide bound to it right there other questions yes okay yeah it's real yes that's the first time I've been asked that question This this term so I I I I will answer that question for you here let me show you the figure that she's referring to which is this figure right here okay everybody's this is the most hated figure of the whole term would you guys agree with that or not yeah I think it was one of those to be more complicated you had one you hated more than this I think so okay well this usually ranks pretty high anyway in in the hate department so there's something that's very confusing about this and what her question is back here is frequently confusing to people so her question is this look at look at what's happening here so if I have insulin I said insulin was activating phosphoprotein phosphatase and phosphoprotein phosphatase was favoring pfk2 and pfk2 was favoring fructose 26 bis phosphate and that was favoring which glycolysis GL Genesis what's that it favors glycolysis right now it seems a little odd okay because insulin is favoring glycolysis we've already eaten a big meal and now we're breaking this thing down why do we want to do that because because glucose is a poison so now now the reason she's confused and the reason it's the reason it's a very good question is let's think about what insulin's doing insulin is stimulating the synthesis of glycogen that's an anabolic process right and it's stimulating glycolysis which is a catabolic process right you probably haven't thought about that before why is that a problem well it turns out it's not a problem it's only if it's in the same cycle right the reason that both of these are being stimulated is they both act to reduce the concentration of glucose and it's what what you said here glucose is a poison if you put it into glycogen it reduces the concentration of glucose if you break it down in glycolysis it reduces the concentration of glucose that poison concentration is reduced does that make sense is does it matter when each one is used meaning like when is glycogen formed and when is glucose when is glycogen formed and when is gluc glucose broken down simultaneously yeah yep and the opposite opposite effect let's let's go back to the opposite effect we do epinephrine what's going to happen epinephrine is going to favor gluconeogenesis which is a an anabolic process and it's going to favor glycogen breakdown which is a catabolic process why does the cell do that to get glucose out into the bloodstream both of them combined to make glucose out of the blood I me this system is pretty beautiful I think so um epinephrine epinephrine favors glucogenesis and glycogen breakdown both of those allow the liver for example to export glucose yes I'm sorry glucagon was the other one right glucagon acts very much like epinephrine does but glucagon only acts in the liver yes does this also occur in muscle cells this process also um this is process also occur in muscle cells I remember when I was when I was rewatching the video and thinking about it it didn't make sense for it to the current muscle cells that's a that's a good question 
Connie um I don't think it's an important pathway in muscle cells it's a very important pathway in liver cells because of the liver's role in modulating glucose um I I don't think that it's a very it's a good question I can't take off the top of my head but I don't think it's a very important pathway in muscle cells no the needs of muscles are very different than than what the liver is doing information overload yeah yeah yeah Connie sorry Qui question about that and it has like that's name pf2 and fbp2 does it have a name that incorporates both those two domains Charlotte okay um what's that Let's see we give a name of Lucy what was what do we call Lucy oh that was the UTP glucose P furl that's right yeah yeah um yeah I guess the way a lot of people refer to is by whichever one happens to be active at the time so when this is active you call it pfk2 and when this one's active you call it fppp 2 yeah it's it's a confusing it's it's a cool enzyme though I don't know if any other enzyme metabolism that's like this one yes um sorry can you define a reciprocal regulation can I define a reciprocal regulation certainly reciprocal regulation occurs when something that something can be a molecule that something can be a modification or whatever modification being say a phosphorilation def phosphorilation for example reciprocal regulation occurs when something has opposite effects on catabolic and anabolic Pathways that is the same ones like glycolysis and glucogenesis so when we look at a prime example of a reciprocal regulator f26bp is an excellent reciprocal regulator because it favors glyc is and it inhibits glucogenesis an example of a reciprocal regulation involving phos involving Co modification is that of glycogen phosphor and glycogen synthes phosphorilation of each of those converts glycogen phosphor as into the more active form and converts glycogen synthes into the less active form so I I um on a previous video gave my uh phone number out and said call me over the weekend you know you can do that and people say why do you do that you know and in a class of size I get a hand I get a few calls I get a handful of calls not too many people call but I've discovered I can't do that anymore because my videos are out there on YouTube I've actually gotten a few weird calls from people who have watched the videos on YouTube in various places so if you want to call me send me an email and I'll send you my cell phone number you can call me um yeah isn't it weird yeah Liz a little different of oh okay so question is can I talk about the different types of phosphorilation there are three okay three types of phosphorilation that occurs in in cells so uh let me give the only one you've seen so far is what's called substrate level so prime example of substrate level phosphorilation happens in glycolysis and if we go uh back to here that's not a very good figure uh if we go to reaction here okay uh this is called a substrate level phosphorilation the substrate phos level phosphorilation means that a high energy molecule is transferring a phosphate directly to ADP to make ATP this guy is full of energy it's absolutely bursting with energy transfers phosphate because it has more energy than ADP does transfers it to phosphate and we make ATP in the process so that's called substrate level phosphorilation and as I said in class it's a relatively minor Min source of ATP for us relatively minor source of ATP okay the second type of phosphorilation that we have in our cells is called oxidative phosphorilation 
and it occurs in the mitochondria in conjunction with the electron transport system we'll talk a lot about that next term okay and that one probably accounts in ourselves I don't I'm not a a stochiometric so I can't tell you that but I would I would wage you we're talking about at least 95% of the ATP and our cells coming from oxidated phosphorilation it may be more than that the third type of phosphorilation and by the way these phosphor relations all are involving the synthesis of triphosphates okay so these are phosphor relations making triphosphates making ATP the third type of of phosphorilation uh that occurs is called photo phosphorilation that's basically what's happening in photosynthesis where the energy of light is being captured and the energy that's captured is transferred ultimately to make ATP we unfortunately don't talk about photosynthesis in this class it's really unfortunate um but uh what you learn when you examine photosynthesis is that the electron transport system is very very very similar to U the chloroplast what happens in the chloroplast and so uh if you learn electron transport you have a pretty good idea what what's happening in uh photosynthesis so those are the three types of phosphor relations that make ATP yes sir why is glucose poison what is what is glucose poison why is glucose poisonous uh glucose messes with the osmotic balance of the cell and so if if you mess with the osmotic balance of the cell you burst cells that's why we'll talk next term again everything's next term but next term we'll talk about the need to balance uh the osmotic balance of the cell not so much with respect to glucose but with respect to ions like sodium and potassium cells are in a constant battle to balance the osmotic pressures because the membrane that we have is not very strong and if that pressure gets too great they burst very readily so cells have to continuously make I'm sorry burn ATP to balance that those um those pressures okay yeah um when you talking about hypoxia induction Factor you chart that had of enzymes that it stimulated yeah do we need to like memorize that do we need to memorize the chart that had that the enzymes from hypoxia induction Factor well she's declared that you don't so okay well uh in general what's the rule I have in the class don't need to memorize things unless I don't know you mentioned it in class though and you also gave it so in general if I mention it it's it's fair game right yeah so you didn't mention it so that's fair game okay yeah uh I'll be honest with you I mean to answer your question am I going to ask you to list six enzymes that are that are that are induced by that probably not but it's you know something related to that is fair game so I think you you should know what's what's there yeah can you just go over hydis real quick and talk about how specifically hydis because water is pretty abundant in the cell how specifically get t on one how does hydrolysis happen well how hydis right all the time okay um that's actually a bigger question we haven't talked about that but let me just uh try to address that um if we look at proteins for example okay um and we say well we've we've studied enzymes and U the enzyme uh that break uh peptide bonds are catalyzing hydrolysis reactions so why aren't those hydrolysis reactions occurring so rapidly in the absence of that enzyme is part of what you're asking I think okay and the answer is that you can hydrolyze proteins and you can hydroly nucleic acids um it's easier to hydrolyze 
proteins than it is nucleic acids. But the answer to your question is that if you want to hydrolyze proteins, it takes a fairly strong acid, a strong proton concentration, to break that bond, and in the cell we don't have that happening; we don't have those high proton concentrations. So in the environment of the cell, where the proton concentration is relatively low, the peptide bond, and for that matter the phosphodiester bond of nucleic acids, are both quite stable. All right, so that's probably not as satisfying an answer as you want, but if you think about what has to happen in a serine protease to basically hydrolyze a peptide bond, we have to create a nucleophile in order for that to happen, and creating that nucleophile is what facilitates that overall process occurring. So again, as long as we don't have those nucleophiles floating around in the context of the cell, we have much more stability for the bonds. Yes? [inaudible] Sorry, can you talk about hypoxia? Can I talk about hypoxia? Okay. So hypoxia is a condition where cells run out of oxygen. So when cells run out of oxygen, and again I think that's on here somewhere, under other considerations, there it is, there we go, okay. When do cells run out of oxygen? Well, cells running out of oxygen is a fairly normal sort of thing. We think about, well, here's this muscle cell, it's exercising, it's running out of oxygen. So cells have to be able to handle conditions where their oxygen concentration is low. Maybe I'm a rapidly growing cell, maybe I am a cell that is in a child, and this child is going through a growth spurt. Well, during part of that growth spurt, not everything gets coordinated real well, so I might have, for example, muscle cells that don't have a real good supply of blood vessels and so forth, and so they don't have as good of a supply of oxygen as they otherwise would need. Yet the rest of the body is saying, hey, we're growing, we're growing, we've got growth hormone, we're going to go do our thing. The body has to have a way of adapting when oxygen concentrations are low that allows those cells to survive. So cells have built into them this ability to handle what's called hypoxia, low oxygen concentration. That hypoxia is a condition of low oxygen concentration, and it causes activation of this protein called HIF, hypoxia induction factor. Now, somebody in the class sent me a message saying HIF makes GLUTs. It doesn't make GLUTs, okay? HIF is a transcription factor, which means it stimulates the transcription of the genes that make GLUTs. So HIF doesn't make GLUTs; it's very important to be precise in your language. HIF does not make GLUTs; it stimulates transcription of the genes that code for GLUTs, basically, okay? Why is that important? Well, GLUTs, of course, are glucose transport proteins. They move to the surface of the cell membrane, and the more of those you have on the surface of the cell, the more likely they are going to be able to bring glucose in. When oxygen concentration is low, it takes more glucose to get the same ATP. If we don't have oxygen in the cell and all we can do is ferment, we get a total of two ATPs per glucose. If we have oxygen available, from glucose we get 38 ATPs. So if we want to get the same amount of ATP in the absence of oxygen as we do in the presence of oxygen, we need 19 times more glucose. Well, having more GLUTs out
there makes a lot of sense you put more things out there that can suck more glucose out of the bloodstream the cell's going to be more likely to survive well let's think about this you bring in 19 times as much glucose but let's say you've got a limiting amount of glycolysis enzymes right they're already saturated they're already at their Vmax right so in addition to making gluts wouldn't you want to make more enzymes of the glycolysis cycle so that you can handle all that glucose and that's the other thing that hif induces it induces the production of many of the glycolysis enzymes okay that allows the cell to do the two things that it needs well that's great if you're a rapidly growing child and you've got these muscle cells that need all this um um ability to handle U um uh low oxygen concentrations but those same sorts of things are built into tumor cells because tumor cells come from our regular cells so tumor cells that um uh have low oxygen concentrations will do like any other cell does they'll start making hif and hif will start making more gluts so they can get more glucose they will make glycolysis enzymes and in addition they will actually stimulate the growth they will stimulate F release factors that stimulate the growth of blood vessels and all those combine to successfully allow a tumor to grow not all tumors are successful the ones that we see and we know as tumors are those that are successful if they you know never can grow blood vessels they can't handle hypoxia they can't you know um U do other basic processes that are important they never become a tumor they just die and and the body gets rid of them does that help yeah but I just to put another question you mentioned in class that um the tumor cells will activate Hi when it's low in oxygen but you said it's low in oxygen because there's no blood vessels uh but how does activating hif help if there's also no sugars because there's no blood vessels well keep in mind these so the the cells of your body are bathed in fluid okay they are bathed in fluid so even though you don't have blood vessels right here you got blood vessels out here okay right and so everything that those cells are getting is coming through whatever fluid is in the spaces between cells and that's ultimately coming from blood vessels so it's just like putting a big suction device here no seriously I mean that's what that's what it's doing is it's just sucking more of that stuff into there because it it it it has the the additional gluts to to make that possible okay yeah so is H do it bind to an enhancer and it's like an activator so does hif bind to an enhancer and it's an activator and I believe it is yes okay we'll talk about enhancers next term but yeah yes does it make more n as well then does it make any more Ned as well um no no if it did it might be able to handle things a little bit better yeah yes B the boring effect yeah the bore effect so the bore effect comes in several ways that song today didn't tell you enough about the bore effect is that okay not not good enough all right so the bore effect uh way back here relating to hemoglobin okay um arises as a result of several uh things okay so let's uh start with uh this guy okay so the first observation of the bore effect was what you see on the screen that is if you take a sample of blood and you measure its oxygen carrying capacity at different phes within physiological range of course that when you measure at those phes the higher pH will have a higher affinity for oxygen then we'll the 
lower pH that means that when hemoglobin is found in a lower pH in the physiological range that it tends excuse me to give up oxygen well that turns out to make a lot of sense because rapidly metabolizing cells are producing protons and protons are lowering the pH bang okay cells that are rapidly metabolizing need more oxygen hemoglobin says I'm in a place where the pH is low I dump oxygen cells are happy that's the first component of the bore effect the second component of the bore effect is the observation that carbon dioxide also affects that process okay so here was this um original condition where we measured pH 7.4 no CO2 if we measure the oxygen binding at ph7.2 with no CO2 that's the other curve that you saw there but if we take the same thing now at 7.2 and we add CO2 we see even more dumping of oxygen well that again makes more s makes a lot of sense because rapidly metabolizing cells in addition to making protons are generating carbon dioxide so they're dumping carbon dioxide out into the bloodstream again this is a sign that hemoglobin uh picks up and recognizes that um this place the dump oxygen and that's basically what it does so those two things together um make possible The Dumping of oxygen at places where it's needed now the other side the bore effect has to do with the transport of carbon dioxide and ultimately protons back into the lungs so I talked about how carbon dioxide is bound to make carbamates it binds to those Aman groups and gets makes a calent bond and gets back to the lungs the protons can go on to histadine in some cases they can go on to other amines as well and get carried back to the lungs and the lungs it's a very different environment than was out in the the tissues that were being metab metabolized in the lungs the oxygen concentration is excruciatingly high and though I didn't really talk much about it this term in the past when I've talked about this in the class I've made the point that the oxygen concentration in the lungs is so high that it literally forces its way onto the hemoglobin and in forcing its way onto the hemoglobin it favors structural changes that favor the release of carbon dioxide so because of that carbon dioxide gets carried to the lungs but it doesn't go any further stru changes happen carbon dioxide is released and we exhale it so that's basically the bore effect at the molecular level a lot of people talk about 23 BPG as part of the bore effect but it's not it's a separate separate phenomena yeah tell them about what about blood clotting blood clotting are you in favor of are you opposed to that go ahead a uhuh right okay so uh her question has to do with blood clotting so let me go to the uh figure that I show you for blood clotting and um answer um her question this relates to um it's not enzymes it's catalytic strategies I believe and nope it's not uh not there next one alerian regulation okay so um if we look at the Cascade scheme right here okay her question is does a hemophiliac simply lack prothrombin or does it lack fibrinogen or what well we could imagine that if a person lacked fibrinogen completely that they'd be in pretty deep dooo right okay and we could imagine that they'd be in pretty deep Doo if they lack this so people and there's there's different types of um hemophilia but people who have hemophilia tend to lack I want to say it's this guy up here uh and this guy up here or this guy up here so they're further up in the signaling scheme not down here in the clotting scheme because remember if we get too much 
or too little clotting we bleed to death internally and so um probably have some of these things down here but they're lacking up in here it will take longer and it may not form in some cases very readily at all yeah so that's why the addition of these factors to the people's blood hemophiliacs allows them to have a much more normal clotting you guys are looking tired is it time for some acid alahh How's that gonna help I have a really silly question Connie has a silly question um for is it glycogen phosphor okay um the for B you have ATP and g6p that can help convert it into the Tate and I remember look at the figure and it could attach in two places okay both ATP attaches like there's two atps that attach there's two g6s that attach can it like mix and match in any way toap perform like mix and match what like attach one ATP and attach one g6p and it converts that into a T State you're right that's a silly question um no I sorry uh I expected a big laugh out of that so um the uh her question is that we've got a dier and can we get one of them phosphorilated and the other one not phosphorated and alic regulated and you could imagine a variety of schemes but that would probably be a question that if I asked you something like that on the exam I would be crucified for so I am not fond of crucifixion personally I mean personal preference it's it's my personal dis preference yes yes can I go over Western blotting okay Western blotting going way back oh sorry okay Western blotting we talked about uh methods of uh characterizing proteins and Western blotting uh is a technique and where do I have it uh it's up here down here I mean logical Techni okay Western blooding okay Western blooding is a technique that uses antibodies to basically flag a protein of interest in a mixture of proteins okay well imagine if you will that I had isolated all the proteins of a cell and I said well I want to flag this prote prot insulin that came from this mouse and so I have an antibody that will bind specifically to insulin and let's say that I've dyed my antibod so it gives a brilliant green color and I take my mixture of proteins right here and I dump my antibody into it and I see green and I go wow I can't tell anything why well the green came from the antibody I don't know if anything is bound it's all stuck in this tube so if I want to be able to um use this in a meaningful way I have to separate the proteins and then use the antibody to bind to those separated proteins because then I can see is it binding to everything that's in there is it binding to one thing that's in there or is it binding to nothing that's in there right so that's what western blotting is all about it involves running first of all an SDS page gel SDS page if recall allows us to separate the proteins on the basis of their size and after we've run that we transfer by using electricity actually those proteins that are in the gel slab we transfer them onto a membrane and we set the membrane up so that it literally binds to those proteins and won't let them go the calent bonds are actually formed okay so the proteins that were on the gel are now sitting there on that membrane and they're not going anywhere they're in exactly the same position they were on on the membrane I then take that that membrane and I put it into uh what calling in labatory we actually use um Glad bags okay you take take a bag all right and you uh have a buffer and in that buffer you place that antibody you let it mix for a period of time and then you wash off the 
excess. So you take it and you dump it in a few solutions. Now, if you have binding, the washing isn't going to remove the interaction between the antibody and the protein, whereas if you don't have binding, all the excess antibody is going to get removed. So then you can look at this and say, oh, I did have that protein there, here's how much I had, and it's of this size. And that's what Western blotting does. Make sense? Good. Yes? When the proteins are... that's a really good question. Okay, one of the questions that people commonly ask is, wait a minute, SDS, isn't that a detergent? Isn't that going to denature the proteins, and wasn't that the idea of this whole process in the first place? And the answer is, it can be a factor. It can be a factor. So just because you see nothing doesn't always mean that the protein isn't there. Maybe the antibody doesn't recognize the denatured form; maybe there's other things blocking it from getting there and binding to that. So you're exactly right, that's possible. Several good questions tonight, but I think we are at an end. So if you want to call and talk to me, send me an email. I won't give my number out to the YouTube crowd. And if not, I'll see you in class at 9:30. Study hard. What's the oddest phone call you've got? Nothing too odd, but I've had people call up and announce, hey, I saw your thing on YouTube, do you really take questions? Okay, sure, yeah. And I answer their questions, too. Is there a BB 452 also? There's not. There used to be, but there's not, no. What is [inaudible]? It's the one that [inaudible].
Medical_Lectures
22_Biochemistry_Glycolysis_II_Lecture_for_Kevin_Aherns_BB_450550.txt
Ahern:...friday, and it will be like before so I want you to sit every other one and so number 1 being here and then every other one over. Number 1 being here and then every other one over. So if you count and you see that someone is sitting in 2, don't move to 4. They'll have to move. Make sure you sit in the odd numbered seats and then you'll be okay. And then over here, same thing. Starts and then moves over. And the same is true over there. That will align with everything down here. The sooner we get seated, the sooner we can get started, and that's important. Let's see, the material that we will have on the exam, everything I covered through Monday and on Monday, I showed this slide but I only talked through this one right here. I'll talk about these reactions down here today. You're not responsible for those on the reaction. But you are responsible for this one up here. So only the things that I talked through on Monday. I did the review session last night, I have posted the review session online, you can take a look at that, if you weren't able to make it and you'd like to see that. You'll notice on the review session that I say that I am taking questions. So I do this sometimes with students. If you would like to submit a question for me to put on the exam, or to consider putting on the exam, I would be happy to do that. I will take one student question and put it on the exam. Send me your favorite question. Send me what you would like me to put on there and I will think about that and pick one student question to put on there. You need to have me that by tonight. So if you get that to me by tonight when I finish writing the exam, I'll make sure the one student question is on the exam. There was a question about did we sing loud enough for the extra credit. The answer is basically you did, yeah. If there's any doubt, we're going to sing again today so maybe that would help just make sure there's going to be some extra credit or something, that would be good. So what I'm going to do today is actually kind of abbreviated. I'm going to talk through some more of the glycolysis. I'm going to talk about some consideration like other sugars that enter and some health considerations relative to that. And then I'm going to leave a little bit of time for questions if you want to ask questions. If you weren't able to come to the review session last night and you would like to ask questions here today, I will be happy to answer questions for you. If you want to leave, that's fine too. So whatever works for you. Clear as mud? Yes, sir? Student: So you gave us the figure in Monday's lecture, you had all the figures of the 10 steps, but you only went over the steps up to 8. Are we still responsible for the ones? Ahern: His question has to do with the 10 steps of glycolysis. That's what I was talking about here. This is step number 8, that's where I've stopped. So I'm not holding you responsible for steps 9 and 10. Student: Even though you did give us the steps. Ahern: Yeah, but I didn't talk about them. So, yeah. Connie? Student: You said we need to know the figures that were easy to remember and you mentioned the first 4 of glycolysis. And then you said "I might add some more figures to go along" but you never did. Ahern: I never did, would you like me to add some more? Class: No! Ahern: Okay, then I won't. [class laughing] How's that? Student: So the variations of glucose and fructose pretty much. 
Ahern: What I'm going to hold you responsible for knowing the structures of in glycolysis for exam will be the various structures of glucose and fructose, yeah. Glucose and fructose molecules. Like glucose-6-phosphate, fructose-6-phosphate, etcetera. Okay? We'll keep it simple. People last night said that they felt that the material is greater than the material in the first exam. Is that the general consensus in the room? Class: Yeah. Ahern: Really, all right. You know, it's always weird. I never had that perspective, so I think, "oh, okay." But then, I guess I look at it differently than you do too, so that's fine. The format of the exam will be exactly as the first exam. Points may be slightly different for each section, but you'll have a short answer section, you'll have a problem solving section, and you'll have a longer answer section. So all three of those sections will be there. I don't think time should be an issue. I tried to keep it like like I kept the last exam where hopefully time was not an issue. Yeah? Student: Will you tell us [inaudible] before the test again...? Ahern: Well, since I haven't written the exam yet, I'm not sure. It's not going to be significantly different from what it was before. There might be a few points here and there but I really don't think it's going to be, not think, I know it's not going to be significantly different. I've written about three quarters of the exam, so I know pretty much what's there. Yeah? Student: Can you post the curve from the last exam online anywhere? Ahern: I did. Student: Oh, is it? Ahern: Yeah. So it's on the schedule page. One other thing with respect to grading. The TAs have a big issue with being able to grade it during next week. So it means they got two monster exams and they're not going to be able to have it graded before Thanksgiving. I really wish we didn't have that happen, but there's not really a way that we have around that. So they've got biophysics and a biochemistry exam both next week and they're not done with those until Wednesday so there's no way to have the exam graded. So you can go home and have Thanksgiving and you won't hopefully think too much about your grade. When you come back, I can assure you we will have the exams ready for you when you come back on Monday. But the exams will not be available next week unfortunately. Yes, sir? Student: Previous exams that were resubmitted for rescoring, are those available in the office now? Ahern: Previous exams for people who had regrading requests, those are back in the BB office. Well, let's get into this material. I talked last time about this interesting enzyme, phosphoglycerate mutase, and I pointed out to you that a mutase has this odd system of operation where it adds a second phosphate and then it takes off the phosphate that was on there originally and as a result of that, there's an intermediate that has 2 phosphates and that's how we get to 3-BPG. So that's a really cool and interesting reaction. One of the reasons that that rearrangement of the phosphate is being made is to essentially create a high energy phosphate. And that's what happens in the next step of glycolysis. The next step is catalyzed by the enzyme known as enolase. And enolase catalyzes the removal of water. So water is taken out of this guy right here. That creates a double bond next to this phosphate and this phosphate is next to this carboxyl group. 
Phosphoenolpyruvate, which you can abbreviate PEP. And PEP has a lot of energy. PEP has a lot of energy. PEP is one of the highest energy molecules that you will find in your body. Very high energy. And recall the earlier example that I talked about in glycolysis: where we have a molecule that has a very high energy and it has a phosphate on it, it can transfer that phosphate to ADP and make ATP by substrate level phosphorylation. And in fact, that's exactly what happens in the last step. In the last step, we see this high energy phosphate being transferred directly onto ADP to make ATP and that yields pyruvate. Now, this last step is a really interesting step. It's the step I like to refer to as the big bang. The big bang. Why do I call it the big bang? Well, this reaction right here has a very large, negative delta G zero prime. Very large negative delta G zero prime. Now, notice that ATP is being made and in spite of that, it still has a large negative delta G zero prime. There's almost enough energy in this molecule to make two ATPs. There's almost enough energy in the molecule to do that. That's why I call it the big bang, because when this sucker explodes, it makes ATP and there's all this excess energy. What happens when we have excess energy? Yeah? Student: Is that the metabolism of people? Ahern: You're getting ahead of me. Slow down. [laughs] Yes. So we have all this excess energy that we're not capturing as ATP. Well, what happens when we have excess energy in any reaction? Well, that energy is just lost. And when we lose energy, we lose it as heat. This big bang reaction gives off a lot of heat. So one of the reasons that we get hot when we exercise is we're going through a lot of glycolysis and we're going through a lot of this reaction and we're generating a lot of excess heat on the side. So the reason we get hot is we're just not 100% efficient at making ATP. That excess energy is given off as heat. That's why we sweat and get hot whenever we're exercising. Pyruvate kinase I'll say a little bit more about in a little bit, and that may not happen actually until Monday, but pyruvate kinase, now this is very odd for glycolysis, pyruvate kinase is the 3rd enzyme that's regulated. This is the only metabolic pathway that I know of where the last step is regulated. It's actually regulated both allosterically and by covalent modification. It's very odd. Now, there's a reason why. I'm not going to tell you the reason today because it's not going to make any sense to you, but it makes sense when we look at the reversal of this pathway. The reversal of this pathway involves the synthesis of glucose and the synthesis of glucose is called gluconeogenesis. And it uses many of the steps of glycolysis. It doesn't use this step, but it uses many steps of glycolysis. It's important for the cell to be able to turn this enzyme on and off. If we think about it, we've got a very, very large delta G zero prime. If I can't turn this enzyme off, what's going to happen whenever I've got PEP? It's going to go almost completely to pyruvate. Almost completely to pyruvate. There will be times we don't want that happening. So the short answer to the reason we want to regulate this enzyme is because of the large delta G zero prime. We really want to be able to regulate that enzyme.
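To put rough numbers on the big bang (these are approximate standard textbook values, not figures quoted in the lecture): the standard free energy of hydrolysis of PEP is about -62 kJ/mol (roughly -14.8 kcal/mol), while making ATP from ADP and phosphate costs only about +30.5 kJ/mol (about +7.3 kcal/mol). So for the pyruvate kinase reaction, PEP + ADP -> pyruvate + ATP, the overall delta G zero prime is still roughly -62 + 30.5 = -31 kJ/mol even after paying for the ATP. That is the sense in which there is "almost enough energy in this molecule to make two ATPs," and the roughly 31 kJ/mol that isn't captured is what shows up as heat.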
So the 3 enzymes in glycolysis that are regulated are hexokinase, I said I won't say too much about that one, phosphofructokinase, which turns out to be the most important one for the most part, and pyruvate kinase. Not surprisingly, all three of those reactions have a fairly negative delta G zero prime. Now glycolysis as I said is unusual in being regulated at 3 places. And again, there are reasons why that's happening, but as you can see with this example, being able to turn off and reaction that has a large negative delta G zero prime is important for the cell. That's part of the reason why the cell has those three different enzymes that are regulated. Questions about that? Yes? Student: With that being the bottom that kind of chokes up things? ending the final step of the glycolysis process. Is there any kind of a storage or I guess battery area for the PEP? Ahern: Is there a place to keep PEP, is that what you're saying? Student: Yeah, is there somewhere where it's stored until PEP need to be converted? Ahern: Yeah. That's actually a very good question. So his question is, "does the cell store PEP around?" To my knowledge, it does not. When we look at the metabolic pathways involved here, we see that PEP goes to here and if we go backwards to the synthesis of glucose, then PEP is driven that way. I don't know of stores sitting around as such. Comment? Student: What did you say were the two ways that pyruvate kinase is regulated? Ahern: Pyruvate kinase is regulated both allosterically and by covalent modification. And I'll talk more about those when I talk about regulation. Back here. Student: Can you name the three things that are regulated in glycolysis? Ahern: The three enzymes? Yes. The three enzymes regulated in glycolysis are hexokinase, PFK, which is also known as phosphofructokinase, and pyruvate kinase. Those are the three regulated enzymes in glycolysis. To come back to your question as I'm thinking through my head about this, there are a couple of reactions where the phosphate of PEP is donated to something. So PEP in a couple of reactions, they're not major reactions, but in a couple of reactions, PEP serves as a high energy phosphate source kinda like ATP does, but they're not central reactions. Alright, so that's what's up with that. We need to, there's the overall summary, blah, blah, blah, no you're not going to memorize that. We do need to consider some things about glycolysis that are really important, though. And it's one that I sort of glossed over, but I want to come back and visit right now. And that's a phenomenon known as redox balancing. Redox balancing. That's the first time you've heard that expression. What in the world is up with redox balancing? Well redox of course refers to reduction oxidation. When I'm talking about balancing, I'm not talking about the fact that every reduction gives an oxidation and every oxidation gives a reduction. That's not what I'm talking about. So the balancing is something different from that. When I'm talking about redox balancing, I'm talking about the fact that cells have a limited number of electron carriers. Cells have a limited number of electron carriers. So so far, we've been saying, "okay, well here's “the glyceraldehyde-3 reaction, it gets oxidized “to form 3-phosphoglycerate, “I'm sorry, 1-3 diphosphoglycerate and NADH is produced." And we didn't think anything more about it. But what I'm telling you now is that cells have a limited amount of NAD. Cells have a limited amount of NAD. 
When there's plenty of oxygen in our cells, the NADH that's made goes and dumps off the electrons in the electron transport system and becomes NAD again. I'll repeat that because that's a very important point. When there's abundant oxygen in our cells, the NADH that's produced in oxidation reactions dumps its electrons into the electron transport system and becomes NAD again. As long as we have plenty of oxygen, we don't have to worry about balancing because we're automatically grabbing the electrons here, going over here, and dumping them and then coming back as NAD again. We have to think about balancing when we don't have sufficient oxygen. We have to think about balancing when we don't have sufficient oxygen. And that's what you see depicted on the screen here. If I don't have sufficient oxygen, then NADH that's made here cannot dump its electrons into the electron transport system. We'll talk about that as a very important consideration next term. But suffice it to say, in the absence of oxygen, if we don't do something with that NADH, it's going to accumulate and we're not going to have any NAD left. If we don't have any NAD left, what's going to happen to that reaction? It ain't gonna go. We got trouble. Well, glycolysis is such an important pathway for the cell, because it makes all kinds of useful things, that we really can't afford to have that pathway plugged up. But I told you that there are times that cells run out of oxygen and they have to be able to adapt to that. So they've adapted a mechanism that you see here where the NADH, instead of dumping off its electrons to the electron transport system, dumps its electrons onto the pyruvate. When it dumps its electrons onto pyruvate, a couple of things can happen. If you're a bacterium or a yeast cell, that's the reason, that's what fermentation is all about. What they're making is ethanol. Notice that the byproduct of that is more NAD. And that NAD now can be reused back up here. We've just balanced the equation. So we've balanced it, we've regenerated the NAD that we need and we didn't even have to have oxygen for it. That's why making beer, making wine is occurring in an environment where there's no oxygen, because the cell has to do what it can, and so it starts making ethanol and at the same time making NAD that it can use to keep this process going. Student: [inaudible]? Ahern: Her question is, "isn't ethanol bad for cells?" I'm talking about yeast and bacteria here, first. We do something different. But yeast and bacteria also don't like this too much. You can get up to maybe 12 to 15% ethanol before they knock out. The reason we distill liquor, we distill the alcohol in liquor, is because the yeast and bacteria can't make it at a high enough concentration. They die by the time it gets to that point. They use a still to pull out the ethanol and make vodka and all the various things that are there. So the bacteria and yeast don't like it either, but they tolerate it a lot more than we do. We don't make ethanol. I'm going to show you something like that in just a second. Instead, we convert pyruvate into lactate. So when I showed you those 3 fates of pyruvate earlier, I said one was it could go to acetyl-CoA if we have oxygen, I said it could go to ethanol if you're a bacterium or a yeast. The third thing is it can go to lactate if you're an animal. Lactate is known as lactic acid.
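Written out as summary reactions (these are the standard textbook forms, filled in here to make the bookkeeping explicit), the redox balancing works like this:
glyceraldehyde-3-phosphate + NAD+ + Pi -> 1,3-bisphosphoglycerate + NADH + H+ (the step that consumes NAD)
pyruvate + NADH + H+ -> lactate + NAD+ (lactate dehydrogenase, in animals)
or, in yeast: pyruvate -> acetaldehyde + CO2, then acetaldehyde + NADH + H+ -> ethanol + NAD+
The NADH made in the first reaction is consumed in the second, so NAD keeps cycling even with no oxygen around. The net anaerobic result in our cells is roughly: glucose + 2 ADP + 2 Pi -> 2 lactate + 2 ATP (plus water and protons, depending on how you do the bookkeeping). That cancellation of NAD and NADH between the two steps is exactly what "redox balanced" means.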
Some people like to think that because when you exercise heavily, excess lactic acid is produced and that's what leads to sore muscles. Other people dispute that so whether that's true or not, I won't try to weigh into that argument. But sufficed to say, lactic acid is a byproduct of heavy exercise because your muscles are using energy faster than oxygen can come to them. They've got to make something to keep that glycolysis process going. Questions about that? Yes, sir? Student: The site's just choking down the reaction with not having any NAD plus left over as an electronic receptor. If that actually occurred, would that be the major consideration or is the fact that you also have NADH that's in excess and runs around willy nilly reducing stuff in the cell? Ahern: His question is "are there other considerations “besides the fact that you run out of NAD here? “Can the NADH dump its electrons to other things?" The answer is basically not, no. What will happen is NADH will just accumulate but nothing else will happen. Keep in mind that I describe this as running out of NAD, right? But remember that NADH is a product and NAD is a substrate, right? So we could imagine that if we let that NADH get even a little bit too high, we're going to favor the backwards reaction. So that's the bigger consideration with NADH. So we get too much NADH, even if we haven't used all of the NAD, that reaction isn't going to go forward for very long because the product is going to accumulating and the delta G is going to becomes positive. Yes? Student: You mentioned in a previous lecture I believe that during these conditions that lots of the electrons, like the cell uses NAD to get rid of the excess electrons in these conditions . . . in reaction [inaudible.], more then a normal amount of electrons escape it, so to speak. Ahern: I'm not sure I understand your question. Student: So in this condition, when there's low NADH, is there a possibility of some of the electrons not being picked up and going around causing problems like you talked about before? Ahern: Okay, so his question is if you run out of, well, if you run out of NAD, the question is "do the electrons go somewhere else or do they cause problems?" I would say no. So you're basically going to stop the reaction when you start tipping the balance of products and reactants. That's really what's going to determine whether or not that reaction is going to go forwards. I wanted to say just a word about what happens inside of us and this is always something of interest to students. Here's the same thing on the screen that I showed you schematically on the last figure. We see, again, the same sort of phenomenon. We see that NADH is produced by this oxidation reaction over here. It gets used again when we're out of oxygen to remake NAD and we're back here. This again depicts what happens inside of bacteria and yeast. You don't see lactate on there. I'll tell you something that will surprise you. We have in our bodies abundant alcohol dehydrogenase. Why don't we make ethanol? Wouldn't that be a really cheap Friday night? [Class laughing.] Just hold your breath, right? And you'd have, you'd start producing this stuff, right? Well, it turns out that we don't have, this process actually takes this reaction right here and this enzyme we don't have. At least the enzyme that produces acetaldehyde. We have an enzyme that produces a related compound but we don't have the enzyme that produces acetaldehyde. So why do we have alcohol dehydrogenase? Any ideas? Yeah? 
Student: For breaking down ethanol, acetaldehyde is the first product for metabolizing ethanol. Ahern: So his answer, he says that you're breaking down ethanol and the answer is basically yes. Ethanol as we've described is not very compatible with our cells. No matter how much of the stuff we drink, our body is actually detoxifying or trying to detoxify ethanol with this enzyme and it's running the reaction backwards to acetaldehyde. The problem? Oh, just a minor problem that acetaldehyde causes hangovers. Yeah, so if you wondered why you get a hangover, you can blame that enzyme. Make sense? Now at this point, someone always says, "now let's see, if I go and exercise real heavily “and my oxygen is low, what's going to Happen?" There's all kinds of schemes with that. I'm not going to go into that. The thought of running heavily after you've drunk a bunch of beer just doesn't sound like a very fun idea to me. [class laughing] You know? Student: Isn't acetaldehyde more toxic than ethanol naturally? Ahern: Acetaldehyde is fairly nasty, yeah. Where were we? So there's our pyruvate fates again to remind you of what I showed you earlier. Pyruvate going now to acetaldehyde and ethanol. That's happening in bacteria and yeast. Lactate happens inside of us. Acetyl CoA, if oxygen is available in any of these cells, assuming they're all aerobic like most bacteria are aerobic as well. I will talk about this reaction actually right here beginning of next term. Where am I at? Ethanol formation, there you go. Blah, we don't need to do that. Lactate formation, that reaction is again one that we have. We have the enzyme lactate dehydrogenase. You notice the difference in this reaction compared to what's happening in bacteria and yeast is in bacteria and yeast we're from a 3 carbon compound to a 2 carbon compound. In us, we're making lactic acid. And lactic acid turns out to be pretty much a biological dead end. We don't really convert lactate into anything else. Well then what happens when it accumulates? When it accumulates, we've got a whole bunch of lactic acid sitting here and it's not very useful for us. Our body has to wait until we catch up in the oxygen department and then it runs the reaction backwards to make pyruvate. It turns out that this cycle actually is very important when we're exercising heavily. Our body has a very cool way of dealing with lactate where parts of our body have oxygen and other parts don't. I'll explain that to you next term. What else did I want to say here? Fermentation options, and I won't talk about that. Next, I'll talk about other sugars and then I'll finish for today and I'll open it up to questions. Glycolysis is a central metabolic pathway. That central metabolic pathway is central to most cells on the face of the earth. Almost every cell on the face of the earth has it and it's useful because it allows us to oxidize not only glucose, but it allows other sugars to enter that pathway as well. So for example, there are enzymes that will, through a series of steps, convert galactose into glucose-6-phosphate. And that turns out to be very useful because when we drink milk, we're getting a lot of galactose. Milk contains lactose, lactose is a disaccharide that contains both glucose and galactose. So we have to be able, ideally, to convert galactose into something that's useful for us. We do that in a process I'll show you in just a second. 
Fructose not surprisingly is something that we can convert into fructose-6-phosphate and metabolise inside of glycolysis and, if I have time, I will tell you briefly why I think we have an epidemic of obesity relative to fructose. In fact, I'll start there. This is, I'm going to give you Kevin Ahern's pet theory about why America is growing fatter and fatter and fatter. This is Kevin Ahern's pet theory. There's no evidence for this theory other than what I'm going to argue for you on the screen. But I think it's not an illogical argument. One of the things that's happened in the American diet over the past 20 or 30 years has been the increasing use of fructose inside of materials for sweetening. We talked about high fructose corn syrup. The American obesity epidemic, you can literally trace to about the time we started putting that into our food. High fructose, we've got high fructose. Well fructose is just a sugar, we just oxidize it like glucose, we've got the glycolysis pathway, what's the deal? I'm going to argue with you here that there is a big deal. The big deal is what you see on the screen. The last pathway didn't show you what you see here. The last pathway shows you fructose going to fructose-6-phosphate and then on into glycolysis. I argue that if that pathway occurs, not a big deal. I also argue that if we overload that pathway, that this pathway causes some problems. Now let's think about this. Here's fructose. First I need to tell you what's happening in this pathway. Fructose, we've got a lot of fructose in our body. A lot of fructose floating around here. There's an enzyme called fructose kinase that will convert fructose into fructose-1-phosphate. Then there's this enzyme called fructose-1-phosphate aldolase. Notice that aldolase is kinda like the aldolase we saw in glycolysis. It splits this 6 carbon molecule into 2 3-carbon molecules. One is glyceraldehyde and one is DHAP. That's the same as we saw in glycolysis. And we say, "oh, glyceraldehyde, that's the problem." Well no, we can convert glyceraldehydes into glyceraldehyde-3-phosphate. So at this point, we have two things exactly the same thing as glycolysis. Why are we obese? There's something very important we've neglected. Anybody know what it is? We've talked about it. I'll give you a hint. That last pathway showed going in through fructose-6-phosphate. This is not going in through fructose-6-phosphate. Yeah. I'm sorry? Student: Where's the phosphate? Ahern: It's not that. It's not the absence of phosphates. I'll tell you the answer. The answer, at least from Kevin Ahern's pet theory on why Americans are getting obese, a good acronym right? My pet theory about this is that we have just bypassed the phosphofructokinase step. Phosphofructokinase was a regulatory enzyme. Right? We've bypassed the regulatory enzyme and now what are we doing? We're force feeding the cell with these compounds and when we start force feeding the cell with these compounds, we're going to start force feeding glycolysis all the way through. We start making lots of pyruvate. Pyruvate is a precursor of acetyl-CoA. And when we have a lot of energy, as we do when we have a lot of sugar, acetyl-CoA is made into fatty acids. High fructose corn syrup, by this argument, the Kevin Ahern pet theory about why Americans are getting obese, okay? Say that real fast. By this idea, we're force feeding glycolysis and as a consequence, making fatty acids and making fat. For what that's worth. Clear as mud? Questions? Comments? 
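Laid out as a list, the fructose route just described is (the enzyme name in the last step is not given in the lecture; it is the standard textbook label):
fructose + ATP -> fructose-1-phosphate + ADP (fructose kinase)
fructose-1-phosphate -> glyceraldehyde + DHAP (fructose-1-phosphate aldolase)
glyceraldehyde + ATP -> glyceraldehyde-3-phosphate + ADP (triose kinase)
Both three-carbon products then enter glycolysis below the phosphofructokinase step, which is the crux of the argument: this route never passes through the pathway's main regulatory valve.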
It pays you to look and see if you have high fructose corn syrup that you're eating. It's pretty hard to find stuff that doesn't have it. Student: What gets converted into the fatty acids again? Ahern: So pyruvate is the end product of glycolysis. Pyruvate can be converted to acetyl-CoA. And when we have lots of energy, acetyl-CoA goes straight to fatty acids. Yes, sir? Student: This would be the same net effect you would see if someone who had a knock out mutation of PFK. Ahern: He says it would be the same net effect you would see if you had someone with a knock out mutation of PFK. I think you had somebody who is dead if they had a knockout mutation of PFK, but yeah. That would be worse. Yes, sir? Student: Is there any research going on looking into the problem with [inaudible] obesity epidemic? Ahern: Say that again? Student: Is there any research going into this, like not this specifically... Ahern: Oh yeah, a lot of people are interested in high fructose corn syrup and the link to obesity. And there are really some really suggestive things that there is in fact a link between the two. Student: So is there anyone looking at biochemical pathways? Ahern: All of these are used, so this is only my own pet theory. What's that? Student: There are some people that say it isn't. Ahern: There are some people that say the sky is made from green cheese, I mean, so you can't go on what some people say, right? Student: You're bouncing. Ahern: Oh, I'm bouncing. Thank you. I thought we might, instead of going into galactose, do a song and then call it a day. And then I'll take questions for stuff. Does that seem reasonable? I've got a song about glycolysis that's a lot of fun. It's to the tune of "These Are a Few of My Favorite Things." Here we go. Everybody sing. Aldehyde sugars are always aldoses and If there's a ketone, we call them ketoses. Some will form structures in circular rings. Saccharides do some incredible things. On to a glucose, we add a "P" To it. ATP energy ought to renew it. Quick rearranging creates F6P Without requiring input energy. At a high rate Add a phosphate With PFK F1,6BP is made up this way So we can run and play da da da da. Sorry, okay. Aldolase breaks it and then it releases DHAP and a few G3Pieces These both turn into 13PG Adding electrons onto NAD Phosphate plus ADP makes ATP While giving cells what they need energy Making triphosphate's a situation Of substrate level phosphorylation 3BPG, 2BPG Lose a water PEP gets a high energy state Just to make pyruvate So all the glucose gets broken and bent If there's no oxygen cells must ferment Pyruvate lactate our cells hit the wall Some lucky yeast get to make ethanol This is the end of your glucose's song Unless you goof up and get it all wrong Break it, don't make it to yield ATP You'll save your cells from futility Alright. [class claps] Excellent. We will definitely have an extra credit question on the exam. If there are people who would like to ask question for a mini review session here, I would be happy to take questions. I see a hand back there. Student: Really quickly, when we're looking at the insulin signaling probably, Ahern: Yes. Student: There's one enzyme that has a really long name and could you give us a nickname for the highlight because it's phos...pho- inosit... Ahern: Yeah, I know what you're talking about. [class laughing] Student: Do you think you could write one you could use for the test? Ahern: As a matter of fact, Where's my thing here? I am happy to grant that request. How's that? 
Every year, I get this request and it looks like it's made it into my highlights and so forth, so let's take a look at that pathway and come up with a name. I will let you guys name it, how about that? So let's think, hold on just a second now. Insulin signaling right here. This is the name, do you want this one, or do you want this one? Students: The green one. Ahern: You want the green one? That's usually the one people want to rename. Student: That's the one you said was like longer in your highlights thing. You said it was like phosphor... Ahern: What do you want to call the green one? How boring! PI3K? The green one? The what? [class laughing] The Hulk! [Ahern laughs] I like that. Can anybody beat the Hulk? The Hulk it is. So, for this exam, phosphoinositide-3-kinase will be known as the Hulk. [everyone laughing] And you may also call it phosphoinositide-3-kinase, we will not count that against you. Okay? But if you call it the Hulk outside this class, please don't tell anybody where you got that name from. It didn't come from me, right? The Hulk. Are there other real questions? Connie? Student: Ribose, does it have an alpha and beta? Ahern: Does ribose have an alpha and beta? Anything that has a Haworth structure will have an alpha and a beta. And you may notice, I didn't mention Haworth structures in class. I did put them in the highlights. So you should know Haworth means ring. Fischer structure means straight chain. You probably got that in organic chemistry but I just forgot to mention it the day I talked about those. Other questions? Yes? Student: For the catalytic [inaudible], we talked about the catalytic triad and then that was the active site and then the oxyanion hole in the S1 pocket. And the S1 pocket is what binds the substrate. But I thought the definition of the active site was what [inaudible.] Ahern: So her question is, that's a very common one that Karen's asking. So the question that she's asking is, I talked about the catalytic triad being the active site and then I talked about separate things like the S1 pocket and the oxyanion hole. So really, it's a very semantic argument we're talking about here. So you're correct. These are all in essentially the same place. So I simply used the three amino acid side chains to describe the active site because that's where the reaction is catalyzed. But you're right, the substrate will be held at the active site. And that substrate specifically is held in the S1 pocket. So all I care about is you know the S1 pocket is right there at the active site. Whether we call that part of the active site or not is just a semantic argument. That's a very common question. That's the most common question I get from students about that material. Yeah? Student: I wanted to check something about SH2 domains. Ahern: Yes, SH2 domains. Student: I think it says that both the Hulk and IRS have SH2 domains and that's what allows them both to recognize phosphate . . . Ahern: So when you see portions of a protein recognizing phosphotyrosines, they have SH2 domains, that's right. Student: Okay, so they both do that. Ahern: They both do, yeah. Student: Can you say that one more time? If it has an SH2, it recognizes . . . Ahern: When you see something recognizing a phosphotyrosine, which is what these are recognizing, it usually involves an SH2 domain, yeah. Other questions? We've got 10 more minutes. Yeah? Student: Where on the G protein is the interaction with the beta adrenergic receptor? Does it bind with all the beta [inaudible]?
Ahern: Her question is "where on the G protein does “the G protein interact with the beta adrenergic receptor?" So, on the beta adrenergic receptor, you've got all three present that's there. So the interaction that's there is not precluded by the covering of the beta and the gamma. The beta and the gamma; however, have to move away in order for the G protein to interact with the adenylate cyclase. So all three are present in the beta adrenergic when it binds to the beta adrenergic receptor. Student: And then when it takes GTP, it sheds the beta.... Ahern: The binding of GTP causes it to lose the beta and the gamma, which is what enables it then to go and bind with the adenylate cyclase. Student: So there's no actual binding of the GTP to the other subunits, is there? Ahern: There's no initial of the GTP to the other subunits. The other subunits in fact don't bind GTP at all. So it's only the alpha sub unit that will bind GTP. Yes? Student: It's also called PI3K in the book, can we use that, too? [Class laughing.] We've already named it for our study guides. Ahern: So the book calls it PI3K? Student: Yeah. Ahern: Oh, why not? PI3K, we've got phosphoinositide 3-kinase, my key is going to be this long, the TAs are going to kill me. Yeah, go ahead, that's fine, I'm sorry. Student: What are your TAs going to think when the Hulk is an acceptable answer? Ahern: Every year, I rename that enzyme. One year we renamed it Larry. [Class laughing.] Last year, I think we called in Malcolm. The Hulk is the best name we've had though, I have to say. So one year I forgot to tell the TAs that I had done this. [Everyone laughs.] And so I always say this thing with my TAs, and I say, "You know, give me a call if “there's something unusual on the exam." And then I give this call late one night and he goes, "what the hell is Malcolm?" [everyone laughs] So I shouldn't tell them, right? Just leave it as the Hulk, right? Yeah? Student: It's Malcolm in the Middle. Ahern: Malcolm in the Middle, yeah. Other questions? I shouldn't open this up, should I? Shannon? Student: I'm a little confused about G proteins. It seems like there's a part of a G protein and then there's a difference between a G protein, GTP and GDP, and which one is binding where and I'm sort of really confused. Ahern: So the term G protein, to hopefully alleviate your confusion, the term G protein simply means protein binding guanine nucleotide. It can bind GTP, it can bind GDP. Alright? And that's where we talk about ras being a G related protein because it also binds GTP or GDP. Student: What are GDP and GTP? Ahern: Guanine nucleotides. GTP is ... Student: Those so are nucleotides then. Ahern: Yeah, yeah. GTP is like ATP except for it has guanine instead of adenine on it Student: Oh, okay. Ahern: Sorry, yeah. Student: [Inaudible.] Ahern: Okay, yeah. Student: How come? Ahern: How come? So her question is one of the problems in the book. And the problem in the book says that if you take small concentrations of PALA and you treat ATCase with it, you discover that you've actually increased the activity of the enzyme. If you used high concentration of PALA, you completely kill the enzyme. The question is why. Student: is it because when it binds to the one site, the other sites are still available? Ahern: Her answer is exactly right. So the answer to the question is in low concentrations, only one or two of the sites get bound, locking the enzyme in the R state, right? 
Locking the enzyme in the R state and then those ones that aren't bound to PALA are just as active as they can be. That's exactly right, yeah. Student: Did you send an email out about an error? Someone. Ahern: Oh, I had an error on a sign on a Delta G. Student: Oh, okay. Can you pull the figure up for me? Ahern: Sure. The original one I modified so it's not even on here anymore. But I can show you where it was. So it was actually the highlights for... it was the one, let's see, is that right? Yeah, it was this one right here. So on the original one, I had written this reaction backwards. So I had written it creatine phosphate + ADP goes to creatine + ATP and I left the sign as plus, and it should've been negative. Whenever you reverse the direction of an reaction, you have to change the sign. I hadn't done that. So I just went back and rewrote it in the same way that you saw it in class, which is the way, this is the way I showed it in class. Other questions? Good questions. You guys feel confident for this one? No? I want to see everybody make an A on this one. Ask you again on Friday. Okay, have fun, see you on Friday. [END]
Medical_Lectures
29_Cancer_I.txt
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. TYLER JACKS: OK. So now we're going to change gears entirely, and talk about cancer. And to put you in the mood to talk about cancer, I'm going to show you a video, which we actually produced last year for the American Association for Cancer Research annual meeting, to open up that meeting, actually. Hopefully, there's sound. Guys, upstairs? [VIDEO PLAYBACK] [END PLAYBACK] TYLER JACKS: Hopefully, you're inspired. That video is on YouTube if you want to watch it again. It's got to 15,000 hits. It's not exactly viral, but still pretty good, pretty good for a cancer research video. And the video really was to kind of get people excited about both the progress that's been made and the opportunity that exists. But also some of the great challenges that we're referred to in some of those facts and figures that you saw there. So I want to review some of that with you, and give you a sense of what we're doing to improve our progress, accelerate our progress. And the bottom line is I'm actually extremely excited about the potential that we'll have over the next decade or two in really changing the course of some of those numbers that you saw there. But just to remind you of the severity of the problem, when we consider the statistics regarding cancer in the United States, in the United States over the next year, there'll be about 1.4 million new cases in the United States. That does not include common forms of skin cancer-- squamous cell skin cancer, basal cell skin cancer contribute another million cases. So it's a very, very common disease, very commonly diagnosed in this country, and indeed, around the world. Again, just considering the next year, it's estimated that there will be about 560,000 deaths in the United States due to cancer, and about 8 million in the world. And you might have seen the statistic that more people die in the world of cancer per year than of malaria, TB, or HIV aids combined. So very, very common, very, very deadly disease. In this country, the lifetime risk of developing cancer is 1 in 2 for men, 1 in 3 for women, based on current statistics. Again, hopefully by the time you guys are old and cancer typically is a disease of older people, these statistics will change. But that's what they are now. So if they don't change, then a large number of you sitting here will experience this diagnosis. And by current statistics, 1 in 4 deaths are due to cancer. Cancer has recently bypassed cardiovascular disease as the leading cause of death in the United States. Cardiovascular disease rates have dropped precipitously. Cancer rates have dropped less significantly, although they are coming down. Also, the population is aging. And cancer tends to be a disease of older people. So the demographics also increase the numbers of people who are dying from this disease. So a major problem. But also think of it as a major opportunity to do something really important if you choose to get into this field. So cancer, I think, is familiar to everybody on one level or another, but you might not have seen the disease up close. I hope that you haven't. But I want to teach you a little bit about what it looks like, and really what are some of the fundamental definitions. 
So lung cancer, a very commonly diagnosed cancer in this country-- we'll talk more about it in a minute-- is diagnosed either by a chest x-ray following symptoms, and you can see perhaps, although the light isn't great here, that there's a dark mass right here. And this dark mass indicates the presence of a tumor. A more refined diagnosis can be done by basically serial x-rays, commuted tomography. And you can see this very clearly defined mass growing in the lung of the individual. Cancer is an accumulation of cells, an abnormal number of cells within a tissue. And you can see that solid tumor here and here. You can also see cancers in the blood. Again, an abnormal number of cells within the blood, a leukemia. This is a normal blood smear. Here are red blood cells. And here is a normal number of white blood cells. These might be B-cells. These might be neutrophils. And you can see an accumulation of these nucleated white blood cells in this leukemic patient. Thousands, hundreds of thousands more of these cells than should be present within this blood sample. This is colon cancer. Colon cancer, as you probably know, can be detected by colonoscopy, a very important diagnostic test, preventative test. And we can actually see the lining of the colon. And this is a normal section of colonic epithelium. And here is a tumor developing. It's called a polyp. This is an early stage, precancerous tumor. We'll talk more about the details of that in a second. If this is diagnosed during endoscopy, they're actually removed right during the procedure. And this is very important to prevent those tumors from progressing further into true cancer. And actually, colon cancer rates have dropped significantly because of this test. When these lesions are discovered, they are removed. And therefore, they can't progress into true colon cancer. However, sometimes you see this. And this is a tumor that has progressed further. It's divided more. It's taken on additional abnormal properties and actually moved through the wall of the colon and is beginning to spread throughout the body. This is true cancer and much, much harder to treat. Not impossible, but much harder to treat. When this is discovered, you can't just remove the specific lesion. You have to have surgery. And a section of the colon is removed to take out the tumor in the hope that that will get rid of the disease entirely. But the concern is, in this situation, that the diseased cells might have moved out into the body in the process of metastasis, which will make the disease much more difficult to treat. OK. So I've given you some terminology there. Let me just explain some of it in greater detail. Actually, before I do that, let me show you one more, a couple of slides. So as indicated on that slide, cancer develops in stages from normal cells through the development of a benign, precancerous lesion, finally to the development of true cancer. And we can depict that graphically, as shown here. This is a normal tissue. Here are normal cells. These might be epithelial cells lining the intestine. Those cells sit on top of a basement membrane made of extracellular matrix proteins, which provide them structure and some function. There might be other cells present in this region, stem cells or progenitor cells, which will replenish those differentiated cells as they are sloughed off and die. 
These cells can acquire alterations-- and we'll discuss this in great detail today and next time-- alterations in their genes, which allow those cells to do things they shouldn't do, namely to proliferate abnormally. So rather than having a single line of cells, we now have a little clump of cells. These cells might look identical to their neighbors, but there are too many of them. This is a process we call hyperplasia, too many cells, too much growth. Within this collection of cells, additional alterations may take place that allow those cells to divide more rapidly and to do even more abnormal things, to pile up on one or another, which they shouldn't normally do. This is the development of one of those early stage tumors. I showed you a polyp in the colon. That's a stage of cancer, in the case of colon cancer, called an adenoma. That's a benign tumor. It's not yet cancer. It's actually not yet life threatening. But it's detectable because it's a mass of cells that shouldn't be there. Within that collection of cells, still further alterations can take place. And now the cells do additional things that are wrong and potentially dangerous. One, they're recruiting a blood supply. They're recruiting blood vessels into the tumor to nourish the tumor and bring factors that the tumor cells need for their survival. In addition, the cells are starting to degrade that extracellular matrix. They're starting to acquire the ability to move away from their normal site. Most cells in your body know where they're supposed to be, and they stay there. Cancer cells acquire the ability to leave their primary site and to disseminate throughout the body, creating secondary tumors. This happens when the cells access the blood vessels. They can then travel within the blood system and then take up residence in some secondary site. And this we call metastasis. Metastatic tumors are tumors that are derived from the primary cancer-- and this is true cancer here-- derived from the primary cancer, that have now created a secondary tumor somewhere else. And this is actually the most lethal phase of cancer. Of the 560 cancer-- 560,000 cancer deaths that will occur in this country this year, 500,000 of them are due to this phase of the disease. It's actually not a phase that we understand terribly well today, but clearly a very important one. So cancer arises from normal cells through the sequential acquisition of alterations that allow those cells to do things they should not normally do, including invade and metastasize. This is what it looks like in real life. This is a tumor. It's actually a tumor from a mouse created in my lab. It's lung cancer. This lacy appearance is the normal lung epithelium. There's a lot of air spaces in the lung to allow you to get gas exchange in the lung. And you can see in this region right here, there's a bit of a thickening of those epithelial structures. Too many cells, that's this area we call hyperplasia. Over time, these will give way to solid growths. The cells within those solid growths look pretty normal. And you might be able to see that here. The cells are pretty well organized. They are all lined up. There's just too many of them. That's a benign tumor, an adenoma. Over time, these will give rise to true cancers, carcinomas. And these have the ability to spread locally and throughout the body. In addition, the cells look even more abnormal. They don't look like the cells that gave rise to them. OK. 
Now let me give you some more details of the terminology that I've just been using. Hyperplasia is increased cell number. But the architecture of the cells is otherwise normal. They look like normal cells. If progression occurs, a benign tumor might arise. This is not yet cancer. These tumors are so-called not aggressive. They basically stay where they started. They don't destroy the local tissue. And they don't leave the site. And if they are detected, for example in a colonoscopy, they can be removed. If they're detected in the lung when they're at this stage, they can be removed surgically and the patient will be fine. However, they can progress into a malignant tumor. And this is where we use the term cancer. Cancer actually refers not to just any tumor, but a malignant tumor. And these, by contrast, are aggressive. The cells are dividing more rapidly. They're also causing changes within the local tissue such that they're locally destructive to the local tissue. And they have the potential to spread, to get outside of their local area, access the blood vessels, and move to a distant site. And that leads to this final phase of metastasis, which is the tumor growing at a distant site. And that can be one site or it can be many sites. And again, it's the combined effects of the metastatic tumors that tends to kill cancer patients. Now cancers can arise in virtually all organs, all tissues. Cancer is an umbrella term that actually refers to many different diseases of abnormal growth. The most common tumors in humans affect epithelial tissues, epithelial tissues. And these epithelial tissues will give rise to a cancer type called carcinomas. Carcinomas are cancers of epithelial tissues. Breast cancer, lung cancer pancreas cancer-- these are all cancers of epithelial tissues. The precursor lesions are called adenomas, in many cases. And these are benign. We can also have cancers of connective tissues, and these are called, collectively, sarcomas, sarcomas. Muscle tumors, myosarcomas. Fibroblast derived tumors, fibrosarcomas. Cartilage derived tumors, these tumors are rarer in humans, but they occur. And when they occur, they can be quite problematic, as well. And they go through similar stages of progression, as I've been describing for the other tumor types. And we can have tumors of blood cells, leukemias, too many cells in the blood. And I showed you a blood smear of a leukemic patient. The blood smear indicates that there are too many cells circulating. That contrasts to lymphomas, which is also a blood cell tumor. But here the tumor cells are confined to lymph organs, like the thymus or the spleen or lymph nodes. So there actually aren't too many cells circulating, but there are too many of these cells in these structures, which likewise can cause problems within those local structures, and surrounding tissues as well. OK. So some terminology. Cancers affect all tissues, or virtually all tissues. There are probably 200, 250 different types of cancer when we think about all the different cell types in your body that can undergo these changes and result in one or another type of cancer. All right. So cancers arise from normal cells. They develop in stages. What causes them to change over time? What gives them the ability to divide inappropriately, to grow abnormally? The answer to this question is that alterations take place in the DNA of the developing cancer cells. And in this respect, cancer is a genetic disease. 
And I'm going to use this term in quotes because when we talk about a genetic disease, we tend to talk about inherited diseases. You inherit a disease allele from one of your parents. You develop a disease. In this case, cancer can arise as a consequence of an inherited mutation. We'll talk about that in a subsequent lecture. But what I'm referring to here is genetic alterations that take place within you, within your cells. And this accumulates over time, over decades in some cases, and allows the cells to progress through these various stages. The case that cancer develops through the acquisition of mutations in genes has been building for about a century. We've been suspecting that cancer was a genetic disease for a very long time. And now we know it's true because we've seen the alterations in the genes of cancer cells. And we'll come to those specific alterations in subsequent lectures. But I want to give you the background that led us there. The first and the oldest was the observation going back almost 100 years that cancer cells have abnormal number and structure of chromosomes. As you know, your cells have 46 chromosomes, 23 pairs. And most of your cells look like the cells on the left, where there's a pair of chromosome 1, 2, 3, and so forth. These chromosomes are painted with a specific chromosome specific paint so we can distinguish which one is which, and this is a so-called normal karyotype. Cancer cells can look like this. And you can see that they're different in many respects from normal cells. A, there's way too many chromosomes. This is a condition we call aneuploidy. Aneuploidy, as opposed to being diploid, the cells are aneuploid, an abnormal number of chromosomes. Moreover, you can see in some of the highlighted areas that the chromosome structure is abnormal. We have this chromosome here, which has a little bit of the pale blue chromosome-- which may be chromosome 4, I can't read it-- and a little bit of this pink chromosome, which is one of these guys here. A translocation has taken place so that the structure of the chromosome is abnormal. So we have aneuploidy, defects in chromosome number, but also defects in chromosome structure, like translocations. We also have deletions-- not easy to see in this slide-- where chromosomes have incurred big losses of genetic material. OK? Chromosome abnormalities in cancer have been known about for a very long time. A second and very important observation, which occurred sometime in the '40s-- maybe '30s, '40s, and '50s-- and built up over time since then, is that carcinogens, carcinogens, which are cancer causing agents, are almost always mutagens, which are mutation causing agents. So something that can cause cancer in, for example, a laboratory animal, can be shown to alter the DNA and cause mutations. That would suggest that the carcinogen is acting through the alterations in the DNA. And this observation was made much more convincing through the work of an investigator by the name of Bruce Ames, who developed the so-called Ames test. And I want to tell you about that. But actually, before I do, let me just show you graphically how the agent can be tested for its carcinogenic capabilities and its mutagenic capabilities. The carcinogen is tested by treating an animal-- a mouse or a rat-- injecting the animal with the carcinogen or painting the carcinogen or the potential carcinogen on the skin of the animal, and then waiting a certain amount of time and asking the question whether the animal developed a tumor. 
And you can do this with different doses of the agent, with large numbers of animals, and actually get quantitative data that tells you the potency of this potential carcinogen. So that's the carcinogenesis assay. To test whether something is a mutagen, you can take the agent and treat cells and ask whether you can cause mutations in those cells. You could do this in lots of different types of cells. But the easiest types of cells to do it in are bacterial cells, for example, Salmonella bacteria or E. coli. And the way this assay is done is to use cells that are defective in the production of an amino acid, let's say histidine. So the cells have mutations in a biosynthetic enzyme-- and I'll tell you more about this in a second-- that is required for the cells to make histidine. Now these cells can live if you provide histidine to them exogenously, for example, on the Petri dish. But if you take those cells and you plate them on a Petri dish that is lacking histidine, none of the cells will be able to grow because they require exogenous histidine to live. However, if you take that mutagen and you add it to these histidine minus cells, the mutagen might correct the mutation in the histidine biosynthesis gene, thereby converting it to a wild type form at some low frequency. Such that if you plate these now mutagen-treated cells on a histidine minus plate, you might get a few colonies growing. And these would be histidine plus, capable of producing histidine themselves, revertants. They've reverted the mutation to now a wild type form. OK? And it was this that was the basis of the Ames test, to test the mutagenicity of potential compounds. Now we actually use different versions of his minus bacteria, because different mutagens cause different types of mutations. And if you use just one mutant bacterium, you might miss certain potential mutagens. So for example, we might have a specific mutant in a gene required for the conversion of histidinol, in an enzyme that is called histidinol dehydrogenase; this enzyme is required to produce histidine in the final step of the synthesis. The wild type enzyme would have a particular sequence, which would encode a particular pair of amino acids, glutamine and serine. And it is this collection of histidine minus bacteria that we use in this assay. We might have one mutant which has an alteration that converts that C to a T. This creates a termination codon. So this is why that bacterium can't make histidine, because it can't make that enzyme. It has a stop codon at that position. A second mutant might have a different stop codon. This is a termination codon. Here, this C has been converted to a G, creating the stop codon. And a third mutant might have an abnormal number of bases in this region, an insertion of an A residue, which would cause a frame shift. These are three different mutants, which would require three different types of alterations in the DNA to convert back to the wild type. Here, this pyrimidine would have to be converted to a different pyrimidine. Here, this purine would have to be converted to a pyrimidine. And here, this abnormal number of bases would have to be corrected to the correct number. This would allow one to find agents which function as point mutagens, which is a class of mutagens. They create point mutations. And this type of bacteria would allow you to find what are called frame shift mutagens. So these bacteria are mixed together. The mutagen is added.
And then you count the number of cells that survive on the his minus plate. That's the original Ames test. As Ames and others continued to do this kind of testing, they discovered, to their surprise, that some clearly established carcinogens failed the Ames test. They caused lots of tumors in animals, but they didn't revert any bacteria in that bacterial assay. So can anybody think why that is? Why might an agent, which can clearly cause cancer, fail that test? Well, one answer-- and the most common answer-- is that the agent is not itself a mutagen. But it can be converted in the body through the process of metabolism. As your body tries to convert that agent into something, for example, that it can excrete, it alters it chemically and converts it from a promutagenic form into a mutagenic form. And in this form, it can cause mutations in your DNA, and in theory, in the bacterial cases as well. And here's an example of a promutagen called benzo(a)pyrene, a very important mutagen in cigarette smoke. This is converted, through various steps inside your liver, to a form that is much more mutagenic. These epoxides are much more mutagenic compared to the original compound, much more reactive, much more reactive toward DNA. And in these forms, the compound will actually covalently attach to the bases of DNA and cause mutations. OK? So in this sense, your body is actually part of the problem. It's trying to get rid of this bad stuff, but in the process of doing that, it's making it worse. Recognizing that this was an issue for actually quite a few potential mutagens, Ames and others modified the Ames test. It's now called the Modified Ames test. In this case, you take the compound of interest, the potential mutagen, and you mix it with some extract from liver to allow this metabolism to occur. And then you take that metabolized compound and you do the bacterial mutagenesis test that I just reviewed for you. OK? And now you find that many of these things that failed initially score positively. OK. So stuff that we get exposed to, like benzo(a)pyrene, and other agents in the environment, can cause mutations, and these can also cause cancer. I just want to take a few seconds to rail against tobacco smoke and cigarette smoking. Lung cancer is the most common form of cancer in this country. 175,000 deaths due to lung cancer each year. About 150,000 of those deaths are due to smoking. It's the most common form of cancer, and among the most preventable forms of cancer, simply by not exposing the body to the carcinogens in cigarette smoke. Not only is there benzo(a)pyrene in cigarette smoke, but there's about 1,000 other carcinogens in cigarette smoke. Cigarette smoking is still very common in this country, remarkably common in this country. About 46 million adults still smoke in this country. A remarkable number of high school students still smoke in this country. And it's because of this that lung cancer rates are still very, very high. Moreover, smoking causes all sorts of other diseases-- emphysema, kidney diseases, cardiovascular diseases. It's now estimated that among the however many billions of people on the planet today-- what's that number, 6 billion people on the planet today?-- something like 650 million of them will die due to exposure to cigarette smoke. So my little lesson here is that if you're currently smoking, stop. If you're not smoking, don't start. It's the easiest way to protect yourself against many, many dangerous future problems. All right.
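Looping back for a moment to the plate counts in the original Ames test: here is a minimal Python sketch of how those counts become a number. Every value in it is hypothetical and purely illustrative; the point is only that the readout is a reversion frequency, treated versus untreated.

# Minimal sketch of turning Ames-test plate counts into a reversion frequency.
# All numbers below are hypothetical, for illustration only.

def reversion_frequency(revertant_colonies, cells_plated):
    """Fraction of plated his-minus cells that reverted to his-plus."""
    return revertant_colonies / cells_plated

cells_plated = 1e8                                      # hypothetical his-minus bacteria spread per plate
spontaneous = reversion_frequency(12, cells_plated)     # hypothetical untreated control plate
treated = reversion_frequency(480, cells_plated)        # hypothetical plate exposed to the test compound

print(f"spontaneous reversion frequency: {spontaneous:.1e}")
print(f"treated reversion frequency:     {treated:.1e}")
print(f"fold increase over background:   {treated / spontaneous:.0f}x")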
So cigarette smoke is something we do to ourselves. We expose ourselves to mutagens that cause cancer cells to develop in our lungs and in other parts of our bodies. There are other so-called exogenous mutagens, things that we get exposed to. Sunlight, for example-- the UV rays in sunlight can cause damage to your DNA, causing skin cancer and melanoma. Dietary carcinogens: barbecued beef, for example, has certain dietary carcinogens, actually in the same category as benzo(a)pyrene, that can cause damage to your DNA and induce colon cancer. Not at high levels-- I'm not saying you shouldn't eat barbecue. But still, this is an example of stuff we get exposed to that increases our cancer risk. Replication errors. Your cells are good at copying the DNA. They're very good at it. They have proofreading functions that make them better at it. But they're not perfect. So every time your cells divide, you actually run the risk of making a mistake. And replication errors are a common source of mutations in cancer. As your cells are moving DNA around, they also sometimes break it. And these DNA breaks are sometimes sealed properly, but sometimes not. And deletions can occur. And translocations can occur. This is another endogenous process that leads to mutations, including in cancer cells. Defects in DNA repair. You have lots of enzymes that are looking at your DNA at all times for adducts that have formed, and other alterations. And those enzymes remove those damaged bases and fix them. But sometimes they fail. Sometimes they actually get mutated in cancer cells, raising the risk still further. So that's defects in the DNA damage repair enzymes. Your cells also produce endogenous mutagens. Various reactive oxygen species are produced, for example, in the process of metabolism. And these reactive oxygen species, like superoxide and hydrogen peroxide, can interact with the DNA and cause mutations. This is why antioxidants are useful in preventing cancer in some settings. OK? So various things that we get exposed to, or we expose ourselves to, cause mutations in DNA. And this results, ultimately, in the development of cancers. The last thing I'll mention to you is that this doesn't happen overnight. It's not that a single alteration in a single gene is sufficient to drive tumor development. Instead, it's a process that occurs over time and requires alterations to many genes. So if you imagine a cell, a normal cell, which divides to produce two daughters with the same DNA content, at some frequency, this cell might acquire a mutation. Maybe it got exposed to cigarette smoke. Maybe it got exposed to superoxide. Maybe it made a mistake. And this mutation then confers upon that cell the ability to divide especially well. And now all of its daughter cells carry that same mutation. And as that cell divides further and produces daughter cells of its own, perhaps one of those cells-- and this might not be in the very next cell division, it might be five years later-- one of those cells acquires a second mutation. And that mutation gives that cell the ability to divide even more rapidly, or survive even better. And again, all of its descendant cells will have that same abnormal genotype. And maybe within that clone of cells, a third mutation takes place. And on and on we go. We now think that we need somewhere between 5 and 10 mutations in cellular genes to allow the cells to progress all the way to that full blown cancer that I showed you in pictures before. So this process continues until we have true malignancy.
And this process of developing clones with increasing ability to develop into cancer, we call the clonal evolution theory, the clonal evolution of increasingly abnormal clones, which eventually will develop into a cancer. And next time we'll talk about what are the genes that are mutated in these developing cancers.
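To close the loop on this multi-hit picture, here is a minimal Python sketch of clonal evolution as just described. It is purely illustrative: the per-division mutation probability, the growth advantage per mutation, the population size, and the threshold of five cooperating hits are invented parameters, not measured values, and real tumor evolution also involves cell death, selection by the microenvironment, and vastly larger cell numbers.

import random

random.seed(0)

P_HIT = 1e-3        # assumed chance that a daughter cell picks up one more mutation ("hit") per division
ADVANTAGE = 0.2     # assumed extra chance of an additional division per hit already carried
THRESHOLD = 5       # assumed number of cooperating hits needed for full malignancy
POP_SIZE = 1000     # cells tracked in this toy tissue; real tissues have vastly more

population = [0] * POP_SIZE      # each entry is the number of hits carried by one cell

for generation in range(1, 501):
    daughters = []
    for hits in population:
        # Cells carrying more hits divide a bit more often -- a crude stand-in for a growth advantage.
        n_divisions = 2 if random.random() < ADVANTAGE * hits else 1
        for _ in range(n_divisions):
            daughters.append(hits + (1 if random.random() < P_HIT else 0))
    # Homeostasis keeps the tissue at roughly constant size.
    population = random.sample(daughters, min(POP_SIZE, len(daughters)))
    most_hits = max(population)
    if generation % 100 == 0:
        print(f"generation {generation}: most-mutated clone carries {most_hits} hit(s)")
    if most_hits >= THRESHOLD:
        print(f"a clone reached {THRESHOLD} hits at generation {generation}")
        break

Even with a built-in growth advantage, accumulating several hits in a single lineage takes many generations in a run like this, which is the point of the "this doesn't happen overnight" argument above.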
Captioning provided by Disability Access Services at Oregon State University. Kevin Ahern: Happy Friday! Student: Whoo! Student: Happy Friday! Kevin Ahern: Shall we all celebrate? Call off class? Don't quote me on that. How's everybody doing? Everything making sense? Student: Mostly. Student: So far. Kevin Ahern: Mostly? Mostly? Kevin Ahern: Working through the problems? Recitations? Okay. Everybody's got a smile on their face. That's good. So, I'm happy to see that. Okay. So, as always, if you have problems or questions, or there are things that aren't working right, let me know. That's my job to, obviously, hopefully, help you to make things right. So that's what I want to do. What I want to do today is talk about amino acids, because we're going to start turning our attention now to protein structure, a bit, and getting a better understanding about how these things we've been talking about with respect to pH affect charge and how charge can affect the structure of proteins. As I've alluded to in the very first lecture, structure is essential for function. And so things that affect structure we really need to have an understanding of. Okay. So that's the topic of today's lecture and will be the topic of actually the next three lectures, including today. So as you can gather, by the fact that, here we go, okay. We'll try this. And yes, I have it on vibrate so that you guys can't hear it, but my vibrate seems to make that thing vibrate. Okay. [buzzing noise] Shutting down. By virtue of the fact that I give three lectures on this, it says something about the importance of the topic of protein structure. It's a very, very important thing for us to understand. Well, structure is a pretty remarkable thing. We think of protein structure as actually occurring at four different levels, and I'm going to talk in some detail about each of those four levels. The four levels are what we call primary, secondary, tertiary and quaternary. Primary, secondary, tertiary, quaternary. And those four levels of structure you can think of as resulting from interactions between amino acids that are farther and farther and farther away. So therefore the interactions between amino acids that are the closest are those of primary structure. The next closest is secondary structure, the next closest is tertiary structure, and the farthest away, quaternary structure. And as I describe these, I think you'll see why that's the case. So I want you to keep those in mind. They're not difficult concepts. They're not difficult concepts, at all. But because of these four levels of protein structure that exist, all of the proteins that exist in the world can be explained and understood. Okay? Structure implies function. This is one of the best examples I can give you of structure determining function. Next term, in BB 451, I will talk about this protein. This protein is called PCNA. You don't need to know that for this. The important thing about this protein is it helps DNA polymerase to stay stuck to DNA. Okay? Stay stuck to DNA. DNA polymerase copies DNA. It's got to move along the DNA. DNA polymerases that bind to this protein remain stuck to DNA for a long time. Why? Well, the structure of this protein is a ring, and like a ring sliding along the DNA, it doesn't let anything go away. DNA polymerases that stick to this protein stick to the DNA. DNA polymerases that don't stick to this protein go along the DNA for a little ways, they pop off, they pop on, they pop off, they pop on. Okay? 
Meaning, therefore, structure is, again, implying function. If DNA polymerases don't have the ability to bind to this protein, they don't stay on the DNA. This protein itself binds to DNA because of its ring structure. This ring structure is essential for the function, not only of this protein but of the proteins that interact with it. I like to show this figure because it's a really nice example, when we look at proteins, of how what I like to describe as nano-machines can assemble. When we look at, for example, a virus, a virus is composed of a protein coat and a nucleic acid inside of it, at its very simplest level. That protein coat is a protection for the nucleic acid that's inside, and that protein coat has to be assembled. Viruses don't have factories. They don't have workers that can hammer rivets and make things stay together. They don't have glue. Instead, they make proteins that do something really remarkable. Those proteins can self-assemble. Now, imagine having a puzzle that you put together that's got 500 pieces, and the puzzle puts itself together. That's what proteins in a viral coat can do. And understanding how that assembly process works has medical implications, as we will talk about later. And it has tremendous implications for the virus being able to make copies of itself, because if it can't assemble its protein coat, it can't function as a virus. So when we think about what I like to describe as nano-machines-- and we have a lot of people who are interested in nano-technology-- it doesn't get any more nano than this right here, things that can put themselves together. Remarkable, remarkable proteins. Another important concept that I'll talk about, with respect to structure of proteins, relates to it indirectly, and that is the concept that proteins are flexible. This is a very important concept. You're going to hear me talk about the flexibility of proteins over and over and over this term. Flexibility is key to understanding what proteins do. It's key to understanding what enzymes, which of course are also proteins, do. And this example here is a prime one. It shows the effect of binding of a single atom of iron to a protein, a single atom of iron. When this protein binds to this atom of iron, right here, the shape of this protein changes from PacMan over here to... I don't know... globular nightmare, alright? But it changes its shape. And this change in its shape has very drastic effects on the function of this protein. This protein over here will do something that this very same protein over here will not do. These are the same protein, differing in shape by the binding of a single atom. Okay? We will see later, when I talk about enzymes, that this flexibility allows enzymes to do things that chemical catalysts cannot do. It's the key to why enzymes can speed reactions trillions, quadrillions of times. Flexibility. So flexibility is a very, very important concept for us to understand, and it's a very, very important property for a protein to have. Well, as we talk about and think about proteins, we have to go down to the level of the building blocks, and, of course, everybody in basic biology, I hope, knows that the building blocks of proteins are amino acids. And there are, in fact, 20 amino acids that are most commonly found in proteins. There's another amino acid, called the 21st amino acid, that is occasionally placed into proteins. And of those 20 amino acids that we always find in proteins, some of them get chemically modified after they get in.
So we see quite a variety of things arising from the structures of amino acids in proteins. There are some things we see that are not so variable, however. One of the almost invariant things that we see in proteins is that when we look at the stereoisomeric configurations of the amino acids in proteins, what we discover is that there's a very strong bias in biologically-made amino acids. We can synthesize amino acids in a test tube. If we synthesize amino acids in a test tube, we get a mixture. They have two stereoisomeric configurations. You can see the configurations on your screen, D and L. If we make them in a test tube, no cells involved, we make them chemically, we get 50% D and we get 50% L. If we examine the amino acids that a cell makes, if we examine the amino acids that are in proteins, 99.999% of them will be in the L configuration. Okay? There's only a very tiny number of amino acids that appear in proteins in the D configuration, and I'm going to actually give you those exceptions later in the term. How do cells make such a bias? Well, they make such a bias by virtue of the fact that the enzymes that make the amino acids have their own bias. They will only make one type, preferentially. Why do they only make one type? They only make one type because they can only use one type. Well, how about that cell over there? Well, that cell over there is eating, using amino acids from proteins it's gotten somewhere else. The language of biology says that, if it's not L, I can't use it. So if I'm a cell that only had D amino acids, I couldn't use all the L's that were out there. I couldn't eat! So by default, everything ends up being L amino acids, because that's what cells universally use. Now, this is a really cool phenomenon because if we are interested in the phenomenon of what's called "astrobiology," you know... is there life out there, floating around out there in space? There probably is. We would like to know that. One of the things that people do when asteroids... when meteorites fall to Earth is that they grab those, they try to get them before they get any contamination, before people get their hands on them, before all of Earth's contaminating amino acids can get into them. They bust open the meteorite and they analyze the amino acids that are in it. And yes... meteorites-- I keep saying "asteroids"-- meteorites are full of amino acids. The question, then, is do we see a bias? Were those amino acids made the way we make them in a test tube? Or were they made by an organism that has its own built-in bias for those structures? Well, so far, we don't have any meteorites that show any strong bias, as far as we can tell. And we probably won't. Floating out in space is not a real good place for life to be. But, it doesn't hurt to look, right? So it'd be kind of cool if we could find the meteorite that has everything in the D, or everything in the L, either way. It'd be kind of cool. We'd know that it didn't happen by that simple chemical process that we can do in a test tube. Is there life? Okay. Now, I'm not going to ask you to draw, actually, let me go back. I'm jumping ahead. I'm not going to ask you to draw an amino acid in the D versus the L. Okay? You're not going to have to do that. You should know that L's are the predominant form. And you should know the constituents of an amino acid. That's something I forgot to mention here. Every amino acid has four basic constituents that we can point to.
Actually, five, if we count the central carbon. There's a central carbon called the "alpha carbon." Attached to that alpha carbon are four different things, and because there are four different things attached to it, that's why we have stereoisomeric forms of each of the amino acids, all but one, that is. Okay? Four different things. One is called the "alpha amine." It's this blue guy, over here. An alpha amine is attached to an alpha carbon. We also have something called an "alpha carboxyl." An alpha carboxyl is attached to an alpha carbon. In addition, we have a hydrogen. And the fourth thing that we have is an R group. Now, if you look at this, all the amino acids have a hydrogen. They all have an alpha carboxyl. They all have an alpha amine. The only way that the 20 amino acids differ from each other in structure is in the configuration of their R groups. So we can think of the R group as really defining the chemical properties of the amino acids. Okay? So if we understand the R groups, we understand the chemistry of each of the amino acids. Alright. Well, I've been talking for a couple of days about ionization. I want to point out to you that ionization is critical for amino acids, because amino acids have ionizable groups. Now, so far, the ionization that I've talked about has been about acetic acid. Acetic acid has a single ionizable group. It has a single pKa. All of the amino acids that we find in biology, all of the amino acids, have at least two ionizable groups, and therefore, at least two different pKa's, at least two ionizable groups, at least two different pKa's. The alpha carboxyl can ionize. It starts out as a COOH, which has a charge of zero. When it loses its proton, it has a charge of -1. The alpha amine can also ionize. Okay? If it has an NH2, which is what we see here, and which is what we think of as an amine, it has a charge of zero. But if it gains a proton, it has a charge of +1. NH3, +1, okay? So two possibilities for each one, a charge of +1, a charge of zero, for the amine. A charge of zero, a charge of -1 for the carboxyl. The amine has its own pKa. The carboxyl has its own pKa. Let's examine what happens during the loss of protons with this amine. Let's say we start over here with all of the protons on. What would it take for me to put all the protons on? It would take a low pH, right? Today I'm going to give you a rule that I want you to understand. It's going to simplify things for you, and you can actually derive this rule mathematically or you can memorize the rule. This is one rule I won't put on your exam, okay? We are concerned, in ionization, about protons being on or off. Okay? We're concerned about them being on or off. The rule I'm going to give you is as follows, If the pH of a solution is one or more units below the pKa, the proton of that group is on. If the pH of a group, I'm sorry, the pH of a solution is more than one unit below the pKa of a group, the proton is on that group. If the pH of a solution is more than one unit above the pKa of a group, the proton is off. And you're saying, "What if we have it somewhere in the middle?" Well, if we have it somewhere in the middle, some molecules will have it on and some molecules will have it off. We can't say one versus the other. Okay? Now that's what this graph on the screen is showing you, and I'm going to describe it to you. Does anybody want me to repeat the rule? Okay. pH more than one unit below the pKa, proton on. pH one or more units above the pKa, proton off. 
One or more below, one or more above. Right? Okay. Well, here is a plot that shows us, it's a little confusing of a plot, we'll just first look at the things up at the top. You told me that we had to have a low pH to start, if we had all the protons on. Well, we know we've got all the protons on, because there's the proton on the carboxyl group. That carboxyl group has a charge of zero. And there's that extra proton on the amine group, and it has a charge of +1. What would it take for me to pull a proton off of this molecule? An increase in pH. How much would I have to increase the pH to know that I'd got a proton off? One or more units above the pKa. And you're saying, "Well, which pKa?" Well, I haven't given them to you. The pKa of an alpha carboxyl group is approximately 2.2. I'll give you that on an exam. Okay? The pKa of an alpha amine group is approximately 9.5 and I'll give you that on an exam. Okay? So if I wanted to pull off say this guy has a pKa of 2.2 and this guy has a pKa of 9.5, which proton's going to come off first? The carboxyl, because the one that's got the lowest pKa. Which one's the stronger acid, the carboxyl or the amine? The carboxyl, because it's got a lower pKa, right? Okay. Basic rules. Alright. How high would I have to raise the pH to be pretty sure I've got the proton off of the carboxyl? 3.2, one or more units above, right? Alright. So when I look out here at about 3.2, look what's happened. We've pretty much gotten this proton off, and that's what this graph is showing. The percentage of pink is dropping, dropping, dropping, dropping, until we're essentially down here. We've got the proton off. Instead of having this molecule, we essentially have this molecule. Student: I thought the proton was on. Kevin Ahern: It was, until we started pulling it off. So we're pulling the proton off here. [clears throat] Excuse me. Pulling the proton off. Student: [inaudible] Kevin Ahern: I'm sorry? Student: Because you're above the pKa? Kevin Ahern: Because I'm getting above the pKa. That's right. So the pH is rising. Well, what if I was at 2.2? What if I was at 2.2? What would I have? I would have half and half, right? Half of this guy and half of this guy, right? Notice both these guys have the NH3+. Why is that? Because at 2.2 I'm more than one pH unit below the pKa of the amine group. The amine group proton stays on. Okay? If I said, "Which of these two is the salt and which of these two is the acid," you would say? The salt is? The salt is this guy right here. The salt will always have one less proton than the acid. This has the most protons. This has lost a proton. Salt's in the middle. There's the acid, right? That's going to change, over here. Between these two, which one's the salt and which one's the acid? Salt on the right, acid in the middle. So whether something is a salt or an acid depends upon which ionization we're talking about. Okay? Well, notice, we keep adding sodium hydroxide, we keep adding sodium hydroxide, the pH keeps changing and the pH keeps changing. And all of a sudden, we start seeing the green form start to appear. And where is the green form going to start to appear? Well, within about one pH unit, this thing's a little bit more exaggerated than mine but within about one pH unit of that pKa we start seeing ionization happening. By more than one pH unit above the pKa, it's essentially all happened. At a pH of 9.5, you would expect that we would have approximately 50% this, 50% this, and that's exactly what we have. 
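To see where that one-pH-unit rule of thumb comes from, here is a minimal Python sketch using the standard Henderson-Hasselbalch relationship, in which the fraction of a group still carrying its proton is 1 / (1 + 10^(pH - pKa)). The pKa values of 2.2 and 9.5 are the approximate alpha carboxyl and alpha amine values quoted in this lecture.

# Fraction of an ionizable group that still has its proton, from Henderson-Hasselbalch.
def fraction_protonated(pH, pKa):
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pKa, name in [(2.2, "alpha carboxyl"), (9.5, "alpha amine")]:
    for pH in [pKa - 2, pKa - 1, pKa, pKa + 1, pKa + 2]:
        percent_on = 100.0 * fraction_protonated(pH, pKa)
        print(f"{name:>14}  pH {pH:4.1f}  proton on: {percent_on:5.1f}%")

Right at the pKa the group is 50% protonated, one unit below it is about 91% protonated, and one unit above it is about 9%, which is why "one or more units away" is a safe approximation for calling the proton on or off.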
So this graph is showing you, in graphic terms, what's happening with these molecules. Notice, and this always confuses people, there are three molecules. There are two ionizations. pKa's refer to ionizations, not to molecules. Okay? "How come there's three molecules? "I've only got two pKa's in this problem!" Well, think back to what I just said, pKa's refer to ionizations, not to molecules. They refer to the process of this happening, where this happens. Okay, questions about that? No dazed looks? You guys read your—yeah? Student: Could you repeat what you said about [inaudible]. Kevin Ahern: In this case, the molecule is the salt, and this molecule is the acid, if we're talking about this one. And, specifically, you're right. This guy is the thing that's lost the proton. This is the thing that's gained the proton. But specifically it's the entire molecule. Okay? Okay, good. Oh, I thought I saw a hand. Okay, yeah? Student: So the third can only ever be an acid? And the first could only ever be... Kevin Ahern: This can never be an acid. Student: That could only ever be a salt. Kevin Ahern: This can never give up a proton. That's right. Yeah. Okay. And this could only ever be an acid. That's right. Alright. Good. Very good. Let's move forward. Now, first of all, I'm not going to ask you to memorize the structures of the amino acids. Okay? If you're in the majors class, I would expect you to memorize all 20 structures, and you would really love me. But now, because I didn't make you memorize that, you really should love me, right? I need love. Alright. But, but, and there's a big "but", I said that the R groups determine the chemistry. You should know a bit about the R groups in terms of categories, alright? Now, your book, in this edition, went to something that's a little different scheme than most books use, and I'm going to use their convention to keep it simple for you and so you won't get confused. If you looking in the sixth edition of the book, you're going to see this is going to look different. So you might want to refer to the figures of the seventh when you're learning your amino acids. Okay? Your book groups amino acids into several categories. One category you see on the screen are what they call "hydrophobic." And they're called hydrophobic because they have R groups that will not interact with water very well. They don't like water very much. So if I ask you to identify the hydrophobic amino acids, I would expect that you would know this. You should know the names of all 20 amino acids, yes. But you should know, when I say "hydrophobic amino acid," you're going to have these things pop into your head. "Oh, there's alanine, there's leucine, there's proline." There's also other ones, and these include these guys here. These are all in the category of hydrophobic. They have side chains that really don't interact very well with water. Some of these have big side chains. Look at tryptophan. That's the biggest side chain right there, okay? That's a big honking molecule and it doesn't like water. Now, this hydrophobic nature of these side chains of these amino acids have very important implications for the location of amino acids in proteins. I'll talk about that later when I talk about tertiary structure, but I want you to keep that in mind, okay? The chemical nature of the R group will determine a lot about where these things are found in proteins. Okay. Another group that your book refers to are what are called "polar amino acids." 
Polar amino acids have side chains that interact with water very well. They're either ionic or they have something that can hydrogen bond. Okay? Cysteine, for example, can ionize its sulfur, its sulfhydryl, reasonably easily. It will interact with water very favorably. Threonine, hydroxyl side chain, hydroxyl group. When we think "hydroxyl group," we think "hydrogen bond," therefore, likes water. So these are polar amino acids. They tend to be hydrophilic, liking water. Student: You said they have a hydrogen bond or ionic bond? Kevin Ahern: They either hydrogen bond or ionize. We'll see there's a separate category that ionize, and I'd describe those to you here. But these guys, here, if they ionize, it's usually not to a large extent, with the possible exception of cysteine. Cysteine actually ionizes reasonably easily. Now, here's a group that I call the positive what happened there? call the "positive R groups." Oh, I've got the wrong figure linked. Okay. Oh, blast it. Okay. I'll have to fix that. This category includes lysine, arginine and histidine. What you see here on the screen is only the ionization for histidine, unfortunately. I thought I had all three of them up there for you, so I'll fix that. But these guys all have side chains that have amine groups. They all have side chains that have amine groups and therefore, that means if they have a proton on them they will be positively charged. So these guys can have an R group that definitely is positively charged. Now, the R groups of these guys vary, but if I said to you that the R groups of these amino acid side chains are on the order of 10 or 11, what would you say about their charge at physiological pH? Are they charged? Are they uncharged? Are they positive? Are they negative? What are they? pKa of, let's say, 10. Physiological pH of 7. The rule tells you what about the proton? Proton on, right? pH more than one unit below the pKa. We're talking about an amine group. Proton on, an amine group. Charge? +1, right? So this is the kind of thing you should be able to go through in your head just like that, just like I'm doing here. It's not hard. But when you get the basic rules, you'll understand these components of charge, okay? Now, histidine actually is an exception. It has a rather odd pKa, but I won't talk about that, at the moment. There are two amino acids that ionize very readily, okay, in their R groups at physiological pH. These are the negatively charged R groups. They are what we oftentimes refer to as the "acidic R groups." By the way, the last group, your book calls "basic R groups." I tend not to like that term, but if you want to call it that, that's fine. I like it the "positively charged R groups." That's the way I like to think of them. Alright. These guys have carboxyls in their R group, and they have a pKa typically of about 4.4. Again, I'll give you that on an exam. You won't need to know that. And at physiological pH, if the pH is 7, and these guys have a pKa of 4.4, you should run through your head, "Well, the pH is more than one unit above the pKa, "the proton will be off. "Proton off the carboxyl, negatively charged." So these guys are usually negatively charged when they're found in proteins in cells, or when they're found in cells alone, either way. Okay. Now, you do not need to memorize the three-letter abbreviation. You do not need to memorize the single-letter abbreviation. You do need to memorize the names. Student: Do we have to spell them correctly? Kevin Ahern: Do I have to spell them correctly? 
Well, since something like aspartate really isn't a very difficult word, I would say, in general, yes. I'm not inflexible, except for graduate students. Graduate students have to spell everything precisely. But we will not have a very wide latitude for something that's a simple name, I'll tell you that. Aspartate is aspartate. It's not asperilmarilbartlebate. Literally, I've had students on an exam, and they're like, "Well, it was close," but, you know, no. You need to know the name. But, again, it's not absolute for undergrads, but it has to be pretty close. Here's a pKa table showing you some of what I've just described to you. And though I think their numbers in some cases are a little odd, I'll show it to you. Here's a terminal alpha carboxyl, okay? Approximately 3.1. Most of them are actually below that. There's acidic side chains, about 4.1. I told you histidine's a little odd. Histidine's about 6. And, again, you don't need to memorize these at all. I will give you any relevant pKa's that you need on an exam. But I just show you these to show you the various groups. One that's of interest is cysteine. I'll talk a lot about cysteine this term. Cysteine ionizes reasonably readily, okay? 8.3. There's that stupid bouncing thing. And not only does it ionize readily, but it turns out that sulfhydryl, that SH group on the side chain of cysteine, is very chemically reactive. It will readily react with other sulfhydryls of cysteine, and make chemical bonds. And we'll see that this is an important consideration in stabilizing the structure of many proteins. Tyrosine has an OH that can go to an O-minus. It takes a fairly high pH to get that proton off, but it can happen. If I had pH 12, this tyrosine would have a charge like this. Lysine, of course, the positively charged polar side chain there. Arginine is even higher. Arginine, by the way, has a resonant structure and we will just treat it as if it's a single NH3. We won't treat it as which one is which. It's resonant and it's possible to go to either one. So we'll treat it as if it has a single NH2 that can become an NH3 out there. Okay. There's the abbreviations that you don't need to know. And I think we're there. Alright. Now this figure shows you very much what I showed you in that earlier figure. It's actually not even quite as nice as that first figure that I showed you, the ionization. This is a simple amino acid that has two ionizable groups. An example might be alanine. Alanine only has two groups that can ionize. The R group of alanine can't ionize. If I had aspartic acid up here, I would have three ionizations that could occur. Okay? Well, this actually comes up as important when we think about titration. So I showed you a titration curve the other day for acetic acid. Actually, it was an acid I made up. It had a pKa of about 2.5. I showed it at the end of class the other day? And you saw that single flattening. And that flattening corresponded to the buffering region. That was the place where that buffer was resisting the change in pH. I told you earlier that anything that has a pKa indicates it's a weak acid, and anything that's a weak acid can be a buffer. And so amino acids can be buffers, as well. And they act as buffers. So this figure, it's kind of a dumb scheme, but I'll show it to you, this dumb scheme can show us a little bit about what a titration plot looks like for the amino acids. Okay? If you look on the problem set videos that I work, I'll draw better ones than this because these tend to be a little odd. 
But here, we can see, here is the titration plot for alanine. Alanine has no R group that can ionize. But it does have an alpha carboxyl and it has an alpha amine. Student: Is it a hydrophobic or [inaudible]? Kevin Ahern: Alanine is a hydrophobic, because it has nothing that can interact with the water and the R group has no side chain that can do that. The important thing are the ionizable groups, the alpha carboxyl and the alpha amine. We see the pH rising as we add and here we're adding, by the way, NaOH, alright? We're adding NaOH to this solution. We're seeing the pH rise. Okay? The rising is going on. And we see it flatten. It's flattening right here. Why is it flattening there? Which group is being affected? The alpha carboxyl, because, again, we're at a pKa of about 2.2. Alright? Within one pH unit of that, it's going to act like a buffer. We get out of that buffering region, and look what happens. The pH goes, boing! The pH is rising rapidly, even though we've added very tiny amounts of sodium hydroxide. But then we get up to another region where there's buffering, and look what happens. Well, that's the alpha amine, and that's going to happen up around 9.5, thereabouts. Okay? Now, in that first figure that I showed you, there was a term that was on there, that one of you mentioned, that I didn't mention, but I'll mention it to you now. It was called a zwitterion. And I want to say a word about a zwitterion. A zwitterion is a molecule whose total charge is zero, total charge is zero. Okay? Now, if its total charge is zero, that means it must have equal numbers of positive and negative charges, right? And we saw in that graph that I drew for you, on that ionization earlier, that we had a molecule that had a charge of +1, zero, and -1. Right? Let's go back to that, since I'm referring to it here. So if we look at that ionization, here's our molecule. Here's our amino acid. It's got a charge of +1. It loses a proton, it's got a charge of zero. It loses another proton, it's got a charge of -1. This guy's a zwitterion. Every amino acid can exist as a zwitterion. Now, let's think about these structures and let's think about, what did it take for this guy to become a zwitterion? What did it take? We had to pull that first proton off, right? We have sort of a range over which we have a zwitterion, right? But, in fact, when we look at a pH plot, what we see... oh, don't start that again. Okay, blast. I shouldn't have gone away. What we see is there's actually, on the titration plot, there's a specific place where it will exist as a zwitterion. That first graph gives us an approximation. That rule I gave you about +1, -1 charge, from the pKa? That is an approximation. It's all it is. The titration plot will allow us to see exactly where this is. Now, let's think about this. Here's a molecule down here. Let's say we're at pH zero. What's the status of the protons on this molecule? They're all on, right? We're just like that very first molecule we had before. We have a charge of? Well, zero from the carboxyl and +1 from the amino, so we have an overall charge of +1. Right? We take that first proton off, where's that going to happen on here? Where are we going to get that proton off? What's it going to take to get that proton off, in terms of pH? We're actually more than one unit above the pKa. The where I would wager 25% of the students on the exam will make a mistake on the exam is right here. They'll say, "Oh, there's that first proton off, "at the pKa." Nooo. It's only half off there. 
To get it off, we have to get more than one pH unit above. We actually have to be right exactly right there. Okay? Student: Wait, shouldn't it happen at 3.4, though? Kevin Ahern: Hold on, hold on. Just bear with me. It happens exactly right there. Right? How do I know it's exactly right there? Remember, I said this is an approximation. When I say it's more than one unit above, the proton is off, I said we could assume that. It's an approximation. Student: [inaudible] Kevin Ahern: The place... Please, please. The place where a proton, where we have a zwitterion, is a precise place. There's a precise pH at which we have a zwitterion. That's known as the pI. The pI is the pH at which the charge of a molecule is exactly zero. The pI is the pH at which a molecule has a charge of exactly zero. It gets the approximation out of that thing that I gave you before. Well, how do I calculate this number right here? Well, in this case, it's very simple. It's the pKa's on either side of the place where it's zero. Well, there's only two pKa's here, right? If there's only two pKa's, then it's the sum of this one, plus this one, divided by two. The average of that will give me the pI. If this were 2.2 and this were 9.5, the correct answer on the exam would be 2.2 plus 9.5, divided by 2. I wouldn't even make you calculate that. 2.2 plus 9.5, divided by 2. You would have the pI of this amino acid. Now, what if I have three of these? Do I just average all three? No, if you use the rule I just gave you, the two pKa's on either side of the place where the charge is zero. Now, the TA's are going to be going through with you, in class, in the recitations, pH plots for the amino acids. And you're going to see how you decide where the charge is here, where the charge is there, where the charge is. So you can find this magic place, where the pI is. That is the two pKa's on either side of it. And once you identify that, then you have the knowledge to calculate the pI. It's the average of those two pKa's. So something that has, let's say lysine, which has a positively charged R group, it can, in fact, it'll have three places of flattening because it'll have three pKa values. I've got to decide which are the two that are relevant. Does that make sense? Now, we'll see later in the term, actually in about next week, I think, where knowledge of pI gives us an incredibly powerful tool for understanding proteins. So this is not just an exercise in calculation, but it's an important concept for understanding structure and function of proteins. Okay, questions about that? Student: Would the pI be [inaudible]. Kevin Ahern: The pI would be right there. Student: I mean, would the pI, just cross [unintelligible] Kevin Ahern: It will, it will. Now, if you go and you look at these, that's whyI don't like these graphs. So, I'm going to show you, for example, their graph for aspartic acid. Theirs doesn't draw this so clearly. In fact, it should be flat, up, flat, up, flat. Right? But here, because they're close together, it sort of runs them together. So I'm not real fond of their plots for this. But when you look at my videos online, where I'm working these, or what the TA's are going to show you, you're going to see some more defined flattenings where you can have no doubt that you're in a buffering region. Student: If you zoomed in, would that cause you to see it? Kevin Ahern: To some extent, but these two pKa's are pretty close together, so it makes it a little bit more complicated. But you'll see the TA's will show you that. 
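Here is a minimal Python sketch of that same idea, using the Henderson-Hasselbalch relationship to compute the average charge of each ionizable group and then finding the pH at which the total charge crosses zero. The pKa values are the approximate ones quoted in this lecture (2.2 for the alpha carboxyl, 9.5 for the alpha amine, and roughly 4 for an acidic side chain); for a simple amino acid like alanine the answer reduces to averaging the two pKa's, exactly as described above.

def net_charge(pH, acidic_pKas, basic_pKas):
    # Acidic groups run from 0 to -1 as their proton comes off; basic groups run from +1 to 0.
    charge = 0.0
    for pKa in acidic_pKas:
        charge -= 1.0 / (1.0 + 10.0 ** (pKa - pH))
    for pKa in basic_pKas:
        charge += 1.0 / (1.0 + 10.0 ** (pH - pKa))
    return charge

def isoelectric_point(acidic_pKas, basic_pKas):
    # Net charge falls as pH rises, so bisect to find where it crosses zero.
    lo, hi = 0.0, 14.0
    for _ in range(60):
        mid = (lo + hi) / 2.0
        if net_charge(mid, acidic_pKas, basic_pKas) > 0.0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# Alanine-like amino acid: alpha carboxyl (2.2) and alpha amine (9.5) only.
print(round(isoelectric_point([2.2], [9.5]), 2))        # 5.85, i.e. (2.2 + 9.5) / 2

# Aspartate-like amino acid: alpha carboxyl (2.2), acidic side chain (4.1), alpha amine (9.5).
print(round(isoelectric_point([2.2, 4.1], [9.5]), 2))   # about 3.15, the average of the two pKa's flanking zero charge

For something like lysine, with a basic side chain, the same calculation lands near the average of the two amine pKa's-- the two pKa's on either side of the zero-charge species-- which is the rule in action.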
Arginine's a little better. Let me clear that one. Here's the arginine. You can see the three here. Here's one, two. Here's three. Okay? Yes, sir. Student: So with three pKa's, which two pKa's would you average? Kevin Ahern: You would have to calculate what the charge is at each place and assign that. I'm not going to go through that here. The TA's are going to show you that in the recitations. But you understand the concept. You have to get the two pKa's on either side of the place where it's zero. Okay? Okay. That's good. Let me get through here. So I haven't said much about primary structure. So I want to spend at least a couple of minutes talking about that, and then we will, I think finish with a song. [scattered chuckling] Just something to keep you going. I've been doing all this talking about ionization, but I haven't said a word about primary structure, and that's how I started the lecture. Why did I do that? Well, we'll see that the charge of these amino acids affect the secondary, the tertiary, the quaternary structure of a protein. They affect all three of those structures. They don't affect the primary structure, however. The primary structure is essential, however, for all of the other structures of a protein. Underline that. The primary structure of a protein is essential for all the other. It determines what the secondary structure will be. It determines what the tertiary structure will be. It determines what the quaternary structure will be. The primary structure of proteins relates to the sequence of amino acids, joined one to the other. Lysine. Arginine. Glutamic acid. Valine. That's a sequence, one to the next. And the sequence happens because the amino acids are joined together by peptide bonds. You see a peptide bond being formed here. We see there's the peptide bond. It goes between the alpha carboxyl of one amino acid and the alpha amine of the next one. There's my R group of the first one. There's my R group of the second one. We'll say more about this next time. We see in this orientation that this guy has an end. This is known as the "amino end," because there's the alpha amino and it's not bound to anything. And there's the alpha carboxyl, there's a carboxyl end, because there's an alpha carboxyl and it's not bound to anything. However, this alpha amine and this alpha carboxyl are tied up in a peptide bond. All proteins will have one free alpha amino and one free alpha carboxyl. All the other alpha amines and all the other alpha carboxyls will be joined in peptide bonds. Okay? So I can always tell which is the amino end or the protein, and which is the carboxyl end of a protein. Okay, you guys have been patient. Let me see if I can get the audio going. I think instead of us... you can sing along, but I've actually got somebody who's going to sing for us. And I hope this works, so be patient for me. He's a way better singer than I am, so you may like that. And, let's hear it! [music, "Alphabet Song" tune] Sing along! Lyrics: Lysine, arginine and his basic ones you should not miss. Ala, leu, val, ile, and met, fill the aliphatic set. Proline bends and cys has "s." Glycine's "R" is the smallest. Then there's trp and tyr and phe structured aromatically. Asp and glu's side chains of R say to protons "au revoir." Glutamine, asparagine bear carboxamide amines. Threonine and tiny ser have hydroxyl groups to share. These twen-TY amino A's, can combine...
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation or view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. HAZEL SIVE: I want to discuss with you today a very topical and interesting question, which is the notion of stem cells. In fact, I'm going to discuss two things, the first of which is another concept that you need, following on from the concepts we had right at the beginning of lecture. I feel like this microphone-- The first is a concept that you need in addition to the ones we started lecture with, and then we'll talk about stem cells. So today, we'll talk about potency, and then we'll talk about stem cells. Potency, along with fate, determination, and differentiation, is one of those terms that you need to know and you need to understand in order to understand stem cells. Potency refers to the number of possible fates that a cell can acquire, number of possible fates open to a cell. And this is a very important concept of development because, in general, potency decreases with age, and decreases as different parts of the organism become specialized. So in general, potency decreases with age. But I will put in here, and we'll explore this more in a moment, except for some stem cells. And we haven't defined a stem cell yet, but we will. What kinds of potencies are there? There's the big one, totipotent, where a cell can become all fates. And there's really only one cell that can do this in the normal animal, and is the zygote. And in most animals, even as the zygote becomes just two cells or a few cells, that full potency is lost. And cells instead, in the embryo, are multi or pluripotent, which means that they can acquire many fates, but not all fates. Embryonic cells, especially in the early embryo, and many stem cells can also become, are also multipotent or pluripotent. And then as time progresses-- is that a hand up? Yes, sir. AUDIENCE: How do you reconcile the fact that human cells, we can separate them. Even at the eight cell stage. HAZEL SIVE: That's a great question. The question is how do I reconcile what I'm telling you with the fact that you can get identical sextuplets, or octuplets actually? It's a good question. That's true. In different animals, the very early embryonic cells are sometimes totipotent up to a while. OK? And so for example, in armadillo, here's a piece of, you know, fact for your back pockets. In the armadillo, the eight-cell embryo almost always splits into eight single cells, each of which becomes a baby armadillo. OK? So those cells are totipotent. In mice, even at the two cell stage, the two mouse cells are probably not equivalently potent and they're not totipotent. So very, very seldom-- almost never-- get identical mouse twins. OK? So it's one of these generalities. And if you ask me, you get the specifics are a bit different. As cell fate restriction continues, cells can become bipotent, or unipotent, whereby one or just two fates are open to them. And so if we look, taking this concept, let's now start the lecture about stem cells. You're going to need this concept. Stem cells, I'll point out, whatever they are, got almost 3,000 hits yesterday on Google News. This is way below baseball, which got 45,000. I checked. But still, you know, as science topics go, stem cells are really up there. And they're on the covers of magazines, over and over again. 
And we'll talk more about why that is. Here's a diagram-- it's not on your handouts-- that I drew for you. Let's not dwell on it. But let's now move on to Topic Number 2, which will fold in this concept of potency and the concepts of fate, determination, and differentiation, and talk about stem cells. And let's do, as is our custom, let's define what a stem cell is. I think that a stem cell can be defined as a cell of variable potency that has the capacity to self-renew. Cells of variable potency that can self-renew. They can make more of themselves. Despite the hype, despite covers of Time Magazine and almost every front page of every newspaper across the world, stem cells are normally found in our bodies. And normally, as we'll explore, they're used for organ maintenance and repair, organ maintenance and repair. But the thing, you know, that has everyone fired up is that you can somehow harness these cells for therapeutic purposes. And that you can repair what the body cannot, by being clever, and using the power of these cells as they normally have it, or as you can give it to them. And so there's this question of therapy and therapeutic stem cells, where the idea, again, is that you would repair a damaged organ by introducing somehow, injecting or otherwise introducing, extra or somehow special stem cells-- which I am going to abbreviate heretofore as SC-- by introducing extra stem cells into a damaged body. Does this work? It does work. It works for the hematopoietic system, as in bone marrow transplants. And it also works for skin cell transplants. Well, let's just put skin cells. OK. Skin stem cells can be grown from your own skin. And in the case of burn victims, this has really saved countless lives. The original technology began to be developed here at MIT by Professor Howard Green, who is now at Harvard. But the idea is to take your skin and grow it on something like gauze or some kind of some solid support, and then to cover a burn patient with layers of support on which there are some stem cells. And these stem cells will help fill in the holes in the skin left by the burn. Normally when a wound heals, as I'm sure you've noticed, it heals from the sides. The only way a wound can heal is from the side. And if it's a big wound, it can take a very, very long time to heal. And you can get infections and so on while the healing process is going on. So seeding the inside of a wound with stem cells that can start the skin regeneration process and seal up the body against infection, that's been incredibly useful. And we'll talk more about bone marrow transplants in a moment. OK. We previously talked about this process by which cells decide, are undecided initially, they decide what they're going to become. And then they differentiate into their final function. Stem cells fit into this litany somewhere between the commitment stage and the differentiation stage. And in this diagram, these multiple arrows are there for a reason. There are multiple steps between commitment and differentiation. And somewhere along the way, a group of cells with capacities we'll talk about, leaves this lineage and sits around and waits, partially determined, so that it can go on and make more differentiated cells when they're needed. And I've added on there the potency timeline, decreasing with age, of the developing animal. But let's diagram the notion of stem cells on the board. Stem cells generally divide slowly. Here's one. It's a variable potency. It may be multipotent. It may be bipotential. 
And it is somewhat committed, which is a slightly difficult concept. Because last time we talked about committed versus uncommitted. But now I'm telling you something can be somewhat committed. And that gets to those multiple arrows there: as cells progress in their fate decisions, they change their molecular signature, and they really do become closer and closer to a cell that's made a decision. But it's kind of like, you know, if you're weighing up going to med school or going to graduate school in bioengineering, you know, you have decided that it will be one or the other, but you haven't decided which. You're somewhat committed. And then when you make the decision to go to graduate school, you have now become committed. OK? So the cell is doing the same kind of thing there. There's your stem cell, variable potency. Under the correct stimulus, that stem cell will divide to give rise to two different cells. One is another stem cell. And the other is something we'll call a progenitor. The progenitor is more committed than the stem cell. The progenitor cell is going to go on and divide, usually a lot. Progenitors divide rapidly. And their progeny will eventually go on and differentiate into one or more different kinds of cells, maybe a stripey cell, and a spotted cell type, and a cell type with squiggles. And so here are the differentiated cell types. And the number of different differentiated cells that comes out of this process is a reflection of the potency of the stem cell. OK? So here you've got these progenitors. The idea is that these progenitors will have similar potency. But as I'll show you, there's a whole variation on this. But here the number of differentiated cell types, the number of cell types, reflects the potency of the stem cell. OK? This kind of diagram is called a lineage diagram. It tells you not only what the final fate of the cell is, it tells you something about the progress towards that final fate. So a lineage we can define as the set of cell types arising from a stem cell or a progenitor. Let's talk about the discovery of stem cells, because this is something that really was pivotal in helping understand whether or not there was some way that the body normally repaired itself. It was clear that during early development, there was lots of cell division and lots of changes. Cell types were formed and organs were formed. But it really wasn't clear in the adult how much repair there was, how much turnover of tissues there was, and really what the whole dynamic process of maintaining the adult was. And the discovery of stem cells came about because people looked to see how long cells lived. And what they found, they found using a turnover assay that measures the half life of cells. And they found that in almost all organs, in fact, probably in all organs, cells did not live forever. They turned over. They died. And they were replaced by new cells. And this turnover assay implied that there was some kind of replacement. And the cells doing the replacement were called stem cells. You find this by a pulse/chase assay, which we'll go over on your handout in a moment. And what was found was really variable for different organs. Firstly, just about all organs show cell turnover. Red blood cells have a half life of about 120 days. There are a lot of red blood cells in your body. And in fact, that implies that there are about 10 to the seventh new red blood cells made a day. In your intestine, the half life of cells is three to five days, in the small intestine.
And the hair on your head has a half life of about four years. So it's variable for different kinds of cells. If you look at your first handout, it diagrams a pulse/chase assay where a cell population is labeled with a nucleotide analog. It's a normal nucleotide, but it's got a bromine added to it. And it acts like deoxythymidine, gets incorporated into DNA, and you give just a short pulse of this nucleotide analog. So only some of the cells get labeled. And you only give it for a short time. So you get a labeled cell population. And then you stop the labeling by adding lots of unlabeled thymidine, and that's called a chase. And you follow the cells over this long chase period. And then you can watch and see what happens to those cells that you initially labeled over a very short time. And so in this example, I've got four cells initially labeled. Over time, they're only two cells left. And if you measure the time from going from four cells to two cells, you can get to the half life of that cell population. OK? You can also-- if you don't have a handout for this, just look on the screen-- you can also follow the labeled cell population and see what those cells become. And you can see that they go on to differentiate as particular cell types. So this is a kind of way of labeling the lineage of these cells. And that is useful, too. This was the theory behind stem cell definition. But what is a stem cell look like, and how do you isolate one? It turns out that that's really difficult. So isolation and assay in the adult stem cells are very, very rare. And that is one of the issues with using stem cells for therapy. There are very few of them and they're hard to isolate. Hematopoietic stem cells comprise about 0.01% of the bone marrow, which is where the stem cells reside, and where the precursors of your whole blood and immune system reside. The way that this was dealt with was through a really clever technique that has the acronym of FACS. I'll give you a slide in a moment. It stands for Fluorescence Activated Cell Sorting. We'll go to a slide in a moment so we don't have to spell it out. And the idea behind FACS is that you label stem cells. And you might be guessing what to label them with. But you label them usually via their cell surface proteins with some kind of tag, often an antibody tag. And then you can use that tag to make them different colors. And you can then sort them, cell by cell, through the Special Fluorescence Activated Cell Sorter. Sort individual cells, and then you can assay individual cells or small groups of them for stem cell properties. Let's look at a slide of how the FACS, the Fluorescence Activated Cell Sorter works. No, let's not. I've really gotten ahead of myself here. And I'm going to go back because I want to show you this. Hold on to that thought and let's go back to this notion of a pulse/chase assay. I forgot that I had this here. This is really important. This is a pulse/chase assay of intestinal cells. And so your small intestine lies inside your belly. And if you look at its anatomy, it contains many tubes whose surface is thrown into folds to increase the surface area for food absorption. And if you look at these folds, they are very closely packed. So you get a huge surface area increase. And the cells of these folds that are doing the food absorption turn over every three to five days. Those are the ones I was talking about. 
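Coming back for a moment to the four-cells-to-two-cells example on the handout: the arithmetic behind that half-life readout is simple exponential decay, and here is a minimal Python sketch of it. The counts and chase times below are hypothetical illustration values, not data from the slides; the only real inputs are the starting and remaining numbers of labeled cells and the elapsed chase time.

import math

def half_life(n_initial, n_remaining, elapsed_days):
    # Solve N(t) = N0 * (1/2) ** (t / t_half) for t_half.
    return elapsed_days * math.log(2) / math.log(n_initial / n_remaining)

# Toy version of the handout example: if 4 labeled cells drop to 2 over a hypothetical 4-day chase,
# that chase time is itself one half-life.
print(half_life(4, 2, 4))            # 4.0 days

# Hypothetical labeled red blood cell counts over a 240-day chase.
print(half_life(10000, 2500, 240))   # 120.0 days, consistent with the 120-day figure quoted above

The same relationship, run forward, is what connects a measured half-life to the rate at which stem cells must be supplying replacement cells.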
So if you blow up one of these folds, which is called a villus, there's the part that's sticking up into the cavity, into the lumen of the small intestine. And then there's this part that kind of dips down into the lining of the intestine, which is called a crypt. The stem cells lie somewhere at the base of the crypt. It's not exactly clear where. But the idea is that somewhere at the bottom of this crypt there are these stem cells that under specific conditions will start to proliferate. And they will move up into the villus and replace the villus cells that are dying. You can monitor this by doing a pulse/chase experiment. So here's the crypt. And they've given, in this experiment, a pulse of thymidine has been given. And you're looking kind of at time 0, right after the pulse of thymidine has been given. Here the cells, the black cells are in the crypt. They're labeled. And then if you look over time, you can see that these black cells move away from the crypt. They're moving up into the villus. And here they are actually right on top of the villus, replacing the cells that were turning over. So that's a really beautiful demonstration of a pulse/chase assay. OK. Now we can go to our Fluorescence Activated Cell Sorter. The idea is that you take a mix of cells-- you don't have this on your handout, just look on the screen-- the cells are labeled with fluorescent antibodies. And you put them into a reservoir and droplet generator. And the cells drop out of this reservoir, one at a time. And as they drop out, they go past a laser. And you can tune the laser to whatever wavelengths you want. It excites the cells. And if the cells emit in the particular wavelength you're interested in, the detector will detect that. And then it actually gives a charge to the cell that it is the correct fluorescence. And as the cells are dropping down, the cells of the correct color are deflected by an electric charge. And different color cells can be collected into different flasks. OK? This really works. It's a fantastic machine. You can collect cells about, you know, you can collect millions of cells an hour. It's pretty quick. But you do it one cell at a time. And in that way, you can isolate cells, which have got stem cell properties. OK. So you've used your FACS machine. You've got cells that look as though they've got stem cell properties. And now, let's look at the assays that may be used. And there are three assays that you should know. One is a repopulation assay to test stem cellness. And this is a transplant assay where you're transplanting test cells, test stem cells, into the adult. And you have removed from the adult, endogenous cells that might be competing with those stem cells. That would include the stem cells. We'll go through a slide in a moment. I'm going to list them here. Another one is an in-vitro induction assay, where you are going to take isolated cells and you're going to treat them with various inducers, various signaling molecules, and you're going to test and see what fates those cells can acquire. And a third assay is called an embryo incorporation assay, where you are going to take cells that may be stem cells, and you're going to test them in a chimeric embryo. Let's go through your next slides to discuss each of these points. Bone marrow transplants resulted in a Nobel Prize in 1990 for E. Donnall Thomas and Joseph Murray. It's a technique that saved millions of lives, and here how it works in a mouse. 
You take the mouse and you irradiate the mouse to destroy the bone marrow and the stem cells associated with the bone marrow. If it's a person, to destroy the diseased burned bone marrow. And then you replace that bone marrow with normal bone marrow to either try to make the person better, or in this case, to test something about stem cells. The irradiated mouse or person would die, but the normal bone marrow will cause the mouse to live. And if you've put stem cells into that mouse, you can start getting them out of that mouse whose life you have saved, and isolate more stem cells. Those kinds of assays led to the definition of the hematopoietic stem cell. Here it is. It's a pluripotent stem cell that gives rise to all of these different kinds of cells, the immune cells, and all of the blood cells. It's a very, very powerful stem cell. And the ability to actually make this diagram and say that there was a single cell that gave rise to all these different lineages was because of a titration assay where you could take these putative hematopoietic stem cells that were difficult to isolate-- and still haven't been isolated in their purity-- but you can titrate them down. And you can introduce what you think is one stem cell, 10 stem cells, 100 stem cells, and so on, into an irradiated mouse, and ask how many stem cells does it take to repopulate the entire blood system and immune system. And it turns out, you have to mix these cells with carrier cells, otherwise it doesn't work. But it turns out that one cell can repopulate the entire hematopoietic system, which is really extraordinary, and led to the diagram that I showed you in the slide before. OK. Here's another assay. This is an in-vitro induction assay. And the idea here is that you start off with something, which you think might be stem cells, by various criteria. And then to test what these cells can do, you put them into plastic tissue culture dishes, and you add some nutrients and so on to allow the cells to divide. And then you add some inducers. And you remember a couple of lectures back, inducers are just ligands for various signaling systems. You might add fibroblast growth factor to this one, and retinoic acid to that one, and then you ask what happens to the cells. They will go on, in general, to differentiate into different cell types. And depending on what they differentiate into, you can say something about the potency of these putative stem cells. You can't test if they're really stem cells, but you can say something about their potency. You can do a similar experiment, but in a whole mouse. The mouse is made from, the embryo is made from a part of the very early embryo called the inner cell mass. And you can inject labeled, putative stem cells into an early mouse embryo, into this inner cell mass part of the embryo, put it into a mother, a recipient mother, and then ask what comes out, what kind of embryo comes out of that process. And if you see that the baby that comes out of this chimeric embryo has got a green liver and green ears and green whiskers, you'll know that these cells that you put into the chimeric embryo, that you made the chimeric embryo with, had the capacity to give liver, ears, and whiskers. OK? So this is a powerful assay to, again, look at the potency of cells, not, in this case, the stem cellness of cells. One of the things about stem cells is that you only want them to work when you want them to work. 
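The titration assay just described, injecting roughly 1, 10, or 100 putative stem cells plus carrier cells into irradiated mice, is in practice a limiting-dilution experiment, and such data are commonly analyzed with a Poisson single-hit model. The lecture does not go into the statistics, so the following is only an illustrative sketch of that standard calculation; the dose and mouse counts are invented numbers.

```python
import math

def stem_cell_frequency(cells_per_mouse, n_mice, n_not_repopulated):
    """Estimate stem-cell frequency from one dose of a limiting-dilution transplant.

    Single-hit Poisson model: P(no repopulation) = exp(-f * dose),
    so f = -ln(fraction negative) / dose.
    """
    fraction_negative = n_not_repopulated / n_mice
    return -math.log(fraction_negative) / cells_per_mouse

# Invented example: 10 sorted cells injected per mouse, and 12 of 20 mice
# fail to repopulate. That works out to roughly 1 repopulating stem cell
# per 20 injected cells.
f = stem_cell_frequency(cells_per_mouse=10, n_mice=20, n_not_repopulated=12)
print(f"about 1 stem cell per {1 / f:.0f} injected cells")
```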
If you cut yourself, in the normal process of keeping your liver the right size, in the normal process of keeping your heart muscle correct, you want your stem cells to be working and keeping everything homeostatic. You don't want them to be dividing out of control, because then you'll get cancer. And so something has to control what stem cells do and when they do it. And this is a question of regulation and the notion of a stem cell niche. Stem cells are kept quiescent, usually in G0 that we talked about in the cell cycle lecture. And they're kept quiescent by signals from the surrounding cells. So their cell-cell interaction, and by signals from the surrounding cells. And those are given a special name by stem cell biologists. They're not that special, but they're given a special name. They're called niche cells, or the niche. OK? They're just surrounding cells that are secreting signals. On some kind of activation-- you cut yourself, your organ normally needs repair-- things change. So on activation, by some kind of environmental input-- local or less local-- niche cells induce the stem cells to divide. And they do this-- and they induce the stem cells to divide into a stem cell plus a progenitor, which then goes on to do all the things that I diagrammed that progenitors do, on the first board. And the niche cells do this because they have changed their signaling. This is yet another use, or a very related use, of the notion of cell-cell signaling in controlling life. Here's a diagram. Here are niche cells. You don't have this. Just look on the screen. Surrounding cells, maintaining stem cells quiet. When there's an environmental input, the niche cells change. They activate the stem cells to divide and form more stem cells and progenitor cells. One really fantastic example of the niche, and the interaction between stem cells and the niche, is in the hair. This is from my colleague Elaine Fuchs at Rockefeller, who over many years has figured out that in the hair follicle-- this is the hair sticking out of the skin-- in the hair, there's a small group of cells on one side of the hair shaft called the bulge cells, and these bulge cells are the stem cells. Her investigators isolated these bulge cells and did the following experiment to show that they were hair stem cells. There's a kind of a mouse called a nude mouse that has a very bad immune system. So it's useful for immune system experiments. But it also has no hair. And you can do a transplant into this mouse of these bulge cells. And you get little tufts of hair growing where the rest of the mouse is nude. And you've done a careful experiment where you've labeled the cells you have transplanted in so that you can show that they actually came from the transplanted tissue and not from the mouse, the nude mouse tissue. And you can do that because you labeled them with GFP and they're green. So this the green hair. And it shows you that these bulge cells are stem cells. During the life of a hair-- your hair has a life, we discussed four years on your head-- during the life of a hair, the bulge cells and the niche cells are in different places. And depending on whether they're touching or far apart from one another, there's induction of hair growth or not. So at a particular stage of the hair cycle-- this is called the hair cycle, look on the bottom of the screen here-- the bulge cells and a group of cells called the dermal papilla that lies right at the bottom of the hair shaft are touching one another. 
And at that point, a particular signaling pathway called the Wnt pathway is activated. And these dermal papilla cells tell the bulge cells to start dividing and start making a new hair shaft. After that's happened, the bulge cells move away from the dermal papilla cells. Here they are during growth, the growth period. You see the bulge and the dermal papilla cells are no longer touching. At this point, the stem cells become quiescent. And there are enough progenitor cells in the hair shaft to give you formation of the hair. And then the stem cells remain quiescent until the next hair cycle starts and they get in contact with the dermal papilla again and start making new hair. This is a really beautiful story that's shown us quite clearly how the niche cells can control these stem cells. All right. So let's spend the last minutes talking about therapeutics. Here's the dream. You know, the dream is that you have a stem cell population for every organ in the body, including things like limbs, such that if your limb gets severed or your heart becomes really diseased or your spinal cord is injured and you can't walk anymore, that you can just inject into a patient the correct stem cells and everything gets repaired. That's the dream. And that's really the holy grail of what thousands and thousands of investigators are going after. And it's given precedence by bone marrow transplants, which really are very successful. Turns out that it's kind of tough. It's tough because these adult stem cells are really rare. The hematopoietic stem cells are special because they're kind of liquid. They're single cells. They're not attached to anything. And they are relatively easy to identify. But other stem cells are very difficult. So the idea is that you inject stem cells to repair a damaged organ. You need stem cells of the correct potency, otherwise you're not going to repair the specific organ. But adult stem cells are very rare. And so the quest has been to find some kind of substitute for adult stem cells. And those substitutes come in two flavors. One are embryonic stem cells, abbreviated, ESCs. Embryonic stem cells are cells that grow from the inner cell mass of an early mammalian embryo. And you can grow from them groups of cells that will keep growing in the laboratory for a long time and have variable potency. So the embryonic stem cells are derived from the inner cell mass of an early mammalian embryo-- human in the case of human. And you grow out so-called ESC lines, which means that these cells grow continuously in culture. And each embryonic stem cell line has a unique potency and can be used to do different things, in terms of theoretical repair. If you look on your next handout, the idea is that you take this inner cell mass, you plate the cells as single cells, they grow and grow and grow. And if you treat them with various inducers, you can get them to become heart or neuron progenitors, inject those into animals and make them better. There are some issues with embryonic stem cells. And the problems are twofold. One is ethical in that you have to harvest human embryos. You have to obtain and harvest human embryos to get these human stem cell lines. And currently, you're really not allowed to do much human embryo work to obtain human stem cells. But the second, even if you were, is that these cells are non-autologous. They do not match the person into which you're putting them. They do not match the immune system of the recipient. And so they'll be rejected. 
And it's the same as an organ transplant. You have lots of problems of rejection. So the latest thing that is very exciting and wonderful and potentially might be very useful, is the use of things called IPS cells, in which you convert adult cells into stem cells. And you do so, as I'll tell you, by adding some transcription factors to them. The advantage of these is that they are self cells. You could do them from yourself. The disadvantage is that they're really not proven. And there is still, I will just say, lots of problems. But there are a lot of people, including some of the top investigators in the world, working on these IPS cells. So look at your last handout. We will remove our interesting calendar reminders there. Adult differentiated cells, which are unipotent, can be treated with three or four transcription factors. And finding these transcription factors is the key. And once you express these transcription factors in these adult cells, like magic, they become stem cells. It really was like magic. You can test the potency of these stem cells in the same way we discussed. And Professor Yamanaka in Japan and Professor Jaenisch here have shown that these are really very powerful cells. And those are the promise of the future. And we'll stop there and meet on Friday.
Medical_Lectures
The_Medical_H_and_P_Part_1_of_2.txt
It's Eric Strong, and today I'll be discussing the medical history and physical, commonly known as an H&P, in a two-video series. The learning objectives of these videos are to understand the purpose, content, and organization of the medical H&P, to compare the oral presentation of the H&P to its written form, and to know some additional tips on what makes an effective oral presentation. In the first video, I'll discuss the conceptual details of the H&P. In the second video, I'll give an example of an H&P oral presentation displayed side by side with real-time annotations pointing out the concepts introduced in the first. This video will cover topics relevant to both oral presentations and their written counterparts. That's because there are obvious similarities between them. Specifically, the overall format is identical: that is, each has a chief complaint, a history of present illness, past medical history, etc. Each section is presented in the same order and has roughly the same type of content. However, there are of course some important differences. The purpose of the oral presentation is rapid communication and to aid in real-time decision-making; therefore it avoids excessive details. The purpose of the written note, however, is to serve as a detailed reference and legal document, and therefore it should be very comprehensive. Let me review the overall structure of the H&P. First comes the reporting section. This is where the presenter or writer conveys factual information as objectively as possible. It's divided into the history, exam, and diagnostics. With the history, there will obviously be some degree of subjectivity, and as the one obtaining the history, the clinician will need to selectively filter out irrelevant information. The history is further subdivided into the source of information, the chief complaint, the history of present illness, past medical history, meds, allergies, and so on. I'll go through each of these in a few minutes. The exam typically starts with a statement about the patient's general appearance, then the vital signs, and the remainder of the exam is usually organized roughly head to toe, other than the neuro exam, which for some reason is often listed last. The diagnostics subsection includes all labs, as well as the results of any other relevant tests, including x-rays, CT scans, and EKGs. Now, in addition to the reporting section, there is also an interpretation section, which is frequently referred to as the assessment and plan. This typically takes the form of a problem list. Each problem that the patient has on admission, whether it's an acute physiologic derangement, a particular symptom, an unexplained exam finding, or a chronic medical condition, should be listed here. For any new problem there should be a differential diagnosis with explanation, and each problem should have diagnostic, therapeutic, and educational plans if relevant. I'll discuss this in more detail later as well. Now, these two sections, the reporting section and the interpretation section, comprise the entirety of many H&Ps, but there is something critically important that joins the two together. It's often known as the impression, though I find that term to be too imprecisely used. Instead, I call this brief but extra section the linking statement. It links the reporting and interpretation sections together, and a bit more than that, which I'll get to soon. Now let me discuss each one of these headings. The very first thing in the H&P should be a brief statement as to the source of information and the clinician's impression of the source's reliability. Here are
some examples: "Source is the patient, who appears reliable." "Source is the patient's spouse, who appears reliable." "Source is the patient, who appears unreliable secondary to apparent alcohol intoxication." Or, "Source is the medical chart, as the patient is unconscious and no family or friends are immediately available to provide information." Next is the chief complaint. This is the patient's primary reason for seeking medical attention. It can either be in the patient's own words, which is now less common but more traditional, or in your own short summary phrase, which is more common but sometimes discouraged by instructors. In most cases I prefer the latter. It should only include abnormal exam and lab findings, or a diagnosis, if they were previously established before the patient arrived. For example, if a patient came to the ER because his primary care provider called to tell him that his blood count was critically low, anemia could be listed as the chief complaint in that case. Now, if the patient is unable to offer any history at all, the chief complaint is why someone else sought medical attention for him or her. I think it's helpful to provide specific structures for the chief complaint. If one uses the patient's own words, that structure would be: first, age and gender, plus the highly relevant items in the past medical history, plus the patient's own words for why he or she is seeking medical attention. For example, "Miss Chang is a 60-year-old woman with heart failure who's presenting because, quote, I haven't been able to catch my breath all day, close quote." Or, "Mr. Smith is a 25-year-old man with schizophrenia who states, quote, the CIA is trying to control me with microwaves, close quote." The other option for the chief complaint is the short summary phrase. In this case it's age and gender, plus highly relevant past history, plus the primary symptom or symptoms, plus the duration of that symptom or symptoms: "Mr. Singh is a 76-year-old man with diabetes and hypertension who presents with right arm weakness and difficulty speaking for 2 hours." Although the chief complaint may seem very straightforward, it's the most commonly flubbed part of the whole presentation, and since it comes right at the beginning, if this happens to you it's guaranteed to start you off on the wrong foot. So let's take a look at some common mistakes. For our first suboptimal chief complaint: "Mr. Williams is a 65-year-old man with diabetes presenting with a heart attack." What's the error here? It includes the presumed diagnosis. Revised, it might read, "Mr. Williams is a 65-year-old man with diabetes presenting with chest pain for 3 hours." This might be a good spot to briefly discuss why one should not mention the presumed diagnosis up front. It's basically to prevent excessive biasing of the listener or reader. An excellent presentation first provides all of the relevant objective data, in order and without interpretation, allowing the listener to reach their own conclusions in their head about the diagnosis before the presenter explains what his or her conclusions are. This allows independent verification of the presenter's clinical reasoning process. Let's take a look at some more examples. Here's a chief complaint of "Miss Lee is a 14-year-old girl with no significant medical history presenting with diarrhea for one week and hypokalemia." The error here should be obvious: it includes a lab result that was presumably not known at the time of the patient's initial arrival. The corrected version omits the hypokalemia. "Mr. Oku is a 58-year-old man with gout, left knee arthritis, chronic
low back pain, and peptic ulcer disease presenting with an acute abdomen." What's the error here? There are actually two. First, the original version includes parts of the medical history that are completely irrelevant. Second, it includes an interpretation of the exam rather than the patient's presenting symptom. A far better version would read, "Mr. Oku is a 58-year-old man with peptic ulcer disease presenting with severe epigastric pain for 45 minutes." Finally, what about this chief complaint: "cough and fever"? This complaint provides absolutely no context for the symptoms. Is this a previously healthy six-month-old infant, or a 50-year-old man with AIDS? Instead, one could report this as, "Miss Patel is a 90-year-old woman with dementia sent from her nursing home for cough and fever for 2 hours." Remember, the goal with the chief complaint is to provide the context for the upcoming history without giving away the diagnosis prematurely. In the US at least, there is a common variation to the chief complaint as I've described it. Some providers separate the chief complaint into ID, for identification, and CC, for chief complaint. For example, the ID might read, "Mr. Jones is an 84-year-old man with cirrhosis," and the separate chief complaint reads, "vomiting blood for two hours." I personally think it sounds better and flows better to put them both together into a single line. Moving on to the history of present illness, abbreviated HPI. The HPI is like telling a story, one in which chronology is extremely important. It should include key events and only relevant information. It often begins with, quote, "Mr. or Miss So-and-so was in his or her usual state of health until..." Symptoms should be described, in addition to just being listed or mentioned, and at the end of the HPI one should describe the patient's perception of illness, or PPI. As a common variation, some clinicians list the PPI as a separate section immediately following the HPI. Another occasionally encountered variation on the HPI imagines it quite differently: instead of a story told in prose, the HPI can be listed as a series of dates and events. I won't read through this, but feel free to pause it here if you'd like to look at it on your own. I find this format works well for unusually complicated HPIs, or HPIs that involve multiple prior hospitalizations or multiple adjustments to outpatient medications which could be contributing to the current presentation, such as changing doses of diuretics or antihypertensives. Next up is the past medical history, or PMH. Unlike the HPI, which is typically done in prose, the PMH should always be done as a list. Provide details of each item in proportion to its relevance to the chief complaint and HPI. If an item in the PMH is completely resolved and is of no relevance at all to the HPI, you should omit it. And state chronic disease markers when relevant, in order to give the listener or reader an idea of how well that disease is controlled as an outpatient: for example, the patient's baseline weight in CHF, typical outpatient blood pressures in hypertension, most recent hemoglobin A1c in diabetes, and a baseline range of creatinines in chronic kidney disease. Here's what the PMH might look like in a written note. Although not done universally, I find it's very helpful to separate it out into medical, surgical, women's health, and psychiatric sections. Don't worry if some of those specific acronyms are unfamiliar; this is just an example. A common variation of this format will show the identical information, but instead of listing it all under the umbrella term of, quote, "past
medical history," each subheading becomes its own heading. Although this may actually be more common, I don't like this variation as much, as it implies that functionally there is as much distinction between the past medical history and the past surgical history as there is between either one and the HPI, or between either one and the med list. The former format simply feels more logically organized to me. Now we have a series of relatively quick and straightforward sections. First, medications. Group them by common indication; this helps the reader or listener remember them, and it also aids in identifying when there might be a class of medication that seems to be missing given the patient's past medical history. Include over-the-counter meds as well as herbal and natural supplements, report patient adherence to meds, and always use generic names of medications. This isn't because drug companies are inherently evil per se; instead, it's because often the generic suffix will help to identify the medication class. For example, a med that ends in "-olol" is highly likely to be a beta blocker, and one that ends in "-statin" will definitely be an HMG-CoA reductase inhibitor. The other reason to always use generics is that your formal exams and tests almost always use generic names as well. For allergies and adverse drug reactions, realize that most things referred to by patients as allergies aren't true allergies, but nevertheless they should still be reported here. It's the clinician's job to sort out adverse drug reactions, which are more common, from true type 1 hypersensitivity reactions, which are less common. The social history is next. This is not just limited to bad behaviors; it also includes marital status, residential situation, occupation, diet, sexual history, animal exposures, and travel history if relevant. For smoking, alcohol, and drugs, always try to quantify how much, how often, for how long, and when was the last time. Family history: focus on first- and second-degree relatives with diseases that are associated with established familial risk. The big five are cancer, cardiovascular disease, diabetes, psychiatric disease, and substance abuse. There are others as well, but those are the big five you should always ask about. You do not need to mention that the patient's second cousin has osteoarthritis. Now, for oral presentations, stating that the family history is non-contributory is often appropriate, and in fact usually preferred, if indeed true. However, for written notes, stating "non-contributory" is generally not acceptable. The final part of the history is the review of systems. For this, any symptoms relevant to the chief complaint and/or HPI should be listed in the HPI and need not be listed again here. For oral presentations, it's usually acceptable to state "review of systems negative except as previously discussed in the HPI"; however, for written notes, all negative responses should be written out. Now let's move on to the physical exam. The first thing to acknowledge up front about the exam is that there is no such thing as a complete physical exam, or at least if there was one, it would be 3 days long. So the exam should always be tailored to the chief complaint, with consideration of the patient's gender and age, along with the past medical history. As I mentioned earlier, always begin reporting the exam with a statement as to the patient's general appearance, followed by the vitals, and use appropriate medical terminology. The most challenging thing about performing and subsequently reporting the physical exam is knowing how to make it focused.
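As a way to see the history sections covered so far at a glance, here is a minimal sketch of how they might be captured as a structured record. The field names, class name, and example values (the A1c and blood pressure figures in particular) are illustrative assumptions of mine, not a standard schema from the video; the example text reuses the Mr. Singh chief complaint from earlier.

```python
from dataclasses import dataclass, field

@dataclass
class HistorySection:
    """Illustrative container for the 'history' half of the reporting section."""
    source: str                    # who gave the history, and how reliable they seem
    chief_complaint: str           # age/gender + relevant PMH + symptom(s) + duration
    hpi: str                       # the story, in prose, in chronological order
    pmh: list = field(default_factory=list)          # listed, with chronic disease markers
    medications: list = field(default_factory=list)  # generic names, grouped by indication
    allergies: list = field(default_factory=list)    # include reported adverse reactions too
    social_history: str = ""       # smoking/alcohol/drugs quantified, living situation, etc.
    family_history: str = ""       # first/second degree relatives, the "big five"
    review_of_systems: str = "Negative except as discussed in the HPI."

example = HistorySection(
    source="Patient, who appears reliable",
    chief_complaint="Mr. Singh is a 76-year-old man with diabetes and hypertension "
                    "presenting with right arm weakness and difficulty speaking for 2 hours",
    hpi="Mr. Singh was in his usual state of health until ...",
    pmh=["Diabetes (most recent A1c 7.2%)", "Hypertension (typical outpatient BP 140s/80s)"],
    medications=["metformin", "lisinopril"],
)
```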
Suppose we have a 62-year-old woman with active heroin use presenting with fever and dyspnea for 4 hours. What might an appropriately focused exam look like for her? Here's a reasonable one. To understand why certain things have been included, consider why an individual item might be included. Some parts of the exam might inform us of the overall severity of illness; this would be the case with the patient's general appearance and her vital signs. Other parts of the exam are directly linked to the chief complaint and HPI, so for an IV drug abuser with fever and dyspnea this would obviously include the pulmonary and cardiac systems, but it would also include components of the extremity exam that would investigate for signs of endocarditis, which might be the most likely diagnosis. We would also want to look for signs of any complications of that most likely diagnosis, in this case signs of heart failure, for example. And lastly, we would want to quickly identify any important comorbid conditions. These aren't findings directly related to the HPI, but rather would identify any other occult diseases which this patient might have that would complicate treatment of the primary problem, or which we should just try to identify in order to treat in their own right. For an IV drug abuser, this might include examination of the abdomen for any evidence of cirrhosis that might have been brought on by chronic hepatitis B or C, and this might even include an examination of the lymph nodes for evidence of HIV or other infections. You'll notice that there are a couple of components here not accounted for: the entire HEENT exam, examination of abdominal tenderness, or listening for bowel sounds. We always seem to include these parts of the exam, but if our goal is to provide an exam focused to the HPI, the patient's age and gender, and past history, they really shouldn't be there. However, just be aware that there are a few components that, if excluded, will likely result in you being chastised by your supervising physicians. Some things just always seem to be put into the focused exam even if it seems like they don't belong, so things like listening to bowel sounds and looking at the oropharynx are just always done. After the exam come the labs and diagnostics. In the presentation, highlight only those results which are relevant to the HPI and/or assessment and plan. In the written note, however, list all recent results irrespective of immediate relevance. Summarize diagnostic reports instead of either reciting or copying and pasting the actual reports. Finally, only binary tests should be reported as either positive or negative. For example, a urine pregnancy test or stool guaiac test is positive or negative, but a chest x-ray is not; it's not positive or negative, it's either normal or abnormal. Why is this important? Imagine you're listening to someone else present a patient's H&P, and the presenter states that the urinalysis was positive. What exactly would that mean? The most common assumption would be that the urinalysis showed evidence of a urinary tract infection, but what was the specific evidence that led the presenter to that conclusion? Was it white blood cells in the urine, or a positive dipstick for leukocyte esterase, or something else? Maybe the presenter uses a different cutoff of white cells to indicate a UTI than you might. And what if the presenter wasn't even referring to evidence of UTI, but rather to evidence of ATN or glomerular disease? So when it comes to stating most test results, either state "normal" or specifically describe the abnormality, and allow the
listener the opportunity to reach his or her own conclusions. So that concludes the reporting section. It's usually responsible for the majority of the duration of the H&P, though, as mentioned, it almost solely conveys nothing but objective facts, which have been screened and selectively emphasized by the presenter based on their relevance to the patient's chief complaint and past history. The reporting section is then followed by the linking statement. That term is almost certainly new to you; instead, most people refer to the general concept of this section as the impression or summary statement, or in the literature it's called the problem representation, though these terms are not necessarily used as specifically as I encourage you to do. For me, the linking statement consists of one to two sentences which link the reporting section to the interpretation section. It also links the most important key features of the history, exam, and tests into higher-order structures that allow one to begin to formulate a differential diagnosis. It should usually include some degree of interpretation, but should only explicitly mention a specific diagnosis if the preceding data overwhelmingly support it over alternative diagnoses. Let me break down the structure of a good linking statement, as I did for the chief complaint near the beginning of this video. Start with the age and gender plus the highly relevant past medical history, then add the summary of primary symptoms using distinguishing adjectives, which are sometimes called semantic qualifiers, and end with a summary of objective findings, with interpretation and grouping into clinical syndromes when relevant. That might sound a little abstract, so let me provide a specific example of a linking statement so you can understand it better: "Mr. Smith is a 62-year-old man with diabetes and alcohol dependence who presents with acute, constant epigastric pain associated with nausea and vomiting, found to have SIRS, severe epigastric tenderness, and a lipase of 800." For those not familiar with the acronym SIRS, it stands for the systemic inflammatory response syndrome. In the US, sometimes during rounds or in a daily teaching conference called morning report, as an intern or resident you may be asked to briefly summarize a case, or during attending rounds at the bedside a senior physician may ask you for something called the one-liner. If the response to this request is a statement similar to this one, I guarantee others around you will be impressed. The structure of the linking statement may seem superficially similar to the structure of the chief complaint, so let me show you a side-by-side example of what both might look like for the same patient. Remember that the chief complaint is age and gender, followed by past history, followed by the primary symptom or symptoms, and ending with a duration. So you might have, "Miss Gonzalez is a 55-year-old woman with a history of metastatic breast cancer presenting with dyspnea and chest pain for 2 hours." A possible linking statement that might follow a complete reporting of the data of Miss Gonzalez's presentation could read, "Miss Gonzalez is a 55-year-old woman with a history of metastatic breast cancer presenting with acute, constant, non-positional dyspnea associated with pleuritic chest pain, found on exam to have hypoxia, a normal chest x-ray, and evidence of right heart strain on exam and EKG." What's the difference? The chief complaint sets the stage; it provides context for the HPI. It allows the listener to appropriately categorize and catalog in his or her mind all
of the subsequent information provided in the history, while the linking statement provides the summary and higher-order structure necessary to aid the listener in following along as you explain your differential diagnosis. I mentioned earlier that you didn't want to state the diagnosis in the chief complaint, because you don't want to bias the listener by failing to allow them to reach his or her own conclusions about a case and validate your own clinical reasoning skills. Certainly, the linking statement here seems to strongly suggest a specific diagnosis, in this case pulmonary embolism, but by now all of the data is already out there, and hopefully the listener has arrived at the same point as you without any bias. And now it's time for the interpretation section, where you will make your case about the differential diagnosis, just in case you as the presenter and your listener aren't on the same page yet, or alternatively, in case the listener just wants to check that you understand what you're talking about. Now, no one actually refers to the interpretation section as such; instead, it's always called the assessment and plan. This should be organized as a prioritized problem list, and not as a list of organ systems, unless you're in the ICU, where this is common. Problem number one on the list should almost always be the symptom or problem summarized in the linking statement. Each problem that is new or directly related to the HPI should have a differential diagnosis, which should include, first, a discussion and/or list of key features which argue for or against each item on the differential diagnosis, and second, a commitment to one diagnosis as the most likely, referred to as the provisional diagnosis, unless no single diagnosis stands out. Each problem should have a plan that is divided into a diagnostic plan, which is a list of additional tests and/or consults to be acquired that will help secure the diagnosis; a therapeutic plan, which is a list of meds, IV fluids, special diets, procedures, and/or surgeries that will help treat the patient; and finally an educational plan, if relevant, which is a list of specific topics the patient would need to be educated about prior to discharge. To understand what the assessment and plan should look like, let me show you an example, starting with a linking statement. This would be the written form of the assessment and plan. Due to its length, I won't read it in its entirety, but feel free to pause it if you'd like to read through it more completely. In fact, when giving an oral presentation, depending on the patient's complexity, available time, and your audience, what you say aloud may be significantly shorter than what you see here on the screen. So let's take the following linking statement, which in the US would commonly be referred to as the impression: "In summary, Mr. Hadad is a 52-year-old man with diabetes presenting with acute, constant, non-positional dyspnea associated with hemoptysis, found to have severe sepsis and hypoxic respiratory failure with bilateral infiltrates on chest x-ray, and complicated by acute kidney injury." So next is the problem list. In this case, I'm going to make problem number one his sepsis with hypoxia and hemoptysis. One could opt to make the sepsis, hypoxia, and hemoptysis all separate individual problems, but since their underlying pathophysiology is likely tightly linked, and since their treatments are strongly overlapping, it seems more logical to keep all three grouped together like this. What follows then is an explanation of the differential diagnosis, including commitment to one
specific diagnosis as the most likely, in this case severe community-acquired pneumonia. Then there is the plan for this problem. The plan has diagnostic and therapeutic sections; the individual items in the plan are written out as a list, or in bolded format, with as much specificity as possible. The next problem to discuss, in order of importance, might be his acute kidney injury. Since this is also a new problem, it also deserves a differential diagnosis of some degree, but as it is a lesser problem than the sepsis, discussion of the differential is necessarily shorter. Then the diabetes: since this is a long-standing problem, there is no need for a differential, but it might include a mention of any possible acute complications. Next up is the smoking, which contains the first example of an educational plan, in this case an intention to discuss the importance of cessation once the patient is feeling better. Then rounding out the problem list is the patient's nutrition plan, any type of in-hospital prophylaxis, such as that against DVTs or nosocomial infections, and lastly a statement as to the patient's goals of care during the hospitalization, which should always mention the code status and how that determination was reached. This has been quite the exhaustive video, and I hope you feel that your patience has been rewarded. Before switching to the much shorter part two of this video series, which will consist of an annotated example presentation, start to finish, let me provide you with some final tips. Stick to the standardized format unless your supervising attending physician specifically instructs you otherwise. Every symptom, abnormal finding, and established diagnosis should be represented in some way within the problem list. Every test ordered and medication prescribed should be mentioned in the written plan. When presenting the H&P, consider your audience; when in doubt as to what level of detail they want, ask. Aim to keep your complete oral presentation to within 5 to 7 minutes. Avoid reading off your written H&P when giving the oral presentation. Don't editorialize. And finally, be sure to practice as much as possible. Presenting an H&P on a medically complex patient is a difficult skill, and you will not master it in a day or a week or even a month, but with deliberate practice and frequent feedback from both peers and those more experienced, hopefully this will feel natural and effortless by the end of your training.
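To make the two templates from this lecture easy to compare, here is a minimal sketch that simply assembles the pieces into strings: the chief complaint as age/gender plus relevant history plus symptoms plus duration, and the linking statement as age/gender plus relevant history plus qualified symptoms plus interpreted objective findings. The function and parameter names are my own, purely for illustration; the example text is the Gonzalez case from the lecture.

```python
def chief_complaint(age_gender, relevant_pmh, symptoms, duration):
    # Context only: no presumed diagnosis, no results not known on arrival.
    return f"{age_gender} with {relevant_pmh} presenting with {symptoms} for {duration}."

def linking_statement(age_gender, relevant_pmh, qualified_symptoms, objective_findings):
    # One to two sentences joining the reporting section to the assessment and plan.
    return (f"{age_gender} with {relevant_pmh} presenting with {qualified_symptoms}, "
            f"found to have {objective_findings}.")

# The Gonzalez example, rebuilt from the two templates:
print(chief_complaint(
    "Miss Gonzalez is a 55-year-old woman", "a history of metastatic breast cancer",
    "dyspnea and chest pain", "2 hours"))
print(linking_statement(
    "Miss Gonzalez is a 55-year-old woman", "a history of metastatic breast cancer",
    "acute, constant, non-positional dyspnea associated with pleuritic chest pain",
    "hypoxia, a normal chest x-ray, and evidence of right heart strain on exam and EKG"))
```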
Medical_Lectures
16_Biochemistry_Blood_ClottingCarbohydrates_I_Lecture_for_Kevin_Aherns_BB_450550.txt
Ahern: How's everybody doing? Student: Amazing. Ahern: Amazing. Everybody's doing amazing. You're speaking for everybody here, alright. Okay, so we're not too far from finishing up regulation of enzyme activity. When I finished last time, I had started, at least introduced, the topic of blood clotting, and as we will see, blood clotting is another system that uses zymogens. It is a system that, in addition to using zymogens, uses an interesting scheme or an interesting strategy that we will see later, and it's what I call a cascade, and the cascade system is used in other enzyme control systems, and the beauty of a cascade system is that you can mobilize an effect very, very rapidly. Okay, what does that mean? So if I think about the Cascades, the Cascade mountain range, and I go climbing the Cascades and I get up to a high point, the higher I get up, the more I see that the cascading waterfalls get smaller because I've got a smaller source of water. As I go down the mountain, the further I go down the mountains, the waterfalls and everything get bigger because the streams start coalescing together. That cascading system, one stimulating another, stimulating another, stimulating another, is a very, very effective way to make things happen very rapidly, okay? As I said, we'll see another example of it, but this shows a very involved system for blood clotting. I'm not going to take you all the way through; in fact, I'm only going to emphasize a couple of major points in it. But suffice it to say that with a cascading system, if we think about a signal, let's say it's damaged tissue or a damaged blood vessel in some way, that signal has to be amplified, and the reason it has to be amplified is because, as I said at the end of the lecture last time, we really need to stop that blood flow before we lose too much blood. If we don't, then the person will bleed to death. So we have to have a system that works very rapidly and is very effective, alright? So a small signal here, if this is an enzyme, this enzyme activates another enzyme, and this enzyme in turn activates a bunch more enzymes, and this bunch of enzymes activates an even bigger bunch of enzymes, and at every step along the cascade, the signal gets bigger and bigger and bigger. The beauty of this is it happens very rapidly, and it's happening in our blood stream. And so the main thing that we have to do in order to protect ourselves via a clotting mechanism is to make sure that we have plenty of zymogens in our blood stream. Things that are not yet clottable but can very quickly be turned into a clot. Now, that's the good side. The bad side is the same thing. We have a blood stream that's full of things that can form a clot. And so if the system screws up, then we can very rapidly form a clot and kill ourselves if we have the clot forming in the wrong place, alright? So there's a yin and a yang to blood clotting. We're going to focus mostly on the good side, I guess the yin. I wasn't going to say good, but on the positive side of blood clotting, which is the forming of the clot, and I'll also talk about how we dissolve a clot. So just like I said before, if the body has a way of turning a system on, it's also going to have a way of turning a system off. If it makes blood clots, it's got to be able to dissolve those blood clots as well. We'll talk about both of those. Okay, so like I said, I'm not going to go through this pathway in detail, and no, I'm not going to ask you to regurgitate this pathway to me, alright?
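The point about a cascade mobilizing an effect very rapidly can be put into numbers: if each activated protease turns on many copies of the next zymogen, the output grows exponentially with the number of steps. Here is a minimal sketch; the per-step amplification factor of 100 and the step count are invented illustrative values, not measured ones for the clotting cascade.

```python
def cascade_output(initial_signals, amplification_per_step, n_steps):
    """Each active protease activates many copies of the next zymogen in line."""
    return initial_signals * amplification_per_step ** n_steps

# One initiating event, up to four protease steps, each step activating
# roughly 100 molecules of the next zymogen (illustrative numbers only):
for steps in range(5):
    print(steps, "steps ->", cascade_output(1, 100, steps), "active molecules")
# 0 steps -> 1 ... 4 steps -> 100,000,000
```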
The important aspects of this pathway are actually down here for our purposes, okay? We don't really care much about up here. For our purposes, we're going to focus on what's happening down here, okay? Now, as we start with things on the left, or, I should say, things on the left that are coated in this sort of pinkish color, these are inactive zymogens, or inactive factors. So when I see, for example, prothrombin, prothrombin is a zymogen. It is inactive, it is incapable of acting. By the way, most of these guys on here are proteases. One protease activating another protease, activating another protease, activating another protease, and that last protease, which happens to be thrombin, is going to convert the clotting material from an inactive form to an active form, meaning that the clotting material will start to make a polymer. So once this guy right here gets converted to fibrin, fibrin is a self-assembling polymer. That's very important. It's a self-assembling polymer. In the absence of this activation, fibrin is floating through your blood stream all the time. Or fibrinogen is floating through your blood stream all the time doing nothing. That's what you want it to do. You don't want it to clot unless you've got an issue. Well, this raises a couple of concerns. One is we want clotting to occur in a specific place; we don't want it occurring randomly in our body. And our body, I'll show you one way, has a way of knowing where to put that clot. There are several things in place that help to do that, but I'm going to tell you about one of them at the molecular level. So our focus for right now is going to be on the prothrombin to thrombin step, and the thrombin catalyzing the conversion of fibrinogen to fibrin, okay? Alright, so let's consider that. Where was I at here, there we go. First of all, we have to consider the structure of fibrinogen. So fibrinogen is the inactive polymerizing material. When it gets activated, then it will form polymers. Well, it's kept inactive by the action of these two things here. This label with its capital B, and this guy down here with its capital A. These we can think of as knobs that basically stop the polymerization process. During the polymerization of fibrin, what we will see is these knobs get clipped off, and they yield ends that can link into these things that we see here, okay? You see this little hole? The B will fit into that little hole, so I can take one fibrin molecule that's got a B and stick it into the hole of another one that's got this. The As will stick into the gammas, okay? So what thrombin is doing is it's clipping off the knobs. It clips this guy off, it clips these guys off, and now we've got some ends so that the pieces can start sticking together. The tinker toys, as it were, can start building themselves into a bigger structure. Alright, so that actually occurs in this mechanism that you see here, okay? In this case, the alphas are linked up with the gammas, as we can see here, okay? The alphas and the gammas have been stuck together. We don't see the betas going into the Bs, into the beta structures, and the reason is because that goes in the 3rd dimension. We could imagine that this polymer is going to stick back out towards us, right? So we have this guy sticking in here, this guy sticking in here, this guy sticking let's say in here, and now we get a three dimensional structure. So coming back out at us, we've got all these things tied together. Now, what's amazing is, A, that happens very rapidly, okay?
It happens very, very rapidly, and second, it is a pretty good structure, but it's not a perfect structure. What does that mean? Well, what you see on the screen is a sort of a two dimensional display of what we call a soft clot. A soft clot, why do we call it a soft clot? Well, it's the very first thing that forms when there's been damage, there's been a cut and you're losing blood. The very first thing that happens is what's called a soft clot. And the reason we call it a soft clot is that these interactions are not, underline not, covalent. These are hydrogen bonds. It's soft because, yes, it helps to put all the pieces together, but it's not very sturdy. It doesn't hold things real well. We can imagine we get a few hundred or a few thousand of these hydrogen bonds, it's really going to start to add up to a reasonable structure, but for a good protection, we want to have what's called a hard clot, okay? To get a hard clot, we have to make covalent bonds, alright? So what you see in terms of that initial polymerization reaction only makes hydrogen bonds, it does not make covalent bonds. The covalent bonds require action of another enzyme, okay? The other enzyme is known as a transglutaminase and it catalyzes a reaction like this. The side chain of a glutamine and the side chain of a lysine can be joined together to make a cross link. This is a covalent bond. Now this is not happening in those little knob structures. This is happening just between the strands when they get adjacent to each other, if there is a lysine next to a glutamine, this transglutaminase will join these bonds together. When we make these covalent bonds, we've converted a soft clot into a hard clot, okay? We've converted a soft clot into a hard clot. If you watch a scab on your hand, you'll notice that when it first forms, it's different than what it looks like a couple of a hours later, okay? It goes from being literally soft feeling to being hard. The scab does that. Now as I said, the remarkable thing is this happens in the order of minutes and this happens and the place you wanted. It's rare that you're forming clots at places in your body where you don't want to have it, okay? And it's water tight. Those are really remarkable features of blood clotting. Well how do we know, how does the body know where to make that? How does the body know where to do it, okay? One of the ways in which the body uses information about how to do it is by a modification to prothrombin. Prothrombin. Remember prothrombin is the zymogen form of thrombin. It's the inactive form of thrombin. But the body has a way of collecting prothrombin at the site of the wound. As a way of collecting prothrombin at the site of the wound, I need to tell you about that, okay? So that happens as a result of action. Of prothrombin, let's see, let's go here. Here's what I want to show you. In order for prothrombin to get gathered at the site of the wound, it has to be modified. So prothrombin gets modified by vitamin K. Vitamin K is known as the clotting vitamin, all right? Vitamin K is required by an enzyme that puts an extra carboxyl group on the side chain of glutamate. So prothrombin has several glutamates. This enzyme that uses vitamin K, grabs a hold of prothrombin, grabs a hold of vitamin K and it puts additional carboxyl group on the side chain of glutamate, okay? Here's a regular glutamate side chain, here's the addition of a new carboxyl group on it, okay? 
So in black, you see the regular, I'm sorry going up here, the regular side chain of glutamate, all right? And now this guy's had an extra carboxyl group added to it. Why is that important? Well it turns out that when that extra carboxyl group gets added, prothrombin can all of a sudden bind calcium very, very well. It can bind calcium very, very well. With only one carboxyl group, prothrombin doesn't grab calcium worth a darn, all right? But with two carboxyl groups, they sort of gang up on calcium and hang onto it, okay? One doesn't do it, two are positioned perfectly to bind calcium, calcium is charge plus 2, each carboxyl group is charge minus 1, they form a very nice bond. Why is that important? Well at the site of the wound, we've got cutting, we've ruptured open cells and we've exposed a bunch of calcium, okay? So right at that site of the wound, we're going to have an abundance of calcium there and that calcium is going to attract prothrombin. Prothrombin will be concentrated at the site of the wound. Now, prothrombin sits there and waits for all of the other zymogens to get activated to get activated and finally it gets activated, making thrombin. And what's thrombin going to do? Well thrombin is gonna convert fibrinogen into fibrin and right in the site of the wound, that polymer is going to form. So there's other systems the body uses but one is this one. It's vitamin K dependent, it's why we have to have vitamin K to have efficient blood clotting, okay? Yes? Student: So when...[inaudible question] Ahern: Okay, so there's many causes of stroke, but one of the most common causes of stroke can be the formation of a clot in a place that would stop blood flow to a vital organ like the heart or the brain, okay? And yes, those do happen and those are a problem and to prevent the formation of clots in places where you don't want them, people are given what are called blood thinners and I'm going to talk about that in just a second, okay? Yeah, please. Student: So if you're a hemophiliac, what goes wrong? Ahern: If you're a homophilic, what goes wrong? There's several places in the scheme where you can be lacking an enzyme genetically. So if you're lacking a critical enzyme, and there's several places where this can happen, if you're lacking a critical enzyme in that activation pathway, you may not be able to convert zymogens and that's going to stop the whole cascade and you're literally going to bleed to death if you don't have that factor. But there's several places where that can happen. Okay, so a very interesting phenomenon, a very important phenomenon, it has a molecular basis, when I talk about vitamin K, this is what vitamin K looks like, okay? Vitamin K is needed by that enzyme that puts the carboxyl groups in the side chains of glutamate of a prothrombin, all right? Blood thinners, okay, the things that people refer to as blood thinners, resemble vitamin K. And the enzyme binds those molecules and when it binds those molecules, it cannot put a carboxyl group on the side chain of prothrombins. Would you describe these guys or competitive or non-competitive? These are competitive. They resemble in some way vitamin K. They're competing for the same site. Warfarin is also known as rat poison. That was the original use of warfarin. Oh my God, if I poison rats, am I going to get blood all over my house? No, do you know why? Internal bleeding, yeah. So when you really thin the blood a lot, what happens is the most common thing that happens is the slightest bruise can kill you. 
So people that get put on blood thinners, okay, are, they have to do what we call titrate the thinner. We don't want to give them too much thinner because we will kill them if we thin their blood too much. They will bleed to death internally. So a physician who's giving a person thinners will measure what's the clotting ability of this person. You're trying to lower it but not stop it because you don't want to completely stop it or you're going to kill the person, okay? Warfarin and dicumarol both are very effective in this respect. They both do reduce the clotting ability and people, for example, who had a stroke or have other problems. I have relatives who have phlebitis. Phlebitis is a clotting disorder in the legs and they get put on blood thinners to keep them from forming these clots in their legs and again, they have to balance the right amount so they don't give them too much. At the same time, they want to stop the clotting as much as they can. Student: A DVP? Ahern: I'm sorry? Student: A deep vein thrombosis. Ahern: A deep vein thrombosis, uh huh, yeah. Okay, all right, let's see here. What was I going to say? All right, so that's how we form clots and that's a fairly cursory look of how we form clots. I also want to say a word about how we get rid of clots because as I said, the body has to not only make things, it has, if it has something, a switch, to turn something on, it has to have a switch to turn something off, and so how does it get rid of clots that it forms? Well it turns out our body has an enzyme that does this very, very well. The enzyme that dissolves blood clots is also a protease. And it's known as plasmin. PLASMIN. Plasmin. And plasmin is present in the blood stream, not as plasmin, but as plasminogen because we want to have that available so that we can activate it when we want to dissolve the clot and make that, alright? Now how do we activate plasminogen? Well that's activated by another enzyme known as TPA, which also stands for tissue plasminogen activator. Tissue plasminogen activator is also a protease. We see a lot of proteases involved here. And the effect of this protein is remarkable. Now TPA has the historical note that it was the first genetically engineered protein that was made available for human use, okay? It was actually the first protein that Genentech, you've heard of Genentech, that was the first protein, they were the first one to get that approved. TPA is a very powerful molecule. It needs some activation as well and we're not going to talk about how all that occurs, but TPA basically, if you give TPA at the site of a blood clot, it will activate plasminogen at that site and effectively break down the clot. Now it's not given routinely because as you can imagine you could have some problem turning this on like giving people too much blood thinner. But at the site of a clot, this guy can convert a plugged artery in the heart to complete flow through in a matter of minutes. So for a person who has a blocked artery because of a clot, TPA can be a life saver. In some cases, TPA is actually given to people after they've had a stroke in hopes that if there are small clots in the brain or something, that they can be dissolved very quickly and readily. That they can actually alleviate the effects of a stroke and in many cases, that actually can have a very positive effect. 
As I said, it's used very carefully because again, it's a very, very powerful substance and we don't want to be indiscriminately breaking down clots that might otherwise be protecting us. Yes, sir? Student: I know that in situations like you're describing, they describe that as a clot buster. What portion of that is TPA, or are there other components that we use? Ahern: His question is, when you hear the term clot buster, does that refer to TPA or other things? And there are other things that can be used as well, but TPA, the term clot buster is just a generic term. TPA is definitely a clot buster, yes. Yeah, back there? Student: How long does TPA remain active in the bloodstream? Ahern: That's a very good question. How long does TPA remain in the blood stream? And I don't honestly know the answer to that question. So that's the activation. The inactivation of blood clotting, it's a pretty phenomenal process I think. If there are no other questions, I'll move forward to carbohydrates. One other question back here. Elliot? Student: What was the enzyme that catalyzes the vitamin K? Ahern: Yeah, his question is what enzyme catalyzes the carboxylation of prothrombin. I didn't give you the name of that so you're not responsible for that. Vitamin K is a cofactor for that enzyme, though. all right, we turn our attention now to a subject that most students tend to like because this subject is something I've covered before in organic chemistry. This structure of carbohydrates and it's almost all focused on structure. So we'll say a lot about the structure of carbohydrate, there are a lot of terms that are here, and it's very straight forward kind of stuff. So I'm going to go through it sort of quickly but also hopefully not too fast to run over you with that. We talked about carbohydrates. Carbohydrates are obviously important molecules for us. Carbohydrates are one of our main sources of energy. They are in fact our primary source of quick energy. Carbohydrates include sugars, they include polymers of sugars, and they also include modified forms of those sugars. The term carbohydrate actually tells us what the structure of the molecule is. Carbo referring to carbon, hydrate referring to water. Carbo-hydrate is basically the structure of these molecules. For example, look at the structure of glucose, the structure of glucose is C6H12O6. You don't have to write that down but I could easily write that CX H20X. In its case, the X is 6. That's a 6 carbon sugar. Well it would be C4H8O4. So it's a hydrate of carbon, that's what a carbohydrate is. Water to carbon. Probably never thought of that before. Well the first term I want to introduce you to with respect to carbohydrate, and by the way, we also use the term carbohydrate, we use the term saccharides. Saccharides are the same as carbohydrates. SACCHARIDE. Saccharide. A saccharide literally means and I think it's Latin, sweet taste. Sweet taste. So carbohydrate, saccharide, same thing. Well let's look at a couple of structures of very simple carbohydrates or saccharides. These are three carbon molecules. In this case, we would have C3H6O3. We notice that they are similar in structure but not identical. We see first of all that this guy is a ketone and these guys over here are aldehydes. Ketones vs. aldehydes, right? And if you look at the two on the right, they are both aldehydes but they are slightly different in their three dimensional configuration. 
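Since the "hydrate of carbon" pattern just described comes up again and again with the trioses, tetroses, and hexoses, here is a tiny sketch of the CxH2xOx formula; it only covers unmodified simple sugars (deoxyribose, mentioned a bit later, breaks the pattern because it is missing an oxygen).

```python
def carbohydrate_formula(n_carbons):
    """Empirical formula CnH2nOn for an unmodified simple sugar."""
    return f"C{n_carbons}H{2 * n_carbons}O{n_carbons}"

print(carbohydrate_formula(6))  # C6H12O6, e.g. glucose or fructose
print(carbohydrate_formula(4))  # C4H8O4, a tetrose
print(carbohydrate_formula(3))  # C3H6O3, the trioses being discussed here
```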
We remember that a carbon that has 4 different groups attached to it can have those groups attach in three dimensional space in two different ways. And here's where you're going to like biochemistry because biochemistry, very simple people, we like to think of the terms D and L to describe those. And you'll see this is a great simplification in terms of describing their overall structure. The ketone doesn't have, in this case, doesn't have a symmetric carbon. There is no carbon that has 4 different groups attached to it, so there's only one form of this three carbon ketone. A carbohydrate that has a ketone bond in it is called a ketose. KETOSE. Whenever you see the letters "ose" at the end of a name, we're talking about a carbohydrate. A ketose is a general name for a carbohydrate that has ketones. Fructose is a specific ketose. And I'll show you the structure of that. On the other hand, if instead of having a ketone bond in it, that the carbohydrate has an aldehyde bond in it that structure is known as an aldose. ALDOSE. And again, that's a general term for a carbohydrate that has an aldehyde bond in it. We can further delineate the names of these guys by describing the numbers of carbons that they contain. The guys I just showed you are three carbon molecules. They're known as trioses. I could describe them as an aldotriose, or a ketotriose, depending upon whether they had an aldehyde or ketone bond in them. If they have four carbons, they're known as a tetrose, five a pentose, 6 a hexose, 7 a heptose, 8 an octose. We don't generally see carbohydrates with single units containing more than 8 carbons. But we will see polymers of some of them that have 6. Now when we look at the different structures of carbohydrates, we see that there are a variety of names that can be used to describe these. Let's start down here. This guy down here shows those two aldoses that I showed you before. One is known as D-glyceraldehyde, the other is known as L-glyceraldehyde. You'll notice the structure. C3H6O3. They are mirror images of each other because again, that relates to the three dimensional arrangement of those structure, those substituents on the asymmetric carbon. We draw them in simple terms. We're going to then draw them three dimensional. We can draw them like this where we take the asymmetric carbon and we put the hydroxyl on the right side vs. putting the hydroxyl on the left side. In general, when we look at the structure of carbohydrates, and we decide if it's D or L, we look at the next to last carbon. If the OH is on the left side of the next to last carbon, it's an L sugar. If it's on the right side of the next to last carbon, it is from the bottom, it's a D sugar. If we orient the OH on the left side of the next to the last carbon from the bottom, it's an L sugar. If we orient it on the right side of the next to the least carbon from the bottom, it's a D sugar. Two carbohydrates that are mirror images of each other are called enantiomers. Yes? Student: Are there biases in proteins? Ahern: Are there biases? Do we see carbohydrates being in one vs. the other? We do tend to see many more in the D form than in the L form. Yes, we do. But it's not as strong as we see with amino acids and other things. But D is very strongly favored. Two molecules are enantiomers if they're mirror images of each other. These guys are mirror images of each other. Now, [inaudible] uses this term "constitutional isomers." I don't like that term, so we're not even going to hold you responsible for that. 
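Since that D versus L rule comes up again and again, here is a minimal sketch of it in the same illustrative Python style. The representation is an assumption on my part: each sugar is written as a list of 'R' or 'L' entries for the hydroxyl on each asymmetric carbon, read from the top of the Fischer projection down, so the last entry is the hydroxyl on the next to last carbon from the bottom.

def d_or_l(oh_sides):
    # The call is made from the hydroxyl on the bottom-most asymmetric carbon,
    # which is the next to last carbon of the chain: right means D, left means L.
    return 'D' if oh_sides[-1] == 'R' else 'L'

print(d_or_l(['R']))   # D-glyceraldehyde: its one asymmetric carbon has the OH on the right
print(d_or_l(['L']))   # L-glyceraldehyde: the mirror image, OH on the left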
What is a stereoisomer? Stereoisomers are molecules that have the same general structure, that is they're both C6H12O6. They're both aldoses. But they're not mirror images. Look at this. This guy is not a mirror image of this one. They are stereoisomers of each other. And another term is used, it's not on here, actually it is right here, they are called diastereisomers. DIASTEREOISOMERS. So stereoisomers will include enantiomers. They will also include diastereoisomers. Now notice what I said had to be for a diastereoisomer. They had to be the same kind of sugar. In this case, they had to both be aldoses. They had to have the same number of carbons but they're not mirror images of each other. Yes, sir? Student: [inaudible] Ahern: I'm sorry? Student: There are 7 oxygens instead of 6. Ahern: Oh, that's a very good question. That's actually incorrect. I didn't even notice that. Obviously the book didn't either. That should only be an H, yeah. Good eyes, wow. I've stared at this I never noticed that before. When I was working on a textbook, I was an author, a co-author on a textbook about 10 years ago and I was on the third edition of the textbook and so they, you know, I was reading all the things that the other authors were writing and so forth and we use a lot of the same figures in our textbook that have been used in the previous edition of this textbook. And so I look at this one and this one figure and I said, "this is ridiculous, "it's a carbon that's got 5 bonds." [class laughing] And so I go to my co-authors and I said, "this carbon has 5 bonds." And they said, "oh my God, it's been in the past "2 editions and nobody's ever noticed it." So you found 5 bonds, so you found an extra oxygen. Good. We should contact, we've seen other errors in this edition of the textbook, so that's kind of bad. This is the first time this figure has been used in this edition of the textbook, so we didn't used to have this figure. I kind of like this figure and there's my little... Now, two other terms. Stereoisomers include enantiomers and they include diastereomers. Diastereomers include epimers. And epimers are two sugars that again have the same kind, they're both in this case aldoses. They have the same number of carbons. They only differ in configuration by one hydroxyl. So if we look at this guy on the left, on the left. On the right, on the right. On the right, on the right. The only place they differ is right here. These two guys are epimers of each other. They're not mirror images. They're not mirror images of each other. They're epimers. They're only different in the configuration of carbon number 2. Last, anomers are also diastereomers. And anomers arise from the different configuration that comes upon cyclization. I haven't said anything about cyclization so I'm going to show you that in a minute. I'm going to introduce the term to you right now and then I'm going to come back and show you more detail in a minute. You should know what an enantiomer is, you should know diastereomer is, you should know what an epimer is, you should know what an anomer is, you should know what a stereoisomer is. Just some basic terms of carbohydrates. Here are some common monosaccharides that we see. We call them monosaccharides because they're not polymerized. They're only existing as a single unit. Glucose is a monosaccharide. Fructose is a monosaccharide. Galactose is a monosaccharide. Ribose is a monosaccharide. Deoxyribose is actually an oddball because it's lacking an oxygen. 
That's why we have the deoxy part. We show it to you here because obviously this an important constituent of DNA. It's what gives DNA the D part of its name. When we start looking at sugars, carbohydrates here, you started thinking, "okay, is Kevin going to make us "know all these structures?" Well there are hundreds of possible carbohydrate structure, that pained look that says, "I really don't want to know that, right?" I'm not going to make you know all those. But I will make you know the structures of the important ones. And the reason I make you do that is you're going to need to know them in other classes. So you should know the straight chain and the ringed structure forms of glucose, galactose, fructose, and ribose. Ring and straight chain, glucose, galactose, fructose, ribose. Now these actually are quite common. They're quite similar to each other. Let's look at glucose and fructose for example. The only difference between them is glucose is an aldose and fructose is a ketose. Because if we look at the configuration of the OH groups, this one is lacking an OH in position 2, so we go to position 3, it's on the left. Position 4 on the right, position 4 on the right. Glucose and fructose are identical except for whether they're ketose or aldose. Galactose is an epimer of glucose. It's an epimer of glucose. The only place that galactose differs in configuration from glucose is right here. These are very easy to learn in terms of structure. I learned glucose is right, left, right, right. Right, left, right, right. I can always draw glucose and then remember that this guy's going to be identical. I remember that galactose is going to differ at carbon number 4. 1, 2, 3, 4. There's the difference. So you can put these to memory pretty easily. Now, you learn in organic chemistry, I trust that these 6 carbon or 5 carbon rings have a geometry such that they can actually come back around and interact with each other to make ring structures. The ring structures are named according to their resemblance to a couple of molecules. Pyran is a molecule that looks like what you see on the top. It has 6 carbons, I'm sorry it has 5 carbons and it contains an oxygen. Furan has 4 carbons and contains an oxygen. This is a 6 membered ring, this is a 5 membered ring. We use these names as our way of describing sugars. Sugars that form six-membered rings we refer to as pyranoses. Notice I said 6-membered rings. 6-membered rings have 5 carbons in them. Sugars that form 5-member rings are known as furanoses. 4 of those are in there. Can a 6 carbon sugar form a pyranose? Can a 6 carbon sugar form a furanose? Yes. We're only counting them with carbons in the ring. Other carbons can be sticking off as we will see. Okay, so how do these form? Well, aldoses form a structure known as a hemiacetal. A hemiacetal arises by reacting an aldehyde with an alcohol. An example would be glucose can make a hemiacetal structure because glucose is an aldose. A hemiketal arises from taking a ketone and reacting it with an alcohol to make a hemiketal. And fructose is a ketose. So it can form a hemiketal. Let's watch this cyclization process happen. The cyclization process happens as you can see right here. Here's the 6-member glucose. The 6-membered glucose has a geometry such that this hydroxyl group on carbon number 5 right here can get very close to that aldehyde group on carbon number 1. When it does, it can make a ring structure. 
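Before we get to the rings, here is one more minimal sketch, in the same illustrative style, that ties together the terms enantiomer, epimer, and diastereomer. The hydroxyl patterns below are just the right/left readings given in the lecture for carbons 2 through 5 of glucose and galactose; the function name and the L-glucose pattern are my own illustration, and the comparison assumes two sugars of the same length and the same aldose or ketose type.

def relationship(a, b):
    flip = {'R': 'L', 'L': 'R'}
    if a == b:
        return 'identical'
    if all(flip[x] == y for x, y in zip(a, b)):
        return 'enantiomers'    # mirror images at every asymmetric carbon
    if sum(x != y for x, y in zip(a, b)) == 1:
        return 'epimers'        # differ in configuration at exactly one carbon
    return 'diastereomers'      # same kind of sugar, but not mirror images

glucose   = ['R', 'L', 'R', 'R']   # right, left, right, right on carbons 2 through 5
galactose = ['R', 'L', 'L', 'R']   # differs from glucose only at carbon 4
l_glucose = ['L', 'R', 'L', 'L']   # the mirror image of D-glucose

print(relationship(glucose, galactose))   # epimers
print(relationship(glucose, l_glucose))   # enantiomers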
And it turns out glucose can actually make two ring structures, because what's happening is this guy, which only had 3 groups on it, now is going to have 4 groups on it. We don't have the double bonded oxygen anymore. If it forms in this configuration such that the OH is down, we refer to that configuration as the alpha configuration. If it forms such that the hydroxyl is up, we refer to it as the beta. Now, at this point, I can tell you these two guys we see here on the right are anomers. Their only difference is whether they are alpha or beta. Everything else is the same. So we can have alpha-D-glucopyranose, we can have beta-D-glucopyranose. Those two will be anomers. But if I had alpha-D-galactopyranose and beta-D-glucopyranose, they would not be anomers because they would have other differences. So anomers can only differ in the configuration of the anomeric carbon. And by the way, the anomeric carbon will always be the carbon that had the double bonded oxygen. I'll repeat that. The anomeric carbon will always be the carbon that had the double bonded oxygen. That's true whether it's an aldose or whether it's a ketose. So we have a glucopyranose here, we have a glucopyranose here. Alpha vs. beta. Yes, Shannon? Student: So is an anomer also an epimer? Ahern: Is an anomer also an epimer? Technically it is, but we don't tend to use that term for that. We use epimers for carbons other than the anomeric carbon. All right, when we have a ketose like fructose, what happens? Well, looky here. Here is our anomeric carbon. There is a double bonded oxygen. Here is carbon number 5. We see this same sort of intermediate structure forming here and voila. We have in this case alpha-D-fructofuranose. Alpha meaning the hydroxyl is down. Notice it's the hydroxyl that determines the alpha or beta designation, not the carbon that's there. Can we have a beta-D-fructofuranose? Yes, we can. Your book has just gotten lazy and hasn't drawn it. If we had the beta, we would have the same structure except the hydroxyl would be up and the CH2OH would be down. Now, notice this is a 5-membered ring. It has 6 carbons. Carbon 1, 2, 3, 4, 5, 6. If you want to spare yourself grief on the exam, always number your carbons. You'll always bail yourself out. Don't forget to number your carbons, because one of the most common things I'll say is, "what's the structure of fructose?" A five-membered ring, 1, 2, 3, 4, 5, I've got it. Well, you know that fructose is a 6 carbon molecule. It's a hexose, right? So don't forget to number your carbons. That will always bail you out. An important thing about these structures is that they are reversible. They are reversible. In solution, which is the most common way that we have glucose, these molecules can go back and forth from one structure to the other structure quite readily. They can go from one structure to the other structure quite readily, and it turns out that there's a little bit of steric hindrance for the beta form. I'll show you that in just a little bit. And so we tend not to see as many of these in the beta form as we see in the alpha form. But nonetheless, we do see some in the beta form. In solution it's going back here, over here. Back here, over here. One of the criteria for the reversibility of this process, okay, is the ability of these guys to go back to the straight chain. If they can't go back to the straight chain, they can't flip. These guys can flip. Well, what stops it from going back to the straight chain? If I modify the hydroxyl.
If I modify the hydroxyl, that process is not reversible and it will be stuck in that configuration. It will stay. So let's say I put a methyl group in place of that hydrogen right there. If I put a methyl group right there, it can't go back. It will stay stuck in the alpha configuration in this case. Let's say I put a nitrate right there. Same problem. It's not going to go back, it's going to stay in this case in the beta configuration. So if I do anything to that anomeric hydroxyl, and notice it's the anomeric hydroxyl, I will lock it in whatever configuration it happens to be in. Student: So they switch between alpha and beta, they don't switch between straight and ring... Ahern: Well, they can make straight, yes. They can make straight, yeah. They have to be able to go back to straight and back to here. Don't waste your time on this slide. [laughing] Since I've shown it, I'll tell you. I'm going to show it to you, but you're not responsible for it. This is just showing you that yes, these guys can make 6 membered rings just like glucose can make 6 membered rings. It's not the most common form we find fructose in, and I think it just adds another level of memorization that you don't really need. So we're not going to worry about the 6-membered rings of fructose, okay? These are the ring structures of sugars that we find very commonly. Yes, you're responsible for ribose, yes you're responsible for glucose, yes you're responsible for fructose, and yes you're responsible for galactose, alphas and betas. And again, these are very, very similar to each other, but don't forget to number your carbons or you'll get lost. Look at fructose. "Whoa, those aren't identical!" Yes they are. There's carbon 1, 2, 3. There's carbon 1, 2, 3. Hydroxyl up on 3, hydroxyl up on 3. If you don't number your carbons, you will get lost. Yes? Student: So the ribose ring doesn't have an alpha or beta indication which [inaudible]. Ahern: It actually should have that. If I were to say, "what configuration would that be?" What would you say? Student: Alpha. Ahern: That's beta, that's beta. They do have alpha and beta. What we see in nucleotides is it's always in the beta. I think that's why they haven't drawn it or given that designation, but you're right, it should have that designation on there. So this would be the beta-D-ribose right here. Good, you guys have good eyes today. Let's say a word about steric hindrance, I had mentioned it. We talked about steric hindrance earlier in the term, and steric hindrance relates to nuclei or electron clouds that get too close to each other. And we saw that there's a tremendous amount of energy that opposes putting things too close together. This is a schematic way of looking at a sugar that has a couple of groups that are kinda butting heads with each other. And we can imagine that if there was a way for the sugar to avoid butting heads, it probably would do that. And in the case of the anomeric carbon, it's actually fairly readily able to change that. This shows glucose in the beta configuration. In the beta configuration, we can see that this guy over here has a hydroxyl that is sort of interacting with this CH2OH on carbon number 6. They're too close to each other. We describe the structure this guy has as a boat, because if we trace the path of the carbons, the carbons look like this. There and then down and then across like this. It looks like a boat, all right?
The same beta, this is beta, this is beta, they're both betas, can twist bonds and rearrange itself so that that interaction does not occur. This is a beta form like this is a beta form, but this is a beta that has a different conformation, and it's flipped itself so that hydroxyl which was in the way up here is flipped down over here. And it should say down equatorially instead of being flipped up. Now that conformation is called the chair because it has sort of the shape of a chaise lounge. There's the back, there's the butt end part, and there's where you put your feet. You have a question? Student: Yeah, over here, that hydroxyl [inaudible]. Ahern: These two guys do interact actually right here. There is interaction. There's much more interaction here than there is here. And so this is the favored of the two structures. The chair form is favored over the boat form because of that. So if you compare this to this, this has much more interaction than this one does. Yes, Connie? Student: When you say the chair form is favored, do you mean in all cases or just... Ahern: In this particular case, but in general, when you can arrange things away from each other, you're going to be better off. I'm just illustrating this as one example to show you how a boat vs. a chair form might be favored for a structure. Student: Will we need to put these in chair or boat form on the exam? Ahern: Will we need to draw a chair or boat form on the exam and get all the axial and equatorial positions right and so forth, and the answer is no, I think that's kinda busy work. But I think you should certainly know what a chair form is. And I think you should know what a boat form is and why one vs. the other might be favored. Question back here? Student: So maybe I missed it, but how do you determine whether it's an alpha or beta [inaudible]? Ahern: The alpha has the hydroxyl down as we draw it and the beta has the hydroxyl up. That's a good place to stop, I hear the rustling. I'll see you guys on Friday. Captioning provided by Disability Access Services at Oregon State University.
now before we start actually talking about the medications I want to go through some other basic things that are just very important that have to do with medications and also some background sort of physiology Anatomy stuff that's going to make a lot of what I talk about make more sense so um the first thing I want to point out to you is that I passed around you either got one of these or you got a small card with a little plastic sleeve and what this is is something for you to make a list of your medications this one you write them in here and tear this out and fold it up and put it in your wallet or purse um so that you always have a list of your medications with you updated list names and dosages and if you come out of this class with nothing else but but learning the importance of that and actually going home and doing it then this class has been successful it is very important that you keep a current list of your medications on you at all time um there's a couple of good reasons for that one is the worst case scenario is you know you're in an accident and you can't speak for yourself very quickly it gives them information about you that um could be very important in your care uh particularly if you say you're on blood thinners or something like that so most of these these cards that have you write down your medications also have a place for you to write in your name any medical diagnosis you may have had so if you've had heart surgery or heart attack or whatever um so very important to have to make sure you get proper care the other reason it's important is because it helps the folks in health care that are taking care of you to use the time more effectively in taking care of you and I tell the the story of you you all went through when you first started cardiac rehab you had a 1-hour appointment that we do as a orientation and um I had a lady show up once with her medications in a tackle box so I had this big box of pills some she was taking some she hadn't taken for years and then she said oh but I'm taking this but that's at home on the counter and she didn't know the name of it and there was some in the bathroom she was taking and she didn't know the name and we had to get on the phone and call home it took me over a half an hour just to figure out what she was taking and and I have to think that that lady wasn't also maybe taking her medications in the way that they were prescribed because she didn't have an orderly system for doing it um so you know my time would have been spent more effectively you know helping her her her first visit get to know us and learn more about her but instead I was trying to untangle this web of medications so when you go in to see your physician you know hopefully they have an updated list but they don't always if you have a few different Physicians if you go in with that list and uh and it's updated and they get the information within a you know a minute is all it should take they have more time talking to you about the things that you want to talk about so um um very important the other thing I want to make sure that you have on your med lists are any over-the-counter medications you take regularly painkillers herbal medications those herbal medications um interact with your medications that are prescription medications sometimes and sometimes in ways that are detrimental to your health I was at a conference this last year and they were speaking about cardiovascular medications and the pharmacist told a story of a fellow that was taking goo and 
this fellow also had a seizure disorder for which he took medications, and he did not tell his doctor or his pharmacist that he was taking ginkgo for his memory, uh, for whatever reason. But he swam regularly, and he was swimming one day, he had a seizure in the pool and he drowned. No one was looking at the time and he ended up drowning, and on autopsy they found out that he'd been taking ginkgo. The problem with it was ginkgo interacts with the medication he was taking to make it less effective. Had his pharmacist known that, had his doctor known that, they could have upped his medication, and he maybe wouldn't have had that seizure and would still be around. So you know, that's a little bit of an extreme example, but those sorts of things happen, so please let your doctor, your healthcare providers, know about those herbals that you're taking as well, okay? Um, all right, so, oh, traveling, I want to talk a little bit about traveling too. I think you're probably all aware of this, though. When you travel, especially if you're on an airplane or wherever you're going, keep your medications with you in your carry-on. You don't want to put them in the suitcases that get checked, because you may never see them again, and you may get somewhere where you need them. Um, also it's a good idea to keep them in their bottles, because if you have one of those little dividers and you've got two weeks worth of medications in there, you know, when they go through your luggage, if they do, all they know is you've got a bunch of pills that they don't know what they are, and that could delay your trip. So, um, they're not always marked, you know, the individual pills, so if you keep them in the bottles you're less likely to have problems with that, okay? Um, all right, so I want to start, um, talking about a couple of things, just background, so that some of what I talk about with the medications will make more sense. Okay, so if you look at your handout, there's a picture in there, I think it's on the back page, is it where you flip it over, no it's not, I didn't put it on there, I'm sorry, so it's up here, we can look at it up here. So this is a picture, if you were to look inside, um, page three, an artery, oh it is in there, okay, thank you, page three. It's small, usually it's a little bigger, sorry about that, but you've got a bigger one up on the screen up here. This is a picture of an artery if we were to, you know, cut it and look down inside of it, and you can see that your arteries have different layers in them. They're not just, you know, like a metal pipe, all the same material all the way through. They have different layers of material there, and one of these layers here is a layer of muscle cells. And that's important to know, that there's a layer of muscle inside your artery, because it's telling you your arteries have the capacity to constrict and get smaller, or dilate and get bigger, because of that layer of muscle. And a lot of what we're going to talk about with the medications is that layer of muscle being affected in ways that make it dilate or constrict, and your medications affecting blood pressure in that way. So we'll talk more about that, but I just want you to be aware that that's there. The other thing I want to talk about is the difference between angina and a heart attack, because there's a lot of confusion when folks first join our program about that. I've had people tell me I had at least, you know, 100 heart attacks last year, because they think every time they have angina they're also
having a heart attack so angena is the symptom that your heart gives you um you know if you get a symptom um that is saying I'm not getting good blood flow I'm not getting enough oxygen and that angena symptom could be uh some sort of discomfort in the chest or neck or arm or arms between the shoulder blades um and I say discomfort because it can show up a lot of different ways it could be a tightness a a a squeezing a pressure a pain a vague I can't quite describe it but something's not right in this area you know so um that's Anga now that might happen with this is going on someone has a blockage in an artery where it shut down blood flow enough that they're not getting enough oxygen to the muscle and the muscle's calling out you know somebody help me so if you get a blockage that is severe enough like what's happening here where the the little layer on this lesion has ruptured and all the stuff in that that plaque is spilling out inside the artery um if that happens what happens is within a few minutes you get a blood clot here to help help heal your body thinks it's healing but it's really not if you get a clot inside an artery you go from maybe this is a 20% blockage to 100% blockage within minutes and that's how people have heart attacks a heart attack means you have damage to the heart muscle from that lack of blood flow the the blood flow um you know decrease was severe enough to actually cause damage to the muscle so that's a heart attack so you know angen means you could be heading toward a heart attack it's your warning sign um your body responds to any sort of damage right by trying to keep you from bleeding by forming a clot if you cut yourself or if you get hurt internally it tries to do that as well and that's what's happening inside this artery here when that stuff breaks and and falls out inside the artery um platelets rush to that site and some of the medications we're going to talk about work in ways by slowing or helping to stop that response so you don't get a clot in there should should a plaque rupture okay the other thing that I want to point out to you is this little equation I wrote out right here and what that is saying is your heart's oxygen needs so the mount oxygen your heart is needing per minute to do its work basically equals your systolic blood pressure that top number and your blood pressure times your heart rate so it makes sense if you think about it your heart needs more oxygen whenever your blood pressure or your heart rate is up when you're exercising what happens heart rate goes up blood pressure goes up your heart needs more oxygen to do its work now that can be an issue if you've got blockages in your art Aries and your heart's not getting the best of blood flow not like it did when your arteries were clean um and suddenly you're doing something heart rate and blood pressure are up um your heart may not be able to get enough oxygen to meet its needs and the reason that's important to know again is some of the medications work by affecting your blood pressure affecting your heart rate so they reduce the workload on your heart okay and we're going to get into those individual medications last thing I want to give to you before we start talking about the actual medications is this phone number and this phone number is um the a National Prescription Plan called The partnership for prescription assistance and if you call this phone number um someone will answer and they will do about a 20-minute interview with you and help determine whether you are 
eligible for financial assistance with your medications so you know if you're having trouble or there's one medication they want you to be on that's expensive and it's not covered or whatever this is a number you can call a lot of different drug companies are involved in this program and you just one place that you go make this phone call to find out what programs you qualify for so what they would do is if you qualified they would send you paperwork that you have to take to your doctor to get signed this is your diagnosis these are medications you're on and um then you send it back in um be prepared for personal questions about your income um because they'll want to know that as well okay all right can everyone see that number if you want me to it's 188 477 2669 okay so now we get to go back and talk about the meds all right so the first medication um is rapid acting nitroglycerin and uh wrap it up you everyone's familiar with this one in the little brown bottle the little ones that you put under your your tongue it's also called sublingual which means under your tongue um nitroglycerin this medication is used to help relieve angena when you're in an emergency situation would be which would be anytime you're having angena Anga is um not something we would ever want you to have I've sort of had a rash of people coming through recently saying yeah I had my stent and since then I'm having some Anga but it's just a little it's not that bad and they haven't called anybody you know I'm like well you know even a is not acceptable so um nitroglycerin rapid acting nitroglycerin is something that you might use in a situation where you're having an angen symptom and the way this works is it helps to dilate the blood vessels so if it if it dilates blood vessels and causes them to get bigger um what's going to happen to your blood pressure it's going to drop you know you've got the same amount of fluid in the system and the system's getting bigger so the pressure within the system drops so your blood pressure would would drop and that reduces the workload on your the on your heart remember if your blood pressure is lower your heart doesn't need as much oxygen to do its work so um that's basically how it works it relaxes the arteries in your heart and arteries throughout your body so it will drop blood pressure side of oh wait um how to take it is you know if you ever haven't taken it before you want to sit down the first time you take it cuz people who have taken it here can probably tell you with some people it causes a pounding headache um it also can make you feel a little lightheaded and dizzy if it drops your pressure too much you don't ever want to take your Nitro while you're driving if you get you know chest discomfort pull over to take your Nitro don't take it while you're driving um you can put one under your tongue and if the symptom isn't gone in 3 to five minutes you can put another one under your tongue and if it's still there in 3 to 5 minutes you can do another one if you're having to take that third one and your symptom is not relased D it's time to call the paramedics okay call someone to come take a look at you um all right side effects again the headache um possible dizziness if your pressure gets dropped storage these medications um the sublingual nitroglycerin in the little brown bottle is a little bit of a hot house flour of medication it's it's uh it's fragile and that it degrades easily with exposure to light or heat or air um so it needs to be protected and it will lose 
potency if exposed to to those things if you look at your bottle it has a date on it that's probably two years into the future and it's good until that date unless you open it if you open the bottle you should date the bottle write it on there when you opened it so that in 6 months you go replace it whether you've used anymore or not replace it CU it's certainly a medication you want to have potent and effective if you need it it's it can can save your life um it's you know you want to have it with you but it's it's not going to be good to have it stored in a car because it it gets very hot in the car in your pocket the pharmacist that I talked to said it's probably okay in your pocket it's going to be all right although it's going to be jostled a lot so you know take a peek in there every once while make sure it's just not a lot of powder because it will break down um another option is the sublingual sprad it comes in a spray bottle looks like one of those little you know Baka sprays I used to sell um and this one doesn't have the same thing about light um and air so it will last a little longer the only thing is if you want to use the spray instead of the pills you need a prescription that specifies spray um it does cost a little more and I think insurance companies are hesitant to just give it out unless they've been prescribed specifically to get this so if you're wanting to if you use your glycerin and you don't want to be replacing it all the time you you might want to ask your doctor about the spray any questions about about that you know one other comment just about the the sublingual nitroglycerin is I I sorry surprises me through the years I've had a few people say to me um yeah I had some chest pain last night but I didn't want to take that Nitro it gives me a headache you know you know you've got a headache versus a possible heart attack so you got to weigh those things so I know it's not comfortable but the headache is not dangerous um and the heart attack is so keep that in mind if you're ever weighing taking those nitroglycerin pills okay there's also a long acting nitroglycerin and this type of nitroglycerin is not used in an emergency situation this one is is not to treat angena this one is used in prevention of Anga it um some people have chronic Anga maybe they've had a bypass surgery maybe they've had two bypass surgeries or um you know their their disease is such that they can't do bypass surgery and the doctors say you know we've got Anga but we just think the surgery is too risky or whatever it's too risky but we're going to manage it medically and the best they can do is that they still have a little bit every day some people take this for chronic Anga um some people take it temporarily after bypass surgery um because they use an artery as one of their graphs and they don't want that artery spasming down that layer of muscle um and they do that sometimes um initially and they might temporarily put someone on nitroglycerin right after bypass surgery um some folks use it for it's used for blood pressure control don't see that as often but some sometimes it's used for blood pressure control it comes in a patch that you you wear um capsules tablets um if you're in the emergency room sometimes they use paste you know they put on your skin which is the same thing basically it's in the patches people on this medication can develop tolerance to the medication and um meaning it's not effective anymore it's not doing what it needs to do and their body sort of built up a 
tolerance to it and they need to go through a uh an interval where they don't take the medication where their physician may say I don't I don't want you to not take this one at night you know we're going to take you off of it for a few days and they stop taking it at night temporarily and then suddenly it starts working them for them again once they start back on their usual dosage so that happens occasionally now nitroglycerin everybody's seen the commercials for erectile dysfunction that they run all the time and if you listen to them carefully you've heard that line where they say you know do not take these medications like Calis Levitra Viagra um in conjunction with nitroglycerin because it can cause a dangerous drop in blood pressure and the reason for that is the drugs this Calis Viagra and all those that work for erectile dysfunction you know they're they're treating if someone cannot get an erection well what causes an erection it's it's an increase in blood flow so those medications work by their vasod dilators they cause the blood vessels to dilate and get larger it's the same thing basically that Nitro does just in a different way so if you're taking two medications at the same time that are vasodilators and cause blood vessels get larger blood pressure to drop it can be very dangerous people have died from that so you don't want to take these medications if you're on a nitroglycerin product um you know sometimes it you know it's a joke and these medications people get them on the black market and think of them as being recreational but they really have some very strong effects in the body they're even um Viagra is used to treat pulmonary H hypertension so and you know not just men it it works on dilating blood vessels in your pulmonary vasculature so um they're they're potent o dilator so that's an important thing to know rather than taking the medications this would be something good to discuss with the physician um to see if there's another alternative if um if erectile dysfunction is is a problem for anyone here and you are um taking nitroglycerin as well okay the next One beta blockers and I would bet at least 75% of you are on beta blockers um there's a listing of beta blockers here and I don't know if you noticed on these slides you know there wherever there's an aster it says this medication is available in a generic form um just because there's not an aster on here doesn't mean it's not generic now because things change so um you could always ask your pharmacist if you want to get a generic form of a medication because the generics are cheaper right so um so don't take our list of meds and our our asteris here as gospel because it may have changed some um and it will change as the years go on we also don't have a complete list of medications here for example on the beta blockers there it doesn't list Ender all which is a beta blocker and if you notice all of these products um the generic name in the beta blockers it ends with a LOL so if your medication ends with LOL it's probably a beta blocker you know the the generic chemical name okay so the beta blockers are used to treat high blood pressure it's used to treat angena they're used also to treat arrhythmias which isn't up there and for other things um some people that have familial Tremors or migraine headaches they're used for all sorts of different things but they're classified as an anti-hypertensive meaning they're used for high blood pressure to treat high blood pressure um how they work is they decrease your 
heart rate and they decrease your blood pressure by blocking the effects of adrenaline in your body so if you're on a beta blocker you've probably been told if you haven't already noticed your heart rate doesn't go up as much as it used to you know your heart rate might be 60 at rest and when you exercise it might hit 70 you know or before it would have gone up in the 100 something so and it's doing this job that's what it's supposed to do um so you remember that equation if we can hold your heart rate down and your blood pressure down your heart can get by doing its work with less oxygen so um beta blockers even though they're used for high blood pressure I get a lot lot of folks coming through just out of the hospital that say you know I don't have high blood pressure I had a heart attack they put me on this blood pressure medication I've never had high blood pressure and I I don't want to take it um and and they're very upset and and it's understandable but there's there's a good reason for being on that medication remember how I said that it's um it's used for high blood pressure and a lot of other things um people who take beta blockers have a A reduced rate of having a repeat heart attack and they also have a increase survival rate overall for so whether your blood pressure is high or not if you've had a heart attack or have coronary artery disease most likely your doctor has put you on a beta blocker um again if you're following what's going on with medicine in the news you've probably heard the terms best practices or evidence-based medicine and that's pretty much the way the practice of medicine is gone is that the the government looks at all these things that work they do all these studies and they say okay take all the studies and look and see these people that are having heart attacks um who has the best outcome who survives longer who does better and they make a list of things that are called best practices and on the things for people that have had heart attacks particularly if it's been a larger one is you're on a beta blocker you're taking aspirin every day there's a list of things that generally happen unless there's a good reason for you not to be on that medication so so don't be surprised if your doctor you know put you on this you know cookie cutter list of medications that everybody that has a heart attack is on is because they the Studies have shown that people do better when they take all of those sorts of meds so um so there is a protective effect in taking the beta blockers um side effects on these medications so dizziness so and I'd rather say light-headedness you know I to me that dizziness means you know like if you had too much wine or something and things are spinning but most people I think when their blood pressure drops they start getting more of a light headed sort of thing so um if it drops your blood pressure too much you could feel light-headed or dizzy um that's something your doctor would want to know about you know they want to they want your blood pressure down but they don't want it down so much that you're having symptoms so be sure to let your doctor know if you're having something like that um a slow heart rate and a decrease in heart rate on a beta blocker is normal but we don't want it so slow that it's giving you symptoms or problems so again if you're feeling you know light headed or if it's running down in the 50s or lower while you're sitting at rest you know let your doctor know if if it's running 55 and you're feeling fine and not 
having symptoms, they probably wouldn't be concerned about it. But if it's, you know, running really low and you're having some issues, they definitely would want to probably decrease your dosage, so keep them posted on that. A decrease in endurance, and most of us aren't going to notice that because we don't push ourselves hard, but say you're someone who is used to going out and walking briskly for a few miles a day, or running, or doing competitive stuff. If your heart rate doesn't get up as high, um, that's less, you know, blood per minute circulating, less oxygen per minute, and you won't have as high a capacity as you used to. Most people don't notice that because they don't work themselves that hard, but some will, and it can cause a decrease in endurance. What most people notice is the fatigue, which is a whole other thing. Um, that's what we hear probably most commonly with the beta blockers. People will say, God, I'm just so tired, I feel so fatigued since all of this happened, and a lot of times the beta blocker is the culprit. But the thing about it is your body can readjust to that, and that fatigue will diminish with time. So you may call the doctor's office and say, look, I'm really tired and I've heard it might be this beta blocker and I want to quit taking it, and they say hang in there, give it a couple more weeks, because it tends to get better. And the reason they want you to hang in there is because that medication has such a protective effect for people, um, that they really want you to be on that one if you can be on it, so they'll ask you to hang in there a bit. Um, for some people it can cause swelling, or edema, which is, you know, some fluid maybe in your lower extremities, um, so those are things you want to let your doctor know about. Shortness of breath, and that's mainly for people that maybe have some obstructive pulmonary disease, emphysema or something, and there's a lot of people out there that have that that don't know it. They've just not been diagnosed, maybe they don't go to their doctor or they just don't want to know, um, but we frequently see a lot of people that have a smoking history, never been diagnosed with emphysema, and you might go on a beta blocker and feel more short of breath, um, because it can cause some constriction of bronchial airways. Some of the beta blockers are more selective and don't have that effect, so if someone got more short of breath on their beta blocker, you let your doctor know and they can try a different one that may not have that side effect for you. So again, another reason to, you know, keep your doctor and your pharmacist in the loop on how you feel when you're taking your medications. And this is one you don't want to stop abruptly. Um, it's the only one that I know of that can have a rebound effect, and what I mean by a rebound effect is, if your resting heart rate before beta blockers was at this level, and you go on a beta blocker and it takes your resting heart rate down to this level, and then you go, I hate this medication, I'm just going to quit taking it, suddenly your heart rate's up here at rest. It's a rebound effect, and, you know, remember that equation. If you have blockages and you already have impaired blood flow to your heart, you don't want your heart rate up here at rest, it's not such a good thing for your heart. So if someone's going off a beta blocker, it's usually a gradual wean over about 3 to 5 days, so you'd want to talk to your doctor about it so they could set up a
schedule on weaning depending on the dose that you're on, okay? Um, other things that can happen with the beta blockers: beta blockers can, after a period of time, cross over the blood brain barrier and can cause some other interesting side effects. Some people get them, some people don't. Um, for some folks, sleep disturbance and nightmares, or impotence, are things that can happen with the beta blockers. So again, you know, another thing, if you come out of here with nothing else but, you know, keep your medication list current and handy, and talk to your doctor about any side effects you think might be related to your medications, I will be a happy woman that you came out knowing all that. Um, we frequently see people trying to adjust their own medications and not talking to their doctor about it, and we've got a guy in the hospital right now that is notorious for that, and he doesn't know why he's in and out of the hospital, but he's constantly adjusting his own medications and won't work with his doctors on it. So, you know, please keep them posted with what's going on with you. Um, any questions about that one, about the beta blockers? No? Okay. Um, calcium channel blockers, and again another medication, it says used for high blood pressure, used to treat angina, because it lowers blood pressure and so it can help relieve angina. This one, like the nitroglycerin pills, is also used to prevent artery spasm. We have some folks in our program that don't have coronary artery disease, they have clean arteries, that have had heart attacks, um, because their arteries have spasmed and shut off blood flow when the muscle clamps down, and it's more often the younger folks that you might see that have that, and they'll be on a calcium channel blocker to help prevent that. So how these medications work is, well, calcium is important for muscle contraction. You have to have calcium crossing the membrane into the muscle cell, the muscle fiber itself, to cause it to contract. It's an important part of causing that contraction, it's the signal for the muscle to contract, and the calcium channel blockers slow the movement of calcium into the muscle, so there is not as much of a baseline constriction in those vessels. They tend to relax a little bit because they don't have so much calcium bombarding them, so you get a little bit of dilation in not only your coronary arteries but the arteries throughout your body, and that can drop your blood pressure. I don't want anybody to get the idea that calcium is bad for them, though, that it raises your blood pressure. That's not the way this works. You can't adjust your dietary calcium intake and cause any of this to happen, so keep taking your calcium, it's important. Um, so let's see, that's basically how it works. Side effects: again, dizziness if it drops your blood pressure too low. Um, swelling, that's that edema I was talking about, some people will get some fluid maybe in their ankles, and some folks that are on calcium channel blockers might also be put on a little bit of Lasix to help them get rid of that extra fluid if they develop this side effect. Um, constipation, because if we're slowing muscle contraction and getting muscles to relax a bit, you don't have as vigorous a movement through the intestines, because it's a wave of muscle contraction that moves things through your intestines. So, you know, if you're on a calcium channel blocker, drinking plenty of fluids, um, having plenty of fiber in your diet, maybe taking a regular
over-the-counter laxative if it's a consistent problem for you are things that are encouraged okay and here's where we first encounter grapefruit juice so some of you probably seen on your medication bottles that do not take grapefruit juice with this medication and because grapefruit has a a specific chemical in it that causes your liver to be less efficient in clearing these medications out of your system certain medications not all of them and if it's not clearing the medication out it's going to keep building up and you can get higher levels than are necessary or wanted in your system so if you um you know you take your medication every day and you have to take it every day because your body metabolizes it and clears it out so if it's not being cleared uh as efficiently you could get you know a buildup of the medications and for some of the medications that can be dangerous so some of the calcium channel blockers do that and here are a few that that do interact with grapefruit juice there are some other medications we're going to get to that have the interaction as well and how do you know what medications you you read the bottles your bottles should have a sticker on them if there's any specific like take with food do not take grapefruit juice um you know all of that sort of stuff or read the paper in insert or just ask your pharmacist these medications I'm picking up today anything special any special precautions um so that's the person to to educate you about all that so grape I'm going to kind of skip ahead um to the last few slides and just keep talking about grapefruit juice and um grapefruit juice the way I understand it is if you're having a half of a grapefruit every once in a while not a big deal but it's if you're consistently drinking the glass of juice you know 6 to8 ounces of that juice um is enough to affect the metabolism of of certain medications um I was also told at that conference I was talking about that they're finding that there are some other juices that interact with medications um and I've never seen these but cevil oranges any juice with a blend that has cevil oranges has the capacity to do this and some of the folks in here have told me that they have um been told that some of their medications that cranberry juice is off limits so you know the most I can say is you know I'm not a pharmacist and I don't have the most up to-date stuff but the bottles and your pharmacist will so be sure you're looking at those bottles to know your precautions okay um ACE inhibitors now ACE inhibitors Ace stands for Angiotensin converting enzyme so Angiotensin is a a um hormone well Angiotensin is is a chemical in your body that causes a little bit of Vaso constriction and if we can have less of that then your arteries are going to dilate a little bit get a little bigger which drops blood pressure so um if we can stop Angiotensin from being made and Angiotensin has to go from something called Angiotensin one to Angiotensin 2 and if we don't have the converting enzyme that helps with that step um you can't make Angiotensin so it's an Angiotensin converting enzyme inhibitor so we're inhibiting the enzyme that helps create the Angiotensin so if you have less of it you have baso dilation um this medication um is really helpful in treating high blood pressure people that have heart failure which means that your heart function is not as good as it should be right a normal heart when it IT pumps and and pushes The Blood Out pushes out about 65% of the blood with each beat doesn't push 
at all about 65 and that's normal if you're heart function is low if it's you know down in the 30s or lower that's heart failure and that's just meaning the heart's failing to pump as much blood as it should and frequently folks that have heart failure are on ACE inhibitors because it tends to um help that heart failure not to get worse and U people that have heart attacks specifically large heart attacks are frequently put on ACE inhibitors and the reason for that is they found that the ACE inhibitors tend to limit something that's called modeling so after someone has a heart attack particularly if it's a large one the structure of your heart muscle can change in a way where you get an area on there that um um gets a uh you can get an aneurysm you can get a thinning of the wall you can get a weak point in the heart muscle and you can get a little bit of dilation or bulge there they call that Remodeling and they don't want that to happen they want the structure to stay sound in your heart and the ACE inhibitors can help limit that remodeling and help prevent someone from having further heart failure or complications after a heart attack so the ACE inhibitors is the list over here to see if you're on one and again not an allinclusive list but um but these are ACE inhibitors the most common ones um side effects for some people um can have an elevated potassium and that's usually for folks that their kidney function wasn't great to begin with um and the most common thing we see or hear about is the cough you know it's a really well tolerated drug except for the the cough that some people will get because it can trigger the cough centers in your brain and uh so there's nothing wrong with your lungs it's just your brain telling you to cough and there'll be kind of a dry sort of cough frequently and it can be very annoying to people so some folks can't tolerate the drug because their body doesn't adapt to that and the cough continues so people that get that cough a lot of times are put on one of these medications which is an ARB and that one stand for Angiotensin receptor blocker so these medications let your body make all the Angiotensin it wants but it's going to these medication's going to block the little doorway that the medication or that the Angiotensin needs to get into to do its work you know because there's a there's a receptor that it has to get into to to to do what it needs to do but if we can block the Angiotensin it's not going to be able to cause that vasil constriction and the arteries get to relax a little bit so and here are arbs and it was my understanding that they have um similar side effects but not so much of the cth okay now we're getting into talking about the antiplatelet medications um the blood thinning medications uh those things that are important for helping to prevent your blood from clotting which if you um you know have a tendency to create blood clots or you have you know lesions in your arteries that you know they are that could be vulnerable to bursting and causing 100% blockage um they're probably going to put you on some sort of anti-platelet medication and the most common one is the aspirin so aspirin works by um helping to prevent heart attacks and strokes 50% reduction in heart attack if you're taking your aspirin daily that's amazing you know that's it's probably the single most effective drug of all these drugs we've got um so when you get an injury like what we showed in that artery where there's a clot platelets have to rush to that spot to to form 
that clot: they get sticky, they rush there, and they stick together to form the clot. You want that to happen on the outside if you get cut, but you don't want it happening inside an artery. So you take an antiplatelet, and the aspirin slows the process of those platelets getting sticky, so they tend not to congregate there and form that clot. You have to take it every day to get the benefit, because your body keeps metabolizing it out of your system, like any medication. The other reason to take it every day is that you don't know when a plaque might rupture; if you've already got aspirin on board, it's going to be most effective. You've probably seen the commercials that say if you think you're having a heart attack, take an aspirin — if you're not already on it, you need to take one, and it can get in in time to help, but it's most effective if you're already taking it. So unless there's a reason you shouldn't be taking aspirin, the general recommendation is that most everybody take an aspirin a day. But there are some good reasons why certain people shouldn't, so be sure to talk to your doctor about whether it's a good idea for you if you're not already taking it. Aspirin is over the counter, and I think a lot of us grew up when there weren't any other painkiller-type medications — aspirin was used for everything: a fever, you took aspirin; you were in pain, you took aspirin. So people sometimes think of it as candy and take it for everything, but it has some very strong effects, so again, be sure to talk to your doctor about whether it's a good idea for you to take it or not. There are lots of different aspirin products. The biggest side effect is stomach ulcer, and the reason that happens is that aspirin's real name — let me make sure I get this right — is acetylsalicylic acid. It's an acid, and your stomach is an acidic environment. You want the medication to dissolve, but acid doesn't dissolve in acid very well, so it just sits there on the lining of your stomach, and it can start to corrode the lining and cause an ulcer. There are a number of ways to combat that. One is to take it with food — do not take aspirin on an empty stomach; that's asking for trouble. Another is to take an enteric-coated aspirin, which has a coating that protects the aspirin from your stomach acid until it passes through to your intestine, where it dissolves very quickly — that's not an acidic environment — and gets absorbed. Now, enteric-coated is different from just "coated," and this confuses a lot of people. They say, "well, I'm taking a coated aspirin," but it has to say enteric-coated. A coated aspirin is just an aspirin with a coating to help you swallow it, because a plain aspirin will kind of dissolve and stick in your throat. Enteric-coated is an entirely different thing, so look at the ones you're taking and be sure they're enteric-coated. Buffered aspirin includes a coating that helps buffer the acid in your stomach, so the medication has a
little less acidic environment right around the aspirin, so it can dissolve in a different environment. To answer the question about that brand: it sounds like it is enteric-coated, but ask the pharmacist — I don't know the brand, and somewhere on it it should say "enteric" something; if it doesn't, the pharmacist will know for sure. The other possible side effect is bleeding, and that's really just what the medication does — it keeps you from forming clots, and in some cases it can cause bleeding that's undesirable. If you're on one of these medications like aspirin, you will routinely bleed more if you cut yourself; it's going to take longer to clot, so you'll have to hold pressure a little longer, and that's expected. You might bruise more easily — you hit something, or you don't even know what you hit, and suddenly you have a bruise there — and that's pretty common too; they call that nuisance bleeding. Internal bleeding is a bigger issue. If you notice any blood in your urine or stool, that's something your physician needs to know about. Blood in your urine may look orangey if it's a little and red if it's a lot. Blood in your stool will make the stool look black and tarry. If you were vomiting blood, fresh blood will be red; older blood that's been sitting in your stomach will look like coffee grounds. Any sudden, severe headache could be a bleed — take it seriously and let your doctor, or someone around you, know. And for some folks, anemia, which can be a problem because if you're anemic you have less oxygen-carrying capacity in your blood, and we want you to have lots of oxygen in your blood, especially if you already have impaired circulation in your heart. All right — next we're going to pass up our picture and then go on to Plavix. Anyone here on Plavix? Yeah. Plavix helps prevent heart attacks and strokes by helping to prevent that clotting. It's also used frequently when people have stents implanted, to keep clots from forming around the stent. The stent is a foreign body, and there's a little bit of damage to the arterial wall when the stent is put in; any time there's a scrape like that, your body tries to make a clot to heal it, so Plavix is frequently used after people have had a stent implanted. It works by making the platelets less sticky, so they tend not to stick together. There's an issue with this medication: one in ten people on it quit taking it because of nuisance bleeding — that minor bruising we were talking about, or a little bleeding, maybe a nosebleed every once in a while — things that aren't dangerous but are annoying. That's a real problem, because missing even one dose of Plavix can be fatal. We see people stop taking their Plavix, their stent clots up, and they're back with a heart attack. So if you're on Plavix, take it at the same time every day, take every dose, and be sure you've got a good supply handy. The other problem with it is that it's only available as a brand name; it's not
generic yet, and it's an expensive medication, so it does create a hardship for some people. If that's the case, discuss it with your doctor and see if they can work out a plan for you. About 85 percent of people on this medication report that nuisance bleeding, so it's a very common thing — and not dangerous. Oh, and our nurse practitioner asked me to pass along to everybody that the one thing she wanted you to get out of this lecture today is that you please take your medications at the same time every day, because it does affect how they work, so try to stay on a regular schedule. I forgot to add that at the beginning, but I want you to know it. Okay, next: warfarin, which is Coumadin. We see Coumadin a lot for folks who have had heart valve replacements, who are in atrial fibrillation or go in and out of atrial fibrillation — an irregular heart rhythm — or who have a history of developing blood clots in their lungs or legs. It isn't exactly an antiplatelet; it stops the formation of clots by affecting clotting in a different way. Blood clotting is a really complicated thing — I won't pretend I know the whole sequence — but there are a lot of steps to your body forming a clot, and this drug works by affecting one of those steps. It works differently from aspirin, and these drugs — the aspirin, the Plavix, the Coumadin — aren't necessarily replacements for each other. If they want you on Coumadin, you can't just say, "Can I take aspirin instead?" That needs to be discussed with your doctor, because they don't work the same way; they all work a little differently. I don't usually go into a lot of detail about this drug, because folks who take it usually go to a Coumadin clinic — a special place where they get education about the medication and routine blood draws. For everyone else who isn't taking it, the things to know are basically these. Warfarin is the active ingredient in most rat poisons: when you're killing a little beasty, what you're doing is giving it enough anticoagulant that it bleeds to death internally. It's a great medication to prevent blood clots, but you want enough of it to prevent clots and not so much that it causes excessive bleeding, so people on this medication have to go in and get blood draws regularly to be sure they've got the proper steady-state level in their body. The other thing to know is that what you eat and drink can affect the levels in your bloodstream. That's why folks on Coumadin get education about the things to stay away from — a lot of times it's green leafy vegetables, green tea, organ meats, anything high in vitamin K — so there are some dietary limitations with this one. Any questions about that one? No? Okay. The statins — these are the cholesterol medications. They lower cholesterol, it says here, by 30 to 50 percent, so they can reduce the risk of coronary artery disease, or the risk of those blockages progressing if you already have coronary artery disease.
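To make that 30-to-50-percent figure concrete, here is a minimal arithmetic sketch in Python. The baseline LDL of 160 mg/dL is an assumed example for illustration only, not a value from the lecture, and this is not treatment guidance.

# Illustrative arithmetic only, not medical advice.
baseline_ldl_mg_dl = 160                           # assumed example starting LDL
low_cut, high_cut = 0.30, 0.50                     # the 30-50% reduction range quoted above
after_low = baseline_ldl_mg_dl * (1 - low_cut)     # 112 mg/dL
after_high = baseline_ldl_mg_dl * (1 - high_cut)   # 80 mg/dL
print(f"A 30-50% reduction from {baseline_ldl_mg_dl} mg/dL lands roughly at "
      f"{after_high:.0f}-{after_low:.0f} mg/dL")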
This class of medication is so effective that in the UK it's available over the counter, a bit like aspirin is here — they've tried to get that approved here and it's failed both times, I don't know why. In the UK you have to answer a couple of questions from the pharmacist before you can buy it, and the over-the-counter version is a little lower dose than the prescription strength. Not only do these drugs lower cholesterol, they also change the environment on the inside of the blood vessels so that there tends to be less inflammation — and if you've been following anything about coronary artery disease lately, you know all the latest research is on inflammation; inflammation seems to drive a lot of these changes, in aging and in coronary artery disease. So I get people back in the hospital who say, "My doctor's been checking my cholesterol for years, it's always been great — why am I on a statin? Mine's fine." It's because this medication not only lowers your cholesterol, it also reduces your risk of developing blockages by changing the environment inside the blood vessels so there's less inflammation. So it's a good medication to be on. It does have a couple of important side effects to know about, although they are extremely rare — less than 1 percent of people put on these medications develop one of them. The first is liver damage, and it is reversible when the medication is stopped. You'd have to really be ignoring the symptoms to get to a point where it isn't reversible, because if you start having liver failure, you're going to turn yellow — get jaundiced — you're going to have pain, you might get a distended abdomen, your urine will get dark, and your stools will get light-colored. There would be things you'd have to be ignoring to get to that point. A lot of doctors, when they start someone on a statin, will do lab work to check the liver enzymes and then recheck it after you've been on the medication a while, just to make sure your liver is okay; that's a pretty common thing they do. The other is muscle damage, and the sign of that is pain: muscle aching and pain all over, not just one muscle here or there, but a generalized body ache. The danger is that if it's ignored and you get enough muscle damage, enzymes are released into your bloodstream that need to be cleared by your kidneys, and if that overwhelms your kidneys you can have kidney failure. But like I said, both of these are very rare, and the symptoms are pretty obvious, so if you notice any of this, talk to your doctor. The statins are also among the drugs that react with grapefruit juice, so be sure to look at your bottle to see whether your statin is one of those. Common side effects: they're pretty well tolerated; some people get some GI distress — abdominal pain, gas, constipation — and some folks describe having a headache. Again, let your doctor know if you think you're having a side effect that isn't tolerable for you. There are other drugs that help lower cholesterol. The statin works by affecting production of cholesterol in your liver — we all have cholesterol in our bodies; we have to, it's part of the building blocks of our cells, but too much is when it's a problem — so it acts on your liver and can slow production. There are other
lipid-lowering drugs as well: some work by affecting your liver, and some work by blocking absorption of different fats in your intestines. You've all seen the commercial for the medication that says it "works two ways" — which one is it? Vytorin — the cholesterol from your family, from Uncle Joe, and the cholesterol from the sloppy joe. In other words, part of your cholesterol is your own genetic predisposition, maybe to make too much, and part comes from the fats you take in in your diet, which can push your liver into overproduction. So some medications work on what you eat, by blocking absorption, and some work on your liver and what you produce naturally — and of course you can also affect cholesterol production by what you eat. Okay, the diuretics. Diuretics are medications that help draw extra fluid off your body. Some folks are on them because of side effects from other medications, some because they help treat high blood pressure, and people who have heart failure will on occasion tend to hold on to extra fluid — building it up in their ankles, hands, or abdomen — and will be on a diuretic such as Lasix to help draw that extra fluid off. Diuretics work on your kidneys, causing them to excrete extra fluid, and in doing that they can sometimes affect your electrolytes. Your kidneys help control the levels of your electrolytes — calcium, potassium, sodium, magnesium, those sorts of things — and if those electrolytes aren't in proper balance in your body, you can have side effects. So side effects that can happen with diuretics include things like dizziness, excessive urination — which, to me, is just the drug doing what it's supposed to do; you are going to pee more if you're on one of these — and muscle cramps if your electrolytes are thrown out of balance. If you notice any of those sorts of things, be sure to let your doctor know. Frequently, if someone is on a diuretic they're also on potassium, because diuretics tend to flush a lot of potassium out of your system, so they'll give you some potassium along with that medication. As for dosages, some people take it every day at the same dose; other folks use it to help control that fluid depending on how much fluid they're carrying. They may have been instructed by their doctor to weigh themselves every day — if you've gained four pounds and you notice that swelling, take one extra dose — or they may have worked out a schedule with their doctor, or they call their doctor, who tells them how to adjust it when they're having problems. So some people take it routinely, and for some it's a varying thing depending on symptoms and how much fluid they're holding on to. Okay, the antiarrhythmics. There are a lot of antiarrhythmics, but we've just listed the two most common ones, which are digoxin and amiodarone. Digoxin, or Lanoxin, is used most commonly to treat atrial fibrillation or heart failure. It slows your heart's contractions and makes them more forceful, so it can help stop certain arrhythmias and it can help some folks with heart failure. Digoxin — digitalis — is also foxglove; if you're a gardener, it's that flower, foxglove.
So, atrial fibrillation, just for those who might be curious: your heart has chambers — the top chambers, the atria, and the bottom chambers, the ventricles — and they beat in sequence, top then bottom, top then bottom. Atrial fibrillation means that the atrium, rather than doing that contraction, fibrillates: it just quivers and doesn't end up forcing the blood out and down to the ventricles. What can happen then is that you can get clots up in that atrium, and you don't want that. That's why people in atrial fibrillation — remember we said before — are frequently on something like Coumadin, a blood thinner, to keep them from getting clots because of that lack of forceful blood flow through the atrium. To help control that arrhythmia, sometimes folks are on digoxin. Amiodarone is used for the same sorts of things — atrial rhythms, or ventricular rhythms from the bottom of the heart. Now, amiodarone is something where, if you're on it long term, your doctor is probably doing some pulmonary function tests, because long-term use can cause changes in the lungs that are not good. So if you are on amiodarone and you've noticed any increase in shortness of breath, it's certainly something to bring to your doctor's attention, because they'll want to do some tests on your lungs and make sure the medication isn't causing any negative changes. Looking at side effects, I want to go back just a little bit to digoxin: if you have too much digoxin in your system, it can become toxic. So if you're on digoxin and you notice your resting heart rate getting slower and slower and slower, that could be a sign too much has built up, and you need to call and talk to your doctor. You might also notice that you see a yellow-green halo, or feel nauseated. With amiodarone, watch for shortness of breath, and you could see a bluish halo with that one if you're getting too much of it in your system. So let your doctors know if you have any of those symptoms. All right, and then we're on to grapefruit juice. The drug levels of certain drugs will rise if the drug isn't broken down as effectively because of the grapefruit juice. Six to eight ounces of regular strength — I've never seen a "light strength" grapefruit juice — will produce that effect, and most likely just half a grapefruit every once in a while isn't going to cause a problem. But with any medication, you've got to look at who the person is. I'm not a very big person; say it's me and I'm 80 years old — should I be drinking four ounces a day? Probably not, because a smaller, older person who doesn't metabolize things as well might have an issue even with four ounces, whereas a big 40-year-old can get away with a little bit more metabolically. So you have to look at all those things too when you're looking at the guidelines. And here's a list of medications: you see the calcium channel blockers over here, we also see statin medications, there are a few anxiety-type medications, and Viagra is listed. So just be sure to read your bottles, and you'll be up on all that.
Medical_Lectures
Hepatocytes_Liver_Histology_Part_17.txt
Today we are going to talk about the structure and function of the liver. This lecture is primarily about the histology of the liver, which is very important to understand because in so many liver diseases the histology alters — that is what we call liver histopathology — and as doctors you're supposed to know liver histopathology very well when you talk about liver diseases. So first we'll talk about liver histology. The liver is the largest internal organ in an adult. Approximately how much does it weigh? Who can tell me — try to guess. It is about one and a half kilograms; you can say the weight of the liver in an adult is about 1,500 grams, which of course is approximately one and a half kilograms. It is the largest internal organ. Another way to express its weight is that it is usually about 2.5 percent of your total body weight. Now, anatomically the liver is situated in a very strategic position: all the blood coming from the GIT, the pancreas, and the spleen passes through the liver. This has a lot of implications. For example, all the nutrients you absorb from the GIT first go to the liver and then to the general circulation. If, unfortunately, you have absorbed some toxins from the GIT, they should first pass through the liver and get detoxified there, so that the general circulation is not exposed to those toxins. In the same way, one of the functions of the spleen is to destroy old RBCs, and remnants of those destroyed RBCs may come to the liver; the liver has RBC clearance mechanisms — the Kupffer cells — so that those remnants are removed. Then, of course, blood from the pancreas also goes to the liver, so that insulin, glucagon, and other important hormones from the pancreas reach the liver, and under the direction of those hormones the liver can handle glucose and the other nutrients and metabolites. So first of all you should understand that the liver sits very strategically, with the GIT, spleen, and pancreas on one side and the general circulation on the other side; the liver is in between. Let me put up a very simple diagram. First I will explain a little about the circulatory architecture of the liver — how the liver cells are getting the blood, and how the hepatocytes, the liver cells, are exposed to the blood. The first thing I told you: here is your GIT, here is your spleen, and here is your pancreas. Blood from all these organs goes to the liver — venous blood from the GIT, as well as the splenic vein, as well as the pancreatic venous drainage; all of this goes to the liver. That is the first thing you should understand. Again, from the GIT a lot of nutrients must go to the liver so the liver can handle them. For example, if you have eaten a meal with a lot of carbohydrates, the extra glucose going to the liver should be converted into glycogen, so that the level of glucose in the general circulation does not go very high. Suppose here is your general circulation; eventually, of course, blood from the liver will drain into the general circulation. That is one thing, from the GIT. Secondly, from the GIT, toxins can be absorbed — bacteria or their toxins — or
you may have inadvertently ingested some toxins, and many of these should pass through the liver so the liver can detoxify them. So the liver is also acting as a detoxifier, protecting the general circulation from the toxins coming from the GIT. Then, as you know, blood from the pancreas is also draining to the liver, so the pancreas adds insulin, glucagon, and other hormones to that blood, and those hormones act on the liver cells so that the liver can appropriately handle the nutrients and metabolites coming from the GIT. And I told you that blood from the spleen — the splenic vein — also eventually drains to the liver, so that when RBC breakdown is going on in the spleen, the remnants of that breakdown are cleared by the liver and the general circulation does not receive them. So the liver is draining the blood coming from the GIT, the pancreas, and the spleen — and all this blood comes to the liver through which vein? Yes — the portal vein, excellent. This is your portal vein. But there is one problem: the blood coming through the portal vein is venous blood, because from the aorta the blood has already gone to these organs — to the GIT, through the splenic artery, through the pancreatic arterial supply. Oxygenated blood from the aorta went to these organs, and these organs have used some of the oxygen from that blood; so when the blood from these organs is recollected and goes to the liver, it has a lower oxygen content. Let me explain again what was happening: the blood going to the spleen, the pancreas, and this part of the GIT — we call this the splanchnic circulation — arrives as arterial blood, and a significant amount of its oxygen is used there. So the blood going to the liver through the portal system is not very rich in oxygen, even though it may be rich in nutrients, in metabolites, unfortunately perhaps in toxins, and in many hormones. To manage that problem, nature provides a direct supply: there is the celiac artery here, and one of its branches directly supplies the liver with oxygenated blood. So the liver is getting a double blood supply: there is your portal vein, and what is this other vessel? The hepatic artery. Let me draw it more clearly and put the liver in proper perspective within the whole-body circulation. Here I draw the right heart — this is the inferior vena cava, this is the superior vena cava — here we have the right atrium and right ventricle, and the pulmonary artery; here you have your left atrium and left ventricle, and from here comes the aorta, and this is your systemic circulation. Now I'm going to put the liver into this circulation. Of course, you all know that here are the lungs: pulmonary blood goes to the lungs, breaks down into the pulmonary capillaries, gets oxygenated, and the oxygenated blood comes back to the
left side of the heart. Now, where is the liver? In our diagram we'll put it here, a little lower down. Here is your spleen — this is not an anatomical diagram, it's a functional diagram — here is your small intestine, and let's suppose here is your pancreas. Now, there are arteries bringing blood to these organs: to the spleen, blood comes from the splenic artery; then there are the mesenteric arteries — the superior mesenteric artery — and the gastric arteries taking blood to the stomach; from here blood also goes to the small intestine and to the pancreas. The point you have to understand is that the arterial blood coming to the GIT is well oxygenated. These arterial vessels eventually break down into capillaries: there is a splenic capillary system here, the pancreas of course also has capillaries, and within the GIT there are capillaries as well. From these splenic, pancreatic, and gastrointestinal capillaries, blood is recollected into venous channels, and from there it drains toward the portal vein. Here is the superior mesenteric vein going up, and here is the splenic vein coming from here; the splenic vein and the superior mesenteric vein fuse together and make — yes — the portal vein. So what we have understood is that the blood coming through the splenic and superior mesenteric veins originates from the capillary beds of the pancreas, the spleen, and the gastrointestinal system, and this blood has lost some of its oxygen to those organs, so it is relatively poor in oxygen compared with arterial blood. This is your portal vein. Nature wanted to supply well-oxygenated blood to the liver as well, and for that purpose there is the celiac trunk coming off the aorta — have you heard of it? Good. One of the branches of the celiac trunk is the hepatic artery, and the hepatic artery also brings blood to the liver. So we can say there are two inputs to the liver. Now the blood has to pass through the liver, and for that purpose I will explain a little of the internal architecture of the liver. The parenchyma of the liver is mainly made of hepatocytes — the special cells that make up the parenchymal, functional cells of the liver. Within the liver, the hepatocytes are arranged in special lobular structures. Those lobules are described in several ways; one of them is the classical description, which we call the classic liver lobule.
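As a quick check of the weight figures quoted at the start of this lecture — about 1,500 g, or roughly 2.5 percent of total body weight — here is a minimal arithmetic sketch; the 60 kg adult body weight is an assumed example, not a number from the lecture.

# Illustrative arithmetic only.
body_weight_kg = 60            # assumed example adult body weight
liver_fraction = 0.025         # liver is said to be about 2.5% of total body weight
liver_weight_kg = body_weight_kg * liver_fraction
print(f"Estimated liver weight: {liver_weight_kg:.2f} kg (~{liver_weight_kg * 1000:.0f} g)")
# -> about 1.50 kg, consistent with the ~1,500 g figure given above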
Medical_Lectures
Anatomy_of_the_inguinal_region_simplified.txt
The inguinal region, or groin, extends between the anterior superior iliac spine and the pubic tubercle. I'm going to draw the anterior superior iliac spine here, and here I'm going to draw the pubic bone. This is the site of the obturator foramen, so here we have the superior ramus of the pubis, the body of the pubis, and the inferior ramus of the pubis; this is where the pubic symphysis is located. On the superior ramus of the pubis, this is the site of the pubic tubercle. Between the pubic tubercle and the pubic symphysis, the crest of bone is called the pubic crest, and lateral to the pubic tubercle this region is called the pectineal line of the pubis. The inguinal ligament extends between the anterior superior iliac spine here and the pubic tubercle. I'm going to draw it as two lines here, because the inguinal ligament is the thickened lower free border of the external oblique aponeurosis; this aponeurosis is not sharp inferiorly, because it is upturned on itself, so the border becomes blunt. Some fibers of the inguinal ligament are reflected backwards and attached to the pectineal line of the pubis, and these fibers constitute the lacunar ligament. Some other fibers extend laterally from the lacunar ligament over the pectineal line of the pubis, and these produce the pectineal ligament. There is another extension from the inguinal ligament: deep fibers that extend upwards and medially, as you can see them here — I will draw them dotted. They extend upwards and medially, cross the midline — the linea alba — and constitute what is called the reflected part of the inguinal ligament. The aponeurosis of the external oblique muscle has a deficiency in it; this is the site of the deficiency. In fact it is a triangular deficiency and is called the superficial inguinal ring. The name is a partial misnomer, because although it is superficial and located in the inguinal region, it is not in the form of a ring; it is rather triangular in shape. Its apex is directed upwards and laterally, its base is formed by the pubic crest, and it has two borders, a medial border and a lateral border, which are formed by the crura — the medial crus and the lateral crus — of the aponeurosis of the external oblique muscle. Now I'm drawing the fibers that constitute the medial crus, and there are other fibers here that constitute the lateral crus of the aponeurosis. In addition to the medial crus and lateral crus, there are some other fibers that run perpendicular at the apex of the superficial inguinal ring; these constitute the intercrural fibers, which help to prevent the crura from spreading apart. Also located in the inguinal region is the deep inguinal ring, and this deep inguinal ring is a deficiency in the transversalis fascia located just above the midpoint of the inguinal ligament. So let's draw the inguinal ligament again: this is the anterior superior iliac spine, and the pubic tubercle; again, here I am drawing the inguinal ligament extending between the anterior superior iliac spine and the pubic tubercle. Midway between the anterior superior iliac spine and the pubic tubercle is the location of the deep inguinal ring; this is a true ring, a deficiency in the transversalis fascia, located midway between the anterior superior iliac spine and the pubic tubercle. Now I will project the superficial inguinal ring again here, which is a deficiency in the external oblique aponeurosis. Between the deep ring and the superficial ring is the inguinal canal; so this is the proposed location of the inguinal canal. It
is an intermuscular canal that extends between the deep inguinal ring and the superficial inguinal ring. It is located above and parallel to the medial half of the inguinal ligament and is about four centimetres in length. Structures that pass through the canal — including the ilioinguinal nerve in both sexes, the spermatic cord in the male, and the round ligament of the uterus in the female — have to traverse the other layers of the anterior abdominal wall, which also participate in the formation of the inguinal canal. So far we have mentioned that the transversalis fascia and its deficiency produce the deep inguinal ring, and that the aponeurosis of the external oblique muscle produces the superficial inguinal ring. What about the transversus abdominis muscle and the internal oblique muscle, which are located between the external oblique and the transversalis fascia? Now let's complete the diagram on the other side. The inguinal ligament provides attachment to muscle fibers — specifically, to the muscle fibers of the transversus abdominis and the internal oblique. Here I am going to draw the origin of the transversus abdominis muscle from the inguinal ligament: its fibers arise from the lateral third of the inguinal ligament (some anatomy textbooks mention that they arise from the lateral half). Whatever the origin, these fibers do not pass in front of the deep inguinal ring and therefore do not contribute to the anterior wall of the inguinal canal. The uppermost fibers of the transversus abdominis muscle arch down and become tendinous; these fibers are attached to the pectineal line and to the pubic crest. Those arching fibers constitute part of a ligament here called the conjoint tendon, or falx inguinalis. Some of these fibers curve medial to the deep inguinal ring and constitute a thickening that forms a physical boundary of the deep inguinal ring; these are called the interfoveolar fibers. Now let's deal with the other layer that contributes to the formation of the inguinal canal, the layer formed by the internal oblique muscle. Again I'm going to draw the inguinal ligament, extending from the pubic tubercle to the anterior superior iliac spine. Apart from the transversus abdominis muscle, the inguinal ligament provides attachment for another muscle, which is the internal oblique. These internal oblique muscle fibers arise from the lateral two-thirds of the inguinal ligament. Now remember that the deep inguinal ring, which I am approaching now, is located midway between the anterior superior iliac spine and the pubic tubercle; therefore this muscle, arising from the lateral two-thirds of the inguinal ligament, lies partly in front of the deep inguinal ring, and thus it contributes a sheath to the contents of the inguinal canal. The transversus abdominis muscle, which does not lie in front of the deep inguinal ring, does not contribute a sheath to the structures that pass through the deep inguinal ring. Like the transversus abdominis, the lowermost fibers of the internal oblique muscle also curve down, and they constitute the other part of the conjoint tendon; these curving lower fibers are attached to the pectineal line of the pubis and to the pubic crest. Now I'm going to project the superficial inguinal ring again. Using these diagrams we can list the boundaries of the inguinal canal. Just to remind
you: this is the most superficial diagram, showing the superficial ring and the aponeurosis of the external oblique; this is the deepest level of the diagram, which shows the transversalis fascia and the deep inguinal ring; that deepest layer is covered by another layer, the transversus abdominis muscle, and then by another layer, the internal oblique muscle, and then we return to the most superficial layer, which is formed by the external oblique muscle and its aponeurosis. So the anterior wall of the canal is formed by the external oblique aponeurosis here, and it is reinforced laterally by the internal oblique fibres that arise from the lateral two-thirds of the inguinal ligament. Again I repeat that the transversus abdominis muscle, which arises from the lateral third or lateral half of the inguinal ligament, does not pass in front of the deep inguinal ring and therefore does not participate in the formation of the anterior wall of the inguinal canal. The roof of the inguinal canal is produced by these arching fibers of the transversus abdominis muscle and the internal oblique muscle; these fibers arch from anterior to posterior, so they form the roof and they also contribute to the posterior wall of the canal. So here we can see that the posterior wall of the canal is formed by the transversalis fascia, and it is reinforced medially by the conjoint tendon — which is formed by the internal oblique muscle and the transversus abdominis muscle — as well as by the fibers of the reflected part of the inguinal ligament, the fibers that extend upwards and medially from the inguinal ligament. So these form the posterior wall of the inguinal canal. The inferior epigastric artery ascends in the posterior wall of the inguinal canal, medial to the deep inguinal ring, as we will see later. The floor of the inguinal canal is formed by the undercurving fibers of the inguinal ligament and the lacunar ligament, as well as by a thickening of the transversalis fascia located deep and parallel to the inguinal ligament; this thickening is called the iliopubic tract. The inguinal region is an important area anatomically and clinically: anatomically because it is a region where structures exit and enter the abdominal cavity, and clinically because these pathways of exit and entrance are potential sites for developing hernias. So here we have the inguinal hernia, and close to the inguinal ligament a femoral hernia might also develop.
To better understand the inguinal region and where the hernias pass, let's examine it from the inside — in other words, from inside the abdominal cavity. Now I'm going to draw the inguinal region from behind. This is again the anterior superior iliac spine, and here is the superior ramus of the pubis, the pubic tubercle, the pubic crest, and the pubic symphysis; here is the inferior ramus, and this is the site of the obturator foramen. Now I will draw the inguinal ligament again, but from behind, extending from the anterior superior iliac spine to the pubic tubercle; here is the site of the deep inguinal ring. Now, this is the rectus abdominis — the lateral border of the rectus abdominis muscle — here is the site of the linea alba, and those are the fibers of the rectus abdominis muscle as they attach to the pubic crest. These fibers here constitute the lacunar ligament, reflected from the inguinal ligament, and this here is the pectineal ligament. Midway between the anterior superior iliac spine and the pubic symphysis is the site where the external iliac artery is located, together with its continuation, the femoral artery. So this is the external iliac artery; the external iliac artery and its continuation, the femoral artery, lie a little bit medial to the site of the deep inguinal ring, because they are located midway between the anterior superior iliac spine and the pubic symphysis, which is a longer distance than the midpoint between the anterior superior iliac spine and the pubic tubercle. Before passing deep to the inguinal ligament, the external iliac artery provides a branch, the inferior epigastric artery. This inferior epigastric artery runs upwards and medially toward the back of the rectus abdominis muscle to enter the rectus sheath; during its course it therefore lies in the posterior wall of the inguinal canal, and as you can see it is located medial to the deep inguinal ring. It forms a boundary of an important triangle here, called the inguinal triangle, or triangle of Hesselbach. This is the triangle through which a direct inguinal hernia might pass. The triangle is bounded by the inferior epigastric artery laterally, the lateral border of the rectus abdominis muscle medially, and the inguinal ligament inferiorly. Now let's add more detail to this region. I'm going to draw the transversus abdominis muscle, which, as has been mentioned earlier, arises from the lateral third or lateral half of the inguinal ligament; these are the fibers of the transversus abdominis muscle. The lowermost fibers arch down, attach to the pectineal line of the pubis, and constitute part of the conjoint tendon — the conjoint tendon that strengthens the medial part of the inguinal canal. While doing so, those arching fibers form part of the floor of the inguinal triangle. The part of the triangle here that is not strengthened by the conjoint tendon is formed by the transversalis fascia alone.
Weakness of the floor of this triangle can predispose to a hernia; such a hernia can push itself from the abdomen forwards into the inguinal canal without passing through the deep inguinal ring. A hernia that passes through the deep inguinal ring, goes along the canal, and appears through the superficial ring is called an indirect inguinal hernia; a hernia that pushes itself through the posterior wall of the inguinal canal in the region of the inguinal triangle and appears through the superficial inguinal ring is called a direct inguinal hernia. Now let's draw some more details in this diagram. Medial to the external iliac artery is the external iliac vein. More medially, there is a deficiency in the anterior abdominal wall bounded anteriorly by the inguinal ligament, medially by the lacunar ligament, posteriorly by the pectineal ligament (the pecten pubis), and laterally by the external iliac vein (the femoral vein distally); these boundaries constitute the boundaries of the femoral ring. Through this ring a hernia might push itself from the abdomen into the femoral triangle, and this is called a femoral hernia. Now I'm going to add some more details to this diagram. Passing through the deep inguinal ring is the vas, or ductus deferens; on its way into the pelvis it crosses the inferior epigastric artery. The inferior epigastric artery supplies a small branch here that goes with the vas, called the cremasteric artery. The vas, on its way from the abdomen through the inguinal region to the testis, is accompanied by another artery, a branch of the inferior vesical artery, which is the artery of the vas, or artery of the ductus deferens. There is another artery here, the main artery that supplies the testis, which is the testicular artery. These arteries, together with the vas and with the veins, lymphatics, and nerves, are going to acquire sheaths during their passage through the inguinal canal. The first sheath they acquire is while passing through the transversalis fascia, and this is called the internal spermatic fascia, derived from the transversalis fascia. Then the contents of the spermatic cord pass forwards and medially in the inguinal canal. The transversus abdominis muscle does not contribute a sheath, because its fibers do not pass in front of the deep inguinal ring, but the internal oblique muscle, whose fibers do pass in front of the deep inguinal ring, contributes another sheath to the spermatic cord, called the cremasteric fascia. This sheath is in fact not only a fascial sheath but contains muscle fibers derived from the internal oblique muscle, and these muscle fibers are called the cremaster muscle. As the contents of the spermatic cord pass through the superficial inguinal ring, they acquire another sheath from the aponeurosis of the external oblique muscle, called the external spermatic fascia. An important point to show here in relation to the femoral ring is the pubic branch of the inferior epigastric artery; this pubic branch passes in relation to the femoral ring and the lacunar ligament toward the superior ramus of the pubis. This artery passing toward the obturator foramen is the obturator artery, and before it passes through the foramen it supplies a pubic branch as well, to supply the bone. Now, this pubic branch anastomoses with the pubic branch of the inferior epigastric artery. Sometimes the obturator artery is absent —
that is, the proximal part of the obturator artery is absent — and in that case the pubic branch of the inferior epigastric artery becomes large, replaces the proximal part of the obturator artery, and continues through the obturator foramen into the thigh. This large branch is called the abnormal obturator artery. It should be kept in mind, because surgical operations in this region dealing with the lacunar ligament and the femoral ring might injure this abnormal obturator artery and result in considerable bleeding. Now, to better understand the inguinal hernia, I would like to remind you of some embryology related to the development of the testis. This is a diagram showing a sagittal section of the body: the scrotum, the penis here, the anterior abdominal wall, and the pubic symphysis here. The gonads — whether testis or ovary — develop in the posterior abdominal wall, behind the peritoneum (this is to show the peritoneal cavity here). The gonad is connected by a band of fibrous tissue to the site where, in the male, the scrotum is going to be formed; in the female the counterpart of the scrotum is the labia majora, so it is connected to the labium majus. Further growth results in the descent of the testis along this fibrous band, which is called the gubernaculum, and this fibrous band has to pass from the abdomen through the anterior abdominal wall in order to reach its destination. During its passage through the anterior abdominal wall it passes through the inguinal canal, from the deep inguinal ring to the superficial inguinal ring, on its way to the scrotum. The ovary does not reach the labium majus; its descent is halted in the pelvis. The remaining part of the gubernaculum in the female therefore constitutes the round ligament of the ovary and the round ligament of the uterus, and the round ligament of the uterus is therefore what is located in the inguinal canal of the female. During the descent of the gonads, a tubular prolongation of the peritoneum grows downwards toward the scrotum; this is called the processus vaginalis, and it therefore also passes through the inguinal canal. With further development, the proximal part of the processus vaginalis becomes obliterated and only the distal part remains, in relation to the descended testis; this remaining part of the processus vaginalis is called the tunica vaginalis. Failure of obliteration of this segment of the processus vaginalis leaves it continuous with the peritoneal cavity and creates a potential site for a hernia to take place. Such a hernia is therefore a congenital hernia, and because it passes from the deep inguinal ring to the superficial inguinal ring it is an indirect hernia. This hernia is more likely to occur in the male, because it is related to the descent of the testis, and it usually descends and extends into the scrotum. A direct inguinal hernia is less common than the indirect hernia and is also called an acquired hernia, because it results from weakness of the posterior wall of the inguinal canal — the inguinal triangle of Hesselbach. This weakness is common after the age of 40, in contrast to the indirect inguinal hernia, which usually occurs in young males because of its congenital origin. In the direct inguinal hernia, the hernial sac protrudes directly forwards, without passing through the deep inguinal ring, and it passes directly through the superficial inguinal ring. Now let's sketch the
sites of these hernias and their relations. Here's the femoral artery, located midway between the anterior superior iliac spine and the pubic symphysis. The external iliac artery, which is proximal to the inguinal ligament, provides a branch that goes upwards and medially, the inferior epigastric artery — so you can see the inferior epigastric artery traversing the posterior wall of the inguinal canal medial to the deep inguinal ring. Medial to the femoral artery is the femoral vein, and at this location these are the fibers that constitute the lacunar ligament, and then the pectineal ligament, and this deficiency here is the femoral ring, leading into the femoral canal. It lies medial to the femoral vein and is bounded anteriorly by the inguinal ligament, medially by the lacunar ligament, and posteriorly by the pectineal ligament. An indirect inguinal hernia, which is predisposed by a patent processus vaginalis, passes through the deep inguinal ring, so the neck of the hernia is in fact located lateral to the inferior epigastric artery. A direct inguinal hernia, which passes through the Hesselbach triangle because of weakness of the posterior wall of the inguinal canal, passes medial to the inferior epigastric artery. In both cases the hernia passes through the superficial inguinal ring and, as you can see here, is located upwards and medial to the pubic tubercle. A femoral hernia, on the other hand, passes through the femoral ring and extends into the thigh — into the femoral triangle of the thigh — through the femoral canal, and as you can see it is located downwards and lateral to the pubic tubercle. So: an indirect inguinal hernia is more common in a young male, because it is predisposed by a patent processus vaginalis that accompanied the descent of the testis. A direct inguinal hernia is more common in the elderly, after the age of 40, because it is predisposed by weakness of the posterior wall of the inguinal canal. A femoral hernia, on the other hand, is more common in females, because females have a wider pelvis and therefore a wider femoral ring, so a femoral hernia is more likely to occur through that wider defect.
Medical_Lectures
Shock_Part_1_of_3.txt
Hello, I'm Eric Strong from the Palo Alto Veterans Hospital and Stanford University, and today I will be talking to you about shock, focusing on its recognition and management. Here are the learning objectives of this talk: first, to be able to define and recognize shock, as well as understand its four major subtypes; second, to know the major etiologies of the shock subtypes; and finally, to understand general treatment strategies, focusing on IV fluids and the choice of pressor. Before I begin, I would like to offer an initial disclaimer: though the body of primary scientific literature on the management of shock has grown significantly in just the past two to three years, there remains relatively little in the way of comparisons between one medication versus another, or between one treatment strategy versus another. Therefore there exists substantial variation in the management of shock, as a consequence of differing emphasis on physiologic principles, institutional preferences, and personal experience. So now, what exactly is shock? This term is used in many different ways by different people. Lay people and doctors on TV tend to use the term very broadly, to encompass a wide range of pathophysiologic and psychologic states; however, as medical providers we should reserve its use for something a bit more specific. I like to think of shock as a physiologic state characterized by a systemic impairment in oxygen delivery, as a result of reduced tissue perfusion, almost universally mediated by low blood pressure. There are rare examples of shock in which blood pressure is normal, which we will mention a little later, but the overwhelming majority of patients in shock have hypotension. Note that a diagnosis of shock depends not just on the blood pressure but also on indications of systemic hypoperfusion. Let's take a look at what some of those indications can be. While the underlying specific cause of shock may have very specific findings — such as hematemesis in the hypovolemic shock accompanying a variceal bleed, or the ST elevations in cardiogenic shock triggered by an acute MI — the following list is of signs and lab abnormalities that are consistent across all forms and etiologies of shock. In the cardiovascular system we have hypotension, an elevated lactate, and an elevated troponin. Neurologically, patients often demonstrate altered mental status of some form; many pass through a sequence of stages in which they first become agitated, next delirious, then somnolent, and finally comatose. Of course this sequence isn't universal — for example, some patients may not get agitated or delirious and simply become somnolent. In the renal system, urine output will drop, and a rise in the BUN-to-creatinine ratio will frequently precede a frank rise in creatinine, which can eventually lead to kidney failure. A large array of hematologic abnormalities can be seen, depending on the underlying etiology; the one common problem across subtypes of shock is dysfunction of the coagulation cascade, which can lead to DIC in extreme circumstances. The skin is frequently cool and clammy — except in the early warm phase of septic shock — and there may be cyanosis of the fingers and toes, suggestive of poor perfusion and poor oxygen delivery to those areas. Hypoxia, and particularly tachypnea, are also common, irrespective of the underlying etiology. Finally, within the GI system, ileus is frequently seen, along with extreme elevations of AST and ALT above a thousand, commonly known as shock liver. Even if the initial etiology of the patient's shock has nothing to do with GI hemorrhage at all, poor
Now I would like to take a short digression to discuss the physiologic description of what is occurring within the cardiovascular system during shock. This is very important, because a basic understanding of the physiology will allow us to understand why shock is divided into separate subtypes, and the different subtypes are treated very differently. First, let's begin with Ohm's law. For some reason I find that almost no medical students can remember the equations of fluid mechanics relevant to cardiovascular physiology, but nearly everyone remembers V equals IR: voltage, which is a gradient in electrical potential, equals current times resistance. We can apply Ohm's law to hemodynamics: the pressure gradient in the cardiovascular system equals blood flow times resistance. If we are a little more specific with our substitutions, we now have perfusion pressure equals cardiac output times systemic vascular resistance, where perfusion pressure is MAP minus CVP, or mean arterial pressure minus central venous pressure. Next, we know that cardiac output is the product of heart rate and stroke volume. Finally, stroke volume is dependent upon preload and contractility. Therefore, low perfusion pressure, which is a reasonable surrogate for shock, can be caused by low heart rate, low preload, low contractility, or low SVR.
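Writing out the chain of substitutions just described as equations makes the possible failure modes easy to see at a glance:

\[
\text{MAP} - \text{CVP} \;=\; \text{CO} \times \text{SVR}, \qquad
\text{CO} \;=\; \text{HR} \times \text{SV}, \qquad
\text{SV} \;=\; f(\text{preload},\ \text{contractility})
\]

so the perfusion pressure (MAP minus CVP) falls whenever heart rate, preload, contractility, or systemic vascular resistance falls.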
From here, let's discuss the four major subtypes of shock. First, hypovolemic: hypovolemic shock is caused by low preload. Next, distributive: distributive shock is caused by low SVR. Then cardiogenic: cardiogenic shock is caused by low contractility. Finally, obstructive: obstructive shock is also caused by low preload. Well then, you may ask, what is the difference between hypovolemic and obstructive shock? The low preload in hypovolemic shock is due to a lack of total intravascular volume, while the low preload in obstructive shock is due to a physical obstruction to LV filling. One might next ask, then what about shock caused by low heart rate, what subtype is that? This brings us to a few additional subtypes to mention. First, there is what is occasionally referred to as arrhythmogenic shock; as the term implies, this is shock that arises as a consequence of an abnormal heart rhythm. Profound bradycardia leading to low cardiac output falls into this category, and extreme tachycardia that leads to low cardiac output through insufficient diastolic filling time is also included here. Next is toxin-mediated shock, referring almost solely to the toxins carbon monoxide and cyanide, both of which can lead to impaired oxygen delivery and/or utilization through mechanisms not requiring low blood pressure. Because both of these additional subtypes have unique mechanisms and very specific treatments, we won't discuss them more during this particular lecture. Now that we have some familiarity with the subtypes of shock, how do we go about recognizing which subtype is present in any particular patient? This is critical to do very early and quickly, as the management for the different subtypes can differ significantly. There are four basic physiologic parameters that can help us distinguish one subtype from another: central venous pressure, systemic vascular resistance, cardiac output, and the temperature of the extremities. Obviously, some of these are easier than others to quickly determine at the bedside. Let's first take a look at hypovolemic shock. The CVP is low because of low intravascular volume; as part of the body's compensatory mechanisms to try to maintain adequate blood pressure, the SVR is very elevated. Regarding cardiac output, although these patients are typically quite tachycardic, which might lead you to think the cardiac output is high, it is actually low, because the effect of the low preload outweighs the effect of the fast heart rate. Finally, the extremities are typically cool, as the peripheral arteries are clamped down to help raise blood pressure and shunt blood toward the vital organs. Next, obstructive shock: CVP is high, because of a backup in pressure as central venous blood meets a physical obstruction to flow within the cardiopulmonary system; SVR is elevated, for the same reason as in hypovolemic shock; cardiac output is low, because of low preload in the left ventricle; and the extremities are cool. In distributive shock, CVP is usually low, at least initially, because the vast majority of patients with distributive shock also have a component of hypovolemia as well. SVR in distributive shock is low, as that is the predominant physiologic abnormality. Cardiac output can be either high or low, depending on the presence or absence of sepsis-induced cardiomyopathy. The extremities are usually warm, but can occasionally be cool, particularly if the patient has been in shock for a while or if cardiac output is low. Lastly, with cardiogenic shock, CVP is high, SVR is high, cardiac output is low, and the extremities are cool. Occasionally in cardiogenic shock the SVR is so high as to balance out the low cardiac output enough that the peripheral blood pressure is actually normal; this can be a very difficult case of shock to diagnose, as the affected patient may have all of the consistent signs and symptoms of shock but a normal blood pressure. Luckily, this scenario is relatively uncommon.
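The four textbook profiles just described can be collected into a small lookup table. The following Python sketch is purely illustrative: coding each parameter as simply "high" or "low" deliberately ignores the exceptions noted above (for example, the cardiac output in distributive shock that can be high or low, or the cool extremities seen late in distributive shock), and it is not a diagnostic tool.

# Illustrative lookup of the "textbook" hemodynamic profiles described above.
# Simplified on purpose: real patients, and the exceptions noted in the lecture,
# will not always match a single row.

PROFILES = {
    # subtype:        (CVP,    SVR,    CO,     extremities)
    "hypovolemic":  ("low",  "high", "low",  "cool"),
    "obstructive":  ("high", "high", "low",  "cool"),
    "distributive": ("low",  "low",  "high", "warm"),
    "cardiogenic":  ("high", "high", "low",  "cool"),
}

def best_matching_subtype(cvp, svr, co, extremities):
    """Return the subtype(s) whose textbook profile matches the most parameters."""
    observed = (cvp, svr, co, extremities)
    scores = {
        subtype: sum(o == p for o, p in zip(observed, profile))
        for subtype, profile in PROFILES.items()
    }
    top = max(scores.values())
    return [subtype for subtype, score in scores.items() if score == top]

# Example: high CVP, high SVR, low CO, cool extremities matches both obstructive
# and cardiogenic shock, which is why the clinical context still matters.
print(best_matching_subtype("high", "high", "low", "cool"))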
I would like to briefly go through the major etiologies of the shock subtypes. Hypovolemic shock can be further divided into hemorrhage-induced, such as GI bleeds or trauma, and fluid-loss-induced, from diarrhea or third spacing; hypovolemic shock from third spacing is probably most commonly seen in severe acute pancreatitis. There are essentially five distinct causes of distributive shock: sepsis, anaphylaxis, myxedema from profound hypothyroidism, adrenal crisis, and neurogenic shock from acute spinal cord transection. Some people have advocated taking myxedema and adrenal crisis out of this category and forming a separate subtype of endocrine shock, but this has not caught on widely yet. Cardiogenic shock is usually due to an acute myocardial infarction, but can also be seen in myocarditis, in acute aortic insufficiency such as that which can accompany bacterial endocarditis, and in papillary muscle rupture. Arrhythmias are sometimes put into this category, but as mentioned previously, they are quite distinct in their management and are probably best separated out. Although this is never seen at most hospitals, at Stanford heart transplant rejection is another important cause of cardiogenic shock. Lastly, obstructive shock can be caused by a massive pulmonary embolism, tension pneumothorax, or cardiac tamponade. One important caveat to categorizing a patient in shock is that more than one type of shock may coexist in the same patient. There are two scenarios in which this is most commonly seen. First, as previously mentioned, most patients with septic shock have some degree of concurrent hypovolemia; this is the consequence of poor PO intake during the earlier stages of their illness, as well as of leaky capillary membranes and third spacing. Second, septic shock can induce a de novo cardiomyopathy, one which can be completely reversible with successful treatment of the underlying sepsis.
Medical_Lectures
How_the_Gastrointestinal_System_Works_and_Goes_Awry.txt
[Music] Stanford University good evening everyone glad to see you here again so last week as you remember we had an opportunity to hear from dr. norm risk who reviewed the pulmonary respiratory system and you may remember that when he was giving his presentation he talked about the place where the epiglottis becomes really important because that sort of bifurcates the connection between the respiratory tract and the gastrointestinal tract and so tonight we're going to be focusing on the GI tract and how it works and when it goes awry and it's a bit ironic that our speaker this evening dr. Jay Patricia who joins Stanford two years ago to lead our gastrointestinal division in the Department of Medicine actually began by going down the respiratory tract so after he completed his training in India and came to the United States and did some residency he started out in respiratory medicine and critical care medicine just like dr. risk last week he did this at Tufts in New England in Boston and then something happened and he decided that he went down the wrong tube so must have come out Annie moved south and went to Johns Hopkins where he decided to go down bypass the epiglottis and get into the GI tract where he's been ever since so he has made his life journey focusing on the GI tract and actually has uncovered some very very important clues along the way something that you probably don't know but would be interested to know is that just like we think and you've heard in past discussions we have our brain functioning and thoughts and various activities well there is a if you'll almost separate nervous system that operates the GI tract as well and so for those of you who had dinner before you came your GI tract is already functioning and for those who are going to eat like me after you get home tonight you'll learn a little bit about how it's working or perhaps even how it's gone awry so there we are tonight's presentation Jay Patricia [Applause] thank you very much Phil it's a pleasure to be here and I want to applaud all of you who braved this stormy weather to come in and listen to what is arguably the most important system in your body I knew everybody would say that what today I'm going to actually prove to you so what we'll talk about today is an overview of the structure and function of the gut and as Phil mentioned I would like to emphasize the roll off the guts nervous system and then we'll take a brief history of your last meal how many of you've eaten already okay well this stock is then going to be very educated for you then we talked a little bit about the guts lining because that's really how the gut interfaces with its environment and it's a very very important aspect of the digestive system will go on to enteric micro flora which is a very hot and emerging field not just in gastroenterology but in all of Medicine talk about the guts immune system and then finally end with the brain got access as you can already see the gut is not just a single system it's actually a composite of many many different kinds of systems from the nervous to the immune to the microbiological and it's really a very interesting area to be immersed in and I hope you will enjoy it as much as I have so the general approach will begin with an outline of the organs that make up our gut and conceptually we look at these as either luminal organs those that are hollow so called tubes which most people think synonymously with the word gut and then the solid organs which are also part of the digestive system and 
these include the liver and the pancreas now in the interest of time I'm not going to spend a lot of time the liver and pancreas and perhaps that can be the subject of a later talk will focus really on the luminal organs now this carton of course doesn't give justice - what's really the true picture of what's going on and I got and I thought I'd start with this video which is a actual endoscopy of a patient and I doing this for two reasons first because I think it illustrates what our guts look like much better than any cartoon can and secondly because it highlights the importance of this tool the endoscope which has really revolutionized our specialty not only allowed us to diagnose and treat conditions but really further our understanding of the biology of the digestive system so here goes we're positioned in the esophagus which is the tube just past the epiglottis that phil referred to and we are pushing the scope down and we come up to the GE junction and you can already see that the color changes we're now in the stomach and this is the distal part for the part of the stomach this is us looking back at how we were entering the stomach so called retroflex view and these are the Falls of the stomach and in the distance you can see the pylorus which is the opening to the small bowel of the duodenum I just wanted to give you that as an introduction because we will use these images as references as we go along so let's start with the guts nervous system and how it works this is really the answer to the question of what controls the gut how many of you feel that got is sometimes out of control but actually it has two sets of nervous systems that control it the one that is more commonly known which is the central nervous system the big brain inside our heads really is involved in control of the gut in two main places which is at the beginning and at the end and that is where the gut really interfaces for the external environment and it's very important that you have multiple controls because if you don't things can really go awry as we'll show you so the brain in our head actually starts the process of regulating gut function very early on that's I noticed a few of you hadn't eaten but perhaps if you can those of you who haven't eaten can think of the meal that's awaiting you after this lecture you will have triggered what we call the cephalic reflex just the thought the sight the smell of food triggers this reflex and the brain up here starts priming the gut so to speak in anticipation of the meal that is coming or hopefully is coming and this results in a stimulation of the saliva glands it results in stimulation of the stomach to produce acid and it may be responsible for the rumbling of your intestines that you often hear when you're hungry and are thinking of a good meal so that's the cephalic phase but more importantly is the fact that the brain gets involved in controlling the swallowing process as Phil mentioned and as you may have heard last week the epiglottis bifurcates this four got into the esophagus the food pipe versus the respiratory tract and it's very important that as you swallow this bolus or this collection of food material pass this epiglottis that it doesn't go down the wrong way it doesn't either go down to your airway or it doesn't come back through your nose and this is really where the big brain exerts very fine control of this process making sure that there is safe passage of the food down to the esophagus and in fact there is a center in the brainstem that is 
devoted just to swallowing, and it controls, through multiple nuclei and multiple cranial nerves, the muscles that make sure the swallowing process is normal. This is one reason why patients who have strokes or other neurological disorders often have difficulty with swallowing; you may have seen some of these patients, who can have difficulty even swallowing their own secretions, things go down the wrong way, and it is really a big challenge to try to help them maintain nutrition. So we'll come back to the other end in a few minutes, but then the question arises: what controls the rest of the gut, if the ends are controlled by the central nervous system? In fact there is another nervous system, one that many of you may not have heard about, which we call the enteric nervous system. When I was growing up as a kid, my mother often used to berate me for having no brains at all, and one of the biggest satisfactions I found when I became a gastroenterologist was to go back and tell my mother she was wrong: not only did I have one brain, I actually had two brains. This is the tale of the second brain, the enteric nervous system. To understand this, let's just begin with an overview of how the gut is organized. The gut is a tube, and if you take a section and slice it like this, you'll see it consists of many concentric layers, pretty much in the same fashion from the esophagus all the way down to the rectum. The inner layer is called the epithelium; underneath that is the submucosa; and then you have two layers of muscle, an inner circular muscle and an outer longitudinal muscle. This is just another representation of the same thing: if you look at the gut and peel away the layers, this is the inner mucosa, and these are the circular muscle and longitudinal muscle layers. Now, sandwiched in between these two muscle layers is really the brain of the gut, the myenteric plexus, which is the major part of the gut's brain. This controls the motor activity of the gut: in other words, it controls the contractions, controls the relaxations, is responsible for spasms, and is responsible for pushing food down the gut. The other component is the submucosal plexus, which lies just beneath the epithelium where the glands are, and this nervous system controls the secretions of the intestine in response to food. The enteric nervous system is a surprising nervous system for those of you who are not familiar with it. It has a hundred million neurons, more than the spinal cord, for instance, and these are all contained within the wall of the gut. They are capable of performing very complex functions, including sensation, processing the information, and coming out with complex motor or secretory algorithms. It has multiple synaptic mechanisms and multiple neurotransmitters; some of you may not know this, but more than 90 percent of the body's serotonin, for instance, is found in the gut, and more than 50 percent of the body's dopamine is found in the gut. There are many neurotransmitters that have yet to be discovered in the brain that may first be identified in the gut, so this nervous system is actually very, very important, not just for digestive function but for its implications for other nervous systems in the body. So what does it look like? If you take a photomicrograph, this is what it looks like: here is the outer muscle layer, here is the inner muscle layer, and sandwiched in between is the myenteric plexus that controls the contractions; and this is a scanning electron micrograph showing the neurons sitting on top of the muscle.
Conceptually, this is different from the kinds of nerves that control your skeletal muscles. There are two types of nerves that we find in the myenteric plexus: those that cause the muscle to contract and those that cause it to relax. At any one time, muscle in the gut is under the influence of these two opposing messages, one causing it to relax and one causing it to contract. But it's not just the nerves that are important for muscle relaxation or contraction; there is a third kind of cell, which we call the interstitial cell of Cajal, or ICC, named after the Spanish neuroanatomist. This cell, we now know, actually controls the rhythm with which the gut contracts, and you can record this rhythm in human beings using a machine that we call the electrogastrogram, somewhat similar, in a very crude way, to the electrocardiogram or EKG. When you do that in a normal person, you see a nice slow-wave rhythm of three cycles per minute; this is a spectral analysis showing nice peaks of this rhythm over time. Then in patients who have diabetes, particularly those who have problems with their stomach, you can see that this rhythm is disrupted and completely chaotic, and that results in problems with their stomach not working properly. So the music of your bowels is really complex, but conceptually it consists of the muscle, which is like the piano or the organ; the interstitial cell, which is the metronome; and the myenteric plexus, which is the organ player; and together they make very beautiful music, and some of you may be listening to it already. So this is really the brain, and this is the ENS, the enteric nervous system, and it controls many reflexes in the gut. We talked about contractions, but it is also responsible for secretion, as I told you, as well as for regulating the amount of blood that is coming into the gut in response to a meal or other stimuli. Let's just look at the peristaltic reflex to illustrate this. The peristaltic reflex is a very basic motor mechanism by which the bowel contracts, and it contracts when you distend a segment of the bowel, either mechanically (so if you swallow some food, it will distend a segment of your bowel) or in response to chemicals that you ingest, which are part of the food. These chemicals, or the stretch, cause cells lining the mucosa that contain serotonin to be activated, resulting in the release of serotonin. This serotonin release then triggers the activity of nerves in the enteric nervous system, causing a complex series of signaling going up and down the gut. The result is that behind the bolus there is a contraction, which generates a pressure head pushing the bolus forward; but equally importantly, ahead of the bolus the gut has to actively relax, not simply stretch passively but actively relax, so as to allow the bolus to go forward. This is what we call the peristaltic reflex, and this is just another way of illustrating it: you have a bolus, you distend a segment of the bowel, you contract behind it, and you relax ahead of it. Now the gut, and the enteric nervous system in particular, uses this reflex to come up with a lot of programs that are almost wired into it, and these programs depend on, one, whether you're eating or fasting, and two, which part of the gut we're talking about. The fed programs vary with the region of the gut. You have the esophagus, which is mainly associated with transit of the bolus, a very quick transit: the bolus moves from the mouth down to your stomach in seconds or less.
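The contract-behind, relax-ahead logic of the peristaltic reflex just described can be captured in a few lines of code. The following Python sketch is a toy illustration using a made-up one-dimensional tube of segments; it is not a physiological model of the reflex.

# Toy model of the peristaltic reflex: a bolus sits in one segment of a tube;
# on each step, the segment behind it contracts and the segment ahead relaxes,
# so the bolus is pushed forward by one segment. Purely illustrative.

def peristalsis_step(bolus_position, n_segments):
    """Return the new bolus position and the state of every segment for one step."""
    states = ["resting"] * n_segments
    if bolus_position > 0:
        states[bolus_position - 1] = "contracting"   # behind the bolus
    if bolus_position < n_segments - 1:
        states[bolus_position + 1] = "relaxing"       # ahead of the bolus
        bolus_position += 1                            # bolus moves forward
    return bolus_position, states

position = 0
for step in range(4):
    position, states = peristalsis_step(position, n_segments=6)
    print(f"step {step}: bolus now in segment {position}, states = {states}")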
In the stomach it takes a lot longer, because part of the stomach's function is to store the food instead of just dumping it into the intestine; also, as we will show later, the stomach participates in grinding the food down. Then in the small bowel there is a program which we call mixing, and finally the large bowel, which is again transit, but much slower, and then exit. In between these fed programs there is what we call the fasting program, which is really like a housekeeping program: anything that is left over that is not digested is swept out of the body in cyclical giant contractions that move debris from the stomach all the way down the small bowel. So let's now go through, as I said, the history of your last meal, and talk about the process of digestion. Digestion is actually a very simple concept, but it is a very complicated process. Essentially what you are doing is converting energy from one form into a form that your body can use. You take these energy particles, which we call food, which are large particles; we chew them down a little bit, then we add water, ions, and enzymes; we mix it and grind it; the end product is small, absorbable material, and whatever is left over that we can't use, we eliminate. So digestion is not only the breakdown of large particles but also the absorption of nutrients, the absorption of water, and, as I said, the elimination of waste. It begins in the mouth. As many of us tell our kids, chew your food properly, and there is a reason for that: you want to reduce the size of the particles and form a proper bolus. Some digestion, in terms of starch and lipid breakdown, is initiated there, and saliva is also antibacterial and can neutralize some of the refluxed acid in the esophagus. The real action, however, begins in the stomach once the bolus reaches it, and I'm just going to show you what the stomach does to grind the food down. Here is a bolus of food coming down into your stomach. Your stomach first relaxes to accommodate it, so you don't feel full that easily, and then it starts pushing that bolus toward the pylorus, and as it does so, the pylorus is partially closed; the pylorus is the region between the stomach and the first part of the small bowel, the duodenum. Sorry, the question is how quickly the stomach empties? The whole process should be over in at most four hours for a normal meal: your stomach should be almost completely empty by the end of four hours, and half of your meal is cleared by about an hour and a half. The important thing here is that the stomach will not allow the passage of large particles into the duodenum. It keeps pushing them against a partially closed pylorus, which acts like a sieve, and the stomach acts like a mill against that sieve, working normally to crush these particles down until they are about one or two millimeters in size. Considering the size of the bolus that you normally swallow, that is a huge reduction in size, and it does so through the very fine, intricate mechanisms I have just shown you. Here is a video of an actual human stomach: this part is the antrum, the mill; here is the pylorus; and that is the duodenum, with contrast in the stomach. The idea is to show you that this contraction is taking place against a partially shut pylorus, so very little contrast is actually flowing into the duodenum, and the stomach is really working to grind things down.
If you actually put some non-digestible solids, like the plastic pieces we have here, into the stomach, you will see that they keep being pushed against the pylorus, but they will not make their way down into the duodenum because their size is too large; they go there and come back, go there and come back, and because they are not digestible and they are plastic, they cannot be broken down during this phase. Eventually all the food will have been broken down, and this plastic material will stay in the stomach until we get to the fasting program, the housekeeping program, and that is when these things get swept out. Okay, so what controls this very tightly regulated process? As I said, the stomach serves as a filter and a valve, allowing particles of only about two millimeters in size or less to pass through, with an average caloric delivery of about 150 calories per hour. This is very finely tuned: it is partially dependent on the size of the meal, partially dependent on the nature of the meal, on the viscosity of the other contents, on the fat content; but suffice it to say that this process is tightly controlled and under the feedback of many different mechanisms, which I will talk about very shortly. One of these is what we call the ileal brake: as soon as food starts hitting the small bowel, signals go back to the stomach saying, okay, we are receiving the first supply of nutrients, you can slow down now, because we don't want our capacity to absorb this to be overwhelmed. Yes, ma'am, I see; can I just finish this section? I forgot to tell you at the beginning that I'm going to break this up into sections, so if you just give me five more minutes, we'll open it up for questions. Thank you.
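As a rough sanity check on the emptying numbers quoted above (caloric delivery to the duodenum of roughly 150 calories per hour, with a normal meal mostly emptied by about four hours), here is a back-of-the-envelope Python sketch. The 600-calorie meal and the assumption of a constant, linear emptying rate are illustrative simplifications of mine, not a physiological model.

# Back-of-the-envelope gastric emptying using the rough numbers quoted above.
# The lecture cites ~150 kcal/hour delivered to the duodenum, half a meal cleared
# by about 1.5 hours, and near-complete emptying by 4 hours; a constant rate and
# a 600 kcal meal are only illustrative assumptions.

EMPTYING_RATE_KCAL_PER_H = 150.0

def hours_to_empty(meal_kcal, rate=EMPTYING_RATE_KCAL_PER_H):
    """Crude estimate of how long a given caloric load takes to leave the stomach."""
    return meal_kcal / rate

meal = 600.0  # kcal, hypothetical dinner
print(f"Half emptied after ~{hours_to_empty(meal / 2):.1f} h")  # ~2.0 h
print(f"Fully emptied after ~{hours_to_empty(meal):.1f} h")     # ~4.0 h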
So now this food is starting to get into the small bowel. The small bowel has several different parts, but one of the most important parts in terms of the process of digestion is what we call the duodenum, because it is here that a lot of the mixing that is important for the chemical breakdown of food takes place, and that is because the duodenum is where both the liver's ducts and the pancreatic duct open. Both of these supply essential ingredients that are necessary for the digestion and absorption of food, and so, as I say on this slide, the duodenum can be likened to a master brewer: you have this food, it has to go through a process akin to fermentation, and the duodenum has to control a lot of different inputs that go into that process. Here is the gastric content, which we now call chyme, coming out of the stomach; these are the enzymes being produced by the pancreas, regulated by local reflexes; and this is the bile coming out from the liver, regulated by local processes. Gastric contents are very acidic, because the stomach produces acid, and that has to be neutralized by the production of bicarbonate in the small bowel, otherwise the enzymes won't work. All of this has to be monitored, looking at the pH, the osmolality, and how much volume there is, and this is where the enteric nervous system is sensing what is going on and relaying feedback through either hormones or nerves that control the process. A lot of enzymes are involved in this mixture. Amylase and lipase actually begin working before the food even reaches the duodenum: the stomach produces lipase, and the saliva contains amylase. But a large part of the enzymes responsible for the breakdown of fat and protein come from the pancreas, and the pancreas secretes these enzymes not in an active form. This is very important: it secretes them in an inactive form, and it is only when they get into the small bowel that they become activated. Once activated, they form all these various enzymes, and the list is very long, which act on various segments of these food particles and break them down into simpler constituents. The reason it is so important for these pancreatic enzymes to be secreted in an inactive form is that these are the most powerful enzymes that we know of in biology; between the trypsins and the other proteases, as well as the lipases, they can chew up almost any tissue. What happens sometimes in the pancreatic cell, under certain conditions, for instance with certain forms of alcohol intake or in the presence of gallstones, is that this process is bypassed and the enzymes actually get activated within the pancreas itself, and that results in what we call pancreatitis: this normal-looking pancreas then undergoes a process of what we call autodigestion, because the enzymes are now active within the pancreas, causing acute pancreatitis, and over time, or with repeated insults, this can result in chronic pancreatitis. Now, again, these enzymes are coming in, these secretions are coming in, the food is coming in, and it all has to be mixed up properly; if you are not going to mix it up, the enzymes are not going to get a chance to really attack the food, and that is where the contractions come in again. This is the fed pattern of the intestine, the segmentation, the mixing and sloshing of the enzymes, and this endoscopic view gives you an idea of the active contractions going on in the duodenum; there is no food here, but the duodenum is still contracting, and it gives you a clearer vision of how it actually works. With all of this mechanical and chemical interaction, we start seeing the breakdown of these complex chemical molecules, such as the starches or starch-like sugars, under the influence of these enzymes, into smaller molecules, and so on; the same process occurs with proteins as well as fats. Now there are some therapeutic opportunities here, for those of you who want to lose weight. You may be familiar with this drug, orlistat, known by its brand names shown here, where you actually want to inhibit some of these enzymes: orlistat inhibits lipase, the enzyme produced by the pancreas that is responsible for fat digestion. The result is that instead of the enzymes being able to act on the fat particles, they are blocked by this drug, which results in a certain amount of malabsorption of the fat; the fat then goes on to produce some diarrhea, and it makes you lose weight. It is one of the messier ways to lose weight. Now, in this process the liver is not a passive bystander; it also participates, not to the same extent perhaps as the pancreas does, but importantly, and it does so mainly by producing bile salts. Bile salts are salts present in bile, which is a complex fluid, and bile is stored in the gallbladder awaiting the signal that it is time for the gallbladder to contract and eject the bile into the duodenum. What bile does is act as an emulsifying agent: it takes these big droplets of fat coming out of the stomach and breaks them down into smaller particles that the enzymes can then act on more efficiently, and when the fat is broken down, bile salts also help repackage
those fats into what we call micelles, which are small particles that are more efficiently absorbed by the cells lining the small bowel. So bile salts are important in fat digestion in particular. Bile also serves as a way for the body to get rid of cholesterol, which is made by the liver, and bile salts are important in keeping that cholesterol in a soluble state. If you look at this triangle here, whether cholesterol stays in a soluble form depends on the bile salt concentration, the concentration of another detergent called lecithin, and the concentration of cholesterol. In the gallbladder, bile is concentrated, and in some individuals either the cholesterol is concentrated too much, because they have a genetic problem and the cholesterol is high, or the gallbladder doesn't work very well and the bile stays in there longer, allowing more concentration to happen. The result is that cholesterol may precipitate out, and this is what forms the usual variety of gallstones; you can see here a gallbladder that is full of gallstones. So after digestion, once the food has been broken down into the smallest particles it is capable of being broken down into, those particles have to be absorbed, and here is where the twenty-plus feet of the small bowel really become necessary, because absorption is taking place throughout the length of the rest of the small bowel. In fact, even twenty feet is not enough; what the gut has to do is form these layers and folds of the lining, which we call villi, really microscopic, tiny folds that increase the surface area of the small bowel a hundredfold or more and allow efficient absorption to take place. When these nutrients are absorbed, they then have to be transported, and there is a very rich blood vessel network right underneath the epithelium that carries most of the nutrients away. There is also a lymph network, which we call the lacteals, where the fatty nutrients, the fatty acids and the other lipids, are absorbed, and these go via the lymphatics back into the circulation, whereas the other nutrients go to the liver. This is an endoscopic image showing you these villi, and you can see how many hundreds of thousands of small villi there are on each fold; this is really what gives the small bowel its tremendous absorptive capacity. Now, I told you about fat absorption, so for those of you who had the typical US dinner of a Big Mac and fries, if we were to examine the surface of your bowel we would see these lymphatics full of milky fluid, which is the fat. I can see somebody is headed for the salad bar tonight. Okay, now, as I said, apart from the lymphatics, which take the fat, the other nutrients do not go straight to your heart; they are actually carried to the liver, where the veins break down into capillaries again before the blood goes back out of the liver. This is very important for two reasons: first, the liver absorbs some nutrients from this circulation and stores them, converting glucose to glycogen and so on; but also, the liver is a very efficient filtering mechanism for potential toxins that are coming out of the gut. So it is very important that this stream of blood coming from the intestines not go directly to the heart but pass via the liver, and the liver therefore plays a very critical role in preventing infections and preventing the entry of toxins from the gut, in addition to its role in digestion.
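As a back-of-the-envelope illustration of the surface amplification provided by the folds and villi described above, here is a short Python sketch. Only the twenty-plus feet of length and the hundredfold-or-more amplification come from the talk; the luminal diameter and the smooth-cylinder geometry are assumptions made purely for illustration.

# Rough illustration of mucosal surface amplification in the small bowel.
# The length and the ~100x amplification figure come from the talk; the diameter
# and the treatment of the bowel as a smooth cylinder are simplifying assumptions.

import math

length_m = 6.0        # roughly the "twenty-plus feet" of small bowel
diameter_m = 0.03     # assumed ~3 cm luminal diameter (illustrative)
amplification = 100   # folds plus villi: "a hundredfold or more"

smooth_area_m2 = math.pi * diameter_m * length_m
effective_area_m2 = smooth_area_m2 * amplification

print(f"Smooth cylinder area: ~{smooth_area_m2:.2f} m^2")            # ~0.57 m^2
print(f"Effective absorptive area: ~{effective_area_m2:.0f} m^2")    # ~57 m^2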
In fact, in patients who have cirrhosis, where there is destruction of the liver and scarring of the liver tissue, there is an obstruction to the flow of this blood coming from the intestine. Pressure builds up in these veins and they start backing up, and this results in what are called varices, which are the equivalent of the varicose veins you would see in your legs: veins under a lot of pressure, enormously dilated. The problem with this is not only that there is now not enough blood going to the liver from the intestines, but that these dilated, engorged veins are at risk of bursting and causing bleeding, and this happens particularly in the esophagus, and sometimes in the stomach. I'm going to show you an example of a bleeding esophageal varix; this is one of the reasons we as gastroenterologists are often called in the middle of the night to take care of cases like this. Here is a varix, this is the lining of the esophagus, and you can see it is actually bleeding here; that is the stream of blood coming out, and that is going into the stomach. Fortunately, we now have the tools to fix this: we can go down with the endoscope, with a little cap-like device at the end of it, suction the varix into it, and slip a rubber band on it and choke it off, and that is now the commonest way of treating varices. There we go, we took care of the bad guy. All right. So we have talked about a lot of different things, and I'm just going to wrap up this section by saying there are other factors that I didn't emphasize as much but that are equally important, and those are hormones. The nervous system communicates through nervous reflexes, but also through hormones. For instance, in response to the meal arriving in the duodenum, a very important hormone called cholecystokinin, or CCK, is produced, and this results in contraction of the gallbladder, which is how it got its name, but also in stimulation of pancreatic enzyme production; part of this is mediated by direct effects, and part of it by reflexes that go to the brain. There are other hormones that act outside of the gut, and these are, for instance, the hormones that we call incretins. You may have heard of a hormone like GLP-1: this is produced by the intestine in response to the meal, it has some effect on stomach emptying, but it also prepares the pancreas, particularly the cells in the pancreas that produce insulin, and tells them, okay, a meal is coming, you need to start revving up the insulin production. So this is all being coordinated at multiple levels, and this axis is in fact now a target for drugs, which you may have heard about, that are really augmenting the effect of the incretins in various ways. Okay, now for the fun part. We have absorbed as much as we can, the small bowel has done its job, and what is left over now goes into the large bowel. The large bowel is very important: it has to get rid of this waste matter, but there is also still about a liter of fluid there that it has to absorb, because otherwise you would lose a lot of fluid every day. So the colon absorbs water and electrolytes and then gets rid of what is left, as you can see here, and it does not do this on a continuous basis; for those of you who are regular, it may happen once a day or once every other day that you get these giant contractions that move the fecal matter from the right side of your colon all the way down to your rectum. I won't repeat this too many times, but don't be embarrassed, everybody does it. So now the stool reaches the rectum, and it's very important again
now it you can use one when the stool reaches the rectum one of two things can happen a usually you get an urge and if the and I'll be socially appropriate and the timing is right you know you were the New York Times and a cup of coffee and it's okay things have but otherwise you got to control it and this is where the brain here kicks back and I told you works here it also works at the other end and that's where it's very important a part of the function is very important we have maintaining continent so I'll just show you that process so this is where the pelvic floor muscles are very very critical so when when when the bolus or the fecal matter arrives in the rectum director understands and that triggers an intrinsic reflex that causes the rectum to contract and if you don't do anything to stop it you may have a bowel movement what you do if you don't want it to happen is you tighten this muscle the sling muscle which you call the pubic talus is part of the pelvic floor and you really make this angle very acute preventing anything from moving down to the sphincter when you're ready you relax that you bear down Yoop you use you squeeze your abdominal wall muscles increase the pressure there and your sphincter relaxes and you have an evacuation and we actually can study this in the clinics and another surprising thing that gastroenterologist do we put put contrast in the rectum of patients and then we ask them to sit on a toilet and we x-ray them and then we expect them and we tell them we still have say relax just pretend you're having a novel bowel movement but you can see that this angle here when you don't want to defecate is very cute and when you do then you start losing that angle you relax this muscle and now you things will start coming out okay just to summarize the section and we'll take a little break for questions we've talked about the process of digestion and you can liken it to an industrial process if those of your engineers here start for the choppers that's the mouth you go through your blender acid sterilizer the reservoir that's your stomach and you go to your reaction vessel which as I've said is the duodenum you have your detergent supplier and the enzyme supplier to the pancreas the detergent supplier which is the liver and then you go to the catalytic and absorptive surface which is a small intestine there residue combustor and desiccator and the letter and then the emission control device okay so I'll take some question I know I want to fuse vinegar to ask me that so I'll start with you start reviews when you eat the food is it I've heard that it's best not to lose liquid when you're eating that does it make a difference you mean as opposed so drinking half for the meal is or as opposed to drinking with the meal are not drinking at all yeah yes so the question is is it healthier or less healthy to drink whatever fluids you're imbibing with the meal or after the meal and the answer is I don't think it makes a difference there are some myths about that and there are lots of myths about the GI system but as far as I know there are no adverse health effects now some people who want to go on diets advocate drinking before so you so get a sense of fullness before you eat but that's kind of another story yes so squatting actually is a is a healthier more physiological way to defecate than sitting on a western-style toilet seat it's a requires a you know a sense of balance perhaps but but I I think it is a better way to do it yes sir yes the fat the breakdown products of 
the fat are absorbed by the lacteals we then drain into the lymphatic system go through the nodes and eventually drain into the circulation back why are big lymphatic routes to the thoracic duct so they eventually come back to the circulation but they don't necessarily go through the liver in the same way as as the other pathway does okay ma'am so the question is how does the system continue to function without a gallbladder yes so happy this is a you know I'm not a surgeon and this this any good surgeon will tell you the only good gallbladder is one that they've taken out but there are possibly some problems when you take out the gallbladder you don't get the storage function but it doesn't really interfere with the flow of bile bile is still flowing out and there are in response to the meal in most patients most patients there's enough bile there to continue to act as a detergent in some patients there's X perhaps we think excessive bile that is released because it's not being stored and that may result in what we call bile salt malabsorption and diarrhea and some patients who have had the gallbladder removed yes there was a question over there and then so the question is whether there are differences in the digestive cycle of vegetarian versus non-vegetarian food so maybe I'll touch upon this a little later in my section but it's a very complicated question and depends on exactly what you mean by the digestive process because many different aspects of it can be affected so I'll talk about it perhaps a little later yes sir what makes you feel hungry and what does it mean you never along I'll talk about that and second I'll talk about what makes you feel hungry - okay last questions so this is the esophagus that we're we're talking about we use yes it's a it's a it's a rubber band I mean it's not you know it's not an ordinary rubber band it's with medical grade rubber band if you will but if the concept is very simple we put put that thing around and what happens is the blood supply is choked off there's a clot that forms and then this little ulcer that's left over and that heals so yes there's not not not usually a problem one more question I'm curious I was always so this great and especially not over that we have actually to rate how come the one drain upsets the other so much oh yes yes yes it says yeah so the minor brain the one up here is subject to a lot of problems we'll talk about this I have a section on the brain gutter actions that will come to show I know you have a lot of other questions and we'll have other opportunities to ask the questions if it's okay we'll just go go to the end because I don't want to shortchange the rest of this lecture soap so we talked about structure effects now let's talk about sort of some of the common diseases I've touched upon a few of those but I really want to get into this in terms of the more common things that we see so let's start with the guts line right so the guts lining is very important it serves many different functions and it the functions it serves depends on which part of the gut it is and so for instance in the stomach the lining it produces both acid and enzymes it produces an enzyme called pepsin but the most important substance that the stomach's line produces is acid hydrochloric acid and it produces in a highly concentrated form from the cell that that is part of this lining which we call the parietal cell this is also by the way the process on which proton pump inhibitors your purple pills and such a actually block this 
process of producing the acid so with this high concentration of acid that has constantly been produced in normal people who are not on drugs for instance the question arises why doesn't the stomach corrode itself and that's because the stomach has other cells that protect it by producing substances such as mucus which acts as a buffer so if you actually look at the stomach lining and this is the inside of the stomach the pH is very low which means it's highly acidic a pH of 2 but right next to the cell this mucosal defense layer actually produces a pH of 7 which is very close to the normal physiological state and this is what the mucus layer actually looks like if you do a scanning electron microscopy of the stomach you see this this silk like material that's lining these cells these are the cells that line the stomach these are the glands that dip into it and that's where the acid is coming from and this mucus forms a coat that is protecting the stomach from actually being digested by itself so the question that arises why do all sirs occur so for a long time for even when I was a medical student the dogma was that stress type-a personality is you know major cause of ulcers and it was only about I guess 15 or maybe 20 years ago that we discovered the true cause of common ulcers and that was this bacteria that we called Helicobacter pylori and this this bacterium actually infects the stomach sits underneath that mucosal layer and damages the cells off the lining and results in houses so this is a cartoon showing the bacteria this is an actual photo micrograph these blue specks here are the bacterium and this is electron microscopy showing the bacterium sitting on the stomach's lining this very important discovery was made by two australians dr. Barry Marshall and dr. Robin Warren dr. Robin Warren was a pathologist who on his birthday I was approached by Barry Marshall it was a medical student who asked him for a project and Robin Warren said well you know I've been noticing funny things whenever I get these biopsies from these patients who have ulcers I see this bug there and I don't know what it means and can you work on this and go back to the records and really show that that this bug is associated else and that's how it began and it was a series of almost heroic investigation that they took culminating in dr. 
Marshall actually in testing a solution of the bacteria himself to prove that in fact it can cause infection of the stomach because at that time this was poopoo I mean these guys are stood up and were ridiculed in public forums that this is crazy how can you say that an ulcer is caused by a bacterium and that all that do in the stomach with all this acid how can it ever exist so this their efforts were rewarded by a Nobel Prize in 2005 so peptic ulcer disease now mostly not I would say mostly well in a very important proportion of patients is caused by this bug and this causes an increase in acid production that can then result in ulcers in the duodenum it can cause inflammation and injury to the lining of the stomach that can cause gastric ulcers and these ulcers can be associated with complications not just pain and one of the important complications is bleeding and this is another example of an ulcer in the stomach that is bleeding here you can see this blood vessel here that is pumping our blood it's an artery it's not just simply a vein you can see the pumping and blood is just oozing out so we show these these nos khipus have gone down with the endoscope identified the side of the bleeding and they take a needle out through the endoscope and inject that area with a solution of epinephrine just to stop the bleeding and after that there's a variety of things that we can do such as clipping or cauterizing that can take care of the bleeding's you see here are the ulcer and then you see this open vessel which is really a pretty big artery and then you go down with another instrument here for instance which you call an endoscopic clip and you put it around the artery and you clip it off so all this can be done endoscopic lis and you know 2025 years ago this would have required major surgery now it's not the only cause of now as be recognizing this and more and more especially in the Western world this the use of antibiotics this bug is is sort of fading away the other common cause is non-steroidals drugs such as ibuprofen or even aspirin that act on the cells lining the stomach that produced mucus and there's impaired production of this defense layer this defense layer breaks down and then the acid has a chance to act and damage the stomach producing ulcers so that's another thing that you need to be very common cause of injury oh just give me a few minutes and we'll stop and I break for questions again so we talked about the lining of the stomach let's talk about going back up a little to the esophagus the lining of the esophagus now the lining of the esophagus is very much like the lining of your skin in fact this is what it looks like under a microscope and this is very similar to what skin would look like it's very thick what we call a squamous epithelium and it's thick because it really has to deal just like the skin with a lot of roughness so when you eat a bolus of free swallow and chew it's really still a very rough large bolus it's got its sharp edges it's it's it's got its fibrous material it's not been broken down it has to go down very rapidly to the esophagus and this lining is adapted to make sure that it can handle that and not result in any injury but this lining is not very good at handling acid because the esophagus should normally not see acid and the and the stomach is where the acid is produced so as you all know in many of you in this room probably have this condition I certainly do which is gastroesophageal reflux disease where contents of the stomach reflux back 
into the esophagus and this is because this very important area which is the junction of the esophagus and the stomach what we call the GE Junction the gastroesophageal Junction doesn't work properly there is a ring of muscle there which we call the lower esophageal in addition there is the diaphragmatic muscle which separates the chest from the abdomen and together they act in concert normally to prevent contents from the stomach going back into the esophagus but in a lot of people this mechanism doesn't work properly and as it actually goes back to the esophagus them we don't fully understand actually the cause of it we just know that every time a person who has reflux burps there's a greater chance that their contents will be more acidic than somebody else who doesn't have reflux burps and part of it may be due to the fact that many of these patients have either obvious hiatal hernias or they have pockets of stomach or acid lined stomach that are creeping up above the diaphragm and our source of acid if it was just causing heartburn you could take care of it fairly easily the problem is it has many other ill effects and one of these has to do with the lining of the esophagus which I told you is not generally resistant to acid and in the presence of acid it can not only break down and form ulcers but in some patients who perhaps a genetically predisposed this lining changes its characters almost as this lining is trying to adapt to this constant reflux of acid and becoming more stomach like or more intestinal like so it becomes more resistant to the acid and so what you see is a change from oops sorry about that what you see is a change from the squamous epithelium shown on the left and here's the segment of the esophagus where the lining is changing to where it starts looking more like the intestine and this is what we call Barrett's esophagus this is a change in the lining that resembles that of the small intestine this is what it looks like when you go down the esophagus with the scope you see these tongues of red you cos'è coming up the esophagus this represents Paris esophagus and I'll show you a video in real time of what it looks like here's a tongue of barracks that is coming up from this is a change in the lining of the esophagus and that's not the only thing if it was just changed by itself it would not necessarily be such a big deal but what happens is this lining because it's not really native to the esophagus is unstable and unstable genetically and that means it's at risk for developing cancer and so in some of these patients small percentage of these patients who have Barrett's esophagus they're at risk for developing what we call adenocarcinoma or cancer of the esophagus here you see for instance a nodule in a patient with Barrett's where we think that the lining has now actually transformed into cancer fortunately in this patient appears to be superficial and we can again treat this endoscopically we can go down the scalp here we're showing a cryotherapy were actually freezing it in other instances we can use radio frequency to burn it and so on if the if the Barrett's lining is precancerous or cancerous but has not quite invaded them layers of the esophagus it's but it has the potential for being treated endoscopic lis that brings us to another cancer which is perhaps the most common and most important cancer in the GI tract it's the fourth most common cancer as a whole we're talking about colorectal cancer it's the second commonest cause of cancer death lifetime 
cumulative risk in people in this country is about 6% it's important to remember 80% occur without any obvious risk factors unlike lung cancer or other forms of cancer and it's also important to note that the prognosis depends entirely on the stage of the disease the earlier you catch it the easier it is to live a cancer-free life and this is important as I'll tell you in just a minute so many patients ask me what can we do to decrease the risk of colorectal cancer like I said 80% of the time there are no obvious risk factors but and so things that you can't control are your age most patients who get this or elderly over fifty polyps or inflammatory bowel disease is a risk factor for this family history of colorectal cancer as a risk factor and some patients have these syndromes of cancers where they have cancers of the ovaries or breast and they are at risk for colorectal cancer there are some factors that you can't possibly control there's a diet a high and red processed or heavily cooked meal being overweight especially around the waist exercising too little smoking or drinking alcohol but all of these are of course good general measures anywhere now the reason it's important to note that the prognosis depends on the size because colorectal cancer at least most forms of polar ëtil cancer that we know actually undergo a series of orderly transformation starts as this very small dead cell becomes a polyp which is still benign although it's precancerous still denying and only over time if we let it grow that it will become cancerous so it provides us a window of opportunity to intervene before it becomes cancer and that is why it's very important to undergo screening screening colonoscopy which we recommend for everybody over the age of 50 is the gold standard and you can see here a polyp that is found on a screening colonoscopy there's a lot of talk about virtual colonoscopy which is a CT scan which mimics what you would do with a colonoscopy except you don't have to actually put a scope in it and these are side by side images of the same lesion on a on a screening colonoscopy as a virtual colonoscopy currently virtual colonoscopy is not recommended in favor over a screening colonoscopy because it's felt by experts not to be as sensitive particularly for smaller polyps but that may change in the next few years or months even as the techniques evolve here is a video illustration of a polyp that we are going to remove by an endoscopic means is the colonoscopy is a polyp you see that we just injected some saline there to lift this polyp and we're going to well I say we I'm talking about the endoscopy it was really me put a snare around that polyp and basically guillotine it off it's a lot of fun believe me and it's gone and then you take care of anything residual and that's what's left and that little hole peels over the next few days typically without any problems there are some occasional complications like bleeding and perforation which are very very rare okay finally one in weather okay well you know we're we're in that part of the anatomy so I thought I just finished it because actually a very common complaint and most people not quite sure what hemorrhoid is even though a lot of people suffer from it samurais are caused by successive straining which in turn are caused by changes in the diet or hormonal changes such as that occur with pregnancy but essentially the hemorrhoids are hemorrhoidal tissue is part of the normal anatomy of the rectum and it's part of the continence mechanism 
and actually these are very vascular spongy tissue collections that are part of the rectum it's and these veins get engorged that these hemorrhoids enlarge and then can get inflamed sometimes can prolapse and cause problems with pain irritation and itching so I just wanted to finish that and I will now open this section up for questions before we move on to the next there was a question over there the gentleman president is it present in all stomachs and if it is and why also developed in service so the question is is Helicobacter present in all stomachs and if it is why do all says only develop in a few patients both are very good questions it used to be highly prevalent in the Western world and perhaps in the generation before mine I guess the prevalence was very high but because of changes in Social Hygiene and the way we live the prevalence in Western countries has declined significantly although in other parts of the world in Latin America it is very high and prevalence rates may exceed 50% typically now we don't see it very commonly even in when we see it it can and most of the time probably does occur in an asymptomatic State most patients who have Helicobacter will not develop ulcers and only a small percentage well in fact it has been argued and many still argue that the organism is actually playing a beneficial role to some extent in patients and there is some epidemiological evidence to suggest for instance that it may actually protect against reflux even though it may cause ulcers may actually protect against reflux so it's it's one of those stories where there's probably good and bad to it but if somebody does have all says there's no and they have Helicobacter there's no question that we need to eradicate that and we eradicate that now with fairly simple antibiotic regimens question yes ma'am the question is back to going back to gallstones the risk factors are I didn't say this I'm just quoting female fat and 40 yes it's true that more women are prone to gall stones and that obesity is a risk factor and as they are more common with increasing age and actually there's a fourth F which is foot I'll so that that's sort of more equivocal whether or not that's a risk factor or not so there is some truth to that yes the back yes or Emily cook they processed they there there's probably production of some toxins with heavily cooked meat that may be harmful to the lining of the intestines when you have acid reflux this opinion about using acid suppressant as treatment yes so the question is what is the opinion on acid suppressant for reflux that is the mainstay of treatment for acid reflux is the use of acid suppressants particularly what we call proton pump inhibitors that sphincter is really to close and be suppressed it yes that's a very astute comment we are not treating the cause of the reflux we are simply making the reflux 8 less harmful by taking away the acid so it won't cause damage so there are still reflux going on but it's no longer acidic so it doesn't damage the lining which you haven't addressed the underlying problem which is your sphincter is not functioning properly some drugs are being developed to counteract that but at the present time we're really doing symptomatic therapy yes sir no it's not true and does that make you happier good the question is this spicy food cause also in fact there is some evidence that spicy food this is another constant capsaicin which acts on certain receptors lining the nerves may actually protect against also last question of medium 
one, so I can comment on that later, at the end. The last question is how does gastric bypass affect this whole process, and the first question is about the video: where does that polyp go? So we collect that polyp; we actually go in and grab it. We have means to grab it, basket it, and bring it out, and we give it to our pathologist, who will examine it very carefully to make sure there are no changes that we'd be surprised by. Okay, let's move on. So let's talk about an emerging field, the enteric microflora. You're probably not aware that you have a hundred trillion roommates, but that's true: ten times more than all the cells of your body combined are in your gut. 60% of your stool, if you were to dry it out, would be these bacteria. There are some fungi and some protozoa, but really it's the bacteria that we're concerned with, and they are actually very good for us by and large. They do many important things for us, listed here, which I'm not going to go through, but perhaps the most important is that they salvage energy: a lot of the food that we can't digest, because we don't have the enzymes, they are able to digest, and some of the byproducts of that digestion are useful nutrients for the lining of the colon, such as short-chain fatty acids. They also produce vitamins such as vitamin K; they are helpful in the absorption of iron and calcium; very importantly, they aid in the development of the protective barrier function of the intestinal wall and they maintain it (we'll talk about that in a little bit); and they also out-compete harmful bacteria. So these are your allies; these are your friends that are shutting out the invaders at the gate and telling them to stay away, because they have occupied all the posts that are there to occupy, and we'll talk about how important that is. And then, like I said, they are actually talking to the epithelial cells lining the gut and the immune cells in the gut, and maintaining our system in a very finely tuned state of health. The types of bacteria in the gut are many, but in the colon, which is really where most of the current research is going on, there are two broad phyla of bacteria: one we call Firmicutes and the other Bacteroidetes. You don't have to remember the names, except that there are two broad phyla, and within these there are nearly 500 species of different bacteria. I was going to illustrate this concept of maintaining friendly bacteria with a disease we call C. difficile colitis, Clostridium difficile colitis. You may have read about this; it's very common, one of the most common hospital-acquired infections, and it occurs because these friendly bacteria are somehow destroyed by the antibiotics that we use. So here's a cartoon illustrating friendly bacteria in the lumen of the gut; this is your epithelial lining. You come to the hospital, and either you are infected by somebody else or, typically, you're given antibiotics. Many of these friendly bacteria are caught in the crossfire; they are the innocent bystanders, and they die. Then you have this barbarian at the gate, which has been waiting for this opportunity, and it overgrows because the defenders are no longer on guard. This then destroys the epithelium, and you can see ulcers and pus in the colon, and this is the process we call C. difficile colitis. It's a very, very common cause of morbidity and one of the more serious problems that we face in the hospitalized population today. Of course there are other forms of harmful bacteria which I'm not going to have time to go into; you've all heard about
equal line uncooked meat and then the Salmonella from poultry products these call similar processes either in the small bowel or the large bowel of inflammation these are not necessarily due to problems with our own defense these are problems because we just ingested a massive dose of these foreign bacteria and they're so virulent that they can overcome whatever defenses we have so I'm going to switch a little hear about these bacteria an emerging role for them outside of the outside of the gut okay so you know we have a big problem no pun intended with obesity in this country but in most of the world now and as Americans we love to blame somebody else for this so here's a cartoon that illustrates you know what this person is going to do as far as his weight is concerned and of course the other problem is of course we don't exercise enough and [ __ ] but you know all of this is now no longer I'm and I'm joking and no longer informed because we found the real culprit something's of the bacteria in our cards this is elegant series of studies and a lot of this work is ongoing by Jeff Gordon his lab in st. Louis was shown that maybe is the change in the composition of the normal bacterial content in a column that may contribute to obesity so that they've been looking at they're studying this in mice and they show that an obese mice this ratio of these two big file of Jet Set bacteria Dedes and Fermi qts is changed and in fact this results perhaps in greater energy capture these bacteria may be more efficient at energy capture from the undigested food that we take and the theory is that over time this this energy this excess energy is available and may contribute to increasing weight in the bodies now this is controversial and still work in evolution so take it with a little bit of pinch and salt but this is a very hot area of research and this really brings us to the question of uh.what RV i had a slide here which is not come up with a coral reef for instance coral reef is not just the polyps that make the coral and coral reef is a complex very large ecosystem many many different kinds of animals and many different kinds of plants make up the coral reef and we always thought of cells as human beings as kind of single individuals but I think what this research is now telling us is that we are in fact by coral reefs it's not just us it's not just this body which these trillions of neighbors that are residing within us that may actually contribute to our health and it's not just a philosophical point it's a practical point because that means if you are going to affect the health of our bodies another target instead of just our own cells is these bacteria so if we can come up with drugs for instance and we call them drugs but anything that can manipulate this environment to a healthy healthier one we can affect our own health and so this is really going to be the next 10 20 years of amazing area for discovery and those of you watch the news should really pay attention to this because this is where some new drugs will come up with antibiotics but also perhaps drugs for diabetes obesity and this will also provide major insight into the kinds of foods that we should be eating and that goes back to the question that was raised about how much and vegetarian versus meats and so on so this is something that's still in evolution but we're all very excited by this area now we should talk a little bit about related subject which is probiotics which is now obviously you can't you can't you know walk past a 
store now without an advertisement for some probiotic or the other and these are live microorganisms which when administered in adequate amounts should confer a health benefit the two most common kinds of probiotics or lactic acid and bifida bacteria and these are all pardon so-called normal flora that we have inside us except now we are taking it in therapeutic doses and these can be done the form of active culture yogurts or just as supplements now despite all the talk and there's still a lot of research that needs to be done about the actual health benefits or probiotics and this is only the some of the only some areas where there have had what we think a proven benefit so in certain kinds of diarrhea's certain forms of inflammation particularly after surgery canada vaginosis some forms of eczema in all others the health claims are still to be really validated but here's how a probiotic can help an antibiotic related diarrhea so these are actually patients with a Helicobacter infection we're treated with antibiotics and put it either on a placebo or on probiotics and the ones were on probiotics had much fewer diarrhea days than those that dense suggesting that it is in fact restoring the normal flora to some extent this also brings up the question of fiber and how can we use food to to change the floor now some of you may know but as may not that they're actually two forms of fiber whether it's a soluble fiber and there's the insoluble fiber it's the soluble fiber that really affects the bacteria in your gut because it's the source of complex carbohydrates that they can digest so this is what we call a prebiotic effect as opposed to the probiotic effect where you're actually taking healthful bacteria in prebiotics you're taking foods or other substances that change the floor and so some of the fiber products fall in that category fiber can also improve sugar control and type-2 diabetes and then there's the other kind of fiber the insoluble fiber which really doesn't do that it's not necessarily digested by the bacteria but it does add bulk to the stools and makes the flow better improves constipation and reduces pressure in the colon and reducing pressure in the colon is important for a variety of difference but one of the my common diseases in the western country is what we call diverticulosis in which because we think because of high pressure in the colon because of constipation because of reduced stool bulk you get these weaknesses that that erupt in the wall of the stomach resulting in pouches that we call diverticula and most of the time there is symptomatic but in some patients they can cause inflammation or bleeding so that brings up the issue of okay so we have all these trillions of bacteria are living inside of us how do we prevent them from getting everywhere and this is a very important role that the got plays and this is part of what we call the mucosal defense system so the gut lining is actually quite tight it absorbs food mainly through the cellular pathways that means the food has to first nutrients have to first enter the cell and then go into the blood vessels and in between the neighbors are very close together and there's lots of proteins here that that act to form what we call tight junctions that that prevent leakage of material across this lining another part of the defense system is an immune system that lies just underneath the surface if you look underneath the epithelial lining there are all these immune cells lymphocytes plasma cells that are right there 
ready to handle any breach of this lining and attack whatever comes their way, so it's a very complex system that is acting solely to regulate this barrier. Now, there are some conditions in which something goes wrong with this function, and the common example is inflammatory bowel disease. Most of you are familiar with inflammatory bowel disease; it's a group of diseases, and we don't really know why they occur, but we classify them as either Crohn's disease or ulcerative colitis. The differences are severalfold. Crohn's is a patchy disease that affects both the small bowel as well as the large bowel; ulcerative colitis mainly affects the colon and acts in a diffuse way, either in the distal part or throughout the colon. Crohn's disease also affects the entire wall of the gut, whereas ulcerative colitis only affects the lining of the gut, and these are what the endoscopic pictures look like. The other difference with Crohn's is that it can cause painful anal conditions; it can cause fistulae, which are tracts that form between the rectum and the skin, and that can of course be disfiguring but can also be a major health problem. Now, like I said, we don't know what causes it, but we're beginning to put together the pieces over the last few years. In some patients this lining is leaky to begin with. What normally happens in health is that there is some leakiness (it's not perfect), so some bacteria or other proteins may get through, and the immune system responds, checks those bacteria, but keeps the inflammatory response under some kind of control. In patients who have inflammatory bowel disease, either this membrane is too leaky, which overwhelms the immune system, or the immune system itself reacts in an aberrant fashion and is too vigorous in attacking these bacteria, and forms products that in fact start destroying the lining itself, and that's what we think causes inflammatory bowel disease. Here is a video, for instance, of somebody who has ulcerative colitis. This is soft, normal-looking mucosa here, but now we get into a part of the colon where we start seeing ulcers, redness, and there's actual pus here coming out. You can imagine these patients: having this in the colon, they're having diarrhea, blood, terrible cramps, so it's a very debilitating and disabling disease. So, moving on to other kinds of diseases where the immune system may play a role, one of the common ones is celiac disease, or celiac sprue. For celiac sprue to occur (and there's a lot of publicity about this, and many of you are aware of it) you need two things: you need a genetic predisposition, meaning you have to have a certain class of genes important for the immune response, and you have to be exposed to this protein in wheat or other cereals that we call gluten. It's quite common, thought to affect somewhere between one in 150 and one in 100 of us, and it starts eating away at the lining of the small bowel. Here is a normal small bowel; you can see nice villi on histology. And here is a bowel with celiac disease: you see the edema, the swelling, and what we call scalloping of the folds, and on histology these villi are lost. These villi, as I mentioned earlier, are very important for absorption, and when you lose them you are no longer able to absorb those nutrients and you have diarrhea. But diarrhea may actually be just the tip of the iceberg; these patients may have silent or atypical disease, and there's an increasing and growing list of non-intestinal
manifestations that are linked to celia such as arthritis liver disease or even sort of neurological disorders even including autism the problem in celiac is the gluten the antigen is is present in a lot of foods a lot of foods that you don't even suspect so it's actually very difficult to maintain a strict gluten-free diet but people are working on other ways to treat this so it's I just want to end the section by saying it's not a true allergic disease though many people think of this as a food allergy is not a food allergy the body is not reacting directly to the gluten the gluten is triggering an immune response which is somehow wrongly directed at the body's own intestine and that's different from a true allergy we do we have started seeing increasingly true allergies in the in the bowel affecting the BAL I'm not talking about systemic food allergies like peanut allergy which cause systemic problems but here's an example of what we call Hughson if you like esophagitis first recognized in children but now increasingly being recognized in adults where's some unknown antigens perhaps in the food or in the air that we breathe are triggering this immune response and associated with the presence of these inflammatory cells that because using the phils and you can see the lining of this off because normally should be very smooth is corrugated what we call a cat's esophagus or a feline of stuff because because that's what it looks like and cause all sorts of problems with swallowing and pain so I'll stop here very quickly for any questions and then we'll just have another five or ten minutes left question sir the question is what leads to incontinence in men or women it's a good question it's a it's a it's a not as straight not there's not one answer to this in in women one of the commonest causes is childbirth either multiple childhoods or trauma sustained during childbirth either a perennial injury with an epi Ct episiotomy that was too deep or just traumatic livery and it was tearing off the pelvic floor muscles involved in maintaining countenance in some patients there's some nerve injury patients who have diabetes have combination and perhaps nerve injury and some muscle injury so this it's it's very difficult as we grow older some of the nerves that are responsible for maintaining the muscle tone get are either lost or not as vigorous as they used to be so there's a combination of factors and really depends on each patient has to be individualized yes one more question the composition is there any research going into the composition of an optimal stomach and you want to clarify you in terms of size I was talking about the bacteria what should be a normal well yes in fact David relman from Stanford has done a lot of work on this and he's still engaged in carefully typing the kinds of bacteria that are present in normal individuals and a how they evolve from sort of child birth all the way to adulthood because when you're born you're born with a sterile gut and as you start drinking your mother's milk you start getting bacterial entities which is one other healthful reason for breastfeeding and so yes there's a lot of work that's being done just that in fact the NIH has commissioned tens of millions of dollars to to sort of really map out what they call the human microbiome which is you know you've heard about the human genome project we just map out all the genes and to characterize the genes and this is to characterize the bacteria both in health and disease so it's a work in evolution 
I think the next ten years you'll see a lot of that yes ma'am that so the question is how do you cure diverticulitis so I want to distinguish diverticulosis from diverticulitis diverticulosis the condition in which you have these sacklike outpouching x' and then one of the complications is diverticulitis there may be a minor leak from that pouching and there is an inflammatory response and that's diverticulitis and that you treat typically the first part with antibiotics and it usually gets better if it happens in a recurrent basis you may have to take that segment of the colon out diverticulosis per se if it's asymptomatic you don't necessarily have to do anything it's once we don't think it's reversible you can do stuff to prevent it and that has to increase the bulk of your stools and typically we do that by increasing the fiber so it's a it's a much part of a much broader question that I would probably try to say to the end it's a connection [Music] where do some of the insults that occur in the rest of the body originate there's a body of thought that not just for autism for Parkinson's for instance they may be insulted in the form of foreign proteins or bacteria that are getting access to this to the body and may be eliciting the immune responses or by direct invasion causing problems we will talk about that there one last question ma'am and then knowing we want the doctors name that's doing the typing of the types of bacteria here at Stanford David relman Arielle MA and okay okay let's talk about the brain gotta access a little bit and that may have some relevance to this question that was asked so how do these two brains actually it was me left to me I would switch the labels around but most people won't let me so how do they talk to each other that's what we call the brain got access so they communicate why are hard ones that's one way and a somebody had asked a question earlier what makes you hungry so one of the hormones that makes us very very hungry is ghrelin it's produced by the stomach and it's the most potent factor that drives hunger that we know of it signals to the brain makes the brain act in a certain way to get energy get food but also signals to the rest of the body directly or via the brain to alter the metabolic status the alter the energy status of other tissues it is fat or skeletal muscle and it's communicating this along with other signals that are coming from the adipose tissues is this other hormone called leptin which tries to counteract some of this which is an anorexic harmone that means it decreases the appetite so together these signals act in the brain particularly in the hypothalamic area and regulate whether you're hungry regulate how much we eat how full we feel after a meal and what we do with that energy how is it stored where is it stored and what are the consequences of that the other way that the brain and God communicate is through nerves now there are there's another nervous system we talked about two nervous system the central nervous system the enteric nervous into the third nervous system which is the autonomic nervous system which actually is responsible for communication between these two brain as that's classically divided into the sympathetic and the parasympathetic nervous system we don't have to go into details but one of the more important parts of the parasympathetic is this nerve called the vagus nerve which you can see here from a rat stomach is in intimate relationship with the wall of the stomach and is gathering information 
and how much the stomach is distending is gathering information on what the chemicals inside the stomach are what the pressures are and conveying them back to the brain where it can trigger reflexes such as satiety start feeling full and if you too much you start feeling nausea and other reflexes in fact this nerve can be stimulated artificially and there was a company that was actually doing this for treating depression so they were planning electrodes on the vagus in the neck and this was approved by the FDA not just for treating seizures but for depression and again sort of lends credence to the fact that what you eat may in fact influence how you feel so this is actually another very exciting area of research that we're doing this Vegas is largely sensory but it also carries impulses in the other direction from the brain to the gut and through other communications can affect whether the gut contracts or not here's an experiment that was done in the 50s I think on a medical student in a prominent medical school not Stanford thank God but here's a student was brought in for a experiment by his professor they put this large sigmoidoscope in the rectum and monitor the pressures in the rectum and about 5 minutes into the procedure everything was going fine you know the contractile state of the coils relaxed as relaxed as you can be with a big rigid tube but then this professor skulls out and says wow look at that bad ugly cancer and you can see now just hearing that the colon and the rectum of the student started contracting like crazy and then a few minutes looked at that let this subject suffer a little and then they explained that up he's just killing it's a hoax and and of course the contractile state of the clone came back to normal now what the student did to the professor will never find out it wasn't part of the published literature but this is an illustration of how the brain actually can affect you and this is I think the basis of gut feelings we all have felt something like this and we're now beginning to understand how these got feelings occur what is the biological basis of this again the brain is producing certain hormones for instance if you feel nauseous under stress some patients some people do just feel nauseous under stress feel just that they're having a bad case of indigestion that ulcer like feeling is not really an ulcer and that's the stomach contraction slowing down the gastric emptying is slowing down makes you feel full and nauseous and that is part of the vagus communicating to the stomach in it through this hormone that we call C Rho is quite extern releasing hormone which is part of the stress system in the brain in the colon the same hormone produces increased contractions and diarrhea and this is why typical you know right before a test you have to go to the bathroom or before a stressful situation and that's the basis of this so this is really the gut feeling that we all experience finally there's another communication between the gut and the brain and that's through the pain sensing fibers so these nerves are not really involved in pain as much as these is go to the spinal cord and sense pain and this is or during conditions such as the irritable bowel syndrome where we know that patients are much more sensitive to distension of the colon so we think that their pain fibers or the pain sensing system is hyperactive so if you distend a normal volunteer again one of the things gastroenterologist like to do is put a balloon and test people the responses to it 
you can see that normal volunteers don't really report a lot of pain whereas if you have IBS or irritable bowel syndrome you're really shooting off the chart here in terms of your pain response on this occurs an association of the change in bowel movements and this constellation of symptoms is part of the edible bowel syndrome we now know with new research that the brains of patients with IBS actually react differently to GI stimuli so it's really you know I used to be said it's in your head as sort of a pejorative way of saying you're making it up for patient but now I know it's really in the head but there's a biological basis for this and if you do PET scans or functional MRIs of patients you can see different parts of the brain light up in response to the same stimulus in patients with IBS and with different in densities we're not beginning to understand what is the biological basis of that that's not the only problem it'll bowel syndrome is a very complex syndrome it's actually a very good example of where both the brain and the gut are affected you have problems with the brain you have stress you have a hypersensitivity of the nerves you have alteration of bacteria actually in the colon which is becoming an important factor and you have alterations in the immune system the motility and we still don't know how all these pieces come together to make up irritable bowel syndrome where there's active research going on in this field so in conclusion I think we've hopefully had a good overview of the structure and function of the gut particularly in regard to the nervous system I hope you have better understanding now of the meal that you're digesting which is now making its way to the last part of your small bowel doctor birthing guts lining and hopefully many of you haven't had reflux I talked about this ecosystem that's within our body which is really really important talked about the guts immune system I talked about the brain got it they started off by saying gastrointestinal system we've really scratched the surface obviously in an hour and a half that I've had you can see how complex it is how many different systems are involved it's unlike any other GI any other system in the body and I hope I've convinced you that it is the most important system in the body now if we have little time so I'm just kinda just one more slide I want to show you because I think it's important as we go forward in terms of the specialty of gastroenterology we I've Illustrated a lot of the points with endoscopy have shown you how endoscopy so useful diagnostically and I shown you how useful it is therapeutically in some instances and a lot of ways where the field is going is this minimally invasive way putting a scope through your mouth which will your rectum is going to replace a lot of the traditional surgery many of you may recognize this figure this is doctor dr. 
Halstead father of modern American surgery and he was the one who really sort of came up with the concept of sterile surgery using rubber gloves using carbolic acid using well not he was not carbolic as they were using rubber gloves and using as special rooms which were in a sterile environment and he start generating and generations of modern surgeons throughout the world I bring him up because we talked about the gallbladder the gall stones and Halsted was actually called by his mother because she was having this pain on the right side and he went to her and immediately diagnosed her as having a red-hot inflamed gallbladder put her on the kitchen table took out his kitchen knives sterilized them as best as he could and open up the abdomen giving her some ether and took out the gallbladder and drained it of his pus and probably saved his mother's life so that was the beginning of cholecystectomy actually back then since then it's evolved to this laparis Copic form of Sochi ironically by the way as an aside halstead himself died of complications of cholecystitis he had the operation but then he died post referral which is one of history's irony's so we have traditionally been going through the skin the wall to get at these organs and remove them and a few years ago a group of us came up with this concept of trying to do this through the mouth or through the rectum and here's a cartoon that illustrates this here's a scope that's going through the mouth down into the stomach making a hole in the stomach push the scope out and then going towards the gallbladder dissecting it and then cutting it off and removing it and bringing it out through the mouth so actually since we started doing this an animal several years ago now there are a few hundred patients that have been done mainly through the vaginal not through the stomach as much tall though if you have been done for the stomach the stomach is technically more difficult because we don't have the right tools but this whole process this whole approach is called notes natural orifice transluminal endoscopic surgery I'm going to show you an example of the first human case of notes taking out the appendix through the stomach so they this is now you're already in the abdominal cavity you've already made a hole in the stomach the scope is out there right here in the right lower quadrant where the appendix is and you're putting in a tool out there you're cutting away at the base of the appendix just like you would remove a polyp you can actually remove this your snaring that and you're taking it out cutting it off and you're capturing it and this was done by actually dr. Reddy and Rao in Hyderabad India and once you have that cut off you just here you get the area and you capture the appendix and you pull it out of the mouths okay how many of you still hungry alright so I hope you've learned a little about GI diseases thank you very much [Applause] [Applause] well I think you must agree with me that this was really quite a tour de force I do take note of the fact that we've been here for almost two hours which means that you have about two hours to get home for your next other meal or evacuation your choice but I think dr. bouche Rica thank you very much for that wonderful presentation for more please visit us at stanford.edu
Medical_Lectures
Acute_Coronary_Syndromes.txt
All righty Why don't we [...] let's gather let's begin to gather please thank you ok so we thought we've set the stage for this review by talking about chronic coronary disease this morning and EKG yesterday so now we're going to talk about acute coronary syndromes and these are my disclosures keywords you learn some of these yesterday ST elevation mi non-st emi acs acute coronary syndrome cardiac biomarkers how do we treat it what are some of the complications if you have heart muscle death by hypoxia so the objectives of this lecture are to learn about how the EKG helps in dictating early treatment how do we use cardiac biomarkers I want you to be familiar with the stages of therapy for acute coronary syndromes and I want you to be familiar with most of the mechanical complications of this problem ok so we'll start by talking about the pathogenesis I've introduced this already and we'll talk about clinical features treatment complications and then how we stratified people after they've had an acute coronary syndrome and then treat them ok how many let's see how many cardiac deaths will there be in michigan today any idea about 85 about 85 most of them are this most of them are acute coronary syndromes do any of you know person who's had a heart attack just raise your hand that would be most of you more than half i do took the life of my grandmother several uncle's three uncle's to be exact addict you sudden death from acute coronary syndromes so I'm real interested in this topic because it hits close to home we're going to talk about normal homeostasis and then we'll talk about some of the and doggedness and tie thrombotic mechanisms that we all have that try to help us not have acute coronary syndromes what actually happens what are some of the non after Scott clauses cause of heart attacks ok so most of the time and acute coronary syndrome is caused by a more than ninety percent obstruction to coronary flow usually it's a plaque that's ruptured and it's got platelet clot on it and it may have fibrin clot and it's causing at intracoronary thrombus ok we're going to talk about the concepts of clot formation and the continuum from unstable angina all the way to st elevation mi this is a continuum several of you ask me about this during the blade break we're going to talk about that continuum you might say well why doesn't coronary stenting prevent heart attacks you've got an eighty-percent blockage causing some angina and you stand it why doesn't that prevent a heart attack and the answer is very curious the plaques that are most likely to rupture are mild there there typically less than fifty percent they have a thin fibrous cap a lot of lipid and they rupture during stress this has been the real confusion for my specialty over the last 30 years starting to realize that you know when you get Angela we find a blockage and we fix it your engine is better but the lesions that we're going to cause next week's heart attack often are not the lesion we fixed but there's 25 other moderate plaques in the coronary tree and one of them is heating up and it's vulnerable ok so acs the whole thing here is the idea of a vulnerable plaque rupture and it's often not a severe narrowing it's often a milder known it shows you the Cascade the plaque ruptures either from external stresses or internal enzymes cholesterol content is is available and the platelets adhered activate and aggregate then if this goes on for a while the clotting cascade is activated forming thrombin and then we see downstream 
myocardial ischemia and then myocardial necrosis. Okay, so: plaque rupture, platelets, thrombin, ischemia, and infarction. All right. And there are various things that affect the platelets, various receptors, and we're going to talk about them; and fibrinogen is converted to fibrin by thrombin, forming a fibrin clot. This is the spectrum of acute coronary syndromes, and it's very interesting. You can have unstable angina. What does that mean? Well, the patient comes in and says, you know, I used to get angina just when I walked the golf course, and yesterday I had three episodes just sitting watching TV. That's unstable angina: it's gone from a stable pattern to something far more worrisome. You can have non-ST-elevation MI. This means typically ST depression, but not always; there's a symptom, and we measure the biomarkers of necrosis and they have a release of troponin. There's a heart attack here: there is a symptom and a biomarker. It's a heart attack, not just angina, but there was no ST elevation. And then the other end of the spectrum is the ST-elevation MI, which usually leads to Q waves eventually. We talked about how you see that ST elevation, and often soon thereafter we begin to see the formation of Q waves on the EKG. Okay, so the spectrum of acute coronary syndromes goes from unstable angina all the way to ST-elevation MI. What happens with normal hemostasis? Well, the vessel wall is injured, and the first defense is platelets: they rush to the scene and try to wall off the defect, forming a platelet plug. There's a second defense, in the subendothelial layers: tissue factor activates the plasma coagulation proteins, and the result of secondary hemostasis is a fibrin clot. It's very interesting: if you take a patient who's having a non-ST-elevation MI, go down into their coronary with a scope, stop blood flow, and take a picture, the clot looks white. What's that? Platelets. In most patients having an ST-elevation MI, if you go down the coronary vessel and take a picture, the platelets have been followed by a fibrin mesh, and now the red cells are getting stuck in it and it looks red. It's a thrombin clot, a fibrin clot, and the red cells stuck in it create a red thrombus. So there's this interesting continuum: it starts with a platelet plug that looks white, and as the fibrin mesh grows and the clot gets larger, we start to see red cells entangled in it and it appears red if you look at it. Okay, that's a fibrin clot. So how do we inactivate clotting factors? We have various mechanisms that are trying to make sure we don't clot off our vessels during a lecture, right? Antithrombin III, protein C, protein S, thrombomodulin, tissue factor pathway inhibitor: these are all endogenous things that try to inactivate clotting factors. We have in the lining of our coronary arteries a substance that tries to lyse clot; it's called tissue plasminogen activator, tPA. tPA is used externally to bust clots, but you actually have some in your coronary arteries right now. It's present there, and its job is to clean up any unnecessary clot so that you don't occlude a blood vessel like a major artery. And of course we have endogenous factors involved in platelet inhibition and in vasodilation, prostacyclin and nitric oxide, and these are especially going to promote vasodilation. You will not be asked to know this slide, but it illustrates the complexity of the endogenous mechanisms that are present, whether we're talking about antithrombin III or protein C or protein S inactivating clotting factors, or prostacyclin
and nitric oxide trying to vasodilate the arteries so that the clot is less occlusive. Understanding that these various mechanisms are at play is important. If you think about plaque rupture, I think about it in very simplistic terms: physical stress, emotional stress, a vulnerable plaque, and then there are inflammatory cytokines that may be at play. What's a good example of that? Influenza, right? Influenza releases things like IL-6 and other cytokines. What do they do? Well, they make you shake and shiver and feel like your muscles are dying, and they also destabilize plaques. So if you say, well, what's with this influenza vaccine thing: if you take a town like Ann Arbor and vaccinate everybody for influenza, we reduce heart attacks by a lot, twenty to thirty percent, during flu season. One of the most common causes of death during the 1918 flu epidemic was myocardial infarction, not necessarily caused by fever or tachycardia or being dehydrated, but actually by the cytokines themselves interacting with a vulnerable plaque, leading to plaque rupture. Okay, so these are all important in plaque rupture. Atherosclerosis, then plaque rupture: maybe we just get a little intraplaque hemorrhage and it reduces the lumen diameter and we get a little coronary thrombosis. Plaque rupture can lead to release of tissue factor, and this can activate the coagulation cascade, leading to coronary thrombosis. Maybe there's exposure of the subendothelial collagen, and this activates platelets: adherence, activation, aggregation. Or maybe there's just turbulent blood flow, and that turns on platelets. All of those can lead to coronary thrombosis; all can be involved in a plaque rupture. There are other patients who just have a dysfunctional endothelium: they vasoconstrict and they get coronary thrombosis (this is rare as a single isolated cause), or their antithrombotic defenses are impaired, leading to coronary thrombosis. Probably for most patients many of these mechanisms are at play: a plaque ruptures, there's some vasoconstriction, the platelets are turned on, and so forth. Okay, but these are the mechanisms of how stable atherosclerosis leads to coronary thrombosis. The consequences of this are multiple. Most people think the most likely outcome of a coronary plaque rupture is nothing; it heals itself. I might be having one right now, I don't know; public speaking is very stressful for the coronaries. If you take patients with angina, have them do public speaking, and do nuclear imaging on them, they actually have more ischemia than when you put them on a treadmill. It always makes you think twice about lecturing, you know what I mean, but I got over it, right? So most of the time we heal these. If you look at people who die in car accidents, a not-insignificant percentage have a plaque rupture that has just recently healed itself. So we have plaque ruptures from time to time; most of the time our body deals with them and we heal the plaque. It probably enlarges a bit after it's ruptured, but it was a small clot, there was no change, there was no symptom, and it healed. If it's partially occlusive, we often will see transient ischemia. Most of these are not going to give ST elevation; they're going to give ST depression or T-wave inversion, and they may lead to unstable angina. In that case the testing we do for necrosis is negative; you've got to have released troponins or CPK to call it a heart attack. So they come in with unstable angina and ST changes, but their biomarkers are negative, and we're going to call it unstable angina. If the biomarkers turn positive we're going to call it a non-ST-elevation MI. They
might have had ST depression, they might have had T-wave inversions, they might have had just a little flattening of their T waves on the EKG, but they had a symptom, a partially occlusive thrombus, and a biomarker. And then this extreme is an occlusive thrombus: prolonged ischemia, often ST elevation, leading later to Q waves. Their biomarkers are up and they're having an ST-segment-elevation MI. I told you last hour we could have transient ST elevation on a stress test that you stop, and it goes away; they were probably having transmural ischemia, but it stopped and they didn't release biomarkers, so they didn't have a heart attack. That's rare, though; most of the time when we do stress testing, if it's positive we see ST depression or sometimes T-wave inversion. Okay, so: no consequence, unstable angina, non-ST-elevation MI, ST-elevation MI. That's the spectrum of what can happen when you have a coronary thrombosis. The cause of acute coronary syndromes, by far, is atherosclerosis, but you'll see patients with vasculitis, like lupus, systemic lupus erythematosus, SLE, who have a vasculitis of their coronary arteries. They can have a coronary thrombosis from that; they can have a myocardial infarction. You can have a coronary embolus. You might see this next year in the ER: you're sitting there, you've got a 24-year-old drug addict who's got a high fever, and suddenly they have crushing chest pain and ST elevation. What's happened? They may have an infected cardiac valve, an embolus went into the coronary artery, blockage, and suddenly they've had a heart attack from a coronary embolus from endocarditis on a valve, induced by repeated injection in the drug user. Okay, so it is possible to have coronary emboli: endocarditis, artificial valves. Some people are born with funny coronaries: you've got a left coronary that actually goes between the aorta and the pulmonary artery, and when you exercise both vessels dilate and they push the left main or the left artery together and it blocks, causing a heart attack. That would be a common cause of a heart attack in a young person. A nine-year-old with a big anterior wall MI probably has a funny coronary, an anomalous coronary artery coming off in a funny place. It's possible to have coronary trauma: blunt injury to the chest can tear the coronaries. Then there's coronary spasm: this is what happens with Prinzmetal's angina, which I mentioned, but also cocaine. Rarely, blood viscosity: if you get huge levels of red cells or platelets you can clot off a coronary artery. That's really rare; you might see it in advanced leukemia sometimes. And increased myocardial oxygen demand: if you've got severe aortic valve stenosis, the heart muscle is very thick, the valve is tight, and the coronary blood flow is low relative to demand, you can have a heart attack not caused by a coronary lesion but by a severe valve lesion. Okay, most of the time patients with aortic valve stenosis come in with angina, not heart attacks, but it's possible. Okay, so what determines how much injury there is? Well, there are a lot of things. The first question is how much of the heart muscle is being perfused by the artery that's being blocked; how long is the duration of the lack of blood flow; what are the oxygen demands of the tissue affected; do you have collaterals; and are the tissues young or old? Old heart muscle does less well with ischemia; in young heart muscle the compensatory mechanisms to deal with injury are better than in a patient who's older. Okay, so these are some of the mechanisms that determine how much injury there is going to be. Okay, clinical features, all the way
from unstable angina to st elevation mi that is the continuum unstable and what is it well it's if they had prior answer C stock and I used to get Angela three times a month and i would take nitro but you know what in the last three days I've used 10 nitroglycerin it's coming on quikr it's coming on and it's more persistent now i'm needing to nitroglycerin is to get rid of it or you know what I'm having it rest i'm giving this lecture i'm feeling it you know that would be if it's increased in frequency duration intensity with prior on stable stable and we're going to call that unstable anjanette addressed well that's probably unstable this is a patient who may go onto a heart attack in the near term and we tend to call any new onset of Angela unstable because we don't know what's going to happen they probably have a plaque rupture their vessel has changed we don't know if they're going to heal it and be asymptomatic or occlude and have a heart attack so nuanced and we treat all of them like you know what this could be unstable could be an unstable plaque and I when I get a patient watching my office and so you know last night I started having chest tightness 30 minutes went away this morning at two more episodes boom to the hospital by ambulance they don't drive you got to take them because they may be having new-onset Angela they may and fart in the next 24 hours how do we diagnose acute MI we use our history and exam we use the EKG and use serum biomarkers okay what are the symptoms we talked about that pressure now so it's interesting some patients say you know I'm having this burning in my chest you always ask is it acid or is it heat coronary simple are more typically hot an acid not always but if they say and I've got this burning pain that well is it acid is that's it's hot is like somebody's putting a poker in here that moves you toward myocardial infarction and maybe away from esophageal reflux which is more acid we talked about radiation chest arms job back I've even seen a woman who came in with bilateral ear low pain that I started having pain in my earlobes and I wondered if I had my big earrings on so i looked in the mirror I didn't have any earrings on what could it be he's having an inferior wall my cardio infarction I've never seen anything like it usually it's in the job but that was a strange one bilateral ear low pain if you see one of those let me know will write it up together we're all about academics here right sympathetic response so most patients when they're having a heart attack they don't feel good often they're sweaty they may be tachycardic they often will be cool or clammy they may drive parasympathetic they may have nausea vomiting or weakness it's interesting anterior wall that's led left anterior descending artery myocardial infarctions tend to have more sympathetic activation proximal right coronary artery occlusions tend to have more parasympathetic activation they may have inflammatory reaction with mild fever sometimes they're short of breath they may be almost as a symptomatic so blood pressure goes up in an TMI it often goes down and inferior wallet sympathetic nervous system parasympathetic nervous system heart rate usually up with answer often down with inferior the right ventricle if the right ventricle is hurt and the right atrial pressure goes up and you can see it in their next things you know when you walk around in a cocktail party suddenly you see somebody's got these bulging neck veins you want to say sorry man but you're in right heart 
failure but you don't but you know I have to say I notice a lot of funny neck veins in my life a right ventricular infarction caused an elevation in the jugular venous pulse so somebody says that its patients having an inferior wall my you walk in the room and you see these neck pains balancing the earliest i think they're right ventricles involved right atrial pressure is sky-high here I mean you're right atrial pressure is what three if I lay you down and mash on your liver i might see it this right atrial pressure is like 18 and you can see it when they're you know sitting there and you're you said well serious neck veins here sometimes those three ball veins are bulging that really looks bizarre you know it's kind of like Spock you know I mean doesn't Spock have big neck veins in his head I don't remember anyway pay attention of the neck veins if they're up your thinking the right ventricles involved RV pressure overload leading right atrial pressure road leading to juggle vs pulse engorgement sometimes you can feel the heart attack now when you palpate the heart you're trying to feel the left ventricular what you're trying to feel anything that's creating an impulse let me go ahead and go on one of my favorite trance right now when you learn physical diagnosis you're going to learn something called p.m. i have you ever heard of PMI point of maximum intensity please erase it from your brain what you want to do you want to feel the heart you want to feel for the left ventricle which is out here under the nipple the right ventricle typically is along the sternum the pulmonary artery may be in the right or left second inner space and the aorta is up right by the sternal notch I want you to tell me what chamber you're feeling you can tell me which one is most dominant but the term PMI is not very helpful when you round with me next year I'm going to say okay tell me about palpation ok left ventricular apex what was it like was it sustained was a big o you felt the right ventricle it was lifting you felt the PM pulse that's unusual as probably pulmonary hypertension you felt a pulsatile aorta sounds like a Nordic aneurysm the PM is kind of like oh I'm going to feel PMI it's not very helpful typically it's the left ventricular apex and typically it's happening during what is that I so voluminous contraction the heart the left ventricle starts to tense it rotates up at taps themselves yep i'm here and then it goes away during systole it's the size of a quarter or smaller mi doesn't i just don't like it it's not specific so you sometimes will feel a left ventricular impulse if the anterior wall is dyskinetic when the heart is supposed to be moving away from the chest wall it's actually coming to you you you feel this this pulsation along the storms a well during systole I'm feeling your heart impulse it's dyskinetic instead of contracting its expanding during systole that would be a left ventricular bulge you can hear gallop a fourth hard sound is from left ventricular stiffness the third heart sound is from left ventricular fatigue and it's very important that I adopted at that others have sound after the second heart sound early and s3 gal indicates usually major lv dysfunction sometimes you hear murmurs if one of those 2 papillary muscles is knocked out and the valve the the mitral valve might open during systole and suddenly you have a whole of systolic murmur your listing this person that did you know you have a murmur no never had a member before could have anything to do with a heart 
attack yes one of your papillary muscles might be infected or ruptured and your mitral valve is leaking ventricular septal defect if the heart attack has blown away part of the septum then they might get left-to-right shunting and you hear this loud systolic murmur right over the sternum well where'd that come from myocardial infarction causing a VSD giving you a murmur obviously any of these mechanical lesions are associated with the worst prognosis because they can affect forward output a great deal the differential diagnosis pericarditis is typically sharp erotic pain patient wants to sit up when you listen there's a friction rub the EKG can show st elevation but it's not like a tombstone it's not convex its concave it looks like the prudential rock you look at the gate well you got 11 of 12 leads with st-elevation you want to sit forward because every time you take a breath it hurts and your medical student it's not an ST elevation mi its final pericarditis happens in the fall very painful ok ok order to section that's a very dangerous lesion you're going to hear about that crosses an instantaneous onset of pain you might see post emphasis or made blow out the aortic valve often the aorta on x-ray looks wide pulmonary embolus causes paretic pain often shortness of breath usually a reason for clotting like Oh had hip surgery last week I saw a yellow football player when I was in residency injured his knee got surgery was rehabbing came with crushing pleuritic chest pain it wasn't myocardial infarction he was having a pulmonary embolus pneumonia i think one of the one of the three Williams right he had she had knee surgery or something that she had multiple pulmonary emboli after her knee surgery she nearly died she had really big problems recovered wonderfully and won Wimbledon and gold medal amazing gastrointestinal esophageal spasm is retrosternal burning pain acid after meals or at night typically so the diagnosis how do we diagnose it unstable angina symptoms crescendo rest or nuance tangina no biomarkers right the biomarkers are normal often the EKG shows ST depression or to inversion non-st elevation mi often chest pain or more severe angina yeah they got biomarkers and usually the EKG is abnormal st depressed or teeway flip but not always you can have a non stemming just with symptoms and biomarkers rarely the EKG doesn't show it rarely usually a circumflex lesion with its way back here and we're looking on the surface here with occasional not seen it that's rare and then st elevation st elevation biomarkers yep symptoms biomarkers st elevation and EKG that's just any and that wants to be reperfusion right away we talked about this yesterday the evolution of st elevation mi ok most patients will end up with two ways but not all how do they treat heart attacks and television anybody I have an idea well they have a coronary care unit on wheels so instead of the patient coming to you you go to the patient doctor nurse Angeles driver you're you're on your way they send the EKG electronically if it's st elevated and they think they're pretty certain it's a heart attack it's not a medical student with product s pain it's a personal risk factors having crushing chest pain st-elevation they give thrombolytic drug in the home if they get there within 20 minutes a lot of these patients don't up have any damage their biomarkers don't rise they actually prevent the heart attack and they may not have a cute wave their QA may not occur because you open the artery in time ok so the Q waves that 
So the Q waves that form have to do with the duration and severity of the myocardial ischemia and infarction, so they don't always happen, but most people who have an ST elevation MI will end up with some level of Q wave formation. Acutely you can see T-wave changes: here's a normal one; when they're ischemic the T waves might be depressed or inverted; and then when the ischemia goes away it might normalize, or if they had an infarction it may normalize too, and typically those patients don't have Q waves. Non-ST elevation MIs often do not end up with Q waves.

All right, what about the biomarkers? When you have myocardial necrosis it causes disruption of the sarcolemma, and this releases macromolecules into the circulation that we can measure by taking blood samples. Their pattern and how high they go has something to do with the size and timing of the heart attack. That makes sense: if dying cells are releasing proteins and you're measuring that, there should be some correlation between the amount and timing of the damage and what the levels look like.

The cardiac-specific troponins are the most commonly used biomarkers for acute coronary syndromes. Troponin is a regulatory protein that you've probably studied already; it controls the interaction between actin and myosin, the sliding elements of the myocardial cells. Troponins have three subunits, C, I, and T, and they're present in both skeletal and cardiac muscle, but there are unique cardiac troponins, both I and T, and these are typically absent in the serum of normal, healthy medical students watching a lecture. You probably don't have much, if any, troponin I or T being released right now; if you do, then I stressed you unusually and I apologize. This is a powerful marker of myocardial damage: it rises at three to four hours after the infarct, peaks at about 24, and then it declines over the next week or week and a half. So troponin is what we typically use: at our hospital we'll get one when you're admitted, again at either 8 or 12 hours, and again 12 hours after that. It is important to note this: it takes a while for the troponin to be released. So if this young lady is having a heart attack in front of me and her troponin is normal, I should not be reassured by that; it's too soon. Now, if she started having symptoms last night and she's in front of me today, her troponin should be up; it's been there for 12 hours. But it is important for you to know that early in acute coronary syndrome presentations the troponin may be normal, because it hasn't been released yet.

We also can use creatine kinase, an enzyme that helps regenerate ATP. CK is found in many tissues, including the heart and brain, and can be elevated with injury to any of these. There are three isoenzymes, MM, MB, and BB, and it's the MB that we're interested in. Typically MB makes up a very small fraction of skeletal CK but a much higher fraction of cardiac CK, so if we measure CK-MB in the blood and it's elevated dramatically, it's probably indicative of myocardial damage. It rises at 4 to 8 hours, peaks by about 24 hours, and then returns to normal in about three days. This actually explains why we tend to measure both at our hospital: we get both troponin I and CK-MB. The reason we do that is that the troponin kind of stays up, while the CK comes up and goes right back down. If they have another event we may want to know: did they have another heart attack, did they extend their heart attack with this chest pain, or was it just a little bit of angina after the heart attack, maybe some vasospasm after we put in the stent? So we tend to use both biomarkers at our hospital.
Some hospitals only do troponin, but we tend to do both, and largely it's for this reason: unlike troponin, the CK comes right back down and can be used again if there are intercurrent events. Here's the curve. Some places use myoglobin, which goes up very fast and then comes down; most places use a troponin. When I was an intern we used LDH, which peaks very late, and you might occasionally see somebody who comes to your clinic and says, "You know, last week I had a really interesting scare, doc, I had crushing chest pain for 24 hours," and you do an EKG and they've got brand-new Q waves all across their heart. You may do an LDH to see if it's still elevated or not, but it's not very useful for the acute phase, so we tend not to use it very often.

What about treatment? Let's move on, then, to treatment. I said yesterday that the irony of the EKG is that it's the single most important test to determine early therapy for a mechanical event in a coronary: we've got an electrical phenomenon telling you what to do about a mechanical problem. It's amazing. So, treatment. We want to use anti-ischemic drugs, we want to get rid of ischemia: beta blockers; if the patient has ongoing pain or EKG changes we're going to use nitrates; they can use beta blockers; we may use a calcium blocker, but not if they have heart failure, because if they're showing signs of heart failure a calcium blocker makes things worse, so we would not add one. We want to control their pain: pain drives catechols, catechols drive myocardial oxygen demand, so we want to get rid of their pain, typically with a narcotic like morphine. If you do an O2 sat measurement on their finger and they're low, we're going to give them oxygen; otherwise we probably don't have to. If they have a 99 percent oxygen saturation, oxygen is not going to do much; it can't do anything except increase the expense.

Antithrombotic therapies: everybody gets aspirin. Now why is that? Because aspirin opens up the artery? Does aspirin promote opening of the artery? Yeah, it's almost like, wait, did you just say that? Aspirin is not a clot buster, but as an antiplatelet agent it matters. There's this war going on on the coronary plaque: the body is trying to open the vessel and the injury is trying to make it clot, and being on aspirin shifts the balance toward opening the vessel and keeping it open. By making the platelets not clump, you're moving the balance toward opening the vessel. You are increasing the bleeding risk a bit, but aspirin tends to be very helpful, and it's often forgotten; about ten percent of the time it's forgotten, less so now. Clopidogrel and prasugrel, these are other antiplatelet drugs you're learning about that affect the platelet in a different place. For selected patients we'll use a third antiplatelet drug, a IIb/IIIa inhibitor, and usually this is going to be for patients who are going to the cath lab right away. So we're going to use at least one of these antiplatelet drugs, often two. We'll also use an anticoagulant that affects the thrombin cascade: it could be low-molecular-weight heparin or IV unfractionated heparin, and we have two other choices these days, fondaparinux or bivalirudin, that are also effective on thrombin. So we're using antiplatelet drugs and anticoagulants for patients with heart attacks. And then adjunctive therapy: statins for everybody, right away, start the statin tonight. It affects risk within 24 hours; it's amazing. It's not just about cholesterol with statins, it's also about inflammation; statins tend to help the health of the blood vessel.
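As a rough companion to those numbers, here is a minimal sketch in Python of the biomarker timing just described. The time windows come straight from the figures quoted in the lecture (troponin detectable at about 3 to 4 hours, peaking around 24 hours, elevated for a week or more; CK-MB rising at about 4 to 8 hours, peaking around 24 hours, back to normal in about three days). The function name, the dictionary, and the example time points are invented purely for illustration; none of this is clinical decision logic.

# Illustrative sketch only -- not clinical logic. Windows (in hours after
# the infarct) are the rough figures quoted in the lecture.
BIOMARKER_WINDOWS_H = {
    "troponin": (3.0, 24.0 * 10),   # detectable ~3-4 h, stays up a week or more
    "ck_mb":    (4.0, 24.0 * 3),    # rises ~4-8 h, back to normal in ~3 days
}

def expected_elevated(marker: str, hours_since_onset: float) -> bool:
    """Would this marker plausibly still be elevated at this time point?"""
    start, end = BIOMARKER_WINDOWS_H[marker]
    return start <= hours_since_onset <= end

# The teaching point: a normal troponin drawn in the first couple of hours
# does not rule out infarction, and CK-MB's quick return to baseline is what
# makes it useful for spotting reinfarction.
for t in (1, 6, 12, 48, 24 * 7):
    print(f"{t:>4} h  troponin={expected_elevated('troponin', t)}  "
          f"ck_mb={expected_elevated('ck_mb', t)}")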
Selected patients will also get an angiotensin-converting enzyme inhibitor: if they have left ventricular dysfunction, if they have severe hypertension, et cetera.

All right, the EKG determines what we do. We want to give everybody this stuff, but if they have an EKG that shows ST elevation, then we need to open up the artery. If you can get them into a cath lab within 90 minutes, preferably within 60, then opening it with a balloon is a more effective strategy than giving them a clot-busting drug. So if there's a cath lab available, we're going to rush them to the cath lab and open the artery as quickly as we can. You're going to hear the term "time is muscle": the longer you wait, the less opportunity there is for salvage. The other thing to know is that the earlier in the game you get there, the better chance you have of helping. So if they're there within an hour and they're having a massive anterior wall MI, you can salvage a lot of the at-risk myocardium; if they get there at hour 16 and it's a small region, then the amount of benefit you're going to get from opening the vessel is less. Most of the deed is done by four to six hours; most of what is going to happen has already happened, so you'd love to get the ones in early. Of course, what's the average wait in America, from the time people get their symptoms until they come to the ER? Two hours. And what have community-wide education programs done, like in Seattle and in Memphis, where every billboard talks about the symptoms to watch for with a heart attack and what to do? What have they done to this interval? Nothing. We haven't touched it. The average interval in America remains the same as it was 20 years ago, and also in European countries. You know the typical response: "Yeah, I didn't feel good. First thing I did, doc, is I tried milk and that really didn't do much, so then I took some Tums and that didn't do it, so I took some Bengay and gave that an hour and I still had the chest tightness, and then I figured, well, maybe my reflux was just not going to respond." They don't know what it is, they don't come in, and they miss their chance at major reperfusion by doing that. If you can't get them to a cath lab quickly, then we use a thrombolytic drug like tissue plasminogen activator.

So the ST segment is telling you whether you want early reperfusion or not. You might say, why is that? Why don't you reperfuse these people, the ones with ST depression? And the answer is, it's been studied: if they don't have ST elevation, rapid reperfusion with clot busters or balloons has no effect on outcome. Fascinating. I still find it absolutely amazing that this simple electrical phenomenon, described by George Dock a hundred years ago, is predicting what you should do for a mechanical problem in a coronary artery. ST elevation. Now, the other thing you have to remember is that ST elevation may develop right under your nose. So if you come in with an infarct and you're not ST elevated, we're going to do another EKG in 10 minutes and then we're going to do another one when you get up to the floor, because if you convert (you were at non-ST elevation, at ST depression) to ST elevation, we're going to rush you to the cath lab. I published a paper about ten years ago where we looked at about 40,000 patients with acute coronary syndromes: twenty-five percent of patients who ultimately developed Q waves and had ST elevation were not recognized. Why? Well, a lot of them presented with non-ST elevation, like ST depression, and then they didn't get the serial EKGs, or nobody was watching.
And boom, the ST elevation, and then the death. That's when you have the opportunity; you want to go to the cath lab when there's ST elevation.

Now, there are places that don't have cath labs, right? How many cath labs are there in Warsaw, Poland? One. So if you're in one of the twelve hospitals in Warsaw, there's a pretty good chance you're going to get a clot-busting drug. If you go to Buenos Aires: if you go to the private hospital, you get a balloon and an angioplasty with a drug-eluting stent and you take out 2,000 bucks a hit, and that's just for the stent; if you're in the city hospital across the street, you get streptokinase, the oldest clot buster we have. It takes a while to work but it's pretty good, I mean it's not bad. You won't get clopidogrel, they can't afford that, and at the basic level of care, well, you get streptokinase and a prayer. So there's a lot of variability. If you get admitted to Chelsea hospital with an infarct, the question, if you're staffing their ER and you call us, is: can they be in the cath lab in 90 minutes? If yes, I'm going to send the patient; if not, I'm giving a clot-busting drug. Now, in the middle of the night on a holiday weekend you might say, you know what, I'm going to open this artery with the drug, because I'm not sure we're going to get there in time. You want to salvage as much heart as you can.

If it's a non-ST elevation MI and they're low risk: and I want you to look up the GRACE risk score. The GRACE risk score is a tool you can use to predict in-hospital death and six-month outcomes. It's called the Global Registry of Acute Coronary Events, and basically about a hundred hospitals worked together to create a tool that you can use to predict risk. If they have a low risk, then we will only cath them if they have recurrent symptoms or a positive stress test before they go home. If they have a high risk, then we're going to cath them the next day; we're going to cath them within 24 hours. Non-ST elevation MI in our hospital: we're going to cath most of them, but if you're 90 years old and relatively low risk, and the lady says, "You know what, I really don't like somebody poking my groin, please," okay, medical therapy; if things don't go well we can always reconsider. Most patients in our hospital are going to get a cath, but in some places it's going to be about fifty percent.

All right, what happens with the platelets? Adhesion, activation, aggregation. Aspirin, clopidogrel, ticagrelor, and prasugrel affect activation; IIb/IIIa inhibitors affect aggregation. So we're hitting the platelets in various places. Yes, sir? Going to the cath lab: if we're dealing with an ST elevation MI, we're going there with the explicit decision to try to find a coronary thrombosis and open a vessel that's causing an event. Similarly, if we're going there for a patient with a non-ST elevation MI that we think has higher risk, we're looking for the culprit lesion that caused this clinical episode, and if there is significant flow limitation we will open it. Now, ten to twenty percent of the time you get there and you see that what was the vulnerable plaque and clot has already opened; the endogenous TPA has done its job and the vessel's wide open. Obviously if it's wide open we're not going to stent it; there's no reason to, we'd only expose the vessel to more injury by putting a foreign substance in there. But if the lesion is narrowed from clot, we're going to open it. So the point about this slide is that by attacking the platelets in different places we reduce the thrombogenicity of the platelets and we tip the balance toward flow.
Similarly, the drugs that we use for antithrombin therapy have different effects: hirudin or bivalirudin, or unfractionated heparin or low-molecular-weight heparin. Don't rely on this slide for mechanisms; rely on Marshall's discussion about the anticoagulants. I don't want to use this slide for that because it's probably a little confused. The point is that when we attack platelets and thrombin, we're hitting different aspects of the clotting cascade, trying to get the vessel to be less filled with clot.

Aspirin. Now, this is amazing: this is a study of non-ST elevation MI and unstable angina, and it looks at the incidence of death or subsequent heart attack. Look at how big the benefit of aspirin is, placebo versus aspirin: amazing, a 50 percent risk reduction by giving aspirin. How do you give it? One adult aspirin, chewed. There's no reason to give enteric-coated aspirin that's going to kick in tomorrow; have them chew a short-acting aspirin. What if they're vomiting? Give them a rectal suppository with aspirin; it will be absorbed through the mucosa of the distal colon. You must give aspirin. If they say, "Well, I already took my aspirin," you know what? I don't believe it. Give it again, because sometimes they took Tylenol, and Tylenol does not do much for myocardial infarction. It just doesn't. What about the anticoagulants? Here's a look at several studies of heparin plus aspirin versus aspirin alone, and basically the likelihood of death or heart attack tends to be lower if we add one of those anticoagulants to aspirin. We also see it with low-molecular-weight heparin: there's a trend in these larger studies to show benefit. So we're going to use either unfractionated heparin or low-molecular-weight heparin. It's interesting: in America we tend to use IV unfractionated heparin. What happens in Europe? They tend to use low-molecular-weight heparin, given as a shot twice a day. The company that makes enoxaparin is in France, and they sell enoxaparin cheap in France; it's actually almost as inexpensive as heparin. What about in America? They charge a fortune; it really ticks me off. So we use a lot of unfractionated heparin in this country. Either one is probably fine; they're both good.

Nitrates reduce ischemia but not death. There was a study of, believe it or not, 60,000 Chinese patients having an acute coronary syndrome, randomized to nitrates or not: no impact on mortality. But if they're having symptoms or ST depression, we like to relieve ischemia. Now, if they have a low blood pressure, don't give nitrates, because they'll have an even lower blood pressure; you can put a person into shock by giving nitrates if they already have a low blood pressure. Nitrates promote coronary vasodilation and also reduce right heart return; if they're having a little heart failure, that's good. We usually give it sublingually, and then we can give IV nitro if they're having ongoing symptoms or ischemia. Beta blockers reduce sympathetic drive, heart rate, and blood pressure, lower oxygen demand, and reduce sudden death and recurrent MI. So everybody who's eligible is going to get a beta blocker; we often give some IV in the ER and then orally. But if they have rales, signs of heart failure, a low blood pressure, or an already pretty low heart rate, don't give it, because you actually increase their risk of hypotension and shock. So most patients get beta blockers, but not all. Calcium blockers like diltiazem lower heart rate, they vasodilate, and they relieve ischemia, but not death.
So we tend to only give diltiazem to patients who can't take a beta blocker. If they have heart failure, calcium blockers increase the risk of death, so don't give a patient having a heart attack calcium blockers if they're in heart failure.

What about these patients with non-ST elevation MI: who should we think about for just conservative therapy versus going to the cath lab within 12 to 24 hours? The early invasive strategy is to perform the cath, probably the next day, after initial therapy. Proponents of this say you rapidly identify the problem: they have a critical lesion, open it. Most of these patients, many of them, get an angioplasty, or some get a bypass. The conservative strategy is that you only take to the cath lab patients who have recurrent ischemia or who are high risk by the GRACE risk score, and you would cath patients who have inducible ischemia on stress testing before they go home. What if they have recurrent angina in the hospital? They came in today with a heart attack, tomorrow they're having chest pain again; you're worried they still have a critical lesion, and you go to the cath lab to see if you can find it, and if so you open it, because your early therapy doesn't seem to be relieving the problem. In America most people get a cath; in Europe, depending on the city, probably half the people don't get a cath. It just depends on the resources. We spend more money on healthcare in this country, right, twice as much as many European nations, and this is manifest in the number of cath labs we have and magnets we have and so on. Early invasive is a little better: most trials show that if you do a cath and intervene on significant disease, you reduce infarction and reinfarction by six months, and maybe death. The patients who have ST depression, elevated troponin, multiple risk factors, especially diabetics, maybe older age or a little heart failure: yeah, we're going to take them to the cath lab. Higher risk, definitely go to the cath lab.

So, acute treatment of STEMI: we want to reperfuse. Everybody gets aspirin, oxygen if they need it, beta blockers, nitrates for ischemia, ACE inhibitors if they have heart failure or left ventricular dysfunction, treat their pain, and we talked about anticoagulants as well. This is TPA, which is a fibrin-specific thrombolytic, and this is streptokinase, which is a non-fibrin-specific thrombolytic. Streptokinase will be a little less effective but a lot less expensive: 60 bucks versus 800 bucks. If you're in England you choose SK; if you're here you do TPA or TNK. This is a study which looks at short-term and long-term outcomes comparing invasive therapy versus lytic therapy for patients having a STEMI, and basically the invasive strategy is better. So if you look at short-term outcomes (death, non-fatal MI, recurrence), you can see that an invasive strategy is better than clot-busting drugs for STEMI. So in this country, if you're within 30 miles of a cath lab and you have an ST elevation MI, most of the time you're going to get percutaneous coronary intervention as the treatment strategy. If you're in Roundup, Montana, and you're 90 miles from any cath lab, you're going to get a clot-busting drug, and you may get helicoptered to a center in case something happens after that.

So after the treatment of a STEMI we want to maintain vessel patency, we want to restore this balance, relieve pain, and prevent complications. Aspirin reduces death and reinfarction: give it right away and daily, an adult aspirin initially, and then after a month we'll typically go down to 81 milligrams.
If the patient is allergic to aspirin, give them clopidogrel; this is another excellent antiplatelet drug. Many patients get both. These agents affect the platelet receptors slightly differently, so you're going to see most of our patients who have an acute coronary syndrome get both aspirin and clopidogrel, at least for a period of time. Now, obviously, every time you add something that affects clotting, what are you adding? Bleeding risk. And so you may say: oh, I gave low-molecular-weight heparin, I put them on aspirin, I started clopidogrel, and they have a continuous nosebleed the next morning. That's the price we pay for paralyzing the clotting side: bleeding risk. We're always paying a price when we do these things. Most patients with both STEMI and non-STEMI these days are going to get aspirin and clopidogrel.

Heparin we talked about: we usually give it for a day or two, we give it after PCI, and we also give it after TPA. You don't give it after SK, because streptokinase creates a systemic lytic state and you don't need to give heparin to those patients. If the patient is in atrial fibrillation: in atrial fibrillation they may form clot in the atria, and we want to prevent that; it's a fibrin clot, aspirin won't prevent it adequately, so we'll give them heparin. So: atrial fibrillation. If you do an echo (and we're going to assess left ventricular function in every heart attack victim, we want to know how much damage there is to the left and right ventricle), say with echo, and you see a clot in the left ventricle, what's happened is they've injured the myocardium, it's become akinetic, and clot has formed, and that can break off and cause things like strokes. We're going to give them an anticoagulant: in the hospital they're going to get heparin, and then long-term they're probably going to get warfarin for three to six months. Here's what I do: I'll give them warfarin for three months, re-echo to see if the clot has been resorbed, and if it has, I'll stop the warfarin and do another echo in a week to make sure that clot has not come back. We also have non-vitamin-K drugs now available, Pradaxa, apixaban, and so forth, and you're going to learn about these newer anticoagulants as well. Most patients who don't have any of this are going to get subcutaneous heparin while they're at bed rest, to prevent deep vein clot.

Beta blockers: they reduce the risk of arrhythmia, reinfarction, and rupture. You give them IV in the ER and then typically orally, but I mentioned the contraindications, right? Low blood pressure, significant bradycardia, actively asthmatic (you can make that worse), or if they're in failure. If they have rales halfway up their chest, beware: don't give them a beta blocker, you can put them into worse failure. Nitrates: only for ischemia. They do relieve pain, and if they're having congestion they may help the symptoms of heart failure by reducing right heart return. ACE inhibitors: when your heart gets injured, what happens? We're approaching football season; the left ventricle likes to take the shape of the apex of a football. When it's injured, it starts to remodel and become more spherical, like a softball. Now, you can imagine that remodeling process is not good for function, because normally at the apex you're squeezing blood out the aortic valve, and when it's suddenly doing this, some of the forces are being directed against itself; it's not efficient. So a remodeled heart, shaping itself spherically, is not good. ACE inhibitors try to help the left ventricle reverse remodel and find that football shape again.
They're very effective for that, so we're trying to limit this adverse remodeling with ACE inhibitors, and by doing that they reduce the risk of heart failure and death in people who have had a bigger heart attack. So everybody is going to get an angiotensin-converting enzyme inhibitor. Interestingly, they tend to reduce the risk of subsequent heart attack, and we don't really know why, though their benefit is additive to aspirin and beta blockers. They're especially useful if the patient had a big anterior wall MI, or when you measure the ejection fraction and it's below forty percent; there's definite benefit of an ACE inhibitor there. Or if they have any signs of heart failure or pulmonary congestion during their admission, even if the ejection fraction was forty-five percent, they're going to get an ACE inhibitor. Large anterior wall MI, low EF, signs of heart failure during the admission: we're going to give them an ACE inhibitor.

Statins reduce reinfarction and death, and we want to start them early, and it doesn't matter what the LDL is; the statin benefit is not limited to the cholesterol effect. Everybody with a heart attack gets put on a statin, and it doesn't matter whether their LDL is 50 or 250, they're going to get one. My patients say, "You mean this is for life?" Nobody wants to be on anything for life, so my trick for that is I say, you know what, let's just say for now, and if science proves in a few years that you don't need this, I'll be the first person to take it away. I never say "for life," I just say "for now"; we'll renegotiate. "I know you don't like to take drugs, I know you like to take herbs, but this drug is going to prevent you from dying, so let's take it, okay?" "Okay, I got it."

Acute MI complications: I want to go over a couple of these, they're important. After the insult, all kinds of things can happen. We can have a recurrent event: maybe the vessel opened and then occluded again, or, God forbid, maybe you had two plaques that ruptured, one infarct in the anterior descending today, and tomorrow it's the right coronary territory. Doesn't that stink? That happens: fifteen to twenty percent of patients who rupture one plaque have more than one. When you do the coronary angiogram you say, oh my god, there's a plaque rupture in the right coronary and there's another in the LAD, what's going on here? Well, that's a possibility. Arrhythmias: the heart muscle may be hurt enough that it's not working properly. There may be mechanical complications. You can get pericarditis and thromboembolism. These are all bad things; you don't want these to happen to your patients with acute MI. Myocardial infarction can cause problems with contractility, and that can cause cardiogenic shock and hypotension; this can lead to low perfusion pressure and more ischemia. Sometimes the heart gets hurt badly and there's an akinetic zone that forms a clot, and that can lead to an embolus; I just mentioned that one. Sometimes the infarction leads to electrical instability and an atrial arrhythmia like atrial fibrillation, or, God forbid, a ventricular arrhythmia, which is much more dangerous, like ventricular tachycardia at a rate of 220 beats per minute. Usually patients pass out when that happens; we've got to get them out of it right away, pump hard and look for a defibrillator. Ventricular fibrillation is the pre-terminal rhythm that most patients who have sudden cardiac death have. Tissue necrosis can cause all kinds of bad things: it can wipe out a papillary muscle, leading to mitral regurgitation and heart failure, and it can wipe out part of the ventricular septum, which also can cause heart failure.
You could have rupture leading to cardiac tamponade, or you can have pericardial inflammation and pericarditis, and if you bleed into that you can get tamponade. Lots of bad things. Now, what's the good news for you? Well, the good news is that in my era of medical school we didn't know how to open up arteries, and we saw all kinds of these complications. Nowadays, by opening up the arteries earlier, we're reducing the mechanical complications by a lot, and so thankfully many of these complications are going to be quite rare. You'll probably see them, but they're a lot less frequent than they used to be.

Recurrent ischemia, angina or ischemia recurring: these patients concern us for reinfarction, and we're going to go to the cath lab and look to see if their culprit lesion is back, or if there's a new plaque rupture, what's going on. If they have arrhythmias, they can be of all kinds. The cause of a bradyarrhythmia may be that the sinus node is affected, so the heart rhythm goes down, or if it's an inferior wall MI there's more vagal tone and they slow their heart rate. If their heart rate is fast, you're worried that they might have a lot of left ventricular damage; heart failure drives the heart rate, and when the left ventricle fails, it tries to find a heart rate of about a hundred. So the thing I hate the most is walking into the room of a patient who's had an infarct, and I look at their monitor and they have a blood pressure of 90/60, their heart rate is a hundred, and their neck veins are doing this, and I know they've got bad left ventricular dysfunction and bad right ventricular dysfunction and they're in borderline shock. This is a very dangerous situation: a fifty percent chance they're not going to make it, and you know that just walking into the room. When we see sinus tachycardia we always think about heart failure, but are they volume depleted? Maybe somebody blasted them with a diuretic in the ER and suddenly they're hypoperfused because they're dry. You look at them and their skin looks like a prune or a raisin: are you thirsty? "Doc, I'm dying for some water. I feel like I'm in a desert, and it's Michigan, you know." If they have pericarditis they can get tachycardia from the pain. They might be tachycardic because they're on a drug that's driving their heart rate, like dopamine or dobutamine. Atrial arrhythmias like atrial fibrillation can be affected by atrial ischemia or ventricular ischemia or heart failure. Atrial premature beats, atrial fibrillation, ventricular premature beats, ventricular tachycardia, ventricular fibrillation: those last two are the really dangerous rhythms, and you're going to learn about them this month.

They might have heart block. Now, yesterday didn't we talk about the sinoatrial node, with conduction going through the AV node and the bundle of His? That interval is called the PR interval. If that region of the heart muscle has been hurt by the injury or ischemia, the AV block may be more or less severe.
Maybe their PR is 0.24, or maybe it's really advanced and every other beat is dropped: you see an atrial beat, there's a delay, and there's the ventricle; then the next atrial beat has no ventricular complex associated with it. That's second-degree AV block. If you see the atria beating separately from the ventricles, that's third-degree AV block: they've knocked out that entire area, and usually the lower pacemaker that comes in has a heart rate of like 40 or 35, and it's a wide complex, because it's coming from either the right or the left ventricle and it creates this slurred complex; there's either a right or a left bundle branch block pattern, because it's coming from one side and the other side activates late. So you need to know about AV block; that's an important arrhythmia. What about the blood supply? The sinus node is usually fed by the right coronary, and so is the AV node, so proximal right coronary lesions are more likely to give you dysfunction of those structures. The bundle of His is fed by the LAD; the right bundle is typically fed by the LAD proximally and by the right coronary distally. So if you see right bundle branch block you might be dealing with a proximal LAD problem, or it could be a right coronary problem. The left bundle is usually LAD, so if they have a left bundle branch block, a new left bundle branch block on their EKG, you're thinking LAD. The other problem with that scenario is the patient who comes in with crushing chest pain and has a left bundle branch block: that creates difficulty reading the ST segments. You can't really read ST elevation in the presence of a left bundle, so when we see symptoms of coronary disease plus a new left bundle, we go to the cath lab. We can't afford to miss an acute ST elevation MI causing a left bundle branch block because the left bundle was knocked out by the ischemic insult. So: symptoms consistent with acute MI plus a left bundle branch block not known to be old, and we go to the cath lab and try to reperfuse, just as if it were ST elevation.

Important to know: myocardial dysfunction happens in several different shapes and sizes. They can have systolic dysfunction and they can have diastolic dysfunction. What's that? Stiffness, right. Systole, diastole: diastole is an energy-dependent process. Your heart is not a rubber band; when it is hurt it doesn't naturally relax, it needs energy to do that, and blood supply. What happens as we get older? What happens to your systolic function as you get older, any ideas? I'm happy to tell you: it stays strong. Your systole is preserved to a ripe old age. What happens to diastole? Not so good. As your myocardial cells die (a few die every day; for me they might die every hour), those cells get replaced by fibrous tissue, so an aging heart becomes gradually stiffer. It beats well, because the cells that are alive can overcome the fibrosis and squeeze, but it doesn't relax as well, so left ventricular end-diastolic pressure goes up. Older patients are much more likely to develop heart failure because they already have impaired diastole from this fibrosis. The term for this is presbycardia; I won't test you on it, but I love the word, it's a great word, presbycardia. What's the vigor of systole called? Inotropy, right. What's the vigor of diastole called? Lusitropy. You don't need to know that word either, but I love that word too. Lusitropy: that's the energy-dependent part of relaxation. Old hearts are diastolically impaired; they have more heart failure with their heart attacks, not just because their wall motion has changed but because they don't relax as well.
So, heart failure: we treat them with vasodilators, we diurese them if we need to, and we look for and treat ischemia. If they're in shock, they have a depressed cardiac output and a low blood pressure, and they often have poor perfusion of vital organs like their kidneys, their legs, their brain. We want to look for treatable causes: why is this woman in shock? Does she have a VSD? I could repair that. Does she have a blown-out papillary muscle? I could fix that. Is she actively ischemic? I could help that. Is she anemic? I can correct it. Is she febrile? I can reduce that. So when you see mechanical dysfunction and heart failure, you're always thinking: what's reversible? I've got to find a reversible cause, otherwise this patient is going down the tubes. We use inotropes to increase squeeze, vasodilators where appropriate, and some patients we put on pumps to get them through a crisis; all of these are possible. The SHOCK trial was the study which basically looked at taking patients who had shock with a heart attack and either going to the cath lab and revascularizing them or not, and there was better survival if you were young, and what counted as young in that trial looks very young to me nowadays. You did better if we took you to the cath lab and tried to open up any potentially culprit lesions. So that's what we do in our cath lab: if you're a young person in cardiogenic shock with a heart attack, you're going to go to the cath lab, we're going to put in a balloon pump, and we're going to try to open up any arteries that we think we can open, to try to improve mortality. Now, neither group does great, right? Shock is not a good thing to have with a heart attack; in the best scenario you've got about a 40 percent death rate.

Right ventricular infarction. Yes, sir? Well, shock is having a persistently low blood pressure, and cardiogenic means it's being caused by heart muscle or valve: it could be a valve problem or heart muscle disease, and in this case it's usually a big myocardial infarction, a big MI. Right ventricular infarction: the right ventricle is usually served by the right coronary artery, and that territory is usually picked up on the inferior leads of the EKG, like leads II, III, and aVF. So you may see a patient next year who has an inferior wall ST elevation MI, they have a low blood pressure, their neck veins are up to their earlobe, and you're going to say, you know what, I remember that: that's a right ventricular infarction. Low blood pressure, elevated right atrial pressure measured on the neck veins: we give them volume, monitor them very carefully, and try to nurse them through. Most of the time, if we can get them through this, the right ventricle will recover, it likes to heal itself, but you've got to get them through this period.
And God forbid, don't give them a diuretic or nitrates, because then their shock is much worse. Papillary muscle infarction is most common in inferior-posterior wall MIs, so you're seeing ST elevation in the inferior leads, II, III, aVF. You hear a blowing murmur in the left sternal area, they may be in pulmonary edema with severe shortness of breath, and we try to fix their coronaries, and if needed we'll go in to repair or replace the valve. It's a high-risk problem. The other thing I would say is, if you see a myocardial infarction complicated by symptomatic mitral regurgitation, push for early repair. The longer we wait, the more likely we end up with irreversible pulmonary injury, ARDS, or irreversible renal injury from poor forward output. And you can imagine, if you're a cardiac surgeon and Kim Eagle is trying to sell you a patient who's got severe mitral regurgitation after a huge heart attack and is in borderline shock, you may say no, that's kind of a high-risk patient. And my answer is: that's the point. They're not going to get better from this if we don't operate today; there's a good chance that their lungs or kidneys or both will be worse tomorrow, and then their operative risk goes even higher. You end up in this interesting dance with your colleagues where you're kind of weighing risk versus benefit, and in this scenario I argue for early repair, because ARDS and renal failure are such disasters.

Complications: free wall rupture happens more often in older hypertensive women, and it's usually fatal right away. Occasionally the rupture walls itself off in a little bit of the pericardium and you have a shot at taking them to surgery and basically cutting it out and putting in a patch. So free wall rupture is not always fatal. Typically: an 85-year-old woman had an inferior wall MI yesterday, sort of borderline blood pressure overnight, still making urine, talking, suddenly complains of extraordinary pain and then is in full-blown shock. That's probably a rupture. If there's enough blood pressure, you may have a chance to get that patient to the operating room. Possibly. The VSD: this is an infarction of the ventricular septum, and it can form a hole. The integrity of the septum is lost, it looks like hamburger, and suddenly there's blood flow going from high pressure to lower pressure, so left-to-right shunting; higher pressure in the left ventricle, so we're shunting left to right. We hear a murmur over the septum, and often the patient is in right ventricular volume overload: they may have some edema, blood pressure might be low, often their neck veins are high, and they're in trouble. You hear this murmur over the sternum, and you want to repair these. If you repair them they've got a reasonable shot, as long as the left ventricle is good; otherwise they don't have a good shot. A true ventricular aneurysm occurs late after heart attacks, mostly in patients who had an ST elevation MI that didn't get fixed. They end up with a big area of injury, and over time it forms this sac, and when the good part of the heart muscle squeezes, it actually bulges out like a balloon; it's called an aneurysm. It can be associated with arrhythmias, ventricular arrhythmias, and heart failure, and it can also form clot. Pericarditis: this was more common in the past, when we had ST elevation MIs that didn't get reperfused. Presumably what happens is that the full thickness of the myocardium is injured and the ischemic surface irritates the pericardium around it and that gets inflamed, or there's bleeding into the pericardium.
Blood in your pericardium is very irritating and causes a lot of pain. You'll see it in patients who have car accidents and hit the steering wheel: they come in with severe pleuritic pain, and some of those are musculoskeletal, some might be blood in the pericardium; they bled. So the typical scenario: a patient with a heart attack yesterday, ST elevated, maybe they came in late, at 12 hours, and today they wake up and say, "Jeez, I have terrible pain, doc." What's it like? "It's much sharper than what I had yesterday." Take a deep breath for me. "I can't do that, it really hurts. I want to sit up all the time." That's pericarditis from inflammation of the pericardium over the infarcted myocardium. You might have fever and sharp pain, and when you listen to them you might hear a rub: as their heart is going back and forth in systole and diastole, during both components it's scratchy. It's not a murmur, which is more of a whoosh; this is scratchy. We treat that usually with aspirin as the anti-inflammatory, and it usually cools off right away. Thromboembolism: a clot forming in the akinetic myocardium, usually in big anteroapical MIs, can cause a stroke, and we treat them with early heparin and then another anticoagulant, most likely warfarin, but some of the newer agents would also work, like rivaroxaban and so forth. If you see a clot on the echo, or the EF is very low after a big anterior wall MI, some cardiologists will just go ahead and empirically give an anticoagulant for three to six months. I tend to use it if I see clot, or rarely if it's a huge anteroapical MI and the wall is akinetic. It's ironic: if it's an aneurysm, it's still moving and there's less clot formation, but if it just sits there, it likes to form clot. So I'll treat those patients for three months and then we'll reimage them.

So, risk stratification: how do we predict what's going to happen? Well, we get an echocardiogram in everybody, and we're looking for a lower ejection fraction; the number one predictors of future outcome are going to be age and ejection fraction, those two. So we're typically going to get an echocardiogram, and we're going to make sure we have these people on ACE inhibitors and beta blockers, as that dramatically reduces their risk. Residual ischemia: we either find that because patients are symptomatic, or we may do a stress test before they go home, or a maximal stress test later. If they have residual ischemia, we're going to cath, looking for something to open. They'll get aspirin and a beta blocker. We find arrhythmias by monitoring or by symptoms, and then there's directed therapy depending on what it is. So the standard discharge today, the standard treatment: most patients with a heart attack are going to be in the hospital three to five days; you're going to get aspirin and clopidogrel; you're going to get a beta blocker; selected patients are going to get an ACE inhibitor; we talked about the afib patients or LV clot, they'll get warfarin; and everybody goes to cardiac rehab. Please refer every one of these patients to cardiac rehab. What happens there? Well, they learn about their disease, they learn about the right diet, they learn about the value of exercise and what exercise is going to be safe, and they do it in a controlled environment where they gradually increase exercise; they learn about their drugs so they know what they're taking and why; they're helped in stopping smoking; if they're depressed we try to find it and treat it; and they have a safety net. Cardiac rehab reduces risk by twenty percent, for all of those reasons.
We want to give all of them nitrates to have: if they have recurrent symptoms, they take a nitro and they go to the hospital. We don't do this "take three nitros and call me in the morning" anymore; that didn't work. We used to tell the patient, well, if you have recurrent angina take a nitro, in five minutes take another, in fifteen minutes take a third. Guess what happened: about five minutes turned into an hour, and suddenly, three hours later, the third nitro, and "I guess I should call now." So nowadays, if a patient has recurrent symptoms after a heart attack, we say take a nitro but go, and don't drive. Either call an ambulance or have somebody drive you; if it's the real deal, call an ambulance, because if you should fibrillate in your car while your spouse is driving, it's not helpful. It's very unlikely they can drive and do CPR at the same time; I've never seen that. They have to learn how to exercise. We want them on a low-fat diet. What's the best diet after a heart attack? Well, there's no perfect diet, is there, but typically we're going to recommend a Mediterranean-type diet, rich in fruits and vegetables and low in fat, and the fat they use is going to be better fat, like olive oil. We want to try to avoid fast food, which tends to be high in fat, and go for the slow food, where they eat lots of fruits and vegetables. They must stop smoking, and if they need drugs to stop, fine, we have several drugs that help. We'll give them nicotine: you're addicted to nicotine, fine, I'll give you nicotine gum; chew as many of those as you smoked. At least you won't get the tar and you won't get the smoke, and that's what hurts you; it's not the nicotine that hurts smokers, it's the stuff that comes with it. So we've got to get rid of that. And everybody's going to get a statin. It doesn't matter, everybody's going to get a statin; this slide is wrong, everybody gets a statin now.

So, survival and the GRACE risk score: it basically shows that early invasive therapy for the non-ST elevation MIs is much better than delayed. So if you do a GRACE risk score and you find that they're in that five, ten, fifteen, twenty percent risk range, that's the person you probably want to recommend going to cath. If their risk is one percent, then there's little difference between early invasive therapy and, you know, "let's just do a stress test and see how things go"; but at the higher levels of the GRACE risk score, it's important. What's in the GRACE risk score? Age, heart rate, renal function, evidence of congestion, troponin up or not, ST segments, did they move? Very intuitive, but you can look it up: just look up the GRACE risk calculator online and use it when you present your cases next year; it will help you.

So I've covered a ton. Jeez, I apologize, I used the whole time, but it's a lot, isn't it? Quite a bit. If you have questions later, just email me; I'll be happy to help. This is a lot of information to cover in a short time, but it's at least what you need for now. Okay, thank you, have a good day.
Medical_Lectures
05_Biochemistry_Protein_TertiaryQuaternary_Structure_Lecture_for_Kevin_Aherns_BB_450550.txt
Kevin Ahern: Maybe not. Hold on. Ah, now let's get started. How's that? Too many cords. I vote for cordless. How's everybody doing today? "Good, professor, that's very good."

Male Student: Well caffeinated.

Kevin Ahern: Well caffeinated! Then learning shall happen, right? Okay, so, today I'm going to finish talking about protein structure. I've talked about primary and secondary so far; today I'll finish talking about tertiary and quaternary. Then we'll probably spend most of our time down here talking about sequence and structure, and there are several things in there that are really interesting and that start to bring together, I think, the whole perspective of why protein structure is important. Of course we'll be talking about that over the next week or two, actually. Protein structure relates to every property of a protein, as I've been saying almost endlessly. I started talking about tertiary structure. Before I talk about tertiary structure, I want to give you a definition for it. So first of all, secondary structure, I said, resulted from interactions between amino acids that were close in primary sequence. Tertiary structure arises because of interactions between amino acids that are not close in primary sequence. Well, what's close versus not close? Close is roughly ten or fewer amino acids apart. Interactions between amino acids ten or fewer apart are what we would categorize as secondary structure. Something that's more than ten apart we would refer to as interactions giving rise to tertiary structure. Well, tertiary structure has a very different appearance than secondary structure does, okay? This is a schematic example on the left and a space-filling example on the right of the protein known as myoglobin. Myoglobin is a very important protein that's found in our body, primarily in our muscles, and in our muscles it serves the function of storing oxygen. And we'll see later the significance of that storage of oxygen. Myoglobin is closely related to hemoglobin, which is the protein in our blood that carries oxygen, and myoglobin's function is better described as storing oxygen. It's kind of what I like to think of as an oxygen battery. Now if we look at the structure of myoglobin, and this is the 3D image of the structure of myoglobin, what we see is that it has secondary structure in it. It has these alpha helices that we've seen before, and it also has those turns. And it's because of those turns that portions of the protein that wouldn't otherwise be close together are brought into close proximity. So we could think of tertiary structure as arising from interactions between one part of the protein here and, say, another part of the protein here, but they're not close in primary sequence. This might be amino acid #41, this might be amino acid #200, for example, alright? And the only way those interactions can happen is because they've been brought into close proximity, and we'll see a little bit later how that actually occurs. It occurs in a process we refer to as folding. And folding is as close to a magical process as you will find in biochemistry. Folding is an absolutely phenomenal process. Now, tertiary structure is simply that. Tertiary structure gives rise to something that we call globular proteins. And the structure of globular proteins is not random. It looks like it's fairly random, but it's not. Myoglobin, when it's given the proper chance to fold, will always fold in the same way.
It will always have that same structure. Okay? That tells us that there's something driving that specific structure, and as I said in the very first lecture on protein structure, the thing that drives all the properties of a protein is the amino acid sequence. The amino acid sequence determines what this folded structure is going to look like. The vast majority of proteins that we find in the real world are not fibrous. I talked about fibrous proteins last time. The vast majority of proteins in the real world... no, okay, let's see. Is that ringing? Are people hearing a ringing? Okay, and it's not a phone, it's me. The vast majority of proteins in the world are in the globular form, okay. Fibrous proteins are not nearly so common as globular proteins are, okay. If I change the amino acid sequence, I will change that folded form. The amount that I change it will determine how much it actually varies. If I change one amino acid in here, I probably won't change it much. If I change 50 amino acids, I will change it a lot, alright? So we start to think now that every protein has its own specific amino acid sequence. And if those specific amino acid sequences give rise to specific structures, then we will have a different structure for every protein, bear [inaudible] amino acid sequence, and the answer is that's exactly correct, okay? So again, structure is function. When I mutate, if I mutate and I change an amino acid, I could change a structure very slightly, and sometimes very slight changes have enormous effects. Sometimes they have virtually no effect. You can't necessarily predict that a mutation is going to be what we think of as bad. Some mutations are what we think of as good, because they actually make the enzyme more efficient, alright. But in any event, when we change an amino acid, we're going to change something about the structure of a protein. We're going to spend a lot of time talking about enzymes and structures, and I think you'll see how very, very tiny changes really can have enormous effects on the properties of a protein. Okay, this shows the amino acid... Forgot my code here. So this shows the distribution of the amino acids in a protein, and they've been color coded. The way it's color coded is that the amino acids that are the most hydrophilic or charged are in blue, those that are the most hydrophobic are shown in yellow, and those that are sort of in between are shown in white, alright? Now what we see, and it may not be real obvious at first, but what I will tell you is the case, is that there is an uneven distribution of amino acids in this folded protein. The hydrophilics are generally on the outside, and that makes sense, because myoglobin is found in the dissolved portion of the cell, found in aqueous solution, the cytoplasm, okay? Now hydrophilics like water, they associate with water very well, and because of that, this protein is soluble in the environment in which it's found. That's not totally surprising. What about the hydrophobics? The hydrophobics don't like water, so just like oil doesn't like water, and oil forms that layer that stays away from water, it stays with itself, so too do the hydrophobic amino acids associate with themselves. We find an uneven distribution: for proteins that dissolve in the aqueous solution of the cell, we find that the hydrophilics are on the outside of the protein and the hydrophobics are on the inside. Now this isn't just a random phenomenon.
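To make that color coding concrete, here is a minimal sketch in Python of how one might tag each residue in a sequence as hydrophobic, intermediate, or hydrophilic, the way the slide colors them yellow, white, and blue. It uses the standard Kyte-Doolittle hydropathy values; the cutoff of plus or minus 1.0 and the example sequence are arbitrary choices made here for illustration. Note that this works on the sequence alone: whether a residue actually ends up buried or surface-exposed depends on the folded three-dimensional structure, which a simple sequence scan cannot tell you.

# Kyte-Doolittle hydropathy values (standard published scale).
KYTE_DOOLITTLE = {
    "I": 4.5, "V": 4.2, "L": 3.8, "F": 2.8, "C": 2.5, "M": 1.9, "A": 1.8,
    "G": -0.4, "T": -0.7, "S": -0.8, "W": -0.9, "Y": -1.3, "P": -1.6,
    "H": -3.2, "E": -3.5, "Q": -3.5, "D": -3.5, "N": -3.5, "K": -3.9, "R": -4.5,
}

def classify(residue: str, cutoff: float = 1.0) -> str:
    """Bucket a one-letter residue code by hydropathy (cutoff is arbitrary)."""
    score = KYTE_DOOLITTLE[residue]
    if score >= cutoff:
        return "hydrophobic"    # yellow in the slide's color scheme
    if score <= -cutoff:
        return "hydrophilic"    # blue
    return "intermediate"       # white

sequence = "MKVLWAADERGQC"      # arbitrary example sequence, not a real protein
for position, aa in enumerate(sequence, start=1):
    print(position, aa, classify(aa))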
This tendency of hydrophobic amino acids to associate with each other provides a driving force, one of several driving forces, that causes a protein to fold. Hydrophobics like to associate with each other. This really helps get them where they need to be, so we can think of this as being a hydrophobic glob coated with hydrophilics on the outside. That enables this protein to be soluble. Now, we'll see later in the term, when I talk about things like LDLs and HDLs, which are complexes in our blood that carry fat, that they're multi-protein complexes that arrange themselves in exactly the same way. They put the most hydrophilic portion of themselves on the outside and the most hydrophobic portion on the inside. When we examine proteins that are dissolved in the aqueous solution of the cell, as I said, we almost universally see this arrangement. It tells us that structure and function are important. If I try to put too many hydrophobic things on the outside, I'm going to have problems. Why do I have problems? Well, if I have a lot of hydrophobics on the outside, and hydrophobics like to associate with hydrophobics, what do you think is going to happen when one protein full of hydrophobics on the outside encounters another full of hydrophobics on the outside? They're going to glob together. Now, I'll show you later an example where that actually has important human health implications, when proteins don't fold properly. By the way, this little thing in the middle, you're probably wondering what that is: that's a group called heme, it's the group that gives hemoglobin its name. It's actually the portion of the protein that carries the oxygen, and myoglobin has a heme just like hemoglobin has a heme. We'll talk a lot more about that later. Now, I described to you a situation where proteins that are dissolved in the aqueous environment of the cell have hydrophilics on the outside and hydrophobics on the inside. Are there violations of that rule? For every rule that is made there are violations, and one of the violations that we see is in proteins that are found in the membranes of the cell. Not all proteins are dissolved in the aqueous environment of the cell. Many proteins are embedded in the membranes of a cell, and that's important because cells use those proteins in the membrane to perform very important functions, and we'll talk about some of those again later. One of the proteins that I want to show you in this regard is a protein called porin. Porin is found in the membrane of the cell, and when we examine the distribution of amino acids in this protein, what we see is what people describe as an inside-out protein. The hydrophobics are on the outside and the hydrophilics are on the inside, for the most part. Why is that the case? That's the case because membranes have long fatty acid chains in them that are hydrophobic. So the outside of this protein is interacting not with an aqueous environment but instead with the hydrophobic side chains of fatty acids. Structure and function go hand in hand. If I tried to put hydrophilics out there, associating with that environment, it doesn't work. Well, why do I even have hydrophilics in this protein at all? The answer is the function of this protein. This protein is called porin, and porin has the function, for the cells that have it, of letting in water. It's a channel to let water in. Well, water of course is water, and water likes hydrophilic molecules, and look where the water goes: it goes right there where the hydrophilic molecules are. Alright?
So yet another example of structure matching function, and teaching us something about protein structure in the process. That's a very important consideration. Okay, so I'll stop there and take questions. Any questions about what I've said so far? You guys are all asleep today, huh? Male Student: [inaudible]... Kevin Ahern: There are beta barrels, and beta barrels are like the structure I showed yesterday. Beta barrels can have several functions besides doing things inside of membranes, but one function can be actually what you described, yeah. Male Student: Now with that being, are they only one layer thick, and if so, are they neutral or are they predominantly polar, or... Kevin Ahern: Okay, so the question is are they only one layer thick, and the answer is that most proteins you find in the membrane are not really one layer thick. No, they're sort of surrounded by things kind of like what we see here. I'll show you an interesting example later of an interesting protein in that respect. Alright, so those are the general things I want to say about these. I do want to spend a few minutes talking about tertiary structure in terms of stabilizing it. Tertiary structure is actually a fairly fragile thing. It's fairly fragile. If I look at an alpha helix, I see an alpha helix coiling, coiling, coiling, and that coiling might go on for 40 amino acids, and every 4 or 5 amino acids I've got a hydrogen bond that's helping to hold that alpha helix together. That structure I showed you for myoglobin has an alpha helix over here interacting with an alpha helix over here, but there might only be a couple of hydrogen bonds stabilizing that arrangement, alright? It means that there's not as much energy holding that tertiary structure together as there is in the secondary structure, and that means that tertiary structure is relatively unstable by itself. I'm going to show you some things that help to stabilize it, but it's important that we consider all the forces that help to stabilize the tertiary structure of a protein. If you've ever tried to purify a protein in a laboratory, some people find that they pull their hair out, more than I've already had go off the top of my head, trying to get a protein purified, because what they discover is that the protein literally falls apart or stops working halfway through the purification because the tertiary structure isn't very stable. There's not much energy holding it together. Not all proteins are that way, gesundheit, not all proteins are that way. So let's talk about some of the forces that stabilize tertiary structure. One of these is actually a very, very strong stabilizing force. It's actually a covalent bond, and that covalent bond is called a disulfide bond. It's a bond between two sulfurs, and it arises as a result of two cysteine side chains being brought into close proximity. Cysteine, I hope you remember, is the amino acid that has an SH side group. If you didn't get that before, you should know it now: cysteine has an SH side group. If I put one SH side group next to another, an oxidation will occur that joins the two sulfurs, getting rid of the hydrogens, and that forms a covalent bond. That disulfide bond is very strong. Remember that covalent bonds are much stronger than hydrogen bonds, and that can be a very important force in helping to stabilize a protein. So we see that here: here's a folded protein.
It has disulfide bonds here, here, and here, and here's the same protein unfolded, without those disulfide bonds. I'm going to talk more about this protein in just a little bit, okay? But disulfide bonds are the most important stabilizing force for the tertiary structure of proteins. What other forces do we have that help to stabilize them? Well, we obviously have hydrophobic forces: those hydrophobic amino acids associating with each other on the inside do help to stabilize that protein, okay? We've got disulfides, we've got hydrophobics, and we've got ionic interactions. Imagine I've got a positively charged amino acid up here and a negatively charged amino acid down here; they're going to be attracted to each other, and those are also forces that help to stabilize tertiary structure. We can think of hydrophilic interactions, in a loose sense, happening because side chains associate with water, and that may help contribute to the structure, though I don't think it contributes a tremendous amount to the stability; but the hydrophilic interactions can play somewhat of a role. Hydrogen bonds: we saw hydrogen bonds help to stabilize the alpha helix and the beta strands, and hydrogen bonds also help to stabilize the tertiary structure of a protein. The last force that helps to stabilize a protein, and I'll just mention it here and won't mention it again, is metallic bonds. There are some metal bonds that in fact help to stabilize protein structure; we don't really deal with them too much in this class. Now, I'm going to come back to this figure in just a bit after I talk about quaternary structure, so bear with me on that. Okay, that's the last of what I want to say about tertiary structure, and I want to move our consideration now to quaternary structure. What we've seen in each case are forces between amino acids that are getting farther and farther apart, and quaternary structure arises as a result of interactions between amino acids that are actually on separate protein units. Separate protein units: when I think of an enzyme, or I think of certain proteins, they may have multiple polypeptide chains that come together. A prime example is hemoglobin, okay. Hemoglobin actually has four separate polypeptide chains that comprise it. It has two units called alpha that are identical, and it has two other units called beta that are also identical. Alpha and beta are fairly closely related to each other, and they're also closely related to myoglobin. This is hemoglobin, yeah. Okay? So hemoglobin has four polypeptide chains that are held together; hemoglobin has quaternary structure, alright? So quaternary structure arises when we have completely separate polypeptide chains that are interacting with each other. This is a very common phenomenon in biochemistry. It's not at all unusual. Many enzymes, many proteins have multiple subunits, they're called subunits, that come together, okay? Now, the interactions between the subunits that stabilize quaternary structure are exactly the same ones that stabilize tertiary structure: hydrogen bonds, ionic bonds, metallic bonds, hydrophobic interactions, disulfide bonds. All of those can also stabilize quaternary structure. Now, quaternary structure gives rise to some really interesting things. First of all, you have to have multiple subunits to have quaternary structure. Myoglobin, because it has only a single subunit, doesn't have quaternary structure.
But hemoglobin, which is very closely related but has multiple subunits, therefore has quaternary structure, because those subunits have to interact in some way. Hemoglobin will be a lecture I give, I think next week, on one of the most interesting proteins we'll talk about in the entire term. The amount of functionality that's built into this protein is nothing short of astonishing, and some of the most subtle things you can imagine give rise to properties like being able to be an animal, being able to move, okay? Being able to adjust your oxygen as a result of exercise, alright. These things all arise because of the properties built into hemoglobin, and that just scratches the surface. So that will be coming up in a lecture very soon. Okay, and here is a case of a super quaternary structure. This is the protein coat of a virus that has infected several people in here; I hear people coughing and hacking. This is a picture of the cold virus. We can see that there are multiple proteins, each protein has its own distinctive color on here, and these proteins are interacting to form the coat of this virus. When I talked about the ability of viruses to make their coats and have the proteins self-assemble, very much like the pieces of a puzzle putting themselves together, that's actually what's happening here, and what you're forming in the process are quaternary interactions, different proteins interacting with each other. So this is a great big example of quaternary structure. Okay, questions on this before I dive into some other stuff? Yes, sir? Male Student: [Inaudible] Kevin Ahern: So you're talking about quaternary structure, Kev? Is that right? So Kev's question is, if we look at quaternary structure, is there only one force or one bond that stabilizes it, and the answer is no. Just like we don't have one force or one bond stabilizing tertiary structure, so too can we have many bonds and many forces stabilizing quaternary structure. Yes, sir? Male Student: So with that virus, are there stabilizing forces [inaudible] sheer number of quaternary interactions? Kevin Ahern: Yeah, definitely. When we see quaternary structure, we have stabilizing forces. Proteins will not associate with each other without stabilizing forces. Absolutely, you will see that. And you might imagine, in the case of a viral coat, that the stronger those stabilizing forces are, probably the better off the virus is, because the virus spends part of its life cycle in the nice cozy environment of the cell, but for those of you who are sneezing and hacking, what you're doing is expelling cold virus into the atmosphere. That's a pretty harsh environment for a coat to have to survive, so a strong coat is important, and having those stabilizing forces is important. I saw a hand over here, yeah? Female Student: So do the subunits, are they considered proteins, or only once they [inaudible] Kevin Ahern: So her question is, are individual subunits proteins? The answer is that individual subunits are, and I should say that I use protein and polypeptide chain interchangeably; technically, the subunits are polypeptides, and proteins are the complexes that arise from them. That makes it a little bit confusing, but I'm not going to get you one way or another on the exam, so don't sweat that. Yes, sir? Male Student: So how does a virus acquire the energy to get the structure?
Kevin Ahern: How does a virus acquire the energy to get its structure? Well, there's not a simple answer to that. The virus does everything it does using cellular energy in the first place: making the proteins, making the RNA, making the DNA, it uses cellular machinery to do that, so the cell's energy contributes to that. The self-assembly of these coats doesn't require a great deal of energy. Okay? So that's a pretty cool trick they pull off on a nanoscale. Yeah, that was the one I saw earlier, yeah? Male Student: Do subunits have to be tertiary structure? Kevin Ahern: Do subunits have to have tertiary structure? You mean, could I have a fibrous subunit and then have those interact? I can't think of any good examples, but I guess in biology you never say never, right? So I can't think of any good examples. The vast majority of proteins have tertiary structure, okay? 99.9% of proteins out there have tertiary structure. The examples I gave, hair, nails, silk, spider silk, things like that, are relatively rare examples of fibrous proteins. Collagen was another rare example, but collagen is pretty much it. Yes, sir? Male Student: Is it more common for a protein to have a functional stand-alone tertiary structure, or are most proteins amalgamated subunits with quaternary structure? Kevin Ahern: So are most proteins loners, or are most proteins social? That's the question, right? I haven't done a count, but I would say, just off the top of my head, most proteins are multi-subunit. Most proteins are multi-subunit. You don't see nearly as many single-subunit proteins as you see multi-subunit ones, and a multi-subunit protein has tertiary structure as well. It has all four levels, okay? Okay. Well, good questions, good thinking about this stuff. What I want to do now is talk a little bit about that structure and its relationship to sequence. So I first showed you this guy here, ribonuclease. Ribonuclease is a protein that sort of violates the rule I stated earlier. I told you that most proteins have a tertiary structure that is relatively unstable, so we have to be careful when handling them, okay? We can disrupt most proteins' tertiary structure by treating them with detergent. Why? Because detergent interacts with hydrophobic things, and it gets into the middle of that protein and literally peels it apart. We wash our hands to kill bacteria because we're denaturing their proteins. And by the way, when we unfold a protein, we describe it as denaturing it. It no longer has its original shape, it no longer has its original function. Most proteins come apart fairly readily if I add detergent. Most proteins come apart fairly readily if I heat them. Why? Because heat provides enough energy to break hydrogen bonds. I can break hydrogen bonds with heat quite readily. So by putting a protein at a high temperature, I break hydrogen bonds. That's how I'm killing bacteria when I cook food, okay? Now, most proteins don't have good stability to, for example, heat, and heat is going to be the example I give you for the most part, alright? Ribonuclease is an exception to that. Ribonuclease is a protein that breaks down RNA. If you ever work with RNA in a lab, you will hate this protein. You'll hate this protein because it's everywhere. It's on your skin. You touch your skin to any piece of glass in the laboratory, and that glass is automatically covered with ribonuclease. If you're trying to work with RNA and you've just contaminated your glassware with an enzyme that breaks down RNA, that's a real problem.
If you want to kill this protein, you can't kill it by boiling. Most proteins are gone if you boil them; they denature, and they've got no activity left. You boil ribonuclease, and it's just happy, okay? It's an exception to that rule. It's very happy. Well, why is it stable when most enzymes aren't? Most enzymes actually have disulfide bonds, so the answer isn't simply that this one has disulfide bonds and others don't. The answer is that this guy has two interesting properties I'm going to describe to you. One is that it has a way of arranging itself so that it readily re-forms disulfide bonds, even after you've taken it apart. I'll show you that example in a second, okay? It has the ability to rearrange itself so that it re-forms the disulfide bonds if they're gone. To illustrate that, I have to give you an example. Let's imagine I've got some ribonuclease and I do the treatment shown here: I treat it with two chemicals. Mercaptoethanol is a very simple molecule, and its property is that it will reduce disulfide bonds and make them back into sulfhydryls. Remember I said that we put two sulfhydryls together and they oxidize and form a disulfide bond, an S-S bond? What mercaptoethanol will do is break that S-S bond and put the hydrogens back on, so the sulfurs won't be bound together anymore, alright? Now, if I were to take this room and look at the major supports on the sides, and I were to chop all of those, we could imagine that this room would probably be somewhat destabilized. We might not want to stand down here in the front, right? It might hold up, but then again it might not. We've just destabilized the structure of this room; structurally, it's not the same as it was if I get rid of the support beams, right? Right, so we can think of the disulfide bonds as being support beams. If I first treat this protein with mercaptoethanol, what I discover is that treating it with mercaptoethanol alone doesn't destroy it. It still stays functional. So just like this room might still stand, even if you might not want to be in it, we still have the basic structure, and the enzyme still functions. But guess what? The enzyme is not nearly as stable as it was. I can make it come apart. Before I treated it with mercaptoethanol, I boiled it and it was still active. But now that I've taken out the support beams, the disulfide bonds, and I try to denature it, I can denature this protein and it won't work, okay? This tells me that these support beams are very important. Very important. So I get rid of the support beams, and now I can denature it. The other thing that's here is urea. Urea breaks hydrogen bonds, and this guy comes apart. Yes, sir? Male student: [inaudible] Kevin Ahern: The protein itself wouldn't work, that's correct. Yeah, okay? So the combination of breaking the support beams and breaking the hydrogen bonds causes this protein to unfold, that's denaturation, and now this protein doesn't work anymore. This protein's an enzyme, and it doesn't catalyze its reaction anymore. It loses its shape; you see its shape on the side. Instead of urea, I could use heat. Same thing, heat will break hydrogen bonds, so I can heat that guy and that works just as well. Now, most proteins, once I denature them, will not come back to their native form. They will not come back to the way they were. This room, if I cut the beams and take a bulldozer to it and let it fall down, is probably not going to reassemble itself if I magically wave my wand at it.
Alright? Well, ribonuclease has the property that if I'm careful, I can wave that wand and it will go back and refold itself. Now that's really cool, okay? That's really cool. How does that happen? Well, let's imagine this experiment. I start with a very concentrated solution of urea, by the way, and let's say I take that urea and start very slowly removing it, okay? I very slowly start taking the urea out of there, and each time I measure: does the thing that remains have the ability to break down RNA? That's the question, because this guy's an enzyme that breaks down RNA. Before I take the urea out, none of it will break down RNA. But as I start taking the urea out, what I discover is that all of a sudden something in the solution starts being able to break down RNA. Now, that simple experiment tells you a very important thing, okay? It tells you that the information necessary to make this structure is the sequence of amino acids, because that's the only thing that's there to make this happen. There are no other proteins, there's nothing else from the cell; there's only the sequence of amino acids, and the sequence of amino acids is telling this thing, this is the shape you want to have. That's pretty cool. Now, I'm going to take this one step further. If I take all the urea out and I look, I can say, yep, I've got a heck of a lot more of this activity than I had over here where I had none. But do I have as much activity as I had before I took it apart? The answer is no, I don't. Now my question to you is why. If the information necessary to fold this protein is there in this thing over here, why doesn't it all go back to the folded form? It clearly does not, because I don't get as much activity as I started with. Now, I'll give you a hint, and the hint will surprise you, okay? One of the ways in which I can increase the amount of activity when I'm taking out the urea is to put a little bit of mercaptoethanol in there. Now, what does that tell you? Female Student: [Inaudible] Kevin Ahern: What's that? Female Student: Some of the sulfurs don't bond where they're supposed to? Kevin Ahern: Okay, so sometimes the sulfurs bond to each other in the wrong places, not the places where they're supposed to. Because all it takes is two of them bumping into each other, and once you've got two mismatched ones joined, there's no way it's going to fold properly. But if you put a little bit of mercaptoethanol in there, you allow it to come back apart and give it another chance to fold. It's like giving a person a second chance. If you've ever taken a class and you got a poor grade in the class, right, and you said, I want to improve my grade point, so you take the class a second time, you get that second chance. Mercaptoethanol is that second chance. I see a bunch of people looking at each other; I hope that's not a common phenomenon. Okay, does that make sense? Yes? Male Student: Is that a repeatable process? Kevin Ahern: That is a repeatable process. Male Student: So you can add a little bit more after that [inaudible] back up to a certain point? Kevin Ahern: Yeah, you could ultimately optimize it, you could. Yeah, very interesting insight. Yes, Shannon? Female Student: Are you ever going to be able to get back to full activity? Kevin Ahern: That's what he was asking, and the answer is that if you are very careful, you could, yes. Yes, back there? Male Student: Is it a random event, then? [Inaudible] Kevin Ahern: That's a good question. Is it a random event?
The answer is that folding itself is not a random event. We don't fully understand it, and I'm going to give you some statistics on that in a little bit, if not today then in the lecture next time. Question back here also? Male Student: I was just going to ask, if folding is built into all the proteins, how is this the only protein that does that? Kevin Ahern: This is not the only protein that does it, but it's a rare protein. If folding is built into all proteins, how come all proteins don't do that? Because we can imagine that once we take them apart, they all can make disulfide bonds, and maybe they make the disulfide bonds the wrong way, whereas this one has the unusual ability to avoid that and to come back and fold itself. It's a complicated answer to your question. Your question is very complicated, so I don't have a simple answer, but that's one way. What we discover is that when we take proteins apart and they misfold, we can drive the folding in the wrong direction. That's also something that happens, and when we do that, there's no way of getting back over that hump. But I'll say more about that in just a minute. Yes, sir? Male Student: What's stopping one ribonuclease from bonding with a neighbor? [Inaudible] Kevin Ahern: Nothing's stopping it. That's why the mercaptoethanol is important in helping to improve that folding later. Okay, good questions. You're thinking about this. Happy to see that. Okay, I talked about reduction. Here's the actual reduction that happens. It's kind of hard to envision, and some people say they like to see it on the screen. When I talk about reducing disulfide bonds, this is what's going on right here, okay? This is mercaptoethanol; that's what it looks like. You don't need to draw this structure, but I'm showing you that you're going from this structure over to this structure. We've broken that support beam and made this over here. So there's mercaptoethanol, and there's another reagent we can add that does the same thing; it's called dithiothreitol, or you can call it DTT. Not DVT, but DTT, okay? DTT will do the same thing, okay. I've mentioned things that can disrupt structure, and now you've seen several of them. Mercaptoethanol; there's urea, urea's the stuff in your pee that stinks, okay; and guanidinium chloride is another reagent that can disrupt hydrogen bonds. So urea will disrupt hydrogen bonds, guanidinium chloride will disrupt hydrogen bonds, and mercaptoethanol will disrupt disulfide bonds. What did I say would disrupt hydrophobic interactions? Detergent, right? Detergent will disrupt hydrophobic interactions. All of these are important reagents. Okay, let's see, how am I doing on time? Since I had a question on folding, let me just say a couple of things about folding. So first of all, his question was, if folding is built into all proteins, how come I can't refold this protein properly? I said it's a complicated question we can't completely answer, but we can imagine there are misfoldings and so forth that happen. Some proteins have a better ability to fold than other proteins do. Because of that, our cells have special structures in them that help proteins to fold properly. And as we will see, folding proteins properly is important. Let's imagine that I have a protein that I'm making out here, and this protein, for whatever reason, is full of hydrophobic amino acids. A lot of hydrophobic amino acids, okay?
Now, the mature protein is going to fold and put those hydrophobic amino acids on the inside, but in the process of being made, there's this long string of hydrophobic amino acids floating out here in the cell as it's being synthesized, one amino acid at a time. Could that pose a problem? The answer is it could, because if I have an identical protein being made right next to it, we can imagine that the hydrophobic amino acids of that protein might interact with this one and prevent it from folding properly; the hydrophobics like each other, and they'll start associating with each other before the folding can really get going properly. We can make a great big conglomeration of proteins that would be of no use. So cells have structures called chaperones, molecular chaperones, proteins called chaperonins, and you can call them either as far as I'm concerned, that take proteins and allow them to fold without interacting with other proteins. How does this work? Okay, well, a chaperone is basically a barrel-like chamber, and when you have a protein that needs to be folded properly, as it's synthesized it goes into that chamber. It doesn't get a chance to interact with the hydrophobics of other proteins, and the inside of this chamber doesn't allow the protein to stick to the chamber itself. This protein is left to its own devices. This protein is left to fold on its own. So the chaperone allows this protein to go through its own folding without interacting with other things. That's a very important consideration for some proteins. Does it make sense what chaperones are doing, then? What happens if we allow misfolding to happen? If we allow misfolding to happen, in some cases we have disaster. We've heard of mad cow disease. There's a related human disease called Creutzfeldt-Jakob syndrome that's caused by the very same problem that causes mad cow disease. Mad cow disease is caused not by a virus but by something we call an infectious protein. It's called a prion. P-R-I-O-N. And people were very puzzled when they first started studying prions, because no matter how hard they looked, they could take an infected animal (it's also found in a disease of sheep called scrapie; they're both neurological diseases where the brain basically ceases to function, and the same thing happens to humans who get Creutzfeldt-Jakob), they could take these infected samples and transmit the disease from one organism to another, but when they analyzed what was there, they couldn't find any nucleic acid. There's no RNA, there's no DNA, no matter how hard they looked. Finally a man named Stanley Prusiner said, well, the problem is that there isn't any RNA or DNA; what we have is an infectious protein. And people said, how can you have an infectious protein? What are you, some kind of an idiot? And what happened? What he discovered was that this protein was a misfolded protein. Now, a misfolded protein isn't infectious by itself, you wouldn't think, but it turns out this misfolded protein has a very bizarre property. It induces other identical proteins to fold in the same way. The protein that causes mad cow disease, the protein that causes Creutzfeldt-Jakob syndrome, is in every one of your brains. If you get a single misfolded copy of that protein into those cells, it can induce the other copies to start misfolding, which can in turn cause others to misfold, which cause others to misfold. That's a scary phenomenon. That's how an infectious protein propagates itself.
It's not a mutant protein; it's a normal protein found in your brain. Yes, sir? Male Student: So are prions ever recognized by the immune system? Kevin Ahern: Are prions ever recognized by the immune system? I would never say never, but as far as I know, no. There's some effort right now to make antibodies against them to see if they can be used as a treatment, and there's been some limited success with that, but naturally, no. Female Student: So the reason people get, you know, like, mad cow disease is because they're eating infected meat? Kevin Ahern: Will eating meat give you mad cow disease? That's debated. Okay, there was an increase in mad cow disease in England in the late 1980s, and it was followed thereafter by an unusual form of human Creutzfeldt-Jakob syndrome, and the thinking was that maybe the two were related. That is argued. I will tell you the thing that should scare you, though. You talk about stable proteins: the prion protein is more stable than ribonuclease. If you want to denature the prion protein by cooking your meat, you've got to take it up to 700 degrees, and I haven't seen any recipes that say, you know, baste at 700 degrees for three hours. That's not considered a good move, okay? Now, whether it can be transmitted through your food, as I said, that's argued; I won't say that it can or it cannot. I'd make some big enemies if I did. But that's an important consideration. So prions are really scary things. They induce other proteins to do what they've already done. Here's the normal protein, it's called PrP, and here is a bad one, and what do you suppose is happening? Well, we've got some hydrophobics that have started associating with each other, and as they associate, the protein doesn't fold properly; it folds improperly and makes this aberrant structure, what we call amyloid plaques. And when we analyze the brains of animals or people that have these, they have these big, humongous, ugly structures that are just polymers of this protein that look like this. Okay, that's a bad piece of news. I thought we would finish on a good piece of news. We'll finish with a song. You guys ready for a song? Okay. Please join me. [professor starts to sing] "Oh little protein molecule..." Join me! "You're lovely and serene, with twenty zwitterions like cysteine and alanine. Your secondary structure has pitches and repeats, arranged in alpha helixes and beta pleated sheets. The Ramachandran plots are predictions made to try..." Can't hear you! "...to tell the structures you can have, for angles phi and psi, and tertiary structure gives polypeptides zing, because of magic that occurs in protein folding. A folded enzyme's active and starts to catalyze when activators bind into its allosteric sites. Some other mechanisms control the enzyme rates by regulating synthesis and placement of phosphates. And all the regulation that's found inside of cells reminds the students learning it of pathways straight from hell. So here's how to remember the phosphate strategies..." We'll talk about this. "...they turn the GPb's to a's and GSa's to b's." Alright guys, see you Friday. [END]
Immunology Lecture Mini-Course 12 of 14: HIV Infection
This lecture is going to be devoted to discussing HIV infection, and almost everyone here is really very knowledgeable about HIV, am I correct? Okay. So what I really try to do with this particular lecture is to integrate a lot of the immunological information that you've learned so far into the context of HIV infection. Some of it I've already alluded to in previous lectures, but this is meant to summarize it and integrate it in a way that gives you a real understanding of how HIV interfaces with the immune system. The first question to consider is: how does HIV use the efficiency of the immune system in order to efficiently infect and destroy the immune system? It's actually rather brilliant, because HIV has taken over a lot of the pathways that the immune system uses to be incredibly efficient and turned them to its own devices, namely to rapidly disseminate its infection. What features of HIV replication specifically permit it to evade the immune system as well as to become resistant to antivirals? How is HIV replication regulated to synchronize its replication with that of the host cell? As many of you know, a lot of cells are latently infected with HIV, and it's most efficient for HIV to trigger replication when the cell itself is being activated and dividing, in order to generate the most virus possible. How does HIV know what the activation state of the host cell is? And what molecular mechanisms are used by HIV to evade innate antiviral cellular responses? In the past few years, very exciting discoveries have shown that our cells have innate responses inside them that enable us to protect ourselves from viruses; how does HIV evade those relatively rapid and effective defenses? Okay, so again, you're all familiar with the modes of HIV transmission, but what's most striking is the difference between the United States and the rest of the world. In the United States, about half of HIV transmission is through homosexual acts and only about 10% is through heterosexual transmission. In contrast, in the rest of the world, between 80 and 85 percent is heterosexual transmission and only a small fraction, five to ten percent, is through homosexual transmission. This difference is the subject of a lot of investigation, but I'm not going to go into it right now. What I thought was an interesting figure that I picked up along the way looks at the question of how infectious different body fluids are in terms of HIV transmission, and also different cell populations. I remember in the early stages of the HIV epidemic, when I was taking care of patients, everyone's concern was: can you catch it from saliva, can you catch it from sweat? For example, when Magic Johnson announced he was HIV infected and actually went back to playing professional basketball, and as you know basketball players sweat a lot, the question was, if you have a wound on your hand and you put your hand against someone who's sweating, can you potentially get infected that way? People said probably not, and this information basically documents that. Whereas you can isolate virus very readily from plasma, you basically can't isolate HIV from sweat. You can isolate it from feces, and it's very hard to isolate it from urine and from saliva. So these data all indicate that there is less concern about transmission from these bodily fluids, telling us again that casual contact is not how HIV is
transmitted. But what is actually very eye-opening is the difference between semen and cervical-vaginal secretions. You can isolate HIV from at least a third of semen samples, and in fact semen contains a reasonable amount of virus, whereas vaginal secretions, even though virus is detectable, carry much lower levels, suggesting that semen is probably more infectious than cervical-vaginal secretions, which really bears on male-to-female versus female-to-male transmission. As for infected cells, PBMCs obviously carry a high level of virus, but in addition, cells present in semen also carry high levels of virus, again indicating why that is such an important vehicle for transmission. Okay, so now, talking about HIV infection: how does HIV get past the mucosal barriers? You're all primed and ready for this lecture because we just learned about the mucosal immune system. So here is the slide of the dendritic cell: these are epithelial cells, and you recognize that these are processes of the dendritic cells that are poking through into the lumen of the intestine. In a previous study, people showed that DC-SIGN, a molecule expressed on the surface of dendritic cells, specifically binds gp120 and internalizes it into early endosomes of the dendritic cell. What's unique about these early endosomes is that they don't have any digestive capacity, so even though HIV has been taken up by this endosome, it's not harmed; its infectivity has not changed. If you're familiar with the biblical story of Jonah: Jonah got swallowed by the whale, but clearly he didn't get into the whale's stomach and get digested. He was in a compartment that allowed him to remain fully functional and active, and subsequently, when he came to the shores of Nineveh, he got spit out to where he was supposed to go. HIV does the same thing with dendritic cells. It gets swallowed up in this early endosome, still maintains its infectivity, and is now a hitchhiker inside the dendritic cell. And where is the dendritic cell headed? The lymph node. And now, in the lymph node, it spits out the virus, and the virus is exactly where it wants to be: in an environment with large numbers of T cells, many of them activated by this very dendritic cell, and therefore targets that it can rapidly infect. Okay, so the early targets for HIV are dendritic cells and CD4-positive T cells present, as you now know, in the lamina propria. You also appreciate that the intraepithelial T cells are relatively resistant because they're CD8-positive; it's the lamina propria T cells that are susceptible. The virus then gets drained to the lymph node, and ultimately to the spleen. If you want to look at a cross section: here is HIV, here's the mucosal barrier. It either gets across through dendritic cells or it can get through M cells. These are M cells; virus can pass through the M cells, and now there are CD4-positive T cells waiting to attack any pathogen that comes through, but instead they themselves get infected with HIV. That's another mechanism by which HIV passes through the mucosal barrier. These cells get infected, and now those dendritic cells and infected T cells drain into the local afferent lymphatics, then to mesenteric lymph nodes, where they can infect cells there, and in addition they drain ultimately through the lymphatics and into the heart, where
they can now get around in the circulation, as I'll show you again in a few minutes. Now, here is a very dramatic picture from a paper by Danny Douek that I alluded to before: you see two pictures of the intestine taken with a colonoscope, okay. Who here thinks that this is the normal appearance of the intestine? If you think so, just raise your hand. And who here thinks this is the normal appearance of the intestine? Raise your hand. Okay, and I have to be very honest: if I had seen this, I would have said it is clearly abnormal. I mean, look at all this yellow, disgusting stuff here; that can't be what my beautiful, clean intestine looks like. I take good care of myself. And this one looks nice and clean and smooth; that's what you'd want it to look like, right? You could have company over and be proud. It turns out that this is the normal appearance of the intestine. Why? What do you think these bumps and yellow nodules are? Peyer's patches, lymphoid tissue. That's what it's supposed to look like, and this is an HIV-negative individual. However, after HIV infection, these have disappeared. Why have they disappeared? Because HIV has wiped out the CD4 population in this gut. If you do a pathological examination with immunohistochemistry, these are all CD4-positive cells throughout the lamina propria, and after HIV infection they are completely wiped out. And this happens within weeks of the acute infection, so the virus is very rapidly disseminated and very rapidly attacks these cells. Now you can appreciate the tremendous amounts of virus being made by these T cells that are being infected and killed, and that's really the major source of the high level of viremia early on in infection. HIV has basically hijacked the process that the immune system uses to rapidly disseminate antigen-specific T cells throughout the mucosal system. I mentioned in the previous lecture that this is normally a good thing: that way all your mucosal lymphoid tissue throughout the body knows what antigens are out there. For example, in a female, breast tissue has cells that make IgA against whatever pathogens are in the gut, so that protection can be passed on to the baby. However, HIV uses that same circulation to disseminate. Basically, it drains through the draining lymphatics into the thoracic duct, the thoracic duct drains into the heart, now it gets into the circulation, and then those infected cells disseminate throughout the lymph nodes and the mucosal system. This happens incredibly quickly: in studies done in macaques, it has been shown that within a week of exposure, SIV is throughout the lymphoid tissues. Well, why is that important? The reason it's important is that we'd like to be able to take patients who have been exposed to HIV and treat them to block infection. So, for example, let's say somebody is stuck with a needle that's infected with HIV; clearly you'd want to start them on highly active antiretroviral therapy. Somebody has a high-risk exposure, they come to see you, they say, what should I do? You'd want to be able to block it. It turns out that, because of this incredibly efficient, rapid spread, the window for preventing HIV is probably at most one to two days, and that's why the recommendation is to take antiretrovirals within an hour or two after the initial exposure, because the virus is so rapid in its ability to get established. If
you go home and say, what should I do, I'll call a doctor later, it's a disaster. You want to start right away; beyond two or three days it's probably too late, because the spread has been so extensive that you can't block it. Okay, any questions? Okay, so now you have viremia, probably fueled by the vast infection occurring in the mucosal T cells, and then an immune response is generated, and I'll show you the time course in a few slides. Ultimately, you make large amounts of anti-HIV antibodies and cytotoxic T cells that partially control replication for years and years. In the absence of therapy there's a clinical latent phase, but ultimately you get depletion of enough CD4 T cells that you lose your capacity to fight infection, and you develop acquired immunodeficiency syndrome. If you now plot CD4 T cells over time, what you see is that during the acute infection, patients have, as you know, a flu-like disease (I'll show you what the virus does in a minute), you have a drop in CD4 T cells, you seroconvert and start having an immune response, and your CD4 cells tick back up a little, but nowhere near the baseline before infection. Then, over a period of years, on the order of almost ten years, you have a slower depletion of CD4 T cells. The count drops below 500, then below 200, and once it's less than 200 you basically have AIDS, in the sense that you're no longer able to fully protect yourself from other infectious agents, and that eventually ends up being a terminal event. And now, if you look at the global picture of what's happening and focus on HIV RNA in the plasma, which is an indication of viral production: within the first few weeks of infection you have this dramatic rise of plasma viral load, over a million copies per ml, because you have unrestrained replication of HIV. First of all, there are a ton of cells in the gut that HIV can infect; secondly, there's no immune response yet to counterbalance it. However, ultimately an immune response is generated, as I'll show you in a minute, the viral load decreases, and then you have this long chronic low-level viremia; and then, as your immune system gets depleted, you're no longer able to control it and the virus spikes up. If you now look at CD4 and CD8 counts, you have a spike of CD8 cells, which is probably the reason for the control of the viremia, and you keep these CD8 cells. They don't get depleted for most of the infection, which makes sense, because they're not susceptible to HIV; they don't express CD4. However, your CD4 counts slowly, inexorably decrease until, as I showed you before, you have the development of AIDS. In parallel with that, you also start losing your CD8 anti-HIV response. Why is that? What do CD8 cells need to be most functionally active? CD4-positive T cells. So as you lose the CD4 T cells, ultimately the CD8 cells are no longer as functionally capable as they were previously. Okay. You all now appreciate what is the most important tissue in the body: the lymph nodes, where the immune response is generated, because that's where all the B cells and T cells go and that's where the antigen is brought. So if you wanted to destroy someone's immune system, what tissue would you want to destroy? The lymph node. And that's exactly what HIV does. Initially, lymphocytes are going to go through the lymph node, HIV is going to be introduced, and there are a lot of CD4-positive T cells there, a target for HIV infection. And in fact, as HIV is replicating
inside the lymph node, it slowly destroys the infrastructure of the lymph node. If you want to look at it pathologically, early in infection this section is stained for germinal centers: you have a lot of germinal centers packed with lymphocytes, a relatively normal-looking architecture. However, as time goes on, the lymph node undergoes an involution; you see it losing large numbers of T cells and other cells. So in addition to undermining the T cells themselves, by the time you get into the later stages HIV has basically destroyed the infrastructure of the lymph node itself, so even B cells and CD8 cells can't have the normal level of function, because you don't have the lymph node acting as the organ where antigen-specific responses are generated. So that's another mechanism by which HIV compromises the immune system. Well, now let's look at what the virus itself does, and again this is something all of you are familiar with. HIV brings its RNA genome and proteins with it: gp120, the capsid proteins, and so on. If one looks at the genome of HIV, what one notices is that it tends to be drawn in three different rows, and I'll discuss what that means in a second. There are the predominantly structural genes, and you all know gag, pol, and envelope, and several regulatory genes, and the ones that I'm going to talk about briefly today are tat, rev, and vif, particularly in their role in interfacing with the immune system. So what do the three different rows mean when you see the genome drawn this way? This is row one, this is row two, and this is row three, and what it's referring to is the capacity of HIV to use alternate reading frames. Normally, as you know, our genes use a single reading frame: you have an AUG start site and then triplet, triplet, triplet, and you keep going throughout the gene. However, HIV has only a 10,000 base pair genome, so if it used only one reading frame it could only pack in so much genetic information. So what HIV does, very efficiently, is use all the reading frames, and in so doing it is able to get, not exactly three times, but a lot more information, a lot more proteins, out of a single genome. But you pay a price, because it's impossible to generate three reading frames all of which are perfect in terms of giving you proteins; in order to use alternate reading frames, you have to do a lot of splicing to get usable genetic information. The primary structural genes are encoded on unspliced RNAs; the regulatory genes, however, tend to be spliced. So, for example, a piece of tat is transcribed from here and another piece of tat from here, and these two pieces are spliced together; a piece of rev and another piece of rev are spliced together. That allows HIV again to maximize its genetic information for protein production. One thing to hold in the back of your mind is the fact that rev and tat are two critical early proteins required for HIV replication, and they are spliced, which has ramifications in terms of their transport from the nucleus. Okay, so again, all of you know that HIV gp120 binds CD4 and a co-receptor, fusion occurs, and then the capsid with the RNA enters the cell. Again, all of you are also familiar with the fact that, if you look at higher magnification, the CD4 molecule juts out very, very high off the membrane. Why does it have to be so tall? What molecule does CD4 have to touch? MHC class II.
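To make the reading-frame and splicing idea above concrete, here is a small, purely illustrative Python sketch. The RNA string and the "exon" coordinates are invented for the example and are not real HIV sequence; the point is only that the same nucleotides partition into different codons in each frame, and that splicing two segments yields a new contiguous message, the way tat and rev are assembled from two pieces.

```python
# Toy RNA string (invented, not HIV sequence).
genome = "AUGGCAUCGGAUCCAGUAACGGGAUUUAGC"

def codons(rna, frame):
    """Partition an RNA string into codons starting at the given frame offset (0, 1, or 2)."""
    return [rna[i:i + 3] for i in range(frame, len(rna) - 2, 3)]

# The same nucleotides give a completely different set of codons in each forward frame.
for frame in (0, 1, 2):
    print(f"frame {frame}: {codons(genome, frame)}")

# Splicing: joining two non-adjacent segments ("exons") yields a new contiguous
# coding sequence; the coordinates below are arbitrary and only for illustration.
exon1, exon2 = genome[0:9], genome[21:30]
spliced = exon1 + exon2
print("spliced message codons:", codons(spliced, 0))
```

Running this shows why a compact genome gains coding capacity from overlapping frames, and why the spliced regulatory messages are distinct RNA species from the unspliced structural message.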
In a normal situation, you would probably have the TCR plus peptide over here, and in order to reach past that, CD4 has to be tall, right? So now gp120, before it even gets anywhere close to the membrane, binds to these tall CD4 molecules. When CD4 binds gp120, that binding causes a conformational change in the gp120 molecule, and two things happen. First, gp120 now comes down lower and opens up, revealing a face of the structure that had previously been concealed inside, and this face is what binds specifically to the chemokine receptor, either CCR5 or CXCR4. Why is it critical that this face is hidden and only revealed when gp120 binds CD4? Because the immune system can't see it normally. As we know, antibodies want to see three-dimensional structures, and those three-dimensional structures are on the outside of the molecule, not on the inside. By keeping this internal face that binds the chemokine receptor hidden, HIV protects it from being seen by antibodies; it's another way by which HIV evades the immune response. After this face binds to the chemokine receptor, further conformational changes are triggered: gp41, which previously had been in a tightly coiled structure, like a harpoon inside a gun, releases and shoots its hydrophobic end into the cell membrane, and this now allows the viral lipid bilayer to fuse with the cell membrane lipid bilayer. And where does the viral lipid bilayer come from? It comes from the cell itself, and that's why the fusion happens so smoothly; it's almost like a reunion of the lipid bilayers, and as soon as they come together, they fuse. Yes, the question is, why don't antibodies kill HIV? Well, first of all, how effective are antibodies at killing viruses in general? They're really not that effective. They're very good at killing bacteria, for example, because they recruit complement and stimulate opsonization, but to really kill a virus with an antibody is very difficult. There are reports, for example, that antibodies can sometimes pull apart viral proteins and render them non-functional, but the major mechanism by which antibodies protect us from viral infections is by neutralizing the virus, which basically means binding to regions of the virus that are required for the virus to get inside the cell and, by binding to them, blocking entry. So now the question is, why don't antibodies efficiently neutralize HIV that way? I'll discuss that in a few minutes. If you look at the different HIV isolates that are out there, at least two major types of strains of HIV have been described, based on what cells they specifically infect. The two types were initially labeled M-tropic, for being able to infect primary macrophages, and T-tropic, owing to the fact that they were able to infect T cell lines. So M-tropic HIV isolates infect monocytes and primary T lymphocytes, but not T cell lines. And again, this is a common misconception, because some people think that M-tropic, macrophage-tropic, means the virus can't infect peripheral T cells. That's not true. M-tropic isolates infect primary T cells very well; they just do not infect T cell lines like H9. That's how the distinction was first described. In contrast, M-tropic isolates are really critical because they are the ones that initiate HIV infection, and the
observation was made that if you have a patient with a mixture of M-tropic and T-tropic isolates in the bloodstream and they infect a new individual, when you look at the blood of the new individual, it's M-tropic isolates that you find. So it seems, for reasons that still haven't been definitively demonstrated, that M-tropic isolates are preferentially transmitted mucosally, and it's almost as if you're resetting the infection clock, because as time goes on, T-tropic isolates, which can infect T cell lines and primary T lymphocytes but not monocytes, become the predominant isolates as the individual starts heading toward AIDS. So early on, M-tropic isolates transmit the virus, and later, T-tropic isolates become the predominant isolates. Why is that important if you were designing a vaccine? So now, if you're designing an HIV vaccine to prevent transmission, what type of virus would you want to protect someone from? An M-tropic isolate. And in fact that sounds very obvious, but it turns out that early in the history of HIV, the early vaccines were generated against T-tropic isolates. And why is that? Because T-tropic isolates were very easy to grow: they grew in T cell lines, so you could grow buckets and buckets of virus, clone it out, sequence it, and make enough to make a vaccine. M-tropic isolates were a lot harder to grow, because you required primary T cells, which you have to get from patients, and they don't expand as well. Looking back, that obviously wasn't the way to go; those vaccines aren't going to be the most effective. You want to target M-tropic isolates. What is the mechanistic basis, and again you're familiar with this, of why some strains are T-tropic and some are M-tropic? It turns out it depends upon the chemokine receptor utilized: CCR5 is preferentially used by macrophage-tropic strains, and CXCR4 is preferentially utilized by T-tropic isolates. And again, this is a reasonably good explanation, though there's one more wrinkle. Initially, people would say, well, the reason that T-tropic isolates cannot infect macrophages is that macrophages don't express CXCR4; that's logical, makes a lot of sense. However, when you actually look at macrophages, they do express CXCR4, so it's a little bit more complicated: there are probably subsequent blocks in macrophages that somehow prevent infection by CXCR4-using, T-tropic isolates. But the bottom line is that M-tropic isolates use CCR5 and T-tropic isolates use CXCR4, and this was very gratifying, because it explained the basis for why some strains are M-tropic and some strains are T-tropic. There are also dual-tropic isolates, called R5X4 isolates, that apparently infect macrophages, peripheral T cells, and T cell lines relatively easily, although it's unclear exactly what the pathogenic implications of that are. Okay, any questions? Well, how can we definitively say that M-tropic isolates are the critical isolates for transmission of HIV infection? How can we say that definitively? You can't do the direct experiment, deliberately trying to infect someone with only T-tropic isolates, and mice are not infected with HIV because their CD4 and CCR5 don't bind gp120. So again, a lot of you are involved in cohort studies, in patient studies, so let's look at a large cohort of people who are at high risk for HIV infection and ask the question: are there people who are exposed to HIV that don't get infected? So this is to show you again the chemokines: SDF-
1 is the normal ligand for CXCR4, and MIP-1 alpha, MIP-1 beta, and RANTES are the normal ligands for CCR5. Now, if you look at a cohort of patients, there was a small number of individuals who had HIV-infected partners and, despite multiple, multiple exposures without taking the appropriate precautions, did not get infected with HIV. Once the chemokine receptor usage of HIV was discovered, researchers went back to these patients and asked, what about their CCR5 and CXCR4? Is there something special about them? It turned out that these were individuals who are homozygous for a defect in CCR5 expression, a premature stop codon in CCR5, so they don't express CCR5. Maybe fifteen to twenty percent of the population are heterozygous for this defect; heterozygotes still express lower levels of CCR5, so they're not absolutely resistant. But less than five percent are homozygous and don't express CCR5 at all, and those individuals were relatively very resistant to being infected. There are a handful of homozygous patients who have been infected, but overwhelmingly they're protected. So what is this teaching us? That M-tropic isolates and CCR5 are critical for transmission, because these people were exposed to CXCR4-using isolates as well, and those didn't seem to be efficiently transmitted. So this is a very, very important observation in terms of teaching us the critical role of CCR5 in transmission, and it's very important therapeutically, because we now have CCR5 inhibitors that are licensed for use as a new therapy for HIV. Okay, so now: HIV comes in with its RNA, which has to get reverse transcribed into cDNA in order to ultimately integrate into the host chromosome. This process occurs through the enzyme reverse transcriptase, which HIV brings with it. There's good news and bad news here. You might think the good news is that reverse transcriptase makes a lot of mistakes; that sounds good for us, because it means the genome of HIV will continuously accumulate errors and not be effective at generating future variants. It turns out that's actually the bad news. Instead of having a stable genome that allows our immune system to target an epitope that stays the same, or drugs to target a protein or an enzyme that stays the same, HIV is continuously mutating, and this has allowed HIV to continuously generate variants that may turn out to be resistant both to the immune system, as I'll show in a few minutes, and to drugs. So the use of this error-prone reverse transcriptase has major implications for HIV's capacity to rapidly evolve and to evade the immune system as well as drug therapy. So where is HIV made? Where is the primary source of HIV in tissues? It turns out that HIV can replicate in activated CD4-positive T cells, in macrophages, and at low levels in resting memory CD4-positive T cells. The primary location for HIV replication is activated T cells; a lower amount comes from macrophages, and an extremely low amount comes from resting memory T cells. I'll come back to that last population in a few minutes, but for now let us focus on the large amount of HIV being produced by CD4-positive T cells. What happens is that every round of infection introduces mutations. How much HIV do you make in a day? We know this from a study by David Ho.
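For readers who want to check the CCR5 carrier percentages quoted above, here is a small back-of-the-envelope Hardy-Weinberg calculation in Python. The defective-allele frequency of 10% is an assumption chosen only to illustrate the arithmetic, not a figure given in the lecture.

```python
# Hardy-Weinberg sketch for a CCR5-defective allele; q = 0.10 is an assumed value.
q = 0.10          # assumed frequency of the defective CCR5 allele
p = 1.0 - q       # frequency of the normal allele

heterozygous = 2 * p * q   # one defective copy: lower CCR5 levels, partial resistance
homozygous   = q * q       # two defective copies: no CCR5, strong resistance to M-tropic virus

print(f"heterozygous carriers: {heterozygous:.1%}")   # ~18%, in the 15-20% range quoted
print(f"homozygous individuals: {homozygous:.1%}")    # ~1%, well under the 5% quoted
```

Under these assumed proportions, the heterozygote and homozygote fractions come out in the same range as the numbers mentioned in the lecture.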
In fact, if you take patients who have a stable level of virus in their bloodstream and put them on a single anti-HIV drug, the level of virus drops dramatically, but that only works for a certain period of time: within a matter of a week or two the virus goes back up to where it was before. Why is that? Same drug, same patient. Well, if you sequence the virus, you discover that at time zero there were no mutants against this drug, but a few weeks later a hundred percent of the virus carries the mutation associated with resistance to the drug. What happened is that at time zero there was already a very, very small subpopulation of mutants resistant to the drug; they did not replicate as well, so they remained a tiny minority. Then you introduced a new selection pressure, the drug. The variants that had been replicating very well but were susceptible disappear, and the dominant isolate now becomes the mutant that is resistant to the drug.
And it's not just drugs for which this happens. Imagine, and this is the question you asked, that you have made a great neutralizing antibody that blocks HIV from binding to CD4. Fantastic. What happens? The level of virus goes down, because you've been successful. But then HIV mutates that epitope; you still have this great antibody, but it no longer recognizes its target, and HIV replication goes right back up. So you make a new antibody that recognizes a new epitope, it blocks HIV replication, the level of virus goes down; then a new mutant arises that lacks that epitope, and the virus comes right back up. This goes on over and over again during the course of HIV infection, and in fact that is the major problem: most of the epitopes that antibodies seem to recognize are ones that are really dispensable for the function of gp120, which is why the virus can mutate them away and still remain infectious. The holy grail of humoral immunity and vaccine development is identifying those epitopes that antibodies can bind to and that HIV cannot mutate, because if it mutated those epitopes it would no longer be infectious. I'll discuss those particular epitopes in a few minutes.
Does that answer your question? All right; the immune system is doing a pretty good job, don't knock it, but the problem is that HIV, because of its mutation rate, is able to avoid it. If you look at the immune response, it controls infection but does not eradicate it. Cytotoxic T cells probably play the critical role during the initial resolution of the high level of viremia, because antibodies are made a little later, but subsequently both CTLs and antibodies play a critical role in keeping the level of virus down. As the CD4 counts go down, the CTLs become compromised, and gp120 has by then mutated so many times that the antibodies you can make no longer control it, so the viremia goes way up.
Now, the CTL response is critical for controlling viral infections. As you well know, a cytotoxic T cell sees a peptide, derived in this case from viral proteins, in the context of a class I MHC molecule; class I presents endogenous antigens on the cell surface. If you look at the structure of the peptide, as you recall, it has two faces: one face points down toward the MHC molecule, and what do we call those residues pointing down into the MHC groove? It's even helpfully written over here: anchor residues. Then there are the residues pointing up toward the T cell that is seeing it. It turns out, as I'll show you in a minute, that HIV can avoid the immune system not only by mutating the epitopes that the T cell sees, in the same way it mutates the epitopes that antibodies see, but also by mutating these anchor residues. Imagine that the peptide lacks an anchor residue: is that peptide now going to be presented on class I MHC? No. So even though you have fantastic cytotoxic T cells, and you can do tetramer analysis and see loads and loads of HIV-specific CTLs, they are looking for that peptide to be expressed on the surface of the HIV-infected cell and they no longer see it, again because HIV can mutate and generate these variants.
In fact, there are several strategies used by HIV to evade the CTL response. As I said, immune epitope escape mutants, which I'll show you in a minute; down-regulation of class I MHC expression; and a decrease in the qualitative activity of HIV-specific cytotoxic T cells: chronic antigen exposure decreases cytotoxic T cell activity, decreases perforin production, shortens telomere length, and increases susceptibility to apoptotic death. And as we mentioned before, you need CD4-positive T cells to rev up CD8 function; if you lose CD4-positive T cells you not only diminish CD8 activity, you also diminish B cell activity and the ability to make new neutralizing antibodies against the mutants being generated.
Just to refresh your memory, this slide shows peptides with anchor residues, and this is a paper that I believe Bruce co-authored, demonstrating a group of children who basically lost the ability to recognize HIV by virtue of a mutation that occurred at an anchor residue. It shows that if you take the wild-type peptide, you can demonstrate that the peptide gets presented.
However, if you now take the mutant peptide, the one mutated at the residue marked with the blue triangle, then even though you add it appropriately, this peptide will no longer bind to HLA-A2. And if you look at the ability to control infection: with the mutant virus you cannot control the infection and p24 levels go up, whereas the wild-type virus is controlled quite well by the cytotoxic T cells that are added. Again, this demonstrates that loss of those anchor residues prevents presentation of the viral peptide by HLA-A2, and that is one way HIV can avoid a cytotoxic T cell response.
Okay. If you recall, I mentioned Nef in a previous lecture. One of the activities of HIV Nef is that it can down-regulate the expression of surface molecules. One molecule it down-regulates is CD4, which is important because it prevents the cell from being superinfected with another HIV virion. However, Nef also down-regulates expression of class I MHC. This is from another paper by Bruce showing that if you infect with a Nef-negative mutant and stain for expression of HLA-A2 (the other axis of this plot is a marker carried in the virus), you can show that the HIV-infected cells express HLA-A2 to the same degree as the uninfected cells. Is that clear? However, if you now take wild-type virus with Nef expressed, the uninfected cells express the same level of HLA-A2, but the virally infected cells dramatically lower their expression of class I MHC. So Nef is down-regulating MHC class I expression. If Nef down-regulates class I MHC expression, what happens to the ability of cytotoxic T cells to kill these infected cells? It is compromised. And in fact, if you add cytotoxic T cells, the viruses that lack Nef are nicely inhibited, whereas the ones with functional Nef are not killed by the CTLs, because the CTLs cannot see peptide in MHC class I to target and kill the cells. Okay? This is the paper by Bruce with David Baltimore describing how Nef down-regulates class I MHC and protects infected cells from killing by cytotoxic T cells.
Okay, another molecule. Earlier today we discussed CTLA-4 as an inhibitory molecule that down-regulates T cell function; there is another molecule that similarly inhibits T cell function after a while, to turn it off, and it is called PD-1. Here is the picture: a naive T cell sees MHC plus peptide through its TCR, gets co-stimulatory signals through CD28, cytokines are made, and the naive T cell becomes an activated effector cell, in this case a CD8-positive cytotoxic T cell. If everything goes well you clear the infection and it becomes a memory T cell. An important concept is that the immune system is designed to fight short, focused wars. Think about a classic infection: you get infected, you are sick for a couple of days, maybe a week, then your immune system kicks in, you eliminate the infectious agent, and you get all better. That is what our immune system is designed to do; it is not designed to fight a chronic war. Why not? Because what is the most common antigen we are going to see day in and day out? Self antigen. So there is a built-in concern:
if we are responding immunologically to something for a long period of time, maybe it is self, and therefore we should turn off that immune response. Whenever an immune response goes on for too long, we have a built-in process by which we down-regulate it. But what happens if we cannot clear the infection? Then we essentially call off our immune response too early, before we have eradicated the pathogen, and now we have a chronic infection. One mechanism by which we turn off the immune system during chronic infection is by expressing this molecule PD-1. When PD-1 interacts with PD-ligand expressed by an antigen-presenting cell, a macrophage or dendritic cell, it sends a signal to the T cell, which becomes what is called an exhausted T cell: no longer fully armed with all of the cytotoxic molecules it needs to be effective. It is time, essentially, to hang up your perforin and go home. However, the infection is still going on, and that is part of why it cannot be cleared. This observation has led to the postulate that if we blockaded the PD-1/PD-L1 interaction, with antibodies against either PD-L1 or PD-1, maybe we could turn off this negative signal and reinvigorate these T cells so they get back to killing HIV-infected cells. There are currently clinical trials being initiated looking at the ability of this intervention to reinvigorate the CTL response against HIV. Okay, any questions?
This is just one slide to show you a project in our lab, where we are using molecular engineering to make designer cytotoxic T cells. As you very well know, what gives T cells their antigen specificity? The molecule they express: the T cell receptor. So if you have a T cell expressing one receptor and you give it a different receptor, what do you do to its specificity? You change it. Can we do this for HIV? The approach, in collaboration with Bruce's group, is to take HIV-specific CTL clones, clone out the T cell receptor alpha and beta chains, and put them into a lentiviral vector. This allows us to generate lentivirus, and we can then transduce peripheral CD8 T cells of any specificity; once a cell has been transduced with this vector it carries the genes for the alpha and beta chains from the HIV-specific CTL clone, and we have transformed that peripheral CD8 T lymphocyte into a genetically engineered HIV-specific CTL. What is really nice about this approach is, first, that we can take the TCR alpha and beta chains from any CTL clone, so we can identify very potent, high-affinity TCRs that are very good at killing HIV-infected cells. In addition, we can engineer other genes into the vector that could dramatically increase the functional capacity of the CTL: shRNAs that down-regulate PD-1, or granzyme B to make it a more effective killer. The vision is that we could harvest peripheral CD8 cells from a patient; since they have not yet proliferated, because the antigen they recognize is not one they have seen, they are fresh, almost naive T cells, not exhausted, and more than happy to go into battle. We reprogram them ex vivo and then potentially give them back to the patient, and
now the patient has reinforcements to reinvigorate its immune response against HIV. The question was: won't HIV just out-mutate this? Yes, it is an excellent point. How is this going to get around the mutation problem? One of the reasons HIV is so successful at escaping is that the immune system tends to generate a very narrowly targeted response against only a few epitopes in any given pathogen: you don't make cytotoxic T cells that recognize a hundred epitopes, you tend to make cytotoxic T cells that recognize what we call immunodominant epitopes. So what we would hypothesize doing is generating a mixture of lentiviral vectors, each of which encodes a TCR recognizing a different epitope, and then transducing these peripheral CD8 cells with the mixture of lentiviruses, thereby giving the patient a broad array of TCRs recognizing a broad array of epitopes. The virus may mutate against one, but not against another. Conceptually you could call that immunological HAART, highly active antiretroviral therapy: in the same way that three drugs work great where one drug works poorly because of mutation, we would postulate that multi-epitope therapy would work well, whereas targeting the single naturally generated immunodominant epitope does not.
Okay, now let's talk about antibodies. Neutralizing antibodies can prevent viral infection: the antibody binds to the virus and, in this case, prevents gp120 from binding to CD4 or CCR5. Antibodies can have different effects in HIV infection. The good news is they can be neutralizing, interfering with the binding of the virus. However, there are negative things antibodies can do, and these have been reported; it is unclear how clinically relevant they are, but they have clearly been seen in in vitro studies. Some antibodies may actually enhance HIV infection: one hypothesized mechanism is that if an antibody binds HIV and then binds to the Fc receptor, that may give HIV another way of getting inside a cell, because, as you remember, Fc receptor-antibody complexes get internalized; that could let HIV enter independently of the CD4-CCR5 interaction. In addition, HIV may also stimulate autoimmunity, because some antigens present in HIV may resemble those of our own cells, and indeed some HIV patients do have autoimmune diseases. So antibodies may be both good and, possibly, bad.
I said earlier that one of the problems with antibodies is that they tend to recognize narrow epitopes that are specific for the strain infecting that individual, and those can be mutated. The holy grail is identifying epitopes that are critical for HIV and that it cannot mutate away; that is why one would want broadly neutralizing antibodies that can neutralize a very wide range of HIV strains. Typically, if I took neutralizing antibodies from one patient and tried them against virus from a different patient, they would not work very well, because they are strain-specific. A very small number of broadly neutralizing antibodies have been identified, and they tend to come not from patient serum but from B cells that have been immortalized as hybridomas. One region, recognized by 2F5 and 4E10 (these are the names of the monoclonal antibodies), is present at the stalk
of gp41. There are other monoclonals as well: 2G12, which recognizes a carbohydrate antigen, and b12, which recognizes the face of gp120 that interacts with CD4. These are rare broadly neutralizing antibodies, and there was tremendous hope that they might be the holy-grail antibodies we could use to prevent HIV infection. What you would want to do is make a vaccine that stimulates these antibodies. One problem that arose is that it has been reported that the 2F5 and 4E10 epitopes have similarities to self proteins, in this case cardiolipin. So you make a vaccine and immunize the patient, but they are not going to make antibodies, because the immune system looks at the vaccine, decides it is self, and will not respond against self. That may also explain why patients themselves do not make high levels of these antibodies: the epitope is seen as self. There are papers suggesting this may not be completely true, but at least it is a possible explanation of why some highly neutralizing antibodies are hard to elicit. 2G12 recognizes a carbohydrate antigen, and those are notoriously difficult to raise antibodies against in a vaccine; people are currently working on that. Something I find a little depressing is an experiment in which they asked: let's make these antibodies, treat patients with them, and see what happens. They took patients who had been on antiretroviral therapy and had suppressed the level of virus to undetectable; the ART was then stopped and they looked at the virus that came back. Even though the patients were on a cocktail of 2F5, 4E10, and 2G12, the virus ultimately came back, and the virus that came back turned out to be resistant to these previously broadly neutralizing antibodies. That was disconcerting, because it suggested that even these epitopes can be mutated while the virus remains able to replicate. So again, what needs to be identified is an epitope so highly conserved that HIV cannot mutate it without becoming non-functional. To answer the question: these different curves represent different patients; the source citation got deleted when the slide was made, and you obviously want to give credit where credit is due for this study.
Okay, now let's talk about memory T cells. The whole point of memory T cells, in terms of lifespan, is that they are long-lived, for decades. So if you were a virus and wanted to hide inside a cell for a long period of time, which cell would you pick? A memory T cell. The fact that HIV infects resting memory T cells and lives in them has created a major problem for eradicating HIV infection: you can take patients, treat them for years and years with antiretroviral therapy, see no detectable virus, then stop the antiretroviral therapy and boom, the virus comes back within a few weeks to months, because it comes out of these resting memory T cells and gets reintroduced into the lymphoid system. This capacity of HIV to live inside resting memory T cells, which
can live for decades, has made it almost impossible at this stage to eradicate HIV infection just by giving antiretroviral therapy. What you would need is a way of identifying which resting memory T cells are infected with HIV and eliminating those cells, and as of yet we have not developed the techniques and technologies to do that.
Okay, so now you have the resting T cell; it is not actively replicating and not actively making HIV. How does the virus know to turn on when the cell becomes activated? You keep asking me for T cell signal transduction slides, so I keep providing them, but what I want you to focus on are these transcription factors, which you know well: NFAT, NF-kappa-B, AP-1. These are made when the T cell is activated. Now, this is a piece of the long terminal repeat (LTR) of HIV. If you look at the sequence of this regulatory region of the LTR, you see motifs that bind AP-1, NFAT, NF-kappa-B, and SP1, all cellular transcription factors. So the cell gets activated, it makes its cellular transcription factors to turn on cellular genes, and at the same time those transcription factors bind sites in the HIV long terminal repeat and activate HIV. So as the cell is being activated, this latently infected provirus is also being activated, and it spews out large amounts of virus that reignite the infection. This is how HIV hijacks our normal regulatory and immune machinery for its own benefit.
Okay. HIV also utilizes two proteins, Tat and Rev, that are pivotal for HIV function. Tat is critical because it permits elongation of the RNA transcript; in the absence of Tat you get very, very low levels of virus being produced. What is the role of Rev? Rev plays a critical role that depends on the fact that its own mRNA is spliced while the structural RNAs are not spliced. If you look at the genomic RNA, it has a sequence called the RRE, the Rev responsive element. Now look at what happens during transcription: you have integrated viral DNA, here is the nuclear membrane, this is the nucleus, and this is the cytosol; you have unspliced RNAs and you have spliced RNAs. Normally during replication, the unspliced structural RNAs (making Gag, Pol, and Env) contain sequences called CRS, cis-acting repressive sequences, which prevent these RNAs from leaving the nucleus, so they would ultimately undergo degradation. Rev, however, is made from a spliced RNA that has this repressive sequence spliced out, so its mRNA can leave the nucleus and be translated into the Rev protein. Rev protein then gets back into the nucleus and binds to this cis-acting site, the Rev responsive element, and that gives the structural RNA a pass, allowing it to leave the nucleus and make the structural proteins. That is the mechanism by which Rev works: the cis-repressive sequences keep the structural RNAs inside the nucleus, where they would be degraded, until Rev is made; Rev comes back in, binds the Rev responsive element, and gives them a pass to leave. Why does it work this way? Probably because what this allows the virus to
do is build up a large amount of structural RNA inside the nucleus; Rev then goes in, and in a very short period of time a large amount of RNA leaves the nucleus, goes to the ribosomes, and makes a large amount of HIV protein very quickly. It is a bit like cell culture, where you want to align the replication of the cells to exactly the right time frame; that seems to be what HIV is doing with the Rev protein.
Okay, this is another viral factor, Vif, the viral infectivity factor, and it is the last topic I am going to touch on. It was found that in certain "permissive" cells, deleting Vif made no difference: the delta-Vif virus was made, that virus infected permissive cells, and they kept making virus, so even though the virus lacked Vif it was still infectious. If you took a "non-permissive" cell and infected it with delta-Vif, it also made virus, and you could detect virus in the supernatant; but if you took that virus and tried to infect either permissive cells or non-permissive cells, no virus was produced. That was surprising. It is not surprising that the non-permissive cell could not be productively infected (non-permissive means non-permissive), but why would the permissive cell suddenly no longer be infectable after the virus had passed once through a non-permissive cell? Is that clear? Wild-type HIV infected the non-permissive cell perfectly well and could go on to infect everything. So something was changing the actual function of the virus after it went through a non-permissive cell, and that was not happening when it went through a permissive cell. Okay, is that clear?
It turned out that when they did a microarray analysis and subtractive hybridization to identify which genes were present in permissive versus non-permissive cells, they identified a gene called CEM15, present in the non-permissive cells, that selectively inhibited HIV replication in the absence of Vif. If you took cells that had previously been permissive (delta-Vif infects them with very good virus production) and you now added CEM15, then in the absence of Vif the virus they produced was no longer infectious. This clearly showed that CEM15 was critical, interacting with Vif in this permissive versus non-permissive behavior: by giving it to a permissive cell you can make that cell non-permissive. What was this protein doing? It turned out that this protein is APOBEC3G, a cellular protein whose exact function, aside from this, is unclear. What happens is that the cell gets infected with the delta-Vif virus, the virus makes its viral RNAs and structural proteins, but at the same time APOBEC3G somehow finds its way into the virion. Does the name APOBEC ring any bells? APOBEC3G turns out to be similar to AID, the enzyme used for both somatic hypermutation and class switching. What does AID do to a nucleotide? Anybody remember? It deaminates: it deaminates a C to a U, which ultimately becomes a T, and that generates the mutations in the antibody genes, which is fine there because it is limited to a few hotspots.
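As a purely illustrative toy (not a real HIV sequence and not a realistic mutation model), the sketch below shows what cytidine deamination of the minus-strand DNA looks like from the point of view of the plus (coding) strand: C-to-U changes on the minus strand read out as G-to-A changes on the plus strand, and enough of them scramble codons. The sequence, hit count, and random positions are all made up for the example.

```python
import random

random.seed(0)

def apobec_hypermutate(plus_strand, n_hits):
    """Toy model: APOBEC3G deaminates C -> U on the minus-strand cDNA made
    by reverse transcriptase, which appears as G -> A changes on the plus
    (coding) strand. Hit count and positions here are arbitrary."""
    seq = list(plus_strand)
    g_sites = [i for i, base in enumerate(seq) if base == "G"]
    for i in random.sample(g_sites, min(n_hits, len(g_sites))):
        seq[i] = "A"
    return "".join(seq)

# Hypothetical short stretch of coding sequence (not a real HIV sequence)
original = "ATGGGATGGAAGTGGCAGTGGCTG"
mutated  = apobec_hypermutate(original, n_hits=5)

print(original)
print(mutated)
# With enough G->A changes, codons get scrambled (for example TGG, Trp,
# can become TGA, a stop codon), which is how hypermutation leaves a dead,
# non-infectious provirus.
```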
It turns out that APOBEC3G has exactly the same capacity, namely deaminating C to U, and it introduces mutations into the viral genome. If you introduce enough mutations into the viral genome, what do you think happens to the function of the virus? With enough mutations you make it no longer viable or infectious, and in fact that is what APOBEC3G does: it sits in the virion, and as reverse transcriptase copies the genome, APOBEC3G deaminates it. So when this virus infects a new cell it carries all of these mutations and it is a dead virus. Therefore, in the absence of Vif, when you infect a non-permissive cell, which has APOBEC3G, the enzyme gets packaged, causes mutations, and the progeny virus is no longer infectious. What do you think the cells that are permissive for delta-Vif lack? APOBEC3G, which is why, even though the virus lacks Vif, it does not matter: there is no APOBEC3G to get packaged inside the virion. So what does Vif do? The virus infects the cell; Vif is an early gene product, so it is made very quickly; it binds APOBEC3G and sequesters it so it cannot get into the virion. Now the virus that gets made does not contain APOBEC3G and is infectious, because no mutations have been introduced. So in essence we have evolved an innate way of fighting retroviral infections, one that is probably very effective and that we did not even know about, but it works only until HIV comes along with this strategy of using Vif to sequester APOBEC3G; that is why HIV is able to replicate so efficiently. Is that clear? Okay.
So, some questions to consider. How does HIV use the efficiency of the immune system to efficiently infect and destroy that same immune system? We talked about multiple ways it does that, from being carried by dendritic cells to the lymph node and disseminated throughout the lymphoid system. Which features of its replication allow it to evade the immune system? The high error rate of reverse transcriptase allows HIV to mutate, so any epitope, be it an antibody epitope or a CTL epitope, including the anchor residues, can be mutated to avoid recognition. How is its replication synchronized with cellular activation? By utilizing cellular transcription factors to regulate its own replication. And what molecular mechanisms does HIV use against innate antiviral cellular responses? Again, the Vif-APOBEC3G system. Thank you very much for your attention.
Medical_Lectures
Hepatocytes_Liver_Histology_Part_37.txt
see, then it will go back; but some extra amount of plasma fluid remains. Most of it enters the space of Disse and then goes back, but the extra portion, which does not drain back into the sinusoids, drains toward the periphery, and that becomes lymph. So lymph is moving from the center to the periphery. Now, in between, the hepatocytes are drained on one side and are secreting on the other side; what is the substance? Bile. And bile is moving from the center to the periphery. So the movement of blood is from the periphery to the center, which is called centripetal flow, while the movement of lymph is from the center outward, and the movement of bile is also from the center outward, which is called centrifugal flow. Is that right?
Now, just as there are input systems, there should be drainage systems also. The drainage systems at this side of the corner are, yes, lymphatic channels, and here, the bile; I have not drawn the cells here, but bile will be coming there. So we can simply draw the lymphatic, which is a drainage system, and here the bile duct. So how many channels or systems are present at the corner of this classical lobule? Four. But unfortunately, because the early histologists did not recognize the lymphatics, they thought there were only three: a branch of the portal vein, a branch of the hepatic artery, and a bile duct. I'll draw the bile duct in black so you do not get confused, and I'll draw the lymphatic in this color but dotted, so in the diagram they show themselves clearly. So originally doctors thought that at every corner these three things are present, and they called it the portal triad. Later, of course, it became clear that there are really four structures there, so strictly speaking it is more than a triad, or we simply call it the portal area. So at the outer corners of the hexagonal lobules we have portal areas, and in every portal area we have two inputs, a portal vein branch and a hepatic artery branch, and two drainage systems, a lymphatic drainage system and a bile duct drainage system. Is there any question here? No.
Now we come back to this diagram. You should mention that blood is moving from the periphery to the center, and of course you should not forget to mention that bile is moving from the center to the periphery, and in the same direction the lymph is moving. Lymphatics carry lymph. No problem up to this point. Now the central veins: these central veins will come together. The central veins take the blood from the center and drain into the sublobular veins, and the sublobular veins come together and make, yes, who will tell me the name of this vein? The hepatic vein. And the hepatic veins will eventually drain into, yes, the inferior vena cava. So in this way you can see, in circulatory terms, how the liver is really working. Again, there are two input systems, the hepatic arterial input and
the portal venous input. Both of them make their branches at the corners of the classical lobule; from the corners, blood from both systems drains into the hepatic sinusoids, moving from the periphery to the center; then it collects into the central vein; the central veins come out and drain into the sublobular veins and eventually into the hepatic veins and the inferior vena cava, coming back to the right heart. Is that clear? No problem with this.
At the same time, we can see the bile system draining at the periphery: at every corner there is a bile duct, and these bile ducts will also come together, like this. Are you understanding me? These bile ducts from every corner join; this is the left hepatic duct and this is the right hepatic duct, and they come together as the common hepatic duct. And of course some of you must know there is something called the gallbladder; this duct from the gallbladder is the cystic duct. So again: this is the right hepatic duct, this is the left hepatic duct, making the common hepatic duct; the common hepatic duct meets the cystic duct and now converts into the common bile duct, and that is what is going down, the common bile duct. I will draw the whole structure here so that, once and for all, it is clear; it is a side diagram, but it has very important anatomical implications. And what is this structure here? Yes, the pancreas, great. Now what really happens: this is the right hepatic duct, this is the left hepatic duct, making the common hepatic duct; from here, I will draw the gallbladder, and from the gallbladder comes the cystic duct; and now all of this together is called the common bile duct. It passes behind the duodenum and, of course, into the pancreas, and what is this? The pancreatic duct. Then the common bile duct and the pancreatic duct come together and open here into the duodenum at the ampulla of Vater; here the pancreatic juices and bile come down together. So again, the relationship: the right and left hepatic ducts make the common hepatic duct; the common hepatic duct meets the cystic duct, making the common bile duct; and the common bile duct, bringing the bile, and the pancreatic duct, bringing the pancreatic secretions, come together and open at the ampulla in the second part of the duodenum, so that bile and pancreatic juices are released into the duodenum. Is that clear? Let's have a break here.
Medical_Lectures
Hyperglycemic_Crises_DKA_and_HHS_Part_1_of_2.txt
[Music] [Applause] Hello, I'm Eric Strong from the Palo Alto Veterans Hospital and Stanford University. Today I will be talking to you about hyperglycemic crises, specifically diabetic ketoacidosis and the hyperosmolar hyperglycemic state. Here are the learning objectives of this talk: first, to be able to define, recognize, and discriminate between diabetic ketoacidosis and the hyperosmolar hyperglycemic state; second, to understand the basic pathophysiology of each syndrome; and finally, to understand their general treatment strategies, including potential complications from treatment.
So first, what is diabetic ketoacidosis, or DKA, and what is the hyperosmolar hyperglycemic state, or HHS? These are two of the most serious acute complications of diabetes. As we will see shortly, they share some pathophysiologic mechanisms and clinical features; however, on the most basic level, DKA is a combination of hyperglycemia, ketosis, and acidemia, while HHS is a combination of hyperosmolarity, hyperglycemia, and altered mental status. Some clinicians use alternative terms for HHS, such as hyperosmolar non-ketotic state, which may be abbreviated as HONK or HHNK; however, my impression is that these alternative terms are becoming less common in favor of HHS. Occasionally I have been asked whether the diagnoses of DKA and HHS are mutually exclusive. While they are not mutually exclusive by definition or by pathophysiologic mechanism, I have personally never seen a patient in whom I was convinced both syndromes were occurring simultaneously. My personal experience certainly does not rule out the possibility, but it does suggest that if it can occur, it is probably a rare event.
Let's look at the epidemiology of these two syndromes. As with many medical conditions, the traditional teaching does not exactly match reality. The traditional teaching is that DKA is seen predominantly in type 1 diabetics and those under 65 years of age, while HHS is seen predominantly in type 2 diabetics and those over 65. In reality, however, most patients with either DKA or HHS have type 2 diabetes, and many patients with DKA are older than 65. This diagram demonstrates where some of these misconceptions originate. First, you can see that among all patients, both sick and healthy, there are fewer type 1 diabetics than type 2 diabetics. Type 1 patients are more likely to develop a hyperglycemic crisis, and that crisis is more likely to be DKA than HHS. However, even though the risk of an acute crisis is lower in type 2 patients than in type 1, because there are many more of them, most patients with these complications are type 2. As a side note, you can also see that HHS is overall slightly less common than DKA, though this largely depends on your patient population; for example, anecdotally, at my home institution, the Palo Alto VA hospital, where the average patient is older and more chronically ill than at most other hospitals, I find HHS to be equally prevalent, if not more so, than DKA.
Regarding the mortality of DKA and HHS: in current times DKA actually has a remarkably good prognosis, with a mortality of less than 1%, while the short-term mortality in patients with HHS is around 5 to 10%. This surprises many people, since patients with DKA generally have significantly more severe electrolyte and acid-base derangements. The explanation is that the fatalities in both conditions are usually due to comorbid conditions or to the event that triggered the hyperglycemic crisis; since patients with HHS tend to be older and more chronically ill than those with DKA,
their mortality rate is higher.
Let's move on to the pathogenesis of these conditions. As with the epidemiology, the traditional teaching does not match reality. The traditional teaching usually explains the difference between these conditions by stating that DKA is due predominantly to too little insulin, while HHS is due to too much glucose. This is a tremendous oversimplification; in reality there is a great amount of overlap in the pathogenesis of the two syndromes. I want to talk a bit about these overlapping pathophysiologic mechanisms, because it will lead to a better understanding of the triggers of each syndrome as well as their distinct clinical features. First, some patients will develop worsened insulin deficiency, perhaps due to non-adherence with insulin, or perhaps because we are seeing the first presentation of type 1 diabetes. Whatever the reason, insulin deficiency leads to increased lipolysis, which leads to increased delivery of free fatty acids to the liver; in the liver, these excess free fatty acids get converted into keto acids. Next, as a separate problem, some patients may have excess glucagon, catecholamines, and/or cortisol, all of which may be due to an acute infection or excessive physiologic stress, such as that seen in a myocardial infarction. These excess hormone levels upregulate the pathways leading to keto acids; they also lead to decreased protein synthesis and increased protein breakdown, which provides more substrate for gluconeogenesis. At the same time, both insulin deficiency and excess glucagon directly decrease glucose utilization by peripheral tissues. This combination of decreased glucose utilization and increased hepatic gluconeogenesis has an additive effect to produce hyperglycemia. The hyperglycemia leads to an osmotic diuresis and subsequent dehydration, and if the patient has limited thirst or limited access to water, as might be the case for someone who is ill and/or confused, this dehydration will quickly lead to hyperosmolarity. So we now have the three important pathophysiologic endpoints: ketoacidosis, dehydration, and hyperosmolarity. The first two are key features of DKA, while the second and third are key features of HHS. The fact that the hyperosmolarity required for HHS is a consequence of poor water intake helps explain why this condition is more often seen in older, chronically ill patients.
Let's move on to how one can recognize a hyperglycemic crisis and distinguish DKA from HHS at the bedside. The major symptoms of DKA include polyuria and polydipsia, dyspnea, abdominal pain, and nausea and vomiting. The polyuria and polydipsia are due to the osmotic diuresis from excessive glucose spilling into the urine. The dyspnea is from the body's attempt to blow off carbon dioxide to compensate for the buildup of keto acids and restore pH closer to normal. The mechanism of the abdominal pain, nausea, and vomiting is actually not well understood. The symptoms of DKA typically develop over less than a day. HHS also has prominent polyuria, with or without polydipsia; the other distinguishing feature is altered mental status, which is a consequence specifically of hyperosmolarity. The symptoms of HHS typically develop over more than a day, closer to two to four days. Regarding exam findings, patients with either condition will have generalized signs of dehydration, and, as just mentioned, patients with HHS will always have some degree of altered
mental status. There are two interesting signs unique to DKA. The first is the eponymously named Kussmaul respirations: respirations that are of normal rate but excessively deep, such that the patient has an elevated minute ventilation. The mechanism is presumably increased respiratory drive as a consequence of low arterial pH, though it is not clear why this finding is described much more commonly in DKA than in other forms of metabolic acidosis. The other unique sign in DKA is a fruity odor to the patient's breath, a consequence of high levels of acetone, one of the three ketone bodies formed during DKA. As an interesting side note, apparently not every person has the ability to smell acetone; but while I have personally heard physicians quote a specific percentage for the fraction of the population lacking this ability, such numbers are not based on reliable scientific studies. Along those lines, there is no significant literature evidence regarding the sensitivity and specificity of either Kussmaul respirations or acetone breath.
In practice, the major way to distinguish between DKA and HHS is actually on labs. Here is a table that highlights the most important values to look for. First, plasma glucose: in DKA it can be anywhere from a relatively low 250 to about 800, while in HHS the glucose is almost always above 600, and there really is no specific upper bound; the highest sugar I have personally observed in such a patient is about 1,200. Second, arterial pH: with mild DKA it may be only modestly acidotic, while in moderate to severe DKA it will be more so; in HHS the pH is most commonly normal, though it may be ever so slightly acidotic. Next, serum bicarbonate: in mild DKA it tends to be around 15 to 18, lower than 15 in severe DKA, and higher than 18 in HHS. Urine and serum ketones are usually, but not always, positive in mild DKA, and much more consistently positive in more severe cases; ketones are usually not present in HHS. The anion gap is elevated in all forms of DKA, while it is usually normal in HHS. Serum osmolality in DKA can be quite variable, while it is almost always increased in HHS, usually above 320 milliosmoles per kilogram. Mental status in mild DKA is usually normal; in severe DKA and in HHS it is usually altered. The alteration in consciousness seen in moderate to severe DKA is generally from acidosis or dehydration more than from hyperosmolarity, which, as I have stated before, is the cause in HHS.
An important question about these lab findings: why is the degree of hyperglycemia typically lower in DKA than in HHS? After all, a blood sugar of 250 can be relatively close to typical for some poorly controlled diabetics. There are two reasons. First, the acidosis in DKA leads to earlier symptoms and thus an earlier presentation than HHS. Second, patients with DKA tend to be younger, with a more preserved GFR and a greater ability to excrete excess serum glucose.
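The lecture refers to the anion gap and serum osmolality without writing out the formulas. The sketch below uses the standard textbook calculations (anion gap = Na - Cl - HCO3, effective osmolality = 2 x Na + glucose/18), which are not stated in the lecture itself, and the two example patients are invented values for illustration only.

```python
def anion_gap(na, cl, hco3):
    """Serum anion gap in mEq/L; elevated in DKA, typically normal in HHS."""
    return na - (cl + hco3)

def effective_osmolality(na, glucose):
    """Effective serum osmolality in mOsm/kg; HHS is usually above 320."""
    return 2 * na + glucose / 18.0

# Illustrative values only (not from the lecture)
dka_like = dict(na=132, cl=95, hco3=12, glucose=450)
hhs_like = dict(na=148, cl=112, hco3=24, glucose=900)

for label, p in [("DKA-like", dka_like), ("HHS-like", hhs_like)]:
    ag = anion_gap(p["na"], p["cl"], p["hco3"])
    osm = effective_osmolality(p["na"], p["glucose"])
    print(f"{label}: anion gap = {ag} mEq/L, effective osm = {osm:.0f} mOsm/kg")
```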
Now a few words about keto acids, since they are such an important feature of DKA; it is worth understanding what exactly they are and where they come from. If you recall the earlier slide outlining the pathogenesis of DKA and HHS, insulin deficiency leads to increased delivery of free fatty acids to the liver. There they undergo beta oxidation, which generates acetyl-CoA, the same acetyl-CoA that is the critical component of the Krebs cycle in cellular respiration. Here, however, the acetyl-CoA is transformed in the liver into acetoacetate, which is one of the three ketone bodies. Some acetoacetate undergoes spontaneous decarboxylation into acetone, which is actually not an acid and therefore does not directly contribute to the acidosis of DKA. Another portion of the acetoacetate is reduced to beta-hydroxybutyrate, using NADH as the proton donor. While acetone is not actually an acid, beta-hydroxybutyrate is not technically a ketone, but rather a carboxylic acid. Thus, as mostly a point of trivia, while there are three classically described ketone bodies, acetoacetate is the only one that is actually a true keto acid.
Now we get to the problem with measuring ketone bodies in DKA: the standard nitroprusside assay used for their detection only detects acetoacetate and acetone, and not beta-hydroxybutyrate. Unfortunately, beta-hydroxybutyrate is the predominant ketone formed during DKA. Thus, in mild cases of DKA there may be insufficient levels of acetoacetate and acetone to be picked up, and the urine and serum ketone assays may be falsely negative. While beta-hydroxybutyrate can be measured directly, this is a send-out test at most hospitals, including my own, and the results do not come back in a clinically useful time frame. The last point to make about keto acids is that as part of the treatment of DKA with fluids and insulin, the beta-hydroxybutyrate is oxidized back into acetoacetate. As a consequence, even though the patient may be getting better and the acidosis resolving, the ketone levels as measured by the nitroprusside assay may counterintuitively worsen; but this worsening is just a misleading artifact and should be ignored. For this reason, serial ketone levels should not be ordered during the treatment of DKA.
Medical_Lectures
Inpatient_Diabetes_Management.txt
Hello, this is Eric Strong, and today I will be discussing inpatient diabetes management. The learning objectives are: to understand the goals and challenges of inpatient diabetes management; to be able to create an initial treatment regimen for a diabetic patient admitted to the hospital; and to be familiar with the basic principles of adjusting an insulin regimen in response to persistent hyperglycemia or NPO status, that is, nothing by mouth. Be aware that the management of diabetes is an extremely detailed and nuanced topic, and this video provides only a general overview; always consult an experienced healthcare professional when making adjustments to a diabetic treatment regimen.
The goals of inpatient diabetes management are, first and foremost, to avoid hypoglycemia, which in the short term is much more dangerous than hyperglycemia; second, to avoid hyperglycemia; next, to assess outpatient glycemic control and consider adjustments in consultation with outpatient providers; and finally, to assess the need for diabetes education, which can be performed in the hospital while the patient is a captive audience with plenty of time.
The inpatient management of diabetes can be very challenging due to a large number of factors that can disrupt the metabolic processes keeping a patient in euglycemia, the balance between hyper- and hypoglycemia. Among the major factors contributing to hyperglycemia are the many hormones and other mediators that are increased during acute illness, including cortisol, catecholamines such as epinephrine, glucagon, and various pro-inflammatory cytokines. Others are the use of IV dextrose along with carbohydrate-rich enteral and parenteral nutrition. Hyperglycemia can also be exacerbated by exogenous steroids, which are part of the treatment of a number of conditions, such as a COPD exacerbation or elevated intracranial pressure. Finally, there may be new contraindications to a patient's previous outpatient oral medications: for example, acute kidney injury and NPO status are contraindications to sulfonylureas; volume overload is a contraindication to thiazolidinediones, more commonly known as TZDs or glitazones; and kidney injury and recent or anticipated iodinated IV contrast are contraindications to metformin. In addition to the factors contributing to hyperglycemia, factors contributing to hypoglycemia include poor or unpredictable PO intake while a patient is acutely ill. Outpatient non-adherence can also be problematic, if a patient appears to be well controlled when he or she is secretly taking medication only some of the time, or taking only some of the medications the medical staff believes he or she is taking. The consequences of hyper- and hypoglycemia can be severe: hyperglycemia can lead to an osmotic diuresis with volume depletion, electrolyte loss, and general immune dysregulation, while hypoglycemia can lead to altered mental status, seizures, and, if not immediately corrected, permanent CNS injury.
Another factor that complicates inpatient diabetes management is uncertainty about what the appropriate glycemic target should be. With the passage of time and the accumulation of more scientific data, opinion on appropriate target glucose levels has changed. Until about ten years ago, the prevailing opinion had been that providers should attempt to get all diabetic patients to euglycemia while in the hospital; however, newer and larger trials in a variety of patient populations have demonstrated either no benefit to tight glucose control or, in some
cases, even worse outcomes with tight control. Thus, in 2009, a joint task force with representatives from the American Association of Clinical Endocrinologists and the American Diabetes Association released a consensus statement recommending a target glucose in most ICU patients of 140 to 180 milligrams per deciliter; in non-ICU patients they recommended a target preprandial glucose of 100 to 140 and a target random glucose of 100 to 180. There remain small subsets of ICU patients who may benefit from tighter control than 140 to 180, such as post cardiac surgery patients. Keep in mind that these are only guidelines and that, in practice, targets should be individualized to the patient and the situation. One situation in which this is definitely true is for patients nearing the end of life, where the target glucose range might be whatever keeps the patient asymptomatic; a provider might tolerate glucoses in the 300s or even higher without intervention in this specific circumstance, depending on the patient's goals of care.
To understand how to decide on a diabetes regimen for an inpatient, I first need to briefly review the different categories of insulin. Categorized by pharmacokinetics, there are four general types of insulin. First are the very short acting insulins, which include lispro and aspart; these have a very quick onset of action of 5 to 15 minutes, a peak effect at 45 to 75 minutes, and a total duration of 2 to 4 hours. So-called regular insulin is considered short acting: it takes effect at 30 minutes, peaks at 2 to 4 hours, and lasts 5 to 8 hours. NPH is an intermediate acting insulin, with an onset of 2 hours, a peak at 4 to 12 hours, and a duration of 18 to 28 hours. Finally, long acting insulin, which includes glargine along with some less common others, also has an onset of about 2 hours, has no particular peak time of action, and lasts more than a day.
Insulin can also be categorized by function, that is, what role a particular form of insulin plays in a patient's management strategy. First, there is basal insulin, whose function is to cover the body's insulin needs for basal metabolic activity; it reflects the fact that our bodies need insulin even when we are not eating or doing much of anything. Common basal regimens are glargine every 24 hours, usually at bedtime, or NPH every 12 hours; in critically ill patients, a continuous IV infusion of regular insulin can also be used for this purpose. Next is prandial insulin, sometimes referred to as bolus or nutritional insulin, which prevents the hyperglycemia that would otherwise occur as a consequence of eating a meal. In patients who are eating, common regimens include regular, aspart, or lispro before each meal; in patients who are NPO this should obviously be held. Finally, there is corrective insulin, more commonly referred to as the sliding scale, which is intended to correct hyperglycemia that is already present before the meal starts. This can be done with regular, aspart, or lispro given before each meal and at bedtime in patients who are eating, and every six hours in patients who are not.
The insulin sliding scale itself can be a little confusing at first. Each hospital typically has its own scale, pre-approved by a committee of pharmacists and physicians; here is just one example of such a scale. The only decisions the ordering physician needs to make for an individual patient are which form of insulin to use and which of the three levels to use. The mild scale is
generally reserved for insulin-sensitive patients, such as type 1 diabetics; the moderate scale is used for most type 2 diabetics; and the aggressive scale is for patients with unusually high levels of insulin resistance, who are a minority of type 2 diabetics.
So how do we use this information about insulin to choose a treatment strategy for a specific patient? When a diabetic patient is admitted to the hospital, the first question to ask is: does the patient have an indication for a continuous infusion of insulin? Well agreed upon indications include septic shock, post cardiac surgery, moderate to severe diabetic ketoacidosis, and the hyperosmolar hyperglycemic syndrome; some clinicians prefer an insulin infusion in any diabetic who is critically ill. Insulin infusions are a complicated topic whose details are beyond the scope of this discussion, but most hospitals have well-established protocols in place, so few decisions are needed of the provider once the decision to use the infusion itself has been made. In patients who do not have an indication for a continuous infusion, ask whether the patient meets all of the following: is he or she a type 2 diabetic; is the diabetes well controlled as an outpatient with diet and/or oral medications; is it anticipated that the patient will be eating normally; and are there no contraindications to the regular outpatient oral medications? If all these criteria are met, then the most appropriate strategy is to continue the outpatient regimen plus four-times-a-day fingersticks (that is, bedside glucose checks) along with a sliding scale. If the patient is on a sulfonylurea such as glipizide or glyburide, the provider should consider a modest dose reduction, as these medications can lead to hypoglycemia if the patient's PO intake is less than it was as an outpatient, which is common in the hospital. If, on the other hand, the patient is well controlled as an outpatient with insulin, including those on an insulin pump, and you anticipate the patient will be eating normally, then continue the outpatient regimen plus the fingersticks and sliding scale; as with sulfonylureas, consider a modest dose reduction in the basal and/or bolus insulin.
What if the patient does not fit into any of these categories? Everyone else, which actually ends up being the majority of patients, should be placed on what is known as a basal-bolus regimen. Creating the basal-bolus regimen is a multi-step process that begins with estimating the total daily insulin requirement, abbreviated TDD for total daily dose. There are several very simple equations described in the literature to do this, some more conservative or liberal with insulin than others; the following is a rough average of the different approaches. For most type 2 diabetics, the estimated total daily dose is about 0.4 units per kilogram of body weight; in type 1 diabetics, the elderly, and patients with renal insufficiency, using a value of 0.3 units per kilogram is safer, with less risk of hypoglycemia. Fifty percent of this total daily dose will be devoted to basal insulin, which can be divided into twice-daily NPH or given as once-daily glargine or another long-acting insulin. I usually prefer NPH for hospitalized patients who aren't already on glargine, because NPH can be titrated more rapidly in response to the changing insulin needs of an acutely ill patient; however, some clinicians are reasonably concerned that NPH can lead to unphysiologic peaks and valleys in blood glucose that can make inpatient titration even more challenging. The other 50% of the total daily dose will be for bolus insulin, divided into three equal doses, one for each meal; regular, aspart, or lispro insulin can be used, with regular given 30 minutes before the meal and the others given immediately before. Finally, although not part of the total daily dose distribution, patients should also be placed on a corrective sliding scale using the same form of insulin as used for the bolus dosing. Given the concern over in-hospital hypoglycemia, whenever rounding a dose of insulin one should always round downwards.
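Here is a minimal sketch, in Python, of the basal-bolus arithmetic just described (0.4 or 0.3 units/kg for the TDD, half basal and half prandial, everything rounded down). The function name and printed layout are illustrative only; note that the worked case at the end of this lecture rounds even more conservatively than simple floor rounding would give.

```python
import math

def basal_bolus_plan(weight_kg, insulin_sensitive=False):
    """Sketch of the basal-bolus arithmetic described above.

    insulin_sensitive=True covers type 1 diabetics, the elderly, and
    renal insufficiency (0.3 units/kg); otherwise 0.4 units/kg is used.
    All doses are rounded DOWN for safety.
    """
    per_kg = 0.3 if insulin_sensitive else 0.4
    tdd = weight_kg * per_kg                     # estimated total daily dose

    basal_total = math.floor(tdd * 0.5)          # 50% as basal insulin
    nph_bid = math.floor(basal_total / 2)        # e.g. NPH twice daily
    glargine_qhs = basal_total                   # or glargine once daily

    bolus_total = math.floor(tdd * 0.5)          # 50% as prandial insulin
    bolus_per_meal = math.floor(bolus_total / 3)

    return {
        "TDD (units)": round(tdd),
        "NPH BID (units per dose)": nph_bid,
        "or glargine qHS (units)": glargine_qhs,
        "prandial per meal (units)": bolus_per_meal,
        # a corrective sliding scale is ordered separately
    }

# Example with the 105 kg type 2 diabetic from the closing case:
print(basal_bolus_plan(105))
# TDD about 42, basal about 21 (NPH 10 units BID or glargine ~21 qHS),
# prandial 7 units per meal; the lecture's own worked answer rounds
# these down further, to glargine 20 and 5 units per meal.
```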
glucose that can actually make in patients eye tration even more challenging the other 50% of the total daily dose will be for bolus insulin divided into three equal doses for each meal regular a sport or lisp pro insulin can be used with regular given 30 minutes prior to the meal and others given immediately prior finally although not part of the total daily dose distribution patients should also be placed on a corrective sliding scale using the same form of insulin as used in the bolus dosing given the concern over in hospital hypoglycemia whenever rounding a dose of insulin one should always round downwards now how should the provider adjust insulin in response to sub optimal glycemic control first the regimen should be reassessed every 1 to 2 days with glargine adjusted more frequently than once every two days one should generally not react to a single high reading if others are within the target range however even a single hypoglycemic episode should prompt an adjustment what are the specifics for up titration of insulin in persistent hyperglycemia if the glucose levels are above target at all times of the day one should increase the basal insulin if the glucose is above target only at certain times of the day adjust according to this chart if the am fasting glucose is high one should increase the p.m. basal dose of NPH if the pre lunch glucose is high one should increase the a.m. bolus of insulin if the pre-dinner is too high increased the a.m. basal and if the bedtime glucose is too high one should increase the p.m. bolus dose for patients on glargine who only have a consistently elevated glucose in the morning or before dinner or some other very specific time some creativity may be required as glargine does not have peaks and valleys of activity like other forms of insulin do for example imagine a patient on glargine for basal insulin and list bro for bolus insulin whose glucose is are all fine except for persistently elevated pre-dinner glucoses in the 300s the provider may need to increase the glargine dose but in order to decrease the risk that hypoglycemia will develop at other times of the day he or she may need to simultaneously decrease all of the bolus doses such that the total daily dose of insulin is increased very modestly if glucose elevations are at random times with no discernible pattern things to ask include is the patient on a consistent carb diet is the patient eating each meal consistently and is the patient sneaking carbohydrate rich snacks that's a good lead-in to discuss proper nutrition for diabetics this is another huge topic but let me discuss it in extreme brief the previously popular term for the inpatient diabetic diet the so called a da diets never really meant anything specific and at many institutions this referred to a calorie controlled diet low in simple carbohydrates without necessarily paying close attention to consistency in Toba carb content each day for patients taking oral nutrition the recommended diet is now the consistent carb diet the components of this diet include equal total grams of carbohydrate each day equal grams of carbohydrates in the same meal from one day to the next approximately equal grams of carbohydrate each meal compared to others in the same day there is no specific calorie level and surprisingly it does not necessarily restrict the use of sucrose provided it's done in small amounts and in a consistent manner how does the typical basal bolus regimen change if the patient is NPO first consider a dose reduction in basal 
insulin type 2 diabetics on NPH should have a closer to 50% reduction while in type 1 diabetics and type 2 diabetics on glargine a reduction of 25% is more appropriate next eliminate the bolus insulin finally keep the sliding scale there but change the regimen from before each meal and at bedtime to every six hours and in addition the sliding scale should generally be downgraded for example an aggressive sliding scale should be switched to moderate and a moderate sliding scale should be switched to mild finally there are some considerations before discharging a diabetic patient from the hospital should the hemoglobin a1c be checked should the patient's outpatient regimen be adjusted the decision to adjust should be based on the hemoglobin a1c and not based on inpatient glycemic control generally avoid changing the outpatient regimen if the a1c is less than 8% unless there is a new contraindication to a previously used med occasion and only adjust the outpatient regimen in consultation with the patient's outpatient provider finally does the patient need diabetes education three common misconceptions about inpatient diabetes management which I've already gone over but would like to just highlight here towards the end oral hypoglycemic drugs should never be used as an inpatient the sliding scale as the sole means of glucose control is appropriate for most patients and the patients who are NPO do not need insulin all of these statements are completely false let me conclude this lecture by working through a multi-step example a 68 year old man with type 2 diabetes on metformin and glyburide as an outpatient is admitted to the hospital with pneumonia a recent hemoglobin a1c was 9.3 percent the patient is 105 kilograms in weight what is an appropriate diabetic regimen for him at this time step 1 the side of this patient is appropriate for an insulin infusion or for orals in this case he has no indication for an insulin infusion per se and his poor outpatient control indicates that oral medication is not appropriate step 2 estimate the total daily dose of insulin this is 0.4 times his weight in kilograms which in this case is 42 units step 3 calculate the basal dose as 1/2 the total daily dose which is 21 units this can be provided as 10 units and pH twice a day or alternatively as 20 units glargine once a day usually done at bedtime step 4 calculate the bolus dose also 21 units which is then divided into three equal parts rounding down for convenience and safety this becomes 5 minutes of regular insulin three times a day before meals step 5 don't forget to include corrective insulin so in summary an appropriate initial regimen for this patient would be NPH 10 units sub QB idac which is medical shorthand for twice a day before breakfast and dinner a perfectly fine alternative would be glargine 20 units sub q q HS which is shorthand for bedtime regular insulin 5 units sub q tid AC that's three times a day before each meal and finally a regular insulin sliding-scale I would choose a moderate scale as the patient is type 2 and without evidence of unusually high insulin resistance here are the patient's blood sugars after the first 48 hours in the hospital his oral intake has been poor but consistent what if any changes are appropriate to his insulin regimen you can see that all the blood sugars are within goal or close to goal with the exception of 2 pre-dinner measurements both of which are above 300 therefore the appropriate change to make would be to increase the AM and pH dose there is no magic 
formula for how much to increase it other than avoiding any changes that are too drastic therefore the only change would be an increase in the a.m. and pH dose from 10 to 15 units continuing on his blood sugars are subsequently well controlled but due to the non resolving nature of his pneumonia a bronchoscopy is planned for the following afternoon and the pulmonologist requests that he be made NPO from midnight how should this regimen be changed here's the current regimen again while NPO remember that the patient should have the basal insulin reduced by 25 to 50% the bolus insulin should be deseed and these sliding-scale should generally be downgraded therefore the patient's regimen while NPO should be NPH 7 units each morning before breakfast and 5 minutes each evening before dinner with a mild regular insulin sliding-scale that concludes the summary of inpatient diabetes management's if you found it interesting please remember to like or comment on the video and please subscribe if you are interested in additional lectures on a variety of inpatient medical topics you
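To make the dosing arithmetic concrete, here is a minimal Python sketch of the calculations just described: roughly 0.4 units/kg (or 0.3 units/kg in more insulin-sensitive patients) for the total daily dose, a 50/50 basal/bolus split, always rounding down, and the basal reduction while NPO. The function names, parameters, and exact rounding behavior are illustrative assumptions of mine, not anything specified in the lecture, and this is a teaching sketch rather than clinical guidance. Note that simple floor rounding gives 7 units per meal for the 105 kg worked example and 21 units of glargine, whereas the lecture rounds further down to 5 units per meal and 20 units of glargine for convenience and safety.

```python
import math

def total_daily_dose(weight_kg, conservative=False):
    """Estimate total daily insulin dose (TDD) in units.

    Roughly 0.4 units/kg for most type 2 diabetics; roughly 0.3 units/kg
    (conservative=True) for type 1 diabetics, the elderly, and patients
    with renal insufficiency.
    """
    return weight_kg * (0.3 if conservative else 0.4)

def basal_bolus_regimen(weight_kg, conservative=False):
    """Split the TDD 50/50 into basal and bolus insulin, rounding doses down."""
    tdd = total_daily_dose(weight_kg, conservative)
    basal = tdd / 2
    bolus = tdd / 2
    return {
        "tdd_units": round(tdd, 1),
        # Basal option A: NPH split into two daily doses.
        "nph_bid_units": math.floor(basal / 2),
        # Basal option B: long-acting glargine once daily
        # (the lecture rounds 21 down to a convenient 20).
        "glargine_qhs_units": math.floor(basal),
        # Bolus: three equal pre-meal doses of regular/aspart/lispro.
        # Floor rounding gives 7 units/meal for the worked example;
        # the lecture rounds further down to 5 units for safety.
        "bolus_per_meal_units": math.floor(bolus / 3),
        # A corrective sliding scale is added on top and not modeled here.
    }

def npo_basal_reduction(basal_units, glargine_or_type1=False):
    """While NPO: cut NPH basal by about 50% (about 25% for glargine or type 1),
    discontinue the bolus doses, and downgrade the sliding scale (not modeled)."""
    factor = 0.75 if glargine_or_type1 else 0.50
    return math.floor(basal_units * factor)

# Worked example from the lecture: a 105 kg type 2 diabetic.
print(basal_bolus_regimen(105))
# TDD 42.0, NPH 10 units BID or glargine once daily, three pre-meal bolus doses.
print(npo_basal_reduction(15), npo_basal_reduction(10))
# 7 and 5 units, matching the NPO-adjusted NPH doses in the worked example.
```

Using math.floor everywhere simply mirrors the lecture's rule of always rounding insulin doses downwards to reduce the risk of inpatient hypoglycemia.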
Medical_Lectures
Immunology_Lecture_MiniCourse_9_of_14_Tcell_Mediated_Immunity.txt
to day three of the Immunology Marathon uh any questions or any points anybody wants to raise anything you want to do differently some students were complaining that we're going too slowly so if anyone thinks that just raise your hand then I'll uh speed things up okay there's always one right there's always one student like that you know the one who's memor ized all the slides already you know okay so uh today's lecture and you know now as the course is going on we're going to be getting into a lot more pragmatic aspects of Immunology in terms of functional activity and I think really starting to get into some of the subtleties of the regulation of the immune system and today and uh uh we doing uh four lectures uh the first lecture is going to be t- cell immunity the second is a continuation of how t- cell anth and how they kill cells the third lecture is going to be mucosal immunity and mucosal immunity I think is really an important lecture because it's extremely relevant to understanding mucosal transmission of HIV and other infectious agents and how the mucosal system is able to deal with it and one thing that is important to understand is that the immune system is very very different in different locations so the immune system in the mucosa is really its own system that's very different and you can't apply the rules of the immune system when we think about lymphoid tissues lymph nodes to what's going on in the mucosal tissue because it has a very different activity and the fourth lecture is going to be a lecture on HIV and the lecture it may go a little bit longer than an hour because I really wanted to pack a lot of information into it to really kind of give a broad overview of HIV infection and how it interacts with the immune system so it really should be uh an action-packed day but one I think that is going to really start being very relevant in terms of your own understanding and your own work that you're doing right now so questions to consider uh first question and because of the fact that these pointers have a very short halflife uh uh I I have to use them very sparingly so uh the first question is how do te- cells know where to go and that is really an amazing question when you think about it because te- cells have to go all over the body and clearly you want the appropriate t-s to go to the appropriate locations t uh clearly for example pre cells coming from the bone marrow have to know to go to the thymus if they go to the lymph node before they've undergone VJ recombination and selection you know who knows what would happen for also once they are in the thymus they have to know they got to leave they have to go to lymph nodes after they've been activated they have to know they they have to go to home to the site of the infection we've already touched upon how that happens in the first lecture using how adhesion molecules play that role but again I think it's important to understand uh you know and I it's it's amazing concept to me so so in order to illustrate how hard it is for te- cells as an example I'm basically showing you a map and does anybody know what this map is this is a New York City Underground or subway system and uh it's a little bit more you know it it's really an amazing system but to figure out how to get for example I'm at the Albert Einstein College of Medicine which is pretty much located right about here in the Bronx and we frequently have visitors who are coming from from all the way in Brooklyn so you basically have to travel all the way here the 
different colors or different Subway Lines so you have to know to switch from sub one Subway to a different Subway to go in the right direction and it's an amazing process to do and it's very complicated and even with this detailed map knowing where things are frequently people make wrong turns and get lost which is not surprising so imagine what it's like for a te- cell being in the body with no map you know the t- cell doesn't you know pull out a map you know Hearts there's no signs like you know you know an arrow pointing no heart you know uh 15 in to the left uh and and in addition the bloodstreams one way you know if you miss your your turn to get into the lymph node you can't like kind of back up because then you'll be getting hitting by oncoming red blood cells and uh there's no insurance in the immune system so it's clearly an incredibly complex process and the fact that it works is mind goling and the fact that things could potentially go wrong is actually uh not surprising but it's amazing how well it works and so therefore again how does that happen how does antigen get targeted to a te- cell expressing the appropriate te- cell receptor that's another amazing concept you have a a te- cell mixed in with millions and millions of other te- cells in a lymph node and you have an antigen that's on the other end of the lymph node how do those two come together I mean you think about it the odds of that happening are minuscule and yet not only does it happen it happens in a very efficient way so how how is that possible again to think about that and finally again this is always a recurring theme in the immune system on one hand you want to fight infection but on the other hand you don't want to destroy your own cells you don't want collateral damage happening in in our body so again how is t- cell respon to self antigens prevented yesterday we spoke about deletion in the thymus deletion in the bone marrow but any but anybody here I'm sure realizes they can only be so efficient and you know that if you make a mistake and you let out a self-reactive T Cell B cell the impact is devastating so you want to have a failsafe mechanism in order to allow you to delete self-reactive cells after they've left the th so how does that happen Okay so this is a uh again a kind of cartoon of the process and and I'm going to go into it in more detail as the lecture progresses the this basically involves Three Steps From the time of anth representation until the time that the cell is actually doing something and this particular case it's illustrating a cytotoxic te- cell Behavior so if we start with the recognition phase it's a very obvious what's going on this is when the antigen specific receptor recognizes the antigen that it's specific for and as you now are all very familiar with MHC class one antigen presenting cell peptides being presented to t- cell receptor and now you all know very clearly the signal transduction pathway one signal is T antigen presenting was from the tsol receptor but you also appreciate there's a second signal and I'm going to be discussing in a little bit detail later exactly what that second signal is but just to introduce you to it it's in it's it's basically conveyed by two molecules on the surface of the antigen presenting cell it's B7 and on the surface of the T Cell it's cd28 now now again the way that I remember that is that B7 what cell do you think this was first described in a b lymphocyte they detected and what kind of in addition to making antibodies what else do B 
cells do? They present the antigen as well. So for B7, think of the antigen presenting cell as the one that expresses it, and CD28 goes with the T cell, because otherwise you always confuse the two of them; so you just think B7, B cell, antigen presenting cell, and CD28, T cell, that's what's on the T cell. Okay, so we'll go into a little more detail later, but these two signals now activate the T cell and it undergoes proliferation, because a single T cell can't do much, especially a cytotoxic T cell. A single B cell can actually do a lot, because a single B cell can make tons and tons of antibodies; however, because the T cell itself is the effector portion of that arm of the response, you need a lot of cells, so therefore it has to undergo a large amount of proliferation. And again, as we'll discuss as the lecture progresses, the major growth factor responsible for T cell proliferation is interleukin-2. In the second lecture this morning I'm going to be discussing all the different T cell subsets, Th1, Th2, Th17, and T-reg cells, and discussing the specific cytokines that they make, but for now think of IL-2 as the generic factor that stimulates T cell proliferation. Once you have enough T cells available, now you want them to differentiate into effector cells. Clearly, in order to have an effect, the cell has to have specialized proteins; so, for example, in the case of cytotoxic T cells, they have to have the proteins that are necessary for killing cells, they have to have perforin, they have to have granzyme B, in order to kill, and that happens during the differentiation process. It makes sense: while the cell is proliferating it doesn't need to be making those specialized molecules yet, only after you have all the cells available. So again, that's why it's kind of a process: first they proliferate and then they start differentiating. And finally the cytotoxic T cell has effector function; now that it's fully armed and ready to go, it will be killing the target cell. Here you have an infected cell, it's presenting peptide from the virus in the MHC molecule, that's being recognized, and then the cytotoxic T cell is killing it. So that's the big picture, and now we're going to kind of hone in on the details of this process. Okay, any questions? Okay, so now, again, someone once said that anatomy is destiny; it's an interesting thought, but just to take a little sidebar, I have a question here from the previous slide about understanding the differences between proliferation and differentiation. Okay, the question is, what is the difference between proliferation and differentiation? And the answer I would give is that if you have a single T cell that recognizes antigen, you want to make, say, a million of them, so in a sense you've now made a million of those cells, but those cells are not yet expressing the specialized proteins that they need to have their effector function. So, for example, for a B cell, you make a lot of B cells, but until you start having extensive Golgi apparatus and endoplasmic reticulum production, you're not going to be making the large amount of antibody that you need to make; so a B cell now differentiates into a plasma cell that's making antibody. For a CD8 cell, you have large numbers of CD8 cells, but they don't yet have large amounts of perforin, granzyme B, and all the other factors that they utilize in killing cells; that's what the differentiation is doing. Okay, so anyway, do you mind if I tell
like a little story okay because I know you know it's like uh lectures can be like eating cornflakes with no milk and sugar it's very dry so it's always good to kind of make it a little interesting so so this is a anyone here have kids okay this is so everyone has kids right so so anyone here a medical doctor okay so so did you have you taught your kids medical stuff you know like one thing doctors do is they they teach their kids medical stuff because you know so you want you want to show off your kids and you want to kind of surprise people so what a lot of Physicians do is they teach their kids like Anatomy so the kid doesn't have like a collar bone they say it's a clavicle or a sternum you know and it's you know it's like oh my sternum is killing me and everyone everyone looks at you're like whoa your kid's like a walking you know and this is like a 2-year-old kid so but I want to just kind of highlight to you the danger of teaching kids Anatomy that's why it's relevant to this so so a few years ago when my old I had my two oldest Sons they they were then like five and three so my wife was away it was a Sunday and I was in charge and you have to feed your children because you know if kids are hungry they're cranky and cranky kids are really hard uh so but my culinary skills are extremely minimal so I'm very good at mashing tuna fish but that's about it so so I said to my you know my youngest son I said what would you like for lunch and he said can I have a tuna for sandwich I said fine you I go mash it put the right amount of mayonnaise in it and make him a beautiful sandwich beautiful presentation and I hand it to him and he says thank you Daddy he takes one bite and he says Oh Daddy I'm so full if I take one more bite my stomach is going to explode and I said AA this is ridiculous you know you took one bite you know how could you possibly fall no Daddy if I take another bite my stomach's going to explode I can't take it anymore my oldest son says Daddy come over here so I walk over to the side and I always say it's amazing how kids siblings are more than happy to throw their other sibling under the bus you know they have no problems like giving you know giving them up so he said Daddy I have a great idea I said if AA is really full why don't you offer him a chocolate chip cookie if he says yes then you know he's not really full if he says no well then he's really full and I said no that's a really good idea let me try it out so I go back to AKA and I say AKA do you want a chocolate chip cookie and AKA looked at me says yeah Daddy that would be great and I said wait a minute AKA you told me if you took one more bite your stomach would explode and now you're telling me you have room for a chocolate chip cookie how could that be he looked at me with that look that kids give to parents like the morons and he said daddy you don't understand the stomach is for the main course the esophagus is for dessert my stomach is full but I have plenty of room in my esophagus so again the dangers of teaching your kids Anatomy okay so now go back to Anatomy so this is a anatomical picture I guess the esophagus is over here and the stomach is here but they're not shown but the important point to underline here is the fact that the T cells and B cells are anatomically localized based on their differentiation and activation step so the first place that you're going to be looking at are the primary lymphoid tissues this is what we discussed yesterday the te- cells are going to be differentiating inside the 
thymus that's where they undergo bdj recombination development of t- cell receptor uh selection of of both MHC recognition and deletion of self-reactive and for the bo for the B cells that occurs in the bone arrow so again clearly they have to be localized there however once the cell has differentiated and now it has the capacity to see antigen and you've eliminated self-reactive cells you obviously wanted to leave the thymus and you wanted to leave the bone marrow and a point that I want want to underline about the thymus that I didn't mention yesterday is the thymus actually is felt to kind of be a sealed environment because you don't want to have infectious agents coming into the thamus because if you had infectious agents in the thamus what kind of te- cells do you think you'd be deleting antigen specific tea cells that recognize whatever that infectious agent is and in fact some of you who take care of kids that have been infected with HIV uh in utero or at Birth know that HIV actually can infect the thymus of these children and in fact what happens now is is that these kids delete their HIV specific tea cells and that's why a lot of these children have a much more devastating and Rapid course of HIV infection than adults do because already you're putting them under a handicap because they're Del leading te- cells that are specific for HIV so that's why for the most part the thymus is a relatively sealed environment that really prevents the uh the entry of pathogens into the thymus however now this differentiate they leave the thymus and now they have to go into an environment where they wait for exposure to the antigen that they pre-programmed to recognize and again one of the aspects of doing this is you want to put the te- cell concentrated in a location where an antigen is very efficiently brought into that environment and that location turns out to be all the lymph nodes in all the different lymph node complexes both mesenteric lymph nodes uh peripheral lymph nodes as well as the spleen there's a where these cells home to waiting for antigen to be presented to them and finally after exposure to antigen they undergo proliferation and the majority of the prol proliferation occurs in the lymph node and that's why for example when you get an infection and your lymph nodes swell why do you think your lymph nodes are swelling for the most part they're not swelling because they themselves are infected they're swelling because all these te- cells and B cells that are proliferating take up a lot more space than a a handful of lymphocytes so there that's what's causing the swelling they are painful because the lymph nodes themselves have pain receptors in their capsule which is relatively inflexible and as it stretches that causes pain you clear the infection what happens to the te- cells and B cells that are proliferating they undergo apotosis they disappear and that's why the lymph nodes shrink back down again so again that's physically demonstrating the high level of proliferation because you could actually see it macroscopically imagine how many cells are involved in that process but they have to now migrate to the area of any infection so in the case of someone who picks their nails the infection potentially is at the tip of their finger so it has to now get out of the lymph node get into the circulation and migrate to the to the to the um uh finger how does that happen how does this occur so a simple kind of way of thinking about it is anybody here ever go on an airplane fly airplane 
right okay so you go and you check in and you have luggage what do you do with your luggage you hand it over to the people behind the counter and and then you hope it's going to end up in the carousel at the place that you're going even if you're making two or three transfers you hope that happens and actually usually it does and it's an amazing that it actually works when you think about it if you've ever seen what the luggage sorting area looks like it's even more amazing but how does it happen What do they do to allow the luggage to actually make to where it's supposed to go they put a baggage tag on it and on the baggage tag there's a barcode and they continuously read the barcode and they kind of shunt it to different directions based on what the barcode says in essence that's what cells themselves do each of the cells has a barcode what that barcode though turns out to be are these specialized adhesion molecules one of which has a a Lian that's uniquely expressed in different anatomical locations so naive tea cells Express at adhesion molecules that specifically Target them to secondary lymphoid tissues so in the case of lymph nodes that molecule expressed by naive te- cells is L selectin now you may recall that in the case of endothelial uh adhesion what molecule was the adhesion molecule there e selectin remember e endothelial cell L lymphocysts are your friends so the naive cell expresses eltin and this binds to a molecule that's expressed on the surface of the specialized veins in lymph nodes called high endothelial venules and these Express this molecule called cd34 so this naive T cell is going to be in the circulation it'll keep on going until it sees cd34 expressed on the surface of a vual and then it'll stop and again I'll show you in a minute exactly what happens or another molecule some people have reported called gly cam one again but what you need to kind of remember is naive te cells Express eltin and it's going to bind to cd34 on high endothelial venules you also have to have naive te- cells migrate into the mucosal tissue so for mucosal tissue there's another molecule expressed on mucosal endothelium called mad cam well let's think what does MA stand for what do m stand for mucosal add adhesion and then kind of it's a little silly because now it's called cellular adhesion molecule you know it's kind of like almost like a a stuttering but but but again this is really to help you remember what it is you see it you can figure out what it is and it turns out that that cells Express Global adhesion molecules because they do have to be a little sticky in order to stick to endothal cells but they also Express specialized so as I mentioned before te cells for example uh can recognize specialized one this is another one called alpam that's that's expressed by native C cells that'll Target them to mucosal endothelium so madcam again is mucosal endothelial adhesion molecule and alpam has been reported to to Target them to the mucosal tissue once the cell becomes activated it expresses a different molecule and this molecule called V four and again v stands for very late antigen number four and the way this was discovered was they activated te- cells and after a few days these te- cells started expressing this antigen because it took a few days they said oh this is a very late antigen it's taking a while and that's where the name came from so again it just helps to remember when you see vla think very late and then that you know that it's expressed by activated t- cells and V4 
binds to a molecule that you may recall called VCAM-1. What do you think the V stands for? Vascular cell adhesion molecule, and it's expressed on activated endothelium. Remember, at the sites of infection the endothelium gets activated and expresses high levels of ICAM-1, but it also expresses high levels of VCAM-1; that's a signal that there's an infection going on. And what kind of cells would you want to recruit if there's an infection? Would you want to recruit naive T cells? No. What kind would you want to recruit? Activated T cells. So this is now the kind of barcode that tells activated T cells there's an infection going on, this is where you have to get recruited to. Okay, is that clear? Now, if you look at the range of T cells in circulation, it turns out that there are some molecules all of them express, and some that are uniquely expressed by activated versus non-activated cells. So now, if we start from the first column, L-selectin: you're all familiar with what cell is going to express L-selectin, the naive lymphocytes, and here they're calling them resting, and once they get activated they don't express it anymore. Well, why do you think you would want them to stop expressing it after activation? What tissue do you want them to leave? The lymph node. So expressing L-selectin is what keeps them in the lymph node, and once they're activated you want them to leave; it's like when you're a teenager, at a certain time your parents decide to kick you out of the house, and that's what happens with these cells, they get kicked out of the lymph node. But now they express VLA-4, because that's going to allow them to home to areas of infection. Do you want naive T cells to go to areas of infection? No. So do you want them to express VLA-4? No. So that's exactly what's going on. And the other molecules: LFA-1, as you recall, is the global adhesion molecule, and does anyone remember what LFA-1 binds to? ICAM-1. And so they still have to be sticky, so even resting naive cells express some LFA-1, but activated cells you want to be stickier, so therefore they actually upregulate the expression of LFA-1. And again, the other molecules like CD4 and the T cell receptor all pretty much stay constant. The other one I want to point out to you is CD45RA and CD45RO. Anyone familiar with that from clinical medicine? Yes, memory and naive. CD45RA is expressed on naive cells, and again the way to remember it is that RA has the letter A and naive has the letter A, and RO is memory, memory has the letter O and RO has the letter O. The reason that's important clinically is that individuals who have an activated immune system, whether they have autoimmune diseases or they have infection, are going to have a lot more memory cells in circulation than naive cells, and it's a routine clinical test: you basically do a ratio of the CD45RO to the CD45RA cells, and if they have elevated CD45RO, that's an indication that their immune system has been activated, and therefore you have to start to analyze what the reason for that is. Okay, any questions? Okay, so now we'll hone in on exactly how these naive T cells home to the lymph node. So to orient you, right here is the high endothelial venule, so that's really a blood vessel kind of coming out of the screen at you, and the lymphocyte is circulating in this venule. But now it sees CD34, the L-selectin binds to it, it stops, and now the lymphocyte moves through diapedesis from the vascular system into the lymph node itself. Now it's in the lymphoid environment, and now this B or T cell has a job, because its job now is to see
if antigen that it's pre-programmed to recognize is present in the lymph node so this t- cell now will basically move all through the lymph node looking for antigen and in fact if it's a t- cell it can't see free floating antigen so what kind of cell is it going to interact with dendritic cells or maybe macrophages and it's kind of like it's like it frisks the uh dritic cell it rolls up and down the membrane and if you remember the dentritic cell has this stellite membrane the stellite appearance and again think of it as three-dimensional it's like one of these balls that have tons of spikes coming out of it which gives it incredibly large surface area which allows it to have a lot of cells any given time and again the simplistic analogy I think about is people here are familiar with the story of Cinderella right so so Prince Charming dances with her and then she has to leave by midnight and then she leaves behind her slipper and then she disappears and now Prince Charming wants to find her so what does Prince Charming use to find her the slipper he goes all through the kingdom tries to slipper on every girl in in the Kingdom now of course you realize what a great relationship they've developed that he needs to see if the slipper fits if he actually knows it's the right person you know otherwise he wouldn't recognize her but anyway uh te- cells do exactly the same thing they basically have an antigen t- cell specific t- cell receptor it you uniquely recognized as pep a peptide plus MHC the t- cell goes through the lymph node going to dendritic cells seeing if there's for example a foot that fits into this antigen receptive uh slipper so and if but if it doesn't it has it leaves the lymph node and goes on to the next lymph node the next lymph node until it potentially finds the antigen that it's pre-programmed to recognize however if the t- cell sees the antigen that is pre-programmed to recognize now it it gets activated it starts proliferating and makes thousands and thousands maybe millions of copies of itself and starts to to differentiate and now it downregulates its El seleced and now it will leave the lymph node but instead of homing to another lymph node which this t- cell does because it continues to Express El salactin this cell now is going to express for example VA 4 and other markers that will Target it to the peripheral tissue potentially where there's an infection going on and now these activated te- cells differentiate depending upon what their function is so again if they be cytotoxic T cells they would start developing all the proteins they need if they' be CD4 helper cells they start making gam interferin and other cyto that they utilize and now they leave the lymph node to the site of of infection so basically to summarize naive t- cells and B cells localized to lymph nodes or mucosal pyrus patches antigen activation of these cells requires into interaction with anen presenting cells and again now you have to have the flip side the antigen has to get into the lymph node otherwise it's not going to be available for the tea cells you have to have an efficient system to do it and because of the fact that te- cells need antigen presented in the context of antigen presenting cell it makes sense to bring it there with a antigen presenting cell and what cell do you think is going to play a role in that process macrophases and dritic cells and what is a dritic cell called when it's in the skin a langerhan cells again so now we put it together oops wrong way so now if we look at 
what the antigen presenting cells are available they're basically three antigen presenting cells that are you the most common ones so first is dendritic cells second is macrofibers and third one is B cells each one of them has unique features that are associated with its functional activity so the dendritic cells have aiic capacity they also have penic macro penos catic capacity they could actually take in large molecules and dendritic cells though are not designed to be aidic cells to kill pathogens they're more designed basically to digest them and also to present uh antigen in addition think about presenting antigen in the cont context of class one MHC can typical phagocytosis allow you to present antigen in class one MHC what do you think who here says yes raise your hand who here says no raise your hand let's go come on go for it you got to vote you're not voting okay yes raise your hand that that that you can present class one just been doing cytosis raise your hand okay who says you can't raise your hand yeah then if you're not sure look around if everyone else is raising the hand what the heck right but it's you got to be involved so you have to take a stand somewhere the answer is it's not going to happen so how can you possibly present antigen in the context of class one MHC molecule in order to activate cytotoxic tea cells what do you how do you think you'd have to do it in it'd have to be infected in fact as we say in in America you have to take one for the team you know sometimes you just have to take a sacrifice for the good of the team team and it turns out that the cell that's decided to do that are dendritic cells and dendritic cells frequently become infected with agents but they doing it because they want the Infectious agent now to make the viral proteins and now could present those viral proteins in the context of class one MHC to activate cytotoxic tea cells and again dentritic cells have a lot of other factors that make it extremely resistant itself to being killed by viral infection so it's semi-protected in that process but again that's a major role that dendritic cells play now when dendritic cells are in the peripheral tissue like langerhan cells they have a very high level of of phagocytotic or pinocytotic capacity which makes sense because they want to suck in antigen once they get activate and but on the other hand they have very low expression of co- stimulatory molecules they aren't expressing high levels of B7 it also makes sense because you don't really want to activate t cells in the peripheral tissue very much because what's the most common antigen in the peripheral tissue self or non-self self absolutely so therefore if you have activated dendritic cells in the peripheral tissue what do you think maybe be very likely to happen what kind of te- cells will be activated autoimmune self-reactive te- cells because that's the most peptides are presenting so therefore it's they're not activated at that stage but once they leave and migrate into the lymph node they change their functional activity they become much less fago cytonic because they don't need to pick up antigen anymore in the lymph node but now they become activated and able to present and activate te- cells because that's what they're going to do in the lymph node now macrofagos also present antigen they take up the antigen through phagocytosis but in addition macrofagos also obviously kill so they have effor function in terms of eliminating pathogens at the same time that they're important 
antigen presenting cells now again similar to dendritic cells when they're not activated they express either no or extremely low levels of cator molecules for exactly the same reason if they're not activated it's because there's no infection going on there's no need therefore for them to be able to activate te- cells because that the danger is they be activating autoimmune responses however once this they detect an infection now how can a macroy detect infection what molecules does it express that allows it to to detect infection any suggestions re what recept right and throw out some receptors think fruit fly fruit fly toll like receptors again that'll activate it glucan receptors Manos receptors when it detects the bacteria it binds to the maccrage activates it and then it starts expressing Co post inflamatory molecules it also both dendritic cells and macres also upregulate MHC class one and Class 2 in the presence of infection same concept if there's no infection going on you don't need to express a large amount of Class 2 or class 1 MHC molecule again because you're more liable to present self peptides when there is infection you want a high level of infection and B cells again as I'll show you again later they take up antigen but they only do it in the context when antigen binds to the antibody molecule if antibody doesn't bind to the anti if if antigen does not bind to its antibody molecule the B cell ignores it again B cells are selfish cells they only want to present antigen if it facilitates its capacity to make antibody okay is that clear okay so these are the three an presenting players and in fact if you look in the lymph nodes these are present at different locations so this is colorcoded this is a dendritic cell and this is an example of the dendritic cell taking one for the team getting infected with a virus in order to enable it to present viral peptides in the context of class 1 MHC it's distributed all throughout the lymph node what do you what cells do you think are in the white zone B cells because these are germinal centers and in fact B cells are predominantly going to be located in the germinal centers and t- cell and macrofagos are distributed all throughout the lymph node including some ACR that also located inside the um uh germinal center now I actually have been ignoring B cells because B cells also need to have antigen brought into into the lymph node B cells see antigen in the lymph node undergo proliferation and in fact they differentiate into plasma cells and make a lot of antibody in the hilum so how does antigen get into the uh you need to have whole antigen in order for B cells to respond so one possibility is that for example dead bacteria May drain into it bacterial pieces May drain into it dentritic cells also under through pinocytosis pick up whole proteins without digesting it can bring it into the lymph node and spit it out into the into the lymph node for B cells to see it and there's another cell that is in the lymph node called follicular dendritic cells and folicular dendritic cells even though they're called dritic cells have a different lineage and their one of their roles is kind of like fly paper anybody here ever see fly paper you know in the states we used to have this where they'd hang these long strips of very very sticky paper do they have that in England fly paper yeah same thing and it's it's actually you know when you're a kid you love it because every morning you'd wake up and count how many flies you have stuck on the thing but so 
you have a positive you know readout but fic dendritic cells are the same way they actually have antigen that sticks to it like fly paper on the outside of the membrane maintaining the three-dimensional structure of the antigen because that's critical for B cells to recognize it if it would digest it into peptides it wouldn't do any good for the for the Bol so that also helps again efficiently Focus antigen in a way of maintaining threedimensional structure for B cells to interact yes can one cell one at the same time okay the question is can one dritic cell present more than one antigen at the same time what do you think absolutely why not and it's a very efficient way again there thousands of MHC molecules and it could be presenting thousands of peptides from thousands of different proteins the overwhelming if you're not infected first of all you're not going to have a lot of dendritic cells in the lymph node if you're not infected because they're going to be staying in the peripheral tissue but again the overwhelming amount of peptides probably presented by dendri Exel are going to be self just because that's the nature of what gets loaded into the MHC but the more severe the infection the more stuff it's it's fyos then more of the farm peptides it will be presenting but absolutely can present peptid from multiple different infectious agents okay any other questions what's of cell relative to a a tmos what's the size of a dritic cell relative to a t- lymy and the dritic cell is significantly larger and I don't have a picture of it but you know I can't give you an exact number but but when you see pictures you basically will see a t- cell looking about that big on one of the processes okay any other questions okay so again as I said from the the outset you need two signals in order to activate a t- cell antigen specific signal and a co-stimulatory signal and it turns out that as I mentioned before the co- stimulatory signal is conferred by a B7 cd28 interaction B7 expressed on anthen presenting cells cd28 on T cells but in addition is also a third signal which I had also discussed yesterday which are cyto kindes so in order to allow a t- cell to undergo its full differentiation capacity you basically have a single the First Signal antigen specificity activates it you need to have a coast imotor signal in order for that t- cell to survive as I'll show you in a few minutes if you only get activation signal without the co stimulatory signal the t- cell actually gets turned off permanently and becomes anergized and then you need third signal which is the cyto kind and the cyto kind is like the fine-tuning signal because that cyto kind is going to tell the te- cell what type of t- cell to be when it differentiates what proteins it's going to make which determines what its ultimate function is going to be which I'll discuss in more detail in the next lecture th1 th2 th17 and t-regs okay any questions okay now again the immune system always has to have a way of turning itself off so the and again like a car you have an accelerator and you have a brake when you need to go fast and you need to move you turn on the accelerator when you need to stop you hit the brakes so as soon as the immune stimulation of the te- cell started you already are sewing the seeds for turning that t- cell off and the way that that's done is by the te- cells starting to express a another molecule called ctla4 and again ctla4 cytotoxic te late antigen so la late so this so is this expressed early on or late in the 
activation late it's another way of kind of reminding you exactly what it does and when it's expressed and in contrast to cd28 which provides the activated t- cell with a positive costimulatory signal ctla4 actually turns off the t- cell now let's I'll th based on yesterday's lecture on T cell signal transduction what class of molecules would you predict ctla4 would be turning on okay what okay let's think let's step back what is the very common molecule that's being turned on during t- cell signal transduction and activation a kind okay now so therefore that turns on what class of molecules do you think ctla4 is going to turn on phosphatases because that's exactly the opposite if you want to undo a kinas have a phosphotase and in fact ct4 turns on phosphatases now def phosphor all those signal transduction molecules and turning it off so in essence What's happen and now how is ct4 able to turn off the immune uh response even more potently is it turns out that ctla4 actually has a higher affinity for B7 than cd28 does so again think about you know two people grabbing for something and I'm trying to grab a donut and Shaquille onil is trying to grab a dut you know who has money on Shaquille O'Neal absolutely because he has super high affinity for donuts uh if you've ever seen him you probably know that's the case so so ct4 has a much higher Infinity for B B7 so it's going to grab it get activated shove B7 out of the way and now the T cell is no longer going to be getting that positive signal it's going to be getting turned off and this is how you start deactivating te- cells which is critical again to you you've eliminated the infection you don't need these te- cells anymore and also potentially you don't want to have an autoimmune response and again there are other molecules for example pd1 which you know about plays a role such as that Tim 3 plays a role and in fact people have published and and actually I believe Bruce published a paper showing that ctla4 is overregulation and HIV infected CD4 positive te- cells and that may be uh a mechanism Again by which HIV turns off the immune system prematurely as a way of preventing a full immune response to eliminate HIV infection okay now as he mentioned we mentioned before Langer hand cells play a critical role in bringing an into the lyph node so if there's a cut bacteria enter it basically gets taken up phagocytosed by dendritic cells chewed up these dendritic cells again at this stage have a high level of AIC ability again not to kill bacteria but to be able to present them once they've taken this up probably have as you showed before tolllike receptor act activation of the dendritic cell because a lot of these bacteria have motifs that activate it and now it's it's signal to migrate leave the lymph node and then and then getting draining into the lymph lymphoid the lymphatics it goes into the lyph node now Beering the antigen with it but now it under goes maturation as I'll show you the next slide and now it's able to present antigen very very efficiently to te- cells the te- cells that now have migrated into the lymph node from the thymus now can be presented with the antigen that they pre-programmed to recognize but in addition the dendritic cell now has the capacity to deliver a co- stimulatory signal so and because now they express high levels of B7 and now these naive tea cells can not only see signal number one which is antigen MHC peptide but also signal number two which is B7 cd28 okay is that clear but in the periphery is the Langer 
hand cell going to be expressing B7 no because again you don't want to stimulate peripheral T cells because you most likely an anen you see there is going to be self anthen now how do uh another way by which te- cells are targeted and dendritic cells are targeted to the lymph node is through expression of ccr7 and right now I'm going to discuss it for dendritic cells but in the second lecture I'll explain the role it plays in memory T cells so CC cocine C receptor 7 is expressed by dendritic cells after they've been activated and that's how they not a home to lymph nodes so again if you recall tlr toll like receptor there's also deck 205 other molecules that basically recognize these pathogen Associated motifs that now when they exposed to the bacteria binds to it it activates it and now once it's been activated it induces it to express ccr7 it also allows it to enhance pathogen uh processing upregulating expression of costimulatory molecules and so now it starts expressing B7 it upregulates MHC both class one and Class 2 expression and it also expresses ccr7 and ccr7 stimulates it to migrate into the lymph node particularly into the t- cell Zone where now it's available again to Prime t- cells so now you see the signals that are happening you have uh in this casee maybe uh CD4 again you see there are two stalks to the MHC molecule so it's Class 2 MHC as well as the co imator signal B7 to cd28 okay any questions now nothing is ever simple and the one of the rules of Immunology is that when a cell is first discovered everybody thinks it's just like this glob of cells that are all the same but then as people or investigators look more and more carefully at the cells it turns out there's a significant level of specialization in the cells so for example years ago everybody thought that there were just te- cells that was all they were but then people said oh CD4 cd8 now within CD4 you have th1 th2 th3 th17 uh t-s same way in dendritic cells originally people thought there was like one group now there's been very well describe that there are at least two different types of dendritic cells conventional dendritic cells again that's a great name because conventional means it's the one you know the most about those are designed to be antigen presenting cells classic so they can express high levels of MHC molecules high levels of IAM because you want them to be sticky also high levels of coory molecules but there's a second type of dentritic cell called plasma cytoid dritic cells they are differentiated through different cyto mines and actually may be uh derived from lymphoid lineage cells but there the job of a plasma cytoid dentritic cell seems to be to be an antiviral uh defense mechanism so in order to be antiviral it has to make antiviral protein so all you're familiar with interferon beta is being a very common antiviral protein and how does the plasma cytoid dentritic cell recognize that it's infected with the virus expresses a toll like receptor 7 which ex recognizes a motif that viruses typically present and I I'm not a it's either I think it's the double stranded RNA tlr7 that it does but again toall like receptors to reinforce not only could be expressed on the surface to recognize bacterial motifs it could also be expressed in the nucleus in order to recognize motifs associated with viral infections cpg groups that are high in in in uh viral infections or double stranded RNA all are recognized by toac receptors when it so toac receptor nine I think is double stranded and to like receptor 
7 is cpg but again I'm not 100% and that I don't have it memorized well but now it's infected it binds to this and this turns on the dritic cell telling it Mak a lar mons of interfer in beta okay any questions okay so now again uh I have mentioned before that adhesion mod molecules are like the barcodes of the immune system so telling te- cells where to go but in addition it plays a critical role of t- cell an presentation interaction and the reason for that is the Affinity between the t- cell receptor an MHC peptide who here thinks that's a low Affinity interaction raise your hands so how tightly do you think is The Binding between a t- cell receptor and MHC plus peptide who who here thinks that's a very tight binding raise your hand very tight tight okay who here thinks it's a very weak binding raise your hand and you know just go for it just vote you know voting is fun okay so it actually turns out to be a very weak binding what why do you think it's weak as opposed to an antibody what does an A T Cell receptor undergo to that would make it a higher Affinity binding what doesn't it undergo so IC hyper mutation so therefore that makes an antibody higher Affinity but t- cells receptors don't do that so therefore in order for the cell now the anen resing cell comes next to the t- cell the antigen uh MHC plus peptide TCR bind it's very weak so otherwise they're going to come apart and they're going to come apart before you can have the whole signal transduction pathway and then not going to do any good so you need to have expression of other molecules on the cell surface that spec non-specifically make them stick to each other longer and it turns out these are adhesion molecules this is something that antibodies don't have the luxury of having that's why their receptor has to be such high binding but t- cell receptors can take advantage of this because it's cellto cell interaction and the major player uh that I'll focus on is the interaction lf1 expressed by t- cells and iam1 being expressed by enen presenting cells and again you should be aware that these adhesion molecules are relatively promiscuous they're not expressed only by one type of cell they're expressed by endothelial cells they're expressed by anen presenting cells they're expressed by te- cells and and and because they have the same function making the cells stickier now it makes sense after the cell's been activated it should become even stickier and in fact this is exactly what happens so initially the t- cell comes in contact with the antigen presenting cell it has lowlevel interaction between lfa1 and IAM just to bring them together long enough to query the antigen presenting the TCR MHC peptide to see if it actually binds and matches if it doesn't the T Cell says goodbye goes on to the next an presenting cell or the next molecule so what frequently happens you can a dendritic cell present more than one antigen a lymphocytic cell along the membrane kind of seeing if anywhere is an MHC molecule that it recognize it's like frisking it's like now you go into the airport they all do a pat down right they do it in South Africa you know in America you know I want to tell you do in America but you know so that's what a t- cell is doing to the dendritic cells doing a pat down to say is there a foreign peptide on you that I recognize if it doesn't moves on to the next if it does then you get the whole activation process going on once it's been activated now it signals the LFA one molecule to change its confirmation and the change 
OK. And so high-level expression of B7 by dendritic cells permits them to activate naive T cells. Here you have a CD8 cell; the presenting cell is infected with the virus; you have peptide-MHC to TCR, and B7 to CD28 -- the infection potentially upregulated B7 -- and now this CD8 cell is activated. It makes its own interleukin-2, or it may use interleukin-2 from a neighboring cell, and now it starts proliferating and differentiating into a killer cell. Now, if you don't see the costimulatory signal, you undergo something called anergy. If you get the costimulatory signal alone, there's no effect on T-cell function -- nothing happens at all. However, if you have MHC, in this case class II, plus peptide binding the TCR, but no second signal, because the presenting cell turns out to be a tissue cell and not an activated antigen-presenting cell, then the cell is anergized. And this turns out to be a very powerful way of eliminating self-reactive T cells in the periphery. Why? In the classic infectious situation, the dendritic cell activates the CTL with two signals, and it gets activated. Once it's activated, it is basically licensed to kill, and this is another critical point to appreciate -- very important. Once a cytotoxic T cell is activated appropriately by a dendritic cell, it no longer needs another costimulatory signal, which means that if it sees an infected cell presenting foreign peptide in MHC class I, it doesn't have to also see B7 binding CD28 anymore; it will just kill the cell. That makes sense, because otherwise you'd never be able to kill the peripheral cell, which doesn't express B7. So once it gets the activation signal, it's basically licensed to kill even if it only sees MHC plus peptide alone. Is that clear? It's a very important concept. However, if the T cell sees antigen for the first time -- T-cell receptor to MHC plus peptide -- and does not get a costimulatory signal, then not only does it not get activated, it becomes anergic, which means that even if it later sees the signals in the appropriate manner, it doesn't get activated anymore. It basically becomes castrated, for want of a better way of saying it. Why is that important? Let's say this T cell is one that recognizes an albumin peptide, for example. It somehow snuck out of the thymus -- it left when no one was looking -- and now it's in the periphery. If it were to get activated, it could potentially drive a very potent autoimmune response. However, it now sees albumin peptide for the first time on a non-antigen-presenting cell, which makes sense because there are a lot more of those than there are antigen-presenting cells. So what happens is it gets the first signal but not the second signal, and it gets anergized. Or, in the absence of an underlying infection -- as I'll show you on the next slide -- even macrophages and dendritic cells don't express costimulatory molecules, and they'll also anergize it.
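[Editor's note: the two-signal rule and the "licensed to kill" point above can be summarized as a small decision table. The sketch below is my own illustration, with hypothetical function names; it is not part of the lecture materials. Signal 1 is the TCR seeing MHC plus peptide; signal 2 is CD28 seeing B7 on an activated antigen-presenting cell.]

```python
# Toy decision logic for the two-signal model of naive T cell activation.
def naive_t_cell_outcome(signal_1: bool, signal_2: bool) -> str:
    if signal_1 and signal_2:
        return "activated (proliferates and differentiates)"
    if signal_1 and not signal_2:
        return "anergized (signal 1 alone tolerizes the cell)"
    if signal_2 and not signal_1:
        return "no effect (costimulation alone does nothing)"
    return "no effect"

# Once appropriately activated, a CTL is 'licensed to kill': it no longer needs B7/CD28.
def ctl_response(already_activated: bool, sees_foreign_peptide_on_mhc_i: bool) -> str:
    if already_activated and sees_foreign_peptide_on_mhc_i:
        return "kill target cell (no costimulation required)"
    return "no killing"

print(naive_t_cell_outcome(signal_1=True, signal_2=False))                 # tissue cell presents peptide -> anergy
print(ctl_response(already_activated=True, sees_foreign_peptide_on_mhc_i=True))  # licensed to kill
```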
And now, in terms of adhesion molecules, they also play a role in CD8 CTL immune surveillance, and this is really important when you think in terms of HIV infection. The initial interaction between a CTL and other cells is basically made by these non-specific adhesion molecules, which enable the cytotoxic T cell to roll along the surface of the epithelium, again doing a pat-down of the epithelial cells like a security agent, asking: are there any foreign peptides in any of the MHC molecules that I recognize? This cell has already been activated by two signals from a dendritic cell. If there's no antigen-specific interaction, nothing is found on the pat-down, and the cytotoxic T cell just keeps rolling along. However, if it does see a foreign peptide during the pat-down, then it immediately delivers the appropriate signals to kill the cell, because it assumes the cell is infected. OK, is that clear? And once they become activated, cytotoxic T cells become serial killers. They basically roll along the epithelium and kill. As opposed to, say, a bee, which can only sting once and then dies, cytotoxic T cells are more like hornets or snakes: they can sting and sting and sting and keep going, which, when you think about it, makes them an amazingly powerful immune effector cell, because a single cytotoxic T cell could potentially go on to find and kill hundreds of infected cells and prevent those cells from releasing more virus. So you can also appreciate why having a cytotoxic T cell that recognizes self can be so devastating: a single cell that recognizes a self peptide could roll along the epithelium and just wipe out a large number of cells. Again, that's why the need to eliminate these self-reactive cytotoxic T cells is so critical. And as I mentioned before, the requirement for costimulation prevents the induction of, in this case, a CD4 response to self. In this situation you have a non-bacterial protein antigen, likely a self protein. The macrophage takes up albumin, puts it on its surface, and presents it in a class II MHC molecule. But this macrophage is not activated: there are no bacteria around to activate its Toll-like receptors or mannose receptors, there's no interferon gamma being produced, it's basically a state of peace -- no costimulatory molecules. Therefore this T cell gets anergized. If this macrophage comes in contact with bacteria, that's going to activate it and upregulate costimulatory molecules, and now when it presents peptide it's able to provide two signals, because it upregulates its B7, and now it activates the T cell. Is that clear? However, how can this system go wrong? How could a macrophage be fooled into presenting a self-reactive peptide? Well, let's put these two scenarios together, because there are a lot of proteins floating around. Say you have a macrophage that is exposed to bacteria and gets activated, but at the same time it takes up a self protein. This is where the system breaks down, because the macrophage got activated by taking up the bacteria, but when it takes up the self protein at the same time, it's expressing costimulatory molecules. So now it can present self peptides and provide the appropriate costimulatory signal to a self-reactive T cell, activate it, and potentially cause autoimmune disease. That is why autoimmune diseases are frequently precipitated in the face of an infectious process -- a lot of times they're triggered by an infection, because this is how things can get messed up -- or they get triggered when there's tissue damage. When there's tissue damage, you also activate macrophages, and then you can get self peptide presented inadvertently in the context of a costimulatory signal. Is that clear?
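[Editor's note: a hypothetical sketch, not from the lecture materials, of the failure mode just described: an infection activates the macrophage (B7 goes up), and if the macrophage has also taken up a self protein, a self-reactive T cell can now receive both signals. The function and argument names are my own.]

```python
# Toy model of how bystander infection or tissue damage can break peripheral tolerance.
def outcome_for_self_reactive_t_cell(macrophage_activated: bool,
                                     presents_self_peptide: bool) -> str:
    signal_1 = presents_self_peptide   # TCR recognizes self peptide on class II MHC
    signal_2 = macrophage_activated    # B7 is up only after infection or tissue damage
    if signal_1 and signal_2:
        return "self-reactive T cell activated: autoimmunity risk"
    if signal_1:
        return "anergized: peripheral tolerance holds"
    return "ignored"

print(outcome_for_self_reactive_t_cell(macrophage_activated=True, presents_self_peptide=True))   # infection plus self peptide
print(outcome_for_self_reactive_t_cell(macrophage_activated=False, presents_self_peptide=True))  # a state of peace: tolerance
```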
Just to make you aware, the interleukin-2 receptor has two kinds of affinities. The interleukin-2 receptor is made of three chains: the alpha chain, the beta chain, and the gamma chain. When the cell only expresses the beta and gamma chains, it's a moderate-affinity receptor and requires a large concentration of IL-2 to bind and activate it. However, once the T cell is activated, it expresses all three chains and now binds interleukin-2 with very high affinity. This is just showing that in the absence of infection the cell only expresses the moderate-affinity interleukin-2 receptor, and in order to be activated it requires the high concentrations of interleukin-2 you see when there's a rip-roaring infection around and a lot of interleukin-2 being made. However, once it gets activated, it expresses all three chains and now responds even to low levels of interleukin-2. In the next lecture I'll go into more detail about what cytokines Th1 cells express, so this is just setting the scene for the next lecture. So now, the questions to consider: How do T cells know where to go? We discussed that this depends on selective expression of adhesion molecules -- immature, naive T cells express one group of adhesion molecules; when they mature, they lose those and express a different group; and different tissues express different adhesion molecule receptors to target them. How does antigen get targeted to a T cell? Again, dendritic cells come into the lymph node from the periphery, T cells come into the lymph node, and that's where they meet. How is a T-cell response to self antigen prevented? Again, I spoke about how, in the absence of a costimulatory signal, T cells are anergized in the periphery; therefore, if a self-reactive T cell got out and sees self antigen presented by a non-antigen-presenting cell or an unactivated antigen-presenting cell, it gets anergized. OK.
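[Editor's note: going back to the interleukin-2 receptor point above, here is a minimal sketch, my own illustration rather than the lecture's, of how the chains expressed set the receptor's affinity and therefore how much IL-2 the cell needs. Function names and the string labels are hypothetical.]

```python
# Toy model: IL-2 receptor affinity as a function of which chains the T cell expresses.
def il2_receptor_affinity(chains: set[str]) -> str:
    if {"alpha", "beta", "gamma"} <= chains:
        return "high affinity: responds even to low IL-2 concentrations"
    if {"beta", "gamma"} <= chains:
        return "moderate affinity: needs high IL-2 concentrations, as during a brisk infection"
    return "no functional IL-2 receptor"

print(il2_receptor_affinity({"beta", "gamma"}))           # resting T cell
print(il2_receptor_affinity({"alpha", "beta", "gamma"}))  # activated T cell
```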